I've actually done a lot of debugging on this, and it has to do with very large headers in e-mails. I've sent Jeff the full GDB debug output, and from what I gathered myself, I can replicate the bug 100% of the time when I send a message with 100 header lines of the following (a rough generator for such a test message is sketched at the end of this mail):

x-locaweb-id: TsKSlv8kr1xtXZssGBT0eBXhIS196WfGDFke5GWcxnU4ROIsDI7IscNnMCqL97PagK83P4Gur5HyO4RetEQfXZHcp-X896R9RSMMqhZiCzYmYSeU8G-klhrylZs87ASbcLqUPz-fSWo5qS1iJn1_j-qM48cFFzTxEb9DhO1orFZVCKpjIX2c87xO3C3j1loEzMg_QSCkCcauet1YEtzur_yiCYeHYvb63419ODJV8c8=Njk2ZTY2NmY3MjZkNjE3NDY5NzY2ZjJlNmU2Zjc2NjE2ZDY1NzI2OTYzNjE0MDY3NmIyZTYzNmY2ZDJlNjI3Mg==

I ran tcpdump on the Solr connection and the data is transmitted fine: Solr acknowledges it and I also see it indexed properly. There must be something in Dovecot that chokes on those long header lines.

I believe the relevant part of the GDB output is the following, but I don't know how to interpret it:

#6  0x000000001143ba82 in i_panic (format=0x112c1d67 "file %s: line %d (%s): assertion failed: (%s)") at failures.c:523
        ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, timestamp_usecs = 0, log_prefix = 0x0, log_prefix_type_pos = 0}
        args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7fffffffdfa0, reg_save_area = 0x7fffffffde90}}
#7  0x000000001139913c in http_client_request_send_more (req=0x1260b848, pipelined=false) at http-client-request.c:1232
        conn = 0x118e0c80
        cctx = 0x11927048
        output = 0x118e5be0
        res = OSTREAM_SEND_ISTREAM_RESULT_FINISHED
        error = 0x123ed000 "80a\r\nSeU8G-klhrylZs87ASbcLqUPz-fSWo5qS1iJn1_j-qM48cFFzTxEb9DhO1orFZVCKpjIX2c87xO3C3j1loEzMg_QSCkCcauet1YEtzur_yiCYeHYvb63419ODJV8c8= Njk2ZTY2NmY3MjZkNjE3NDY5NzY2ZjJlNmU2Zjc2NjE2ZDY1NzI2OTYzNjE0MDY3NmI"...
        offset = 4294967374

On Wednesday, 02/09/2020 at 19:36 Stephan Bosch wrote:

On 19/08/2020 17:37, Josef 'Jeff' Sipek wrote:
On Wed, Aug 19, 2020 at 17:03:57 +0200, Alessio Cecchi wrote:
Hi,
after the upgrade to Dovecot 2.3.11.3, from 2.3.10.1, I frequently see these errors from different users:

It looks like this has been around for a while and you just got unlucky and only started seeing it now. Here's a quick & dirty patch that should fix it. If you can try it, let us know how it went.
Jeff.
Well, maybe not. This patch is only useful when Solr and Tika are used at the same time. Otherwise, something else is causing the same panic.

Regards,

Stephan.
diff --git a/src/plugins/fts-solr/solr-connection.c b/src/plugins/fts-solr/solr-connection.c
index ae720b5e2870a852c1b6c440939e3c7c0fa72b5c..9d364f93e2cd1b716b9ab61bd39656a6c5b1ea04 100644
--- a/src/plugins/fts-solr/solr-connection.c
+++ b/src/plugins/fts-solr/solr-connection.c
@@ -103,7 +103,7 @@ int solr_connection_init(const struct fts_solr_settings *solr_set,
 		http_set.ssl = ssl_client_set;
 		http_set.debug = solr_set->debug;
 		http_set.rawlog_dir = solr_set->rawlog_dir;
-		solr_http_client = http_client_init(&http_set);
+		solr_http_client = http_client_init_private(&http_set);
 	}
 
 	*conn_r = conn;
diff --git a/src/plugins/fts/fts-parser-tika.c b/src/plugins/fts/fts-parser-tika.c
index a4b8b5c3034f57e22e77caa759c090da6b62f8ba..b8b57a350b9a710d101ac7ccbcc14560d415d905 100644
--- a/src/plugins/fts/fts-parser-tika.c
+++ b/src/plugins/fts/fts-parser-tika.c
@@ -77,7 +77,7 @@ tika_get_http_client_url(struct mail_user *user, struct http_url **http_url_r)
 		http_set.request_timeout_msecs = 60*1000;
 		http_set.ssl = &ssl_set;
 		http_set.debug = user->mail_debug;
-		tika_http_client = http_client_init(&http_set);
+		tika_http_client = http_client_init_private(&http_set);
 	}
 	*http_url_r = tuser->http_url;
 	return 0;
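
If I read the patch right, the only change is which kind of HTTP client the two plugins create. As far as I understand Dovecot 2.3's lib-http, http_client_init() attaches the new client to a process-wide shared client context, while http_client_init_private() gives the caller a completely independent client, which would explain why the patch only matters when fts-solr and fts-tika are used at the same time. Below is a minimal sketch of the two call paths; it is not taken from the Dovecot sources (the helper name and the reduced set of settings are mine), just an illustration of the difference:

/*
 * Sketch only: contrasts the pre-patch (shared) and patched (private)
 * HTTP client setup. Builds against Dovecot's lib + lib-http; the helper
 * name make_fts_http_client() is invented for this example.
 */
#include "lib.h"
#include "iostream-ssl.h"
#include "http-client.h"

static struct http_client *
make_fts_http_client(const struct ssl_iostream_settings *ssl_client_set,
		     bool use_private_client)
{
	struct http_client_settings http_set;

	i_zero(&http_set);
	http_set.max_idle_time_msecs = 5*1000;
	http_set.request_timeout_msecs = 60*1000;
	http_set.ssl = ssl_client_set;

	if (!use_private_client) {
		/* pre-patch: the client joins the default shared context,
		   so other plugins in the same process may share it */
		return http_client_init(&http_set);
	}

	/* patched: a dedicated client with its own context, independent
	   of any other HTTP users in the process */
	return http_client_init_private(&http_set);
}

With the shared context, Solr and Tika traffic presumably ends up on the same client state, which seems to be what makes the combination of the two special here; with the private variant each plugin keeps its own.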
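
And for anyone who wants to try the reproduction described at the top of this mail, here is a quick-and-dirty generator for the test message. It is only a sketch: the addresses and body are placeholders, and feeding the output to the FTS indexer (via your MTA, dovecot-lda, or similar) is up to you. The header value is the one quoted above.

/*
 * Writes 100 copies of the oversized x-locaweb-id header followed by a
 * tiny body to stdout, CRLF line endings. Addresses and body are
 * placeholders; deliver the output to a test mailbox to trigger indexing.
 */
#include <stdio.h>

#define LONG_VALUE \
	"TsKSlv8kr1xtXZssGBT0eBXhIS196WfGDFke5GWcxnU4ROIsDI7IscNnMCqL97PagK83P4Gur5HyO4RetEQfXZHcp-X896R9RSMMqhZiCzYmYSeU8G-klhrylZs87ASbcLqUPz-fSWo5qS1iJn1_j-qM48cFFzTxEb9DhO1orFZVCKpjIX2c87xO3C3j1loEzMg_QSCkCcauet1YEtzur_yiCYeHYvb63419ODJV8c8=Njk2ZTY2NmY3MjZkNjE3NDY5NzY2ZjJlNmU2Zjc2NjE2ZDY1NzI2OTYzNjE0MDY3NmIyZTYzNmY2ZDJlNjI3Mg=="

int main(void)
{
	printf("From: sender@example.com\r\n");
	printf("To: recipient@example.com\r\n");
	printf("Subject: long header test\r\n");
	for (int i = 0; i < 100; i++)
		printf("x-locaweb-id: %s\r\n", LONG_VALUE);
	printf("\r\n");
	printf("test body\r\n");
	return 0;
}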