Panic in indexer-worker, after which Dovecot stops searching via Solr
Hi,
I have set up FTS with Solr and a virtual folder on a (somewhat busy) Dovecot server to enable "search in all folders" for users.
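For reference, a "search in all folders" virtual mailbox like the Virtual/All used here is normally defined by a dovecot-virtual file inside the virtual mailbox directory: first the real mailboxes to include (wildcards allowed, optional exclusions), then the search rule on the last line. This is a sketch based on the standard virtual plugin file format, not the poster's actual file; the Trash exclusion lines are illustrative:

```
*
-Trash
-Trash/*
  all
```

Here `*` pulls in every real mailbox, the `-` lines exclude Trash, and the indented `all` line is the search query that selects every message from the matched mailboxes.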
Everything works fine until, for some users, the indexer-worker process crashes.
After this crash Dovecot stops querying Solr for new BODY searches, returning SERVERBUG for all users on the server, and only after a Dovecot restart can users search in BODY again.
Before the restart I can see that no queries from Dovecot arrive at the Solr server anymore (only for SEARCH BODY; SEARCH in FROM/TO/SUBJECT still works fine).
Dovecot version is 2.3.14 (from RPM) and Apache Solr is 7.7.3. Here is the error found in the log:
May 31 14:58:13 Panic: indexer-worker(user@email.com) session=<VA0oxZ/DXtG5Ydlx:0KzOIkLdtGDSawAAxbYlJw>: file http-client-request.c: line 1240 (http_client_request_send_more): assertion failed: (req->payload_input != NULL) May 31 14:58:13 Error: indexer-worker(user@email.com) session=<VA0oxZ/DXtG5Ydlx:0KzOIkLdtGDSawAAxbYlJw>: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7f3d91b2cac2] -> /usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f3d91b2cbce] -> /usr/lib64/dovecot/libdovecot.so.0(+0xf3cde) [0x7f3d91b39cde] -> /usr/lib64/dovecot/libdovecot.so.0(+0xf3d81) [0x7f3d91b39d81] -> /usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f3d91a8b25a] -> /usr/lib64/dovecot/libdovecot.so.0(http_client_request_send_more+0x3dd) [0x7f3d91ad39ad] -> /usr/lib64/dovecot/libdovecot.so.0(http_client_connection_output+0xf1) [0x7f3d91ad8101] -> /usr/lib64/dovecot/libdovecot.so.0(+0x11d2b0) [0x7f3d91b632b0] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) [0x7f3d91b51f45] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) [0x7f3d91b538fb] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) [0x7f3d91b52049] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f3d91b52288] -> /usr/lib64/dovecot/libdovecot.so.0(http_client_wait+0xcd) [0x7f3d91ae120d] -> /usr/lib64/dovecot/lib21_fts_solr_plugin.so(solr_connection_select+0xe4) [0x7f3d8fca1174] -> /usr/lib64/dovecot/lib21_fts_solr_plugin.so(+0x45d4) [0x7f3d8fc9d5d4] -> /usr/lib64/dovecot/lib20_fts_plugin.so(fts_backend_get_last_uid+0x6e) [0x7f3d90e103fe] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xf952) [0x7f3d90e16952] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x10ef6) [0x7f3d90e17ef6] -> /usr/lib64/dovecot/lib20_virtual_plugin.so(+0x966a) [0x7f3d90bfc66a] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x10ba6) [0x7f3d90e17ba6] -> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_precache+0x2e) [0x7f3d91e3cb0e] -> dovecot/indexer-worker user@email.com 
Virtual/All - 5300/20832 [0x5648464be924] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) [0x7f3d91b51f45] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) [0x7f3d91b538fb] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) [0x7f3d91b52049] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f3d91b52288] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f3d91abcbb3] -> dovecot/indexer-worker user@email.com Virtual/All - 5300/20832 [0x5648464be1f7] -> /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f3d9169a555] -> dovecot/indexer-worker user@email.com Virtual/All - 5300/20832 [0x5648464be2ba] May 31 14:58:13 Fatal: indexer-worker(user@email.com) session=<VA0oxZ/DXtG5Ydlx:0KzOIkLdtGDSawAAxbYlJw>: master: service(indexer-worker): child 27602 killed with signal 6 (core dumps disabled - https://dovecot.org/bugreport.html#coredumps)
On the Solr side I found this error in the log:
org.apache.solr.common.SolrException: Early EOF at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:190) at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97) at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2551) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:733) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:531) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395) [...]
and
[com.ctc.wstx.exc.WstxLazyException] com.ctc.wstx.exc.WstxIOException: Early EOF at com.ctc.wstx.exc.WstxLazyException.throwLazily(WstxLazyException.java:45) at com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:728) at com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicStreamReader.java:3678) at com.ctc.wstx.sr.BasicStreamReader.getText(BasicStreamReader.java:860) at org.apache.solr.handler.loader.XMLLoader.readDoc(XMLLoader.java:419) at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:260) at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188) at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
This is the relevant part of the Dovecot configuration:

plugin {
  fts = solr
  fts_autoindex = yes
  fts_decoder = decode2text
  fts_enforced = body
  fts_index_timeout = 5s
  fts_solr = url=http://10.0.1.3:8983/solr/dovecot/
  [...]
}
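For context, fts_decoder = decode2text implies a matching decoder service elsewhere in the configuration. A minimal sketch, modeled on the stock decode2text.sh example shipped with Dovecot (the script path and user are assumptions about this install, not taken from the poster's config):

```
service decode2text {
  executable = script /usr/libexec/dovecot/decode2text.sh
  user = dovecot
  unix_listener decode2text {
    mode = 0666
  }
}
```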
Is this an already known bug? Any workaround? Thanks
--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice
On 31/05/2021 19:09 Alessio Cecchi <alessio@skye.it> wrote:
<snip/>
Hi!
It's a known bug and will be fixed in the next release. It happens when Tika and Solr are used together.
Aki
On 31/05/21 18:17, Aki Tuomi wrote:
<snip/>
Hi!
It's a known bug and will be fixed in next release. It happens with tika and solr being used together.
Aki

Good to know, I'm not using Tika but decode2text, is it the same?
--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice
On 31/05/2021 21:58 Alessio Cecchi <alessio@skye.it> wrote:
<snip/>

Good to know, I'm not using Tika but decode2text, is it the same?
Hm. Not necessarily. It could be something related to some other HTTP connection, though. The fix that was done should help in any case.
Aki
On 31/05/21 21:00, Aki Tuomi wrote:
<snip/>
Hi,
I'm not sure if it can help, but I have also collected the core dump for this crash. I noticed that it happens only when indexing the virtual mailbox folder, for example:
doveadm index -u user4@email.com '*' works fine
doveadm index -u user4@email.com 'Virtual/All' crashes

If I first run the index on '*' and then on 'Virtual/All', both work fine. But when users search in all folders from the webmail via Virtual/All, it always crashes, and afterwards no indexer-worker runs anymore until I reload Dovecot or kill the "indexer" process.
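Given that ordering, a possible stopgap until the fix ships is to pre-index every user's real folders before the virtual one, e.g. from cron. A sketch only, under the assumptions above: the user list is illustrative, and doveadm is the one from this Dovecot 2.3.14 RPM install:

```shell
#!/bin/sh
# Workaround sketch: index all real folders first, so that indexing
# 'Virtual/All' afterwards no longer hits the crashing code path.
# Replace the list below with your real user list (e.g. from doveadm user).
for user in user4@email.com; do
    doveadm index -u "$user" '*' &&
    doveadm index -u "$user" 'Virtual/All'
done
```

This only avoids the first-time indexing of Virtual/All; it does not fix the underlying bug.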
Here is the backtrace from the core dump:
Jun 03 05:05:34 Panic: indexer-worker(user4@email.com) session=<x0EZ2dPDiuW5Ydlx:pQBBH/tGuGDjBwAAxbYlJw>: file http-client-request.c: line 1240 (http_client_request_send_more): assertion failed: (req->payload_input != NULL) Jun 03 05:05:34 Error: indexer-worker(user4@email.com) session=<x0EZ2dPDiuW5Ydlx:pQBBH/tGuGDjBwAAxbYlJw>: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7f5d132cbac2] -> /usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f5d132cbbce] -> /usr/lib64/dovecot/libdovecot.so.0(+0xf3cde) [0x7f5d132d8cde] -> /usr/lib64/dovecot/libdovecot.so.0(+0xf3d81) [0x7f5d132d8d81] -> /usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f5d1322a25a] -> /usr/lib64/dovecot/libdovecot.so.0(http_client_request_send_more+0x3dd) [0x7f5d132729ad] -> /usr/lib64/dovecot/libdovecot.so.0(http_client_connection_output+0xf1) [0x7f5d13277101] -> /usr/lib64/dovecot/libdovecot.so.0(+0x11d2b0) [0x7f5d133022b0] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) [0x7f5d132f0f45] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) [0x7f5d132f28fb] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) [0x7f5d132f1049] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f5d132f1288] -> /usr/lib64/dovecot/libdovecot.so.0(http_client_wait+0xcd) [0x7f5d1328020d] -> /usr/lib64/dovecot/lib21_fts_solr_plugin.so(solr_connection_select+0xe4) [0x7f5d11440174] -> /usr/lib64/dovecot/lib21_fts_solr_plugin.so(+0x45d4) [0x7f5d1143c5d4] -> /usr/lib64/dovecot/lib20_fts_plugin.so(fts_backend_get_last_uid+0x6e) [0x7f5d125af3fe] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xf952) [0x7f5d125b5952] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x10ef6) [0x7f5d125b6ef6] -> /usr/lib64/dovecot/lib20_virtual_plugin.so(+0x966a) [0x7f5d1239b66a] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x10ba6) [0x7f5d125b6ba6] -> /usr/lib64/dovecot/libdovecot-storage.so.0(mail_precache+0x2e) [0x7f5d135dbb0e] -> dovecot/indexer-worker user4@email.com 
Virtual/All [0x561e504cb924] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) [0x7f5d132f0f45] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) [0x7f5d132f28fb] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) [0x7f5d132f1049] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f5d132f1288] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f5d1325bbb3] -> dovecot/indexer-worker user4@email.com Virtual/All [0x561e504cb1f7] -> /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f5d12e39555] -> dovecot/indexer-worker user4@email.com Virtual/All [0x561e504cb2ba] Jun 03 05:05:34 Fatal: indexer-worker(user4@email.com) session=<x0EZ2dPDiuW5Ydlx:pQBBH/tGuGDjBwAAxbYlJw>: master: service(indexer-worker): child 2019 killed with signal 6 (core dumped)
[root@popimap ~]# gdb /usr/libexec/dovecot/indexer-worker /var/core/core.indexer-worker.2019 GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-120.el7 Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>... Reading symbols from /usr/libexec/dovecot/indexer-worker...Reading symbols from /usr/lib/debug/usr/libexec/dovecot/indexer-worker.debug...done. done. [New LWP 2019] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `dovecot/indexer-worker'. Program terminated with signal 6, Aborted. #0 0x00007f5d12e4d387 in raise () from /lib64/libc.so.6 Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 expat-2.1.0-12.el7.x86_64 glibc-2.17-324.el7_9.x86_64 libgcc-4.8.5-44.el7.x86_64 libzstd-1.4.9-1.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-19.el7_9.x86_64 (gdb) bt full #0 0x00007f5d12e4d387 in raise () from /lib64/libc.so.6 No symbol table info available. #1 0x00007f5d12e4ea78 in abort () from /lib64/libc.so.6 No symbol table info available. #2 0x00007f5d132d8c97 in default_fatal_finish (status=0, type=<optimized out>) at failures.c:459 backtrace = 0x561e523ecab8 "/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7f5d132cbac2] -> /usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f5d132cbbce] -> /usr/lib64/dovecot/libdovecot.so.0(+0xf3cde) [0"... 
recursed = 0 #3 fatal_handler_real (ctx=<optimized out>, format=<optimized out>, args=<optimized out>) at failures.c:471 status = 0 #4 0x00007f5d132d8d81 in i_internal_fatal_handler (ctx=<optimized out>, format=<optimized out>, args=<optimized out>) at failures.c:866 No locals. #5 0x00007f5d1322a25a in i_panic (format=format@entry=0x7f5d1331a730 "file %s: line %d (%s): assertion failed: (%s)") at failures.c:523 ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, timestamp_usecs = 0, log_prefix = 0x0, log_prefix_type_pos = 0} args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7ffdccb024d0, reg_save_area = 0x7ffdccb02410}} #6 0x00007f5d132729ad in http_client_request_send_more (req=req@entry=0x561e524c2a68, pipelined=pipelined@entry=false) at http-client-request.c:1240 conn = 0x561e524aa5f0 cctx = 0x561e52427fa8 output = 0x561e524abf60 res = <optimized out> error = <optimized out> offset = <optimized out> __func__ = "http_client_request_send_more" #7 0x00007f5d13277101 in http_client_connection_continue_request (conn=0x561e524aa5f0) at http-client-connection.c:1342 tmp_conn = 0x561e524aa5f0 count = 1 ret = <optimized out> reqs = <optimized out> req = 0x561e524c2a68 pipelined = false #8 http_client_connection_output (conn=0x561e524aa5f0) at http-client-connection.c:1374 output = <optimized out> ret = <optimized out> #9 0x00007f5d133022b0 in stream_send_io (fstream=0x561e524ab320) at ostream-file.c:520 ostream = 0x561e524ab3c0 iostream = 0x561e524ab320 use_cork = true ret = <optimized out> ---Type <return> to continue, or q <return> to quit--- #10 0x00007f5d132f0f45 in io_loop_call_io (io=0x561e524a4fb0) at ioloop.c:714 ioloop = 0x561e524c2840 t_id = 6 __func__ = "io_loop_call_io" #11 0x00007f5d132f28fb in io_loop_handler_run_internal (ioloop=ioloop@entry=0x561e524c2840) at ioloop-epoll.c:222 ctx = 0x561e524a4f80 events = <optimized out> list = 0x561e524ae350 io = <optimized out> tv = {tv_sec = 0, tv_usec = 0} events_count = <optimized out> 
msecs = <optimized out> ret = 1 i = 0 call = <optimized out> __func__ = "io_loop_handler_run_internal" #12 0x00007f5d132f1049 in io_loop_handler_run (ioloop=ioloop@entry=0x561e524c2840) at ioloop.c:766 __func__ = "io_loop_handler_run" #13 0x00007f5d132f1288 in io_loop_run (ioloop=0x561e524c2840) at ioloop.c:739 __func__ = "io_loop_run" #14 0x00007f5d1328020d in http_client_wait (client=0x561e52425f98) at http-client.c:339 prev_ioloop = 0x561e523f0ff0 client_ioloop = 0x561e524c2840 prev_client_ioloop = 0x561e523f0ff0 __func__ = "http_client_wait" #15 0x00007f5d11440174 in solr_connection_select (conn=0x561e52425ed0, query=<optimized out>, pool=<optimized out>, box_results_r=0x7ffdccb02780) at solr-connection.c:206 lctx = {result_pool = 0x561e5249dba0, payload = 0x0, io = 0x0, request_status = 0, parser = 0x0, results = 0x0} http_req = 0x561e5249dfd8 url = <optimized out> #16 0x00007f5d1143c5d4 in fts_backend_solr_get_last_uid (_backend=0x561e52425e10, box=0x561e52452538, last_uid_r=0x7ffdccb02824) at fts-backend-solr.c:237 hdr = {last_indexed_uid = 0, settings_checksum = 0, unused = 0} #17 0x00007f5d125af3fe in fts_backend_get_last_uid (backend=<optimized out>, box=box@entry=0x561e52452538, last_uid_r=last_uid_r@entry=0x7ffdccb02824) at fts-api.c:106 hdr = {last_indexed_uid = 3434097176, settings_checksum = 32765, unused = 325539906} #18 0x00007f5d125b5952 in fts_mailbox_get_last_cached_seq (box=box@entry=0x561e52452538, seq_r=seq_r@entry=0x7ffdccb02874) at fts-storage.c:85 flist = <optimized out> seq1 = 22046 seq2 = 1380210400 last_uid = 22046 ---Type <return> to continue, or q <return> to quit--- #19 0x00007f5d125b6ef6 in fts_mail_precache_init (_mail=<optimized out>, _mail=<optimized out>) at fts-storage.c:486 ft = 0x561e52499650 flist = 0x561e5241e928 last_seq = 22046 #20 fts_mail_index (_mail=0x561e524ad188) at fts-storage.c:507 ft = 0x561e52499650 flist = 0x561e5241e928 pmail = 0x561e524ad188 #21 fts_mail_precache (_mail=0x561e524ad188) at fts-storage.c:571 
_data_stack_cur_id = 5 mail = 0x561e524ad188 fmail = <optimized out> ft = 0x561e52499650 __func__ = "fts_mail_precache" #22 0x00007f5d1239b66a in virtual_mail_precache (mail=<optimized out>) at virtual-mail.c:265 vmail = <optimized out> backend_mail = 0x561e524ad188 p = <optimized out> #23 0x00007f5d125b6ba6 in fts_mail_precache (_mail=0x561e5248e0e8) at fts-storage.c:564 mail = 0x561e5248e0e8 fmail = 0x561e5248e650 ft = 0x561e52458670 __func__ = "fts_mail_precache" #24 0x00007f5d135dbb0e in mail_precache (mail=0x561e5248e0e8) at mail.c:453 _data_stack_cur_id = 4 p = 0x561e5248e0e8 #25 0x0000561e504cb924 in index_mailbox_precache (conn=<optimized out>, box=0x561e52435a18) at master-connection.c:104 counter = 1 max = 172 percentage_sent = 0 storage = <optimized out> status = {messages = 172, recent = 0, unseen = 0, uidvalidity = 1622689531, uidnext = 173, first_unseen_seq = 0, first_recent_uid = 173, last_cached_seq = 0, highest_modseq = 0, highest_pvt_modseq = 0, keywords = 0x0, permanent_flags = 0, flags = 0, permanent_keywords = false, allow_new_keywords = false, nonpermanent_modseqs = false, no_modseq_tracking = false, have_guids = true, have_save_guids = true, have_only_guid128 = false} uids = <optimized out> username = 0x561e524158c8 "user4@email.com" first_uid = 1 percentage_str = "0\000\000" percentage = <optimized out> error = 1379909712 trans = 0x561e524641a0 ---Type <return> to continue, or q <return> to quit--- ctx = 0x561e52434ee0 last_uid = 2 ret = 0 box_vname = 0x561e52435db0 "Virtual/All" errstr = <optimized out> search_args = 0x0 mail = 0x561e5248e0e8 metadata = {guid = '\000' <repeats 15 times>, virtual_size = 0, physical_size = 0, first_save_date = 0, cache_fields = 0x0, precache_fields = (MAIL_FETCH_STREAM_HEADER | MAIL_FETCH_STREAM_BODY | MAIL_FETCH_RECEIVED_DATE | MAIL_FETCH_SAVE_DATE | MAIL_FETCH_PHYSICAL_SIZE | MAIL_FETCH_VIRTUAL_SIZE | MAIL_FETCH_UIDL_BACKEND | MAIL_FETCH_GUID | MAIL_FETCH_POP3_ORDER), backend_ns_prefix = 0x0, 
backend_ns_type = (unknown: 0)} seq = <optimized out> #26 index_mailbox (user=<optimized out>, user=<optimized out>, what=<optimized out>, max_recent_msgs=<optimized out>, mailbox=<optimized out>, conn=0x561e523fc070) at master-connection.c:207 box = 0x561e52435a18 status = {messages = 0, recent = 0, unseen = 733118976, uidvalidity = 3265295405, uidnext = 3434097280, first_unseen_seq = 32765, first_recent_uid = 1, last_cached_seq = 0, highest_modseq = 94688228871760, highest_pvt_modseq = 140037729927678, keywords = 0x7ffdccb02a80, permanent_flags = 1379864144, flags = 22046, permanent_keywords = false, allow_new_keywords = false, nonpermanent_modseqs = false, no_modseq_tracking = false, have_guids = true, have_save_guids = false, have_only_guid128 = true} errstr = <optimized out> ret = 0 ns = <optimized out> path = 0x561e523eb6d0 "/home/vmail/nas10/domains/R/email.com/user4/Maildir/virtual/All" error = 172 sync_flags = <optimized out> #27 master_connection_input_line (line=<optimized out>, conn=0x561e523fc070) at master-connection.c:249 str = <optimized out> max_recent_msgs = 0 ret = <optimized out> args = <optimized out> input = {parent_event = 0x0, module = 0x561e504cbfff "mail", service = 0x561e504cbff0 "indexer-worker", username = 0x561e523e8038 "user4@email.com", session_id = 0x0, session_id_prefix = 0x561e523e805b "x0EZ2dPDiuW5Ydlx", session_create_time = 0, local_ip = {family = 0, u = {ip6 = { __in6_u = {__u6_addr8 = '\000' <repeats 15 times>, __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = {__in6_u = { __u6_addr8 = '\000' <repeats 15 times>, __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, local_port = 0, remote_port = 0, userdb_fields = 0x0, forward_fields = 0x0, flags_override_add = (unknown: 0), flags_override_remove = (unknown: 0), no_userdb_lookup = false, debug = false, conn_secured = false, conn_ssl_secured = false} service_user 
= 0x561e523ffe30 ---Type <return> to continue, or q <return> to quit--- user = 0x561e524157a8 error = 0x0 #28 master_connection_input (conn=0x561e523fc070) at master-connection.c:289 _data_stack_cur_id = 3 line = <optimized out> #29 0x00007f5d132f0f45 in io_loop_call_io (io=0x561e523fc0b0) at ioloop.c:714 ioloop = 0x561e523f0ff0 t_id = 2 __func__ = "io_loop_call_io" #30 0x00007f5d132f28fb in io_loop_handler_run_internal (ioloop=ioloop@entry=0x561e523f0ff0) at ioloop-epoll.c:222 ctx = 0x561e523f1c30 events = <optimized out> list = 0x561e523fc110 io = <optimized out> tv = {tv_sec = 2147483, tv_usec = 0} events_count = <optimized out> msecs = <optimized out> ret = 1 i = 0 call = <optimized out> __func__ = "io_loop_handler_run_internal" #31 0x00007f5d132f1049 in io_loop_handler_run (ioloop=ioloop@entry=0x561e523f0ff0) at ioloop.c:766 __func__ = "io_loop_handler_run" #32 0x00007f5d132f1288 in io_loop_run (ioloop=0x561e523f0ff0) at ioloop.c:739 __func__ = "io_loop_run" #33 0x00007f5d1325bbb3 in master_service_run (service=0x561e523f0e50, callback=callback@entry=0x561e504cb3b0 <client_connected>) at master-service.c:853 No locals. #34 0x0000561e504cb1f7 in main (argc=1, argv=0x561e523f0b90) at indexer-worker.c:76 storage_service_flags = (MAIL_STORAGE_SERVICE_FLAG_USERDB_LOOKUP | MAIL_STORAGE_SERVICE_FLAG_TEMP_PRIV_DROP | MAIL_STORAGE_SERVICE_FLAG_NO_IDLE_TIMEOUT) c = <optimized out> (gdb) exit
I hope this can help to fix it.
Thanks
--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice
participants (2)
- Aki Tuomi
- Alessio Cecchi