Solr connection timeout hardwired to 60s
M. Balridge
dovecot at r.paypc.com
Fri Apr 5 03:42:08 EEST 2019
> I'm a denizen of the solr-user at lucene.apache.org mailing list.
> [...]
> Here's a wiki page that I wrote about that topic. This wiki is going
> away next month, but for now you can still access it:
>
> https://wiki.apache.org/solr/SolrPerformanceProblems
That's a great resource, Shawn.
I am about to put together a test case for a comprehensive FTS setup around
Dovecot, with the goal of exposing proximity keyword searching across email
silos containing tens of terabytes (most of the "bulk" is attachments, each
of which gets reduced to plaintext where possible). Figure thousands of users
with decades of email (80,000 to 750,000 messages per user).
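For the attachment-to-plaintext step, I'm picturing something like the
following (just a sketch; it assumes the tika-python package talking to an
Apache Tika server, which is my choice of tooling, not anything Dovecot
ships):

    # Sketch: reduce one attachment to plaintext before it is indexed.
    # Assumes "pip install tika" and a reachable Tika server; by default
    # tika-python starts one on localhost the first time it is called.
    from tika import parser

    def attachment_to_text(path):
        """Return extracted plaintext, or None if Tika finds nothing."""
        parsed = parser.from_file(path)
        content = parsed.get("content")
        return content.strip() if content else None

    if __name__ == "__main__":
        text = attachment_to_text("/tmp/example.pdf")  # hypothetical path
        if text:
            print(text[:200])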
My main background is in software engineering (C/C++/Python/assembler), but I
have been forced into sysadmin tasks during many stretches of my work. I
vividly remember the tedium of dealing with Java and its GC: tuning it to
avoid stalls, and feeding its ravenous appetite for RAM.
It looks like those problems are still with us, many versions later. For
corporations with effectively infinite budgets, throwing crazy money at the
problem is "fine" (>1TB RAM, all PCIe SSDs, etc.), but I am worried that I
will hit a wall where I have to spend a fortune just to keep FTS performing
reasonably well before I even reach the 10,000-user mark.
I realise the only way to keep performance reasonable is to shard the index
database heavily, but I am concerned about how well that process works in
practice without a great deal of sysadmin hand-holding. I would ideally
prefer that decisions about how and where to shard be driven by volume
heuristics rather than made manually. I realise that a human will be
necessary to add more hardware to the pools, but what are my options for
scaling the system by orders of magnitude? (A sketch of the sort of routing
I have in mind follows.)
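As a concrete example of "heuristic rather than manual": SolrCloud's
compositeId router picks the shard by hashing the part of the document ID
before the "!", so one user's mail colocates automatically. The ID layout
below is my own invention; only the "user!doc" (and "user/bits!doc") prefix
convention comes from Solr:

    # Sketch: compositeId-style document IDs for SolrCloud routing.
    def solr_doc_id(user, mailbox_guid, uid, spread_bits=0):
        # "user!rest": Solr hashes "user" to choose the shard, keeping
        # all of that user's messages together.
        # "user/2!rest": spreads an oversized user across the portion of
        # the hash range covered by 2 bits, i.e. several shards.
        prefix = f"{user}/{spread_bits}" if spread_bits else user
        return f"{prefix}!{mailbox_guid}:{uid}"

    print(solr_doc_id("mbalridge", "f3c1-guid", 4711))     # hypothetical IDs
    print(solr_doc_id("hugeuser", "f3c1-guid", 4712, 2))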
What is a general rule of thumb for RAM and SSD requirements as a fraction
of the indexed mail corpus, to keep query latency at 200ms or less? How do
people deal with Java's stop-the-world GC pauses, other than simply doubling
or tripling every instance?
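To make the RAM question concrete, this is the back-of-the-envelope
calculation I'd like vetted ratios for (the 25% index ratio and the 50-100%
cache fraction are my guesses, not established figures):

    # Sketch: sizing arithmetic with placeholder ratios I'd like
    # real-world numbers for.
    corpus_tb = 40                  # mail + attachments, plaintext-reduced
    index_ratio = 0.25              # guess: index ~25% of corpus size
    cache_lo, cache_hi = 0.5, 1.0   # guess: page-cache RAM vs index size

    index_tb = corpus_tb * index_ratio
    print(f"index ~{index_tb:.0f} TB; RAM for caching "
          f"~{index_tb * cache_lo:.0f}-{index_tb * cache_hi:.0f} TB")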
I am wondering how well alternatives to Solr work in these situations
(Elasticsearch, Xapian, and any others I may have missed).
Regards,
=M=