Hi,
Maybe a stupid question :)
Is it possible to have 3 directors (frontends) but without a ring?
All directors connect to the same Dovecot backends, and all backends have the same login_trusted_networks.
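For context, with 2.2-era directors the ring membership comes directly from the director_servers setting: every director lists all directors there, so truly ring-less directors would each balance users independently. A minimal sketch of the usual block (all addresses are placeholders, not taken from this thread):

```conf
# dovecot.conf sketch (2.2-style director); all addresses are placeholders
director_servers = 10.0.0.11 10.0.0.12 10.0.0.13        # this list defines the ring
director_mail_servers = 10.0.0.21 10.0.0.22 10.0.0.23 10.0.0.24 10.0.0.25

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090
  }
}
service imap-login {
  executable = imap-login director
}
```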
-- Maciej Miłaszewski, IQ PL Sp. z o.o., Senior System Administrator
On 6.3.2019 14.26, Maciej Milaszewski IQ PL via dovecot wrote:
Hi,
Maybe a stupid question :)
Is it possible to have 3 directors (frontends) but without a ring?
All directors connect to the same Dovecot backends, and all backends have the same login_trusted_networks.
Why would you even use a director then?
Aki
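Aki's point: the director exists to pin each user to one backend so that a user's index files are never opened on two backends at once; independent ring-less directors would each compute their own user-to-backend mapping. A minimal sketch of that mapping idea (this is only an illustration of deterministic hashing, not Dovecot's actual director algorithm; backend addresses are hypothetical):

```python
import hashlib

# Hypothetical backend list. The director's job is to keep each user
# pinned to the same backend so mailbox indexes see no concurrent access.
BACKENDS = ["10.0.0.21", "10.0.0.22", "10.0.0.23", "10.0.0.24", "10.0.0.25"]

def pick_backend(username: str, backends=BACKENDS) -> str:
    """Deterministically map a username onto one backend (director-style)."""
    digest = hashlib.md5(username.encode("utf-8")).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# The same user always lands on the same backend:
assert pick_backend("alice@example.com") == pick_backend("alice@example.com")
```

With several directors that do not share state, nothing guarantees they all agree on this mapping once a backend is marked down on one of them but not the others.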
Hi,
Thanks for the reply. I ask because I had a problem with the director cluster: for the second time in 20 days dovecot/director crashed. I use keepalived, and at the critical moment keepalived moved the VIP to the second server.
Following your advice I removed the "broken" director from the ring, but that did not help (the remaining director stayed frozen). Stopping dovecot and killing the dovecot/director process solved the problem.
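For the record, the usual way to drop a broken node from a 2.2 ring is via doveadm; a hedged sketch (the address is a placeholder, and `doveadm director ring remove` assumes a 2.2.x build recent enough to have the ring subcommands):

```shell
# inspect ring health first
doveadm director ring status
# remove the broken director from the ring (placeholder address)
doveadm director ring remove 10.0.0.12
# verify user -> backend assignments still look sane
doveadm director status
```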
# 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.rc1 (debaa297)
# OS: Linux 3.16.0-0.bpo.4-amd64 x86_64 Debian 8.11
dmesg (all entries timestamped Wed Mar 6 08:48:06 2019):

Node 0 DMA free:15896kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15980kB managed:15896kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 3191 32178 32178
Node 0 DMA32 free:133464kB min:4456kB low:5568kB high:6684kB active_anon:510560kB inactive_anon:4kB active_file:1192668kB inactive_file:1192680kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3345344kB managed:3270788kB mlocked:0kB dirty:2156kB writeback:16kB mapped:1624kB shmem:92kB slab_reclaimable:46592kB slab_unreclaimable:59676kB kernel_stack:9328kB pagetables:2268kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 28987 28987
Node 0 Normal free:150456kB min:40488kB low:50608kB high:60732kB active_anon:5141796kB inactive_anon:0kB active_file:11706452kB inactive_file:11700840kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:30146560kB managed:29682736kB mlocked:0kB dirty:116324kB writeback:252kB mapped:12284kB shmem:168kB slab_reclaimable:513724kB slab_unreclaimable:282312kB kernel_stack:4048kB pagetables:29124kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 1 Normal free:434520kB min:45140kB low:56424kB high:67708kB active_anon:4234868kB inactive_anon:2364kB active_file:13623364kB inactive_file:13921736kB unevictable:0kB isolated(anon):0kB isolated(file):60kB present:33554432kB managed:33092420kB mlocked:0kB dirty:98120kB writeback:0kB mapped:18412kB shmem:2868kB slab_reclaimable:455768kB slab_unreclaimable:55264kB kernel_stack:6160kB pagetables:40952kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (R) 3*4096kB (M) = 15896kB
Node 0 DMA32: 25241*4kB (UEM) 4008*8kB (EM) 1*16kB (R) 2*32kB (R) 0*64kB 1*128kB (R) 1*256kB (R) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 133492kB
Node 0 Normal: 33839*4kB (UEM) 1509*8kB (UEM) 4*16kB (EM) 1*32kB (R) 1*64kB (R) 1*128kB (R) 1*256kB (R) 1*512kB (R) 1*1024kB (R) 1*2048kB (R) 0*4096kB = 151556kB
Node 1 Normal: 30859*4kB (EM) 38464*8kB (UM) 20*16kB (UM) 11*32kB (UR) 2*64kB (R) 5*128kB (R) 3*256kB (R) 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 434892kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
13336641 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 0kB
Total swap = 0kB
16765579 pages RAM
0 pages HighMem/MovableOnly
115503 pages reserved
0 pages hwpoisoned
netstat: page allocation failure: order:2, mode:0x204020
CPU: 38 PID: 44226 Comm: netstat Not tainted 3.16.0-5-amd64 #1 Debian 3.16.51-3+deb8u1
Hardware name: Dell Inc. PowerEdge R620/01W23F, BIOS 2.6.1 02/12/2018
0000000000000000 ffffffff8151f937 0000000000204020 ffff88082fc63b60
ffffffff81148d8f 0000000000000000 ffffffff818ea8b0 ffff880800000002
0000000100000041 ffff88082fffcc00 0000000000000000 0000000000000000
Call Trace:
<IRQ> [<ffffffff8151f937>] ? dump_stack+0x5d/0x78
[<ffffffff81148d8f>] ? warn_alloc_failed+0xdf/0x130
[<ffffffff8114d0ff>] ? __alloc_pages_nodemask+0x8ef/0xb50
[<ffffffff811943cb>] ? kmem_getpages+0x5b/0x110
[<ffffffff8119580f>] ? fallback_alloc+0x1cf/0x210
[<ffffffffa031c51d>] ? tg3_alloc_rx_data+0x6d/0x260 [tg3]
[<ffffffff81197a97>] ? __kmalloc+0x227/0x4e0
[<ffffffffa031c51d>] ? tg3_alloc_rx_data+0x6d/0x260 [tg3]
[<ffffffffa031c51d>] ? tg3_alloc_rx_data+0x6d/0x260 [tg3]
[<ffffffffa031e171>] ? tg3_poll_work+0x471/0xeb0 [tg3]
[<ffffffffa031ebe5>] ? tg3_poll_msix+0x35/0x140 [tg3]
[<ffffffff814307f9>] ? net_rx_action+0x129/0x250
[<ffffffff8106f461>] ? __do_softirq+0xf1/0x2d0
[<ffffffff8106f875>] ? irq_exit+0x95/0xa0
[<ffffffff81528da2>] ? do_IRQ+0x52/0xe0
[<ffffffff815268c1>] ? common_interrupt+0x81/0x81
<EOI> Mem-Info:
Node 0 DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
CPU 1: hi: 0, btch: 1 usd: 0
CPU 2: hi: 0, btch: 1 usd: 0
CPU 3: hi: 0, btch: 1 usd: 0
imap-login: page allocation failure: order:2, mode:0x204020
CPU: 38 PID: 5656 Comm: imap-login Not tainted 3.16.0-5-amd64 #1 Debian 3.16.51-3+deb8u1
Hardware name: Dell Inc. PowerEdge R620/01W23F, BIOS 2.6.1 02/12/2018
0000000000000000 ffffffff8151f937 0000000000204020 ffff88082fc63b98
ffffffff81148d8f 0000000000000000 ffffffff818ea8b0 ffff880800000002
000000012fffbe00 ffff88082fffcc00 0000000000000046 0000000000000001
Call Trace:
<IRQ> [<ffffffff8151f937>] ? dump_stack+0x5d/0x78
[<ffffffff81148d8f>] ? warn_alloc_failed+0xdf/0x130
[<ffffffff8114d0ff>] ? __alloc_pages_nodemask+0x8ef/0xb50
[<ffffffff811943cb>] ? kmem_getpages+0x5b/0x110
[<ffffffff8119580f>] ? fallback_alloc+0x1cf/0x210
[<ffffffffa031c51d>] ? tg3_alloc_rx_data+0x6d/0x260 [tg3]
[<ffffffff81197a97>] ? __kmalloc+0x227/0x4e0
[<ffffffffa031c51d>] ? tg3_alloc_rx_data+0x6d/0x260 [tg3]
[<ffffffffa031c51d>] ? tg3_alloc_rx_data+0x6d/0x260 [tg3]
[<ffffffffa031e171>] ? tg3_poll_work+0x471/0xeb0 [tg3]
[<ffffffffa031ebe5>] ? tg3_poll_msix+0x35/0x140 [tg3]
[<ffffffff814307f9>] ? net_rx_action+0x129/0x250
[<ffffffff810a786f>] ? run_rebalance_domains+0x3f/0x190
[<ffffffff8106f461>] ? __do_softirq+0xf1/0x2d0
[<ffffffff81527a5c>] ? do_softirq_own_stack+0x1c/0x30
<EOI> [<ffffffff8106f6dd>] ? do_softirq+0x4d/0x60
[<ffffffff8106f774>] ? __local_bh_enable_ip+0x84/0x90
[<ffffffff8147681c>] ? tcp_recvmsg+0x4c/0xc40
[<ffffffff810a0676>] ? set_next_entity+0x56/0x70
[<ffffffff810a74a1>] ? pick_next_task_fair+0x6e1/0x820
[<ffffffff810135dc>] ? __switch_to+0x15c/0x5a0
[<ffffffff8149f44a>] ? inet_recvmsg+0x6a/0x80
[<ffffffff8141607e>] ? sock_aio_read.part.7+0xfe/0x120
[<ffffffff811af49c>] ? do_sync_read+0x5c/0x90
[<ffffffff811afd45>] ? vfs_read+0x135/0x170
[<ffffffff811b08d2>] ? SyS_read+0x42/0xa0
[<ffffffff81525c00>] ? system_call_fast_compare_end+0x10/0x15
For now I have added memory and am tuning min_free_kbytes in the kernel.
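On the log above: those are order-2 atomic allocation failures in the tg3 receive path, which raising vm.min_free_kbytes can mitigate by keeping a larger reserve of free pages. A hedged sketch of the persistent setting (the value is only an example, not a recommendation for this host):

```conf
# /etc/sysctl.d/99-vm.conf -- example value, tune to the host's RAM
vm.min_free_kbytes = 262144
```

Apply with `sysctl --system` (or reboot).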
My question is about a solution with keepalived + haproxy:
1) director1 ---- ring ---- director2
2) director3 - standalone, as backup
Both setups 1 and 2 connect to the 5 Dovecot backends.
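The failover side of such a layout is usually keepalived holding the VIP on the active director and moving it when the node fails. A hedged sketch (interface, router id and VIP are placeholders, not from this setup):

```conf
# keepalived.conf sketch -- interface, virtual_router_id and VIP are placeholders
vrrp_instance DIRECTOR_VIP {
    state BACKUP              # let priority decide the initial master
    interface eth0
    virtual_router_id 51
    priority 150              # highest on director1, lower on director2/3
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24
    }
}
```

A TCP-level health check (e.g. haproxy or a keepalived track_script probing port 143 on the director) helps here, since a frozen director process, as described above, keeps the host itself alive and plain host-level VRRP will not fail over.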
On 07.03.2019 08:43, Aki Tuomi via dovecot wrote:
On 6.3.2019 14.26, Maciej Milaszewski IQ PL via dovecot wrote:
Hi,
Maybe a stupid question :)
Is it possible to have 3 directors (frontends) but without a ring?
All directors connect to the same Dovecot backends, and all backends have the same login_trusted_networks.
Why would you even use a director then?
Aki