[]'s
f.rique
On Fri, Jan 21, 2011 at 8:06 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
Henrique Fernandes put forth on 1/21/2011 12:53 PM:
We think it is the OCFS2 and the size of the partition, because we can write a big file at an acceptable speed, but if we try to delete, create, or read lots of small files the speed is horrible. We think it is a DLM problem in propagating the locks and so on.
It's not the size of the filesystem that's the problem. But it is an issue with the DLM, and with the small RAID 10 set. This is why I recommended putting DLM on its own dedicated network segment, same with the iSCSI traffic, and making sure you're running full duplex GbE all round. DLM doesn't require GbE bandwidth, but the latency of GbE is lower than that of Fast Ethernet. I'm also assuming, since you didn't say, that you were running all your ethernet traffic over a single GbE port on each Xen host. That just doesn't scale when doing filesystem clustering. The traffic load is too great, unless you're idling all the time, in which case, why did you go OCFS? :)
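Just to make it concrete: moving the O2CB heartbeat/DLM onto its own segment is mostly a matter of which addresses you list in /etc/ocfs2/cluster.conf. Here is a minimal two-node sketch, assuming a hypothetical private 192.168.100.0/24 segment and hostnames xen1/xen2; add a node: stanza per host and bump node_count to match your real cluster:

    node:
        ip_port = 7777
        ip_address = 192.168.100.1
        number = 0
        name = xen1
        cluster = ocfs2

    node:
        ip_port = 7777
        ip_address = 192.168.100.2
        number = 1
        name = xen2
        cluster = ocfs2

    cluster:
        node_count = 2
        name = ocfs2

The name = values have to match each node's hostname, and the ip_address lines are what actually pin the heartbeat/DLM traffic to the private segment; the addresses above are placeholders, not a recommendation.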
Yeah, pretty much all traffic is on 2 GbE ports on the Xen hosts. The traffic is not high; on the Xen host it is about 30% of the link at peak times. I did not get your question, why did I go with OCFS? I only needed a clustered filesystem so my hosts could help with HA and performance.
Do you have any idea how to test the storage for maildir usage? We made a bash script that writes some directories and lots of files and then removes them, and so on.
This only does you any good if you have instrumentation set up to capture metrics while you run your test. You'll need to run iostat on the host running the script tests, along with iftop, and any OCFS monitoring tools. You'll need to use the EMC software to gather IOPS and bandwidth metrics from the CX4 during the test. You'll also need to make sure your aggregate test data size is greater than 6GB, which is 2x the size of the cache in the CX4. You need to hit the disks, hard, not the cache.
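Something along these lines is the kind of harness I mean. This is only a rough sketch, assuming the OCFS2 volume is mounted at /mnt/ocfs2 (a placeholder path) and that sysstat is installed for iostat; tune DIRS/FILES/SIZE so the aggregate stays above that 6GB mark:

    #!/bin/bash
    # Small-file churn test for a maildir-style workload on OCFS2.
    # /mnt/ocfs2 is a placeholder -- point TESTDIR at your real mount.
    TESTDIR=/mnt/ocfs2/stress.$$
    DIRS=500        # maildir-like directories
    FILES=400       # messages per directory
    SIZE=32k        # per message; 500*400*32k ~ 6.4GB, past 2x the CX4 cache

    # Log extended, timestamped device stats every 5 seconds during the run.
    iostat -xkt 5 > iostat.$(hostname).log 2>&1 &
    IOSTAT_PID=$!

    # Create phase: lots of directories and small files (metadata heavy).
    for d in $(seq 1 $DIRS); do
        mkdir -p "$TESTDIR/dir.$d"/{cur,new,tmp}
        for f in $(seq 1 $FILES); do
            dd if=/dev/zero of="$TESTDIR/dir.$d/new/msg.$f" bs=$SIZE count=1 2>/dev/null
        done
    done

    # Read phase: stat and read everything back.
    find "$TESTDIR" -type f -exec cat {} + > /dev/null

    # Delete phase: this is where the DLM lock traffic really shows up.
    rm -rf "$TESTDIR"

    kill $IOSTAT_PID

Running it from two nodes at once will show the DLM contention far more clearly than a single node will.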
EMC told us that once you start the Analyzer it makes the performance MUCH WORSE, so we are not considering using it right now. But thanks for the tips.
The best "test" is to simply instrument your normal user load and collect the performance data I mentioned.
Any better ideas?
Ditch iSCSI and move to fiber channel. A Qlogic 14 port 4Gb FC switch with all SFPs included is less than $2500 USD. You already have the FC ports in your CX4. You'd instantly quadruple the bandwidth of the CX4 and that of each Xen host, from 200 to 800 MB/s and 100 to 400 MB/s respectively. Four single port 4Gb FC HBAs, one for each server, will run you $2500-3000 USD. So for about $5k USD you can quadruple your bandwidth, and lower your latency.
I am from Brazil; stuff here is a little more expensive than that. And still, where I work it is not easy to get money to buy hardware.
I don't recall if you ever told us what your user load is. How many concurrent Dovecot user sessions are you supporting on average?
Last time I checked on my IPVS server (the one that balances between the nodes with OCFS2), it was something like 35 active connections per host and about 180 InActConn per host as well.
If I run doveadm on each server it shows about 25 users on each host, but as most of the system is webmail, it connects and disconnects pretty fast, as the IMAP protocol tells us to do, so the doveadm numbers keep changing a lot.
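By "run doveadm" I mean nothing fancier than doveadm who piped through the usual tools, roughly like the sketch below (the column layout is from memory, so treat it as approximate):

    # Roughly: distinct users with at least one session on this host
    doveadm who | tail -n +2 | awk '{ print $1 }' | sort -u | wc -l

    # Roughly: total sessions, summing the connection-count column
    doveadm who | tail -n +2 | awk '{ sum += $2 } END { print sum }'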
Appreciate your help!
No problem. SANs are one of my passions. :)
-- Stan