Stan!
Sorry, I did not explain it well!
FULL
Spool to disk:   ~24 h   (transfer rate ~6 MB/s)
Despool to tape:  ~7 h   (transfer rate ~16 MB/s)

INCREMENTAL
Spool to disk:   ~11 h   (transfer rate ~300 KB/s)
Despool to tape: ~12 min (transfer rate ~16 MB/s)
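To put rough numbers on it (my own back-of-the-envelope estimate, assuming those rates are sustained averages), a full run moves roughly 400-500 GB while an incremental moves only ~12 GB, so the slow part is clearly the read side, not the tape:

# Rough sanity check of the figures above (my own estimate; assumes the
# quoted rates are sustained averages and MB = 10**6 bytes).
HOUR = 3600
runs = {
    "full spool    (6 MB/s  x 24 h)":   6e6   * 24 * HOUR,
    "full despool  (16 MB/s x 7 h)":    16e6  * 7  * HOUR,
    "incr spool    (300 KB/s x 11 h)":  300e3 * 11 * HOUR,
    "incr despool  (16 MB/s x 12 min)": 16e6  * 12 * 60,
}
for name, nbytes in runs.items():
    print(f"{name}: ~{nbytes / 1e9:.0f} GB")
# prints roughly 518, 403, 12 and 12 GB -- the full-run pair only differs
# because the quoted rates and durations are rounded.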
When doing a backup, we turn on another machine in the OCFS2 cluster, spool the data to disk from there, and then it goes from the disk to the tape.
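Just to make that flow concrete, a minimal sketch of the two stages (the paths /maildirs and /spool and the tape device /dev/nst0 are made up for illustration; the real backup software does its own spooling):

import subprocess

SOURCE = "/maildirs"                 # data read over OCFS2/iSCSI (assumed path)
SPOOL_FILE = "/spool/maildirs.tar"   # local disk spool area (assumed path)
TAPE = "/dev/nst0"                   # non-rewinding tape device (assumed)

# Stage 1: spool to disk. Walking lots of small maildir files is what keeps
# this stage at ~6 MB/s for a full run and ~300 KB/s for an incremental.
subprocess.run(["tar", "-cf", SPOOL_FILE, SOURCE], check=True)

# Stage 2: despool to tape. One big sequential read, so the tape streams
# at its full ~16 MB/s.
subprocess.run(["dd", "if=" + SPOOL_FILE, "of=" + TAPE, "bs=1M"], check=True)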
Nothing is on a SAN; everything goes through D-Link switches at 1 Gbit.
The storage system is not Sun either; the OCFS2 servers are connected to the storage via iSCSI, with OCFS2 running in virtual machines.
Sorry, my English is poor and that makes it harder to explain!
[]'s
f.rique
On Thu, Jan 20, 2011 at 3:17 PM, Jan-Frode Myklebust janfrode@tanso.net wrote:
On Thu, Jan 20, 2011 at 5:20 PM, Henrique Fernandes sf.rique@gmail.com wrote:
Not all, if this counts as large:
Filesystem     Size  Used Avail Use% Mounted on
/dev/gpfsmail  9.9T  8.7T  1.2T  88% /maildirs

Filesystem        Inodes    IUsed     IFree IUse% Mounted on
/dev/gpfsmail  105279488 90286634  14992854   86% /maildirs
how do you back up that data? :)
Same question!
I have about 1 TB used and it takes 22 hrs to back up the maildirs!
Our maildirs are spread in subfolders under /maildirs/[a-z0-9], where mail addresses starting with a are stored under /maildirs/a/, b in /maildirs/b, etc., and then we have distributed these top-level directories about evenly for backup by each host. So the 7 servers all run backups of different parts of the filesystem. The backups go to Tivoli Storage Manager, with its default incremental-forever policy, so there's not much data to back up.

The problem is that it's very slow to traverse all the directories and compare against what was already backed up. I believe we're also using around 20-24 hours for the daily incremental backups... so we will soon have to start looking at alternative ways of doing it (or get rid of the non-dovecot accesses to the maildirs, which are probably stealing quite a bit of performance from the file scans).
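For illustration only, the split could look roughly like this (the host names and the simple round-robin assignment are made up, not our actual TSM node config):

import string

TOP_LEVEL = sorted(string.ascii_lowercase + string.digits)  # the 36 /maildirs/[a-z0-9] dirs
HOSTS = ["backup%d" % i for i in range(1, 8)]               # 7 hypothetical backup servers

# Round-robin the top-level directories over the hosts, so each host
# scans and backs up only its own slice of the filesystem.
assignment = {h: [] for h in HOSTS}
for i, d in enumerate(TOP_LEVEL):
    assignment[HOSTS[i % len(HOSTS)]].append("/maildirs/" + d)

for host, dirs in assignment.items():
    print(host, dirs)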
One alternative is the "mmbackup" utility, which is supposed to use a much faster inode scan interface in GPFS:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fco...
but the last time we tested it, it was too fragile...
-jf