Quoting Stan Hoeppner <stan@hardwarefreak.com>:

> On 5/2/2013 8:12 AM, lst_hoe02@kwsoft.de wrote:
>
>> IMHO if you say "VM" than the filesystem inside the guest doesn't matter that much.
>
> Malarky.

If you are going to insult someone, you should maybe spell it so that non-native speakers can look it up (malarkey).

>> The difference of ext4/xfs are mostly the knowledge and adjustability for special (high-end) hardware and the like. With a
>
> XFS doesn't require "high end" hardware to demonstrate its advantages over EXT4. In his LCA 2012 presentation on XFS development, Dave Chinner showed data from IIRC a 12 disk RAID0 array, which is hardly high end. Watch the presentation and note the massive lead XFS has over EXT4 (and BTRFS) in most areas. The performance gap is quite staggering. You'll see the same performance, and differences, in a VM or on bare hardware.

It is not stunning that an XFS developer comes up with a setup where XFS is the fastest of all.

>> Hypervisor providing some standard I/O channel and hiding/handling the hardware details itself, most of the differences are gone. With this in
>
> Again, malarky. The parallel performance in XFS resides in multiple threads and memory structures, b+ trees, and how these are executed and manipulated, and via the on disk layout of AGs and how they're written to in parallel. Virtualization doesn't change nor limit any of this. The block device driver, not the filesystem, talks through the hypervisor to the hardware. No hypervisor imposes limits on XFS parallelism or performance, nor block device drivers. Some may be configured to prioritize IO amongst guests, but that's a different issue entirely.

While it might be true that the XFS threading and non-blocking/parallel design still gain some benefit, that is no longer true for everything that relies on the "disk" layout or on estimates of I/O channels and disk spindles.
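
For what it is worth, the kind of parallelism meant above can be pictured with a toy sketch (Python; paths and sizes are made up, and this is not a benchmark): several threads writing into separate top-level directories, which XFS tends to spread across different allocation groups. Whether that buys anything inside a guest still depends on what the hypervisor does with the I/O underneath.

#!/usr/bin/env python3
# Toy sketch: parallel writers into separate top-level directories.
# On XFS, new top-level directories tend to land in different allocation
# groups, so these streams can be allocated and written largely independently.
# The mount point and sizes are made up for illustration only.
import os
import threading

MOUNTPOINT = "/mnt/testfs"      # assumption: a scratch filesystem mounted here
N_STREAMS = 4                   # one writer thread per directory
CHUNK = b"x" * (1024 * 1024)    # 1 MiB per write
CHUNKS_PER_FILE = 64            # 64 MiB per stream

def writer(stream_id: int) -> None:
    d = os.path.join(MOUNTPOINT, f"stream{stream_id}")
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, "data.bin"), "wb") as f:
        for _ in range(CHUNKS_PER_FILE):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())    # make sure the data really reaches the block layer

threads = [threading.Thread(target=writer, args=(i,)) for i in range(N_STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()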

> Worthy of note here is that nearly all XFS testing performed by the developers today is done within virtual machines on filesystems that reside within sparse files atop another XFS filesystem--not directly on hardware. According to you, this double layer of virtualization, OS and filesystem, would further eliminate all meaningful performance differences between XFS and EXT4. Yet this is not the case at all because EXT4 doesn't yet handle sparse files very well, so the XFS lead increases.

So you have confirmed my suspicion that XFS developers will find a case where it matters in favour of XFS ;-)
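
As a side note for readers who have not met the term: a filesystem image in a "sparse file" is just a big file whose never-written blocks occupy no space on the host filesystem. A minimal sketch (Python; file name and size are made up) of creating one and comparing apparent and allocated size:

#!/usr/bin/env python3
# Minimal sketch: create a sparse file and compare its apparent size with the
# blocks actually allocated on disk. File name and size are made up.
import os

path = "disk.img"
size = 10 * 1024 ** 3            # apparent size: 10 GiB

with open(path, "wb") as f:
    f.truncate(size)             # extends the file without writing any data

st = os.stat(path)
print("apparent size :", st.st_size, "bytes")
print("allocated     :", st.st_blocks * 512, "bytes")  # st_blocks counts 512-byte units

The claim in the quoted paragraph is about how well the host filesystem copes once the guest starts filling those holes in.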

In real-world VM deployments there are most of the time VMFS volumes (VMware) underneath, or NTFS (Hyper-V), and in many cases these are even carved out of some form of SAN device that does its own mapping of filesystem blocks to physical blocks. With this, a carefully chosen disk layout inside the guest doesn't matter at all if the hypervisor does not or cannot map it usefully to the hardware.

>> mind your question should maybe more of "what filesystem is more Hypervisor friendly". For this i would suspect the simpler the better, so i would choose ext4.
>
> Again, malarky. The hypervisor imposes no limits on filesystem performance, other than the CPU cycles, scheduling, and RAM overhead of the hypervisor itself. I.e. the same things imposed on all aspects of guest operation.

You have forgotten that the hypervisor also provides only a standard device "API" for the I/O channel, which limits the possibility to do any hardware estimation/optimization inside the guest. Many traditional performance tweaks, such as physical block layout or alignment, therefore don't work as expected. The farther away from the hardware you are in terms of layers, the more difficult it gets to optimize I/O speed with the traditional approaches. You can see proof of this in the myriad of benchmarks flying around, each with a different clear winner depending on who ran the benchmark.
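
To see how little the guest can actually know, one can look at what the virtual block device reports. A small sketch (Python, assuming a Linux guest; the device name "vda" is only an example) that prints the I/O topology hints the traditional alignment tweaks would be based on:

#!/usr/bin/env python3
# Sketch: print the I/O topology hints a Linux guest sees for a block device.
# Inside a VM these values come from the virtual device (e.g. virtio), not from
# the real disks or RAID underneath, so alignment tweaks based on them may be
# meaningless. The device name "vda" is just an example.
import os

dev = "vda"
queue = f"/sys/block/{dev}/queue"

for name in ("logical_block_size", "physical_block_size",
             "minimum_io_size", "optimal_io_size"):
    path = os.path.join(queue, name)
    try:
        with open(path) as f:
            print(f"{name:20s} {f.read().strip()} bytes")
    except OSError:
        print(f"{name:20s} (not available)")

On a typical virtio disk, optimal_io_size often simply reads 0, i.e. the guest is told nothing about stripe units or spindle count.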

I know your history of insisting that you are right in all cases, so this is my last post on this subject. Every reader should try to understand the differences on his/her own anyway.
Regards
Andreas