This is again about real-world numbers (which I like so much for being authentic ;-). The context: the throughput and time overhead of a loop device used as a poor (or late) man's replacement for a real disk partition, yep I know, on the copy-on-write filesystem btrfs, dedicated exclusively to logging, that is, appending over and over. Why? The loop device may overflow with data without affecting the underlying filesystem; see Btrfs subvolume quota still in its infancy for more on the whys and on what I tried to get btrfs quota to work (with btrfs version 4.2.2). On that note, see debian.org Btrfs for a down-to-earth assessment of btrfs to date, which even names the earliest version number (4.4) one should start from for anything near production. Anyway, what follows adapts the test setup from Performance of loopback filesystems (prime credits go there) and expands the layout somewhat for the btrfs nodatacow flag. Here we go.
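Before the numbers, a rough sketch of the kind of loop-device setup this is about. All paths, sizes, and device names below are illustrative stand-ins, not the ones from the actual test box, and the commands need root:

```shell
# Illustrative sketch (paths/sizes/devices are assumptions, needs root):
# a fixed-size backing file, so runaway logs fill the loop fs, not the host fs.
truncate -s 1G /var/log/loopfile       # sparse backing file on the btrfs host fs
losetup /dev/loop0 /var/log/loopfile   # attach it to a loop device
mkfs.ext4 /dev/loop0                   # any fs will do inside; ext4 as an example
mkdir -p /var/log/app
mount /dev/loop0 /var/log/app          # logging now appends inside the bounded loop fs

# For the nodatacow variant, disable copy-on-write on an empty
# directory first, then create the backing file inside it (the +C
# attribute only takes effect on files created afterwards):
mkdir /var/log/nocow
chattr +C /var/log/nocow
truncate -s 1G /var/log/nocow/loopfile
```

The point of the backing file is exactly the overflow isolation mentioned above: the loop filesystem can run full without eating into the underlying btrfs.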
Have this baseline test, if you like, on the raw iron. Well, it's not really raw iron, it's VMware with tons of storage below, and the shown performance is terrible, I know, and I only take one test set per count, but that won't matter. There's something to start off with.
mkdir /tmp/loop0
dd if=/dev/zero bs=1M of=/tmp/loop0/file oflag=sync count=1000
1048576000 bytes (1.0 GB) copied, 8.9605 s, 117 MB/s
1048576000 bytes (1.0 GB) copied, 6.52867 s, 161 MB/s
1048576000 bytes (1.0 GB) copied, 5.35716 s, 196 MB/s
1048576000 bytes (1.0 GB) copied, 5.48745 s, 191 MB/s
1048576000 bytes (1.0 GB) copied, 5.14736 s, 204 MB/s
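The five summary lines come from repeating the same dd write. A minimal harness for that could look like the following; the temp directory and the smaller count are stand-ins for the actual /tmp/loop0 and count=1000 above:

```shell
# Run the same synchronous dd write five times and keep only the
# "bytes copied" summary line of each run. Directory and count are
# illustrative stand-ins for the test above.
dir=$(mktemp -d)
for i in $(seq 1 5); do
    dd if=/dev/zero bs=1M of="$dir/file" oflag=sync count=100 2>&1 | tail -n 1
done
rm -rf "$dir"
```

oflag=sync makes each 1M block a synchronous write, which is what makes the numbers comparable across filesystems rather than just measuring the page cache.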