
Log-dedicated loop device throughput and time overhead on btrfs 4.x

This is again about real-world numbers (which I like so much for being authentic ;-). The context is the throughput and time overhead of a loop device, serving as a poor or late man’s replacement for a real disk partition, yep I know, on the copy-on-write filesystem btrfs, dedicated exclusively to logging, that is, appending over and over. Why? Because the loop device may overflow with data without affecting the underlying filesystem; see Btrfs subvolume quota still in its infancy with btrfs version 4.2.2 for more of the whys and for what I tried to get a btrfs subvolume with quota to work. By the way, on that topic, see debian org Btrfs for a down-to-earth assessment of btrfs to date, which even recommends a minimum version number (4.4) to start from when going anywhere near production. Anyway, what follows adapts the test setup from Performance of loopback filesystems, prime credits go there, and expands the layout somewhat for the btrfs C or nodatacow flag. Here we go.

Take this baseline test, if you like, on the raw iron. Well, it’s not really raw iron, it’s VMware with tons of storage below, and the shown performance is terrible, I know, and I only take one test set of bs / count, but that won’t matter. It’s just something to start off from.

mkdir /tmp/loop0
dd if=/dev/zero bs=1M of=/tmp/loop0/file oflag=sync count=1000
1048576000 bytes (1.0 GB) copied, 8.9605 s, 117 MB/s
1048576000 bytes (1.0 GB) copied, 6.52867 s, 161 MB/s
1048576000 bytes (1.0 GB) copied, 5.35716 s, 196 MB/s
1048576000 bytes (1.0 GB) copied, 5.48745 s, 191 MB/s
1048576000 bytes (1.0 GB) copied, 5.14736 s, 204 MB/s
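
For orientation, here is a minimal sketch of how such a log-dedicated loop device can be laid out on btrfs, the chattr +C step being the C / nodatacow variant mentioned above. The paths, the 2G size and the ext4 filesystem inside the loop device are assumptions for illustration, not the actual test layout.

# back a loop device by a fixed-size file on the btrfs filesystem
# (paths, size and inner filesystem are made up for this sketch)
mkdir -p /mnt/btrfs/loop /mnt/log

# optional: set the btrfs C (nodatacow) flag; chattr +C only takes
# effect while the backing file is still empty
touch /mnt/btrfs/loop/logfile.img
chattr +C /mnt/btrfs/loop/logfile.img
truncate -s 2G /mnt/btrfs/loop/logfile.img

# attach the file to the next free loop device, create a filesystem, mount it
LOOPDEV=$(losetup --find --show /mnt/btrfs/loop/logfile.img)
mkfs.ext4 "$LOOPDEV"
mount "$LOOPDEV" /mnt/log

# rerun the same dd test against the loop-backed mount
dd if=/dev/zero bs=1M of=/mnt/log/file oflag=sync count=1000

Logging can then append into the loop-backed mount, and if it fills up, only the loop device runs out of space while the surrounding btrfs filesystem stays usable.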

(more…)

Oracle global temp tabs or nologging/append – saved redo in numbers

In an attempt to reduce the redo entries (and archive space) generated for the only temporarily computed contents of a couple of simple tables, I considered employing Global Temporary Tables as well as nologging inserts as redesign options. This article, however, is not meant to dive into the details of these redesign options at all. There are plenty of examples and discussions about the pros and cons of each option around the net. Most notable to mention, and a stimulus to post my own redo reduction test results, are Tim Hall’s compact articles on the subject, like https://oracle-base.com/articles/misc/temporary-tables and https://oracle-base.com/articles/misc/append-hint, which do include an assessment of the redo matter.

So, what’s the lineup, in short? On startup, some data will be sourced from throughout the database, loosely computed and crudely written into a couple of simple tables. Then some third party will read this data for its own purposes and will report completion, such that the data can eventually be discarded completely. To make sure that an iteration step always starts unsoiled, any data left over from a crashed predecessor is rubbed out as well. Nobody cares about backups of this data, and iteration management runs on other tables. You see, this is not really something that takes a database to complete successfully but…
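
Just to make the two candidate options concrete before the numbers, here is a minimal sqlplus sketch; the connect string, table and column names are placeholders of my own, only the global temporary table definition and the APPEND hint follow the standard syntax covered in the linked oracle-base articles.

# sketch only: connect string and object names are placeholders
sqlplus -s scott/tiger@orcl <<'SQL'
-- option 1: a global temporary table, rows live only for the session
CREATE GLOBAL TEMPORARY TABLE stage_results (
  id      NUMBER,
  payload VARCHAR2(200)
) ON COMMIT PRESERVE ROWS;

-- option 2: a nologging heap table filled by a direct-path (APPEND) insert
CREATE TABLE stage_results_nolog (
  id      NUMBER,
  payload VARCHAR2(200)
) NOLOGGING;

INSERT /*+ APPEND */ INTO stage_results_nolog
SELECT level, 'computed on startup' FROM dual CONNECT BY level <= 100000;
COMMIT;
SQL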

(more…)