Log-dedicated loop device throughput and time overhead on btrfs 4.x

This is again about real-world numbers (which I like so much for being authentic 😉). The context is the throughput and time overhead of a loop device, used as a poor (or late) man's replacement for a real disk partition (yep, I know) on the copy-on-write filesystem btrfs, dedicated exclusively to logging, that is, appending over and over. Why? Because the loop device may overflow with data without affecting the underlying filesystem; see Btrfs subvolume quota still in its infancy with btrfs version 4.2.2 for more of the why's and for what I tried to get btrfs subvolumes with quota to work. By the way, on that topic, see debian org Btrfs for a down-to-earth assessment of btrfs to date, which even names a version number (4.4) to start from, at the earliest, anywhere near production. Anyway, what follows adapts a test setup as in Performance of loopback filesystems, prime credits go there, and expands the layout somewhat for the btrfs C or nodatacow flag. Here we go.

Have this baseline test, if you like, on the raw iron. Well, it's not raw iron really, it's VMware with tons of storage below, and the shown performance is terrible, I know, and I only take one test set of bs / count, but that won't matter. It's something to start off from.

mkdir /tmp/loop0
dd if=/dev/zero bs=1M of=/tmp/loop0/file oflag=sync count=1000
1048576000 bytes (1.0 GB) copied, 8.9605 s, 117 MB/s
1048576000 bytes (1.0 GB) copied, 6.52867 s, 161 MB/s
1048576000 bytes (1.0 GB) copied, 5.35716 s, 196 MB/s
1048576000 bytes (1.0 GB) copied, 5.48745 s, 191 MB/s
1048576000 bytes (1.0 GB) copied, 5.14736 s, 204 MB/s
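
To make this concrete, here's a minimal sketch of the kind of log-dedicated loop device layout the rest of the post measures. All paths and sizes are illustrative assumptions of mine, and the chattr +C step is the btrfs C / nodatacow variant mentioned above.

mkdir -p /data/btrfs/loopfiles /var/log/apploop
touch /data/btrfs/loopfiles/log.img
chattr +C /data/btrfs/loopfiles/log.img    # btrfs C / nodatacow flag, must be set while the file is still empty
dd if=/dev/zero of=/data/btrfs/loopfiles/log.img bs=1M count=2048
losetup /dev/loop0 /data/btrfs/loopfiles/log.img
mkfs.ext4 -q /dev/loop0
mount /dev/loop0 /var/log/apploop

The dd baseline from above can then be repeated against a file inside /var/log/apploop to compare the loop device overhead with and without the C flag.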


Investigating DBV-00201: Block, DBA number, marked corrupt for invalid redo application

More or less forced 😉 by the procedures discussed in my last post (Rman duplicating an oracle 11g database in active state on the same host), this post investigates how to verify and fix data corruption induced by RMan trying to restore nologging objects. Indeed, as we know, there's no actual restore of nologging objects, since there's no cold or hot redo to process. But what does this mean in practice? I learned it just now. Let's have a look.

Ok, there are some (partitioned) tables in the duplicated database from the previous post that, for performance reasons (see: Oracle global temp tabs or nologgingappend saved redo in numbers), have been set nologging. I won't explain the why here, that's another subject; however, the first post-duplicate database backup showed up with block corruption errors in Quest's Backup Reporter for Oracle Community. Today I examined the overall database integrity status with an rman validate and verified the affected tablespace, but rman list failure, which asks the Data Recovery Advisor under the covers, did not find anything to complain about. We see that 10 blocks have been marked corrupt, yet the file check status is OK.

RMAN> validate check logical database;
Starting validate at 22.07.2016-08:55:54
allocated channel: ORA_DISK_1
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
31   OK     10             58357        557056          8784584429990
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              265313
  Index      0              223903
  Other      0              9483
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE       OK     0              2
Control File OK     0              778
Finished validate at 22.07.2016-08:59:46

RMAN> list failure all;
no failures found that match specification
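
For reference, a hedged follow-up sketch of how to see which blocks validate actually flagged and how to reproduce the DBV-00201 message from the title: query v$database_block_corruption (which validate populates) and run the dbv utility against the affected datafile. The datafile path below is a placeholder, not the real file of this database; for nologging-induced damage the corruption_type typically reads NOLOGGING.

SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;
SQL> select name from v$datafile where file# = 31;

dbv file=<path_of_datafile_31> blocksize=8192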


Rman duplicating an oracle 11g database in active state on the same host

I already published a post about rman> duplicate... a couple of years ago (Rman duplicating an oracle database in a new sid two host scenario), still on 10g at that time and using a backup set being transferred to a new host. With 11g, however, rman> duplicate... offers another option: restoring not only from a source backup, leaving the source online, but from an up and running database (which requires archivelog mode and some rman-catalogued entries in the control file or catalogue nevertheless). Below, therefore, I'm going to show the do's for an rman> duplicate ... from active database... in a same-host scenario, on Windows again, using orapwd and oradim as well as lsnrctl this time. The main difference, however, is employing the spfile clause of the duplicate command, such that rman will set up the destination spfile on its own. Only some file name mappings, actually like before, need to be specified. My main reference for reviewing the new features was Duplicating a Database from the oracle 11g1 docs; other references, concerning errors that showed up along the way, will be given below.

Ok, working on the same host, nothing needs to be done for software installation and the like, and we can immediately set up the new instance (note that the source will be denoted tgt, for target, and the destination aux, for auxiliary, respectively). Firstly, we create a new password file for the destination, with the same sysdba password as on the source.

cd /d e:\oracle\product\11.2.0\dbhome_1\database
orapwd file=PWDAUX.ora ignorecase=y force=y
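
The next steps are only sketched here in outline, since the SID and startmode are assumptions on my part and the listener.ora entry is omitted: oradim creates the Windows service for the auxiliary instance, and lsnrctl reloads the listener after a static entry for it has been added.

oradim -new -sid AUX -startmode manual
rem add a static entry for AUX to listener.ora, then
lsnrctl reload
lsnrctl status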


Btrfs subvolume quota still in its infancy with btrfs version 4.2.2

Ever tried to dive into the btrfs subvolume topic, especially in combination with quotas (not snapshots here)? Looks really promising… administering subvolumes in hierarchies automagically offers managing quotas on (summed-up) top- and on (dedicated) sub-levels by design, see: Btrfs SysadminGuide Subvolumes or Btrfs: Subvolumes and snapshots for example. With the later 4.x kernels there is btrfs 4.2.2, representing a huge step forward in btrfs development, so I thought to give it another try on a Red Hat / Oracle UEK based 7.2 system.

In what follows, I'm going to show what I attempted to achieve, the how-to's, the workarounds I tried and, intermixed, the quite odd behaviour that I observed. Odd to a degree that makes me recommend everyone to stay away from employing this promising but still semifinished (?) technology.

The setup of a subvolume dedicated to quota control is quite easy and takes only a couple of keystrokes. Understand, though, that quota control with btrfs can only be enabled for the entire filesystem; a limit can then be set for the dedicated subvolume individually.
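
As a minimal sketch (mount point, subvolume name and limit are assumptions, not my actual test values): enable quota accounting on the filesystem, create the subvolume, then put a dedicated qgroup limit on it.

btrfs quota enable /mnt/data
btrfs subvolume create /mnt/data/logvol
btrfs qgroup limit 5G /mnt/data/logvol
btrfs qgroup show /mnt/data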