
Splitting a btrfs 4.1x root partition with a debian live system, gparted and btrfs tools 3.17-x

This is a short picture log of splitting a btrfs 4.1x root partition on a (down) oracle linux 7.2, using a debian live system and applying gparted based on btrfs tools 3.17-x. Lots of names and version codes, right? But this is what matters. The important message is: it works using these flavours.
Actually, running oracle or redhat linux as the live system may have been much more appropriate for compatibility reasons. The odd thing is, no redhat-based (enterprise) linux system features gparted. Only fedora does, sourcing the epel repository, but it does not have any kind of live system release like debian.

(more…)

Log-dedicated loop device throughput and time overhead on btrfs 4.x

This is again about real world numbers (which I like so much for being authentic ;-). The context is the throughput and time overhead of a loop device, as a poor or late man’s replacement for a real disk partition (yep, I know), on the copy-on-write filesystem btrfs, exclusively dedicated to logging, that is, appending over and over. Why? The loop device may overflow with data without affecting the underlying filesystem, see Btrfs subvolume quota still in its infancy with btrfs version 4.2.2 for more whys and for what I tried to get btrfs subvolumes with quota to work. By the way, about that, see debian org Btrfs for a down-to-earth assessment of btrfs to date, which even offers a recommendation on the earliest version number (4.4) to start from for anything near production. Anyway, what follows adapts a test setup as in Performance of loopback filesystems, prime credits go there, and expands the layout somewhat for the btrfs C or nodatacow flag. Here we go.

Have this baseline test, if you like, on the raw iron. Well, it’s not raw iron really, it’s vmware with tons of storage below, and the shown performance is terrible, I know, and I only take one test set of bs / count, but that won’t matter. It’s something to start off from.

mkdir /tmp/loop0
dd if=/dev/zero bs=1M of=/tmp/loop0/file oflag=sync count=1000
1048576000 bytes (1.0 GB) copied, 8.9605 s, 117 MB/s
1048576000 bytes (1.0 GB) copied, 6.52867 s, 161 MB/s
1048576000 bytes (1.0 GB) copied, 5.35716 s, 196 MB/s
1048576000 bytes (1.0 GB) copied, 5.48745 s, 191 MB/s
1048576000 bytes (1.0 GB) copied, 5.14736 s, 204 MB/s
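
For orientation, the log-dedicated loop device this baseline gets compared against is set up along the lines of the sketch below, once with and once without the C (nodatacow) attribute on the backing file. The paths, the 10G size and the ext4 filesystem on top are assumptions for illustration only, not necessarily the exact test layout.

mkdir -p /data/btrfs/looplog
touch /data/btrfs/looplog/log.img
chattr +C /data/btrfs/looplog/log.img    # nodatacow variant only; must be set while the file is still empty
truncate -s 10G /data/btrfs/looplog/log.img
LOOPDEV=$(losetup -f --show /data/btrfs/looplog/log.img)    # attach to the next free loop device
mkfs.ext4 -q "$LOOPDEV"
mkdir -p /var/log/app
mount "$LOOPDEV" /var/log/app

The same dd line as in the baseline, just pointed at a file below /var/log/app, then delivers the numbers to compare against.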

(more…)

Extending lvm mounts in oracle linux on vmware, part 2: a larger disk partition

This is part #2 of a two-part series about scenarios of extending lvm (logical volume manager 2) mounts in an oel (red hat 7) guest running on vmware. Part #1 (Extending lvm mounts in oracle linux on vmware, part 1: a new disk device) discussed the scenario of a new disk being added to the guest by means of the guest settings in the vsphere client. This part, however, follows up with the case that an existing disk has been resized, that is extended, touching the underlying disk file in the vsphere client. I already mentioned that the disk file extension case will be much more costly, both in handling and in downtime, so try to avoid it: go and ask your admins to always consent to adding new disks (or even have a smarter storage approach).

However, whatever comes around… The point, or question, finally is: what downtime, aside from a lot more typing, will this scenario take? The (relatively) good news is that only the depicted lvm mount (see part #1 for an explanation) will need a short offline, such that any apps accessing the lvm mount will need to be briefly offlined too. No guest bounce or any 3rd-party apps downtime is necessary.

Again, a sum-up of the required steps looks like this.

  • introduce the new disk geometry to the guest os
  • extend the partition on the existing disk
  • offline affected apps / the lvm mount
  • notify the kernel about the partition change
  • online affected apps / the lvm mount again
  • integrate the grown partition into the lvm mount
  • extend the filesystem managed by the lvm mount

Yet again, this post will also attempt to gain as much understanding as possible about what’s going on under the covers and therefore supplies a lot of information for verification purposes. These code boxes will (shall) be closed on page load and will feature an explicit title, indicating an optional step. In this example, an existing disk /dev/sdc will be extended by just 10gb for testing.
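
Condensed to plain commands, the steps above translate to roughly the following sketch. Take it as orientation only, not as the exact walkthrough: the partition tool (growpart from cloud-utils-growpart), the mount point /u01 and the volume group / logical volume names (vg_data, lv_u01) are assumptions for illustration.

# 1. make the guest os re-read the grown disk size (scsi rescan of /dev/sdc)
echo 1 > /sys/block/sdc/device/rescan
# 2. extend the last (and only) partition on /dev/sdc up to the new end of the disk
growpart /dev/sdc 1
# 3./4./5. offline the apps and the lvm mount, let the kernel pick up the changed
#          partition table (partx -u /dev/sdc is an alternative), then remount
umount /u01
partprobe /dev/sdc
mount /u01
# 6. let lvm notice the grown physical volume
pvresize /dev/sdc1
# 7. extend the logical volume by the added 10gb and resize the filesystem along with it
lvextend -r -L +10G /dev/vg_data/lv_u01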

(more…)

Extending lvm mounts in oracle linux on vmware, part 1: a new disk device

This is part #1 of a two-part series about scenarios of extending lvm (logical volume manager 2) mounts in an oel (red hat 7) guest running on vmware. Starting up, the scenario in question is a new disk being added to the guest by means of the guest settings in the vsphere client. Following up then, in part #2 (Extending lvm mounts in oracle linux on vmware, part 2: a larger disk partition), an existing disk has been resized, that is extended, touching the underlying disk file in the vsphere client. Both scenarios are quite common, but the first one is by all means to be preferred over the second one, because it will not trigger any downtime for the guest os or the guest apps (by an lvm deactivate) running io on the lvm mount, and it is by far easier to handle. Btw, saying lvm mount does actually mean a dedicated logical volume (on volume groups and physical volumes, you know) mounted to some spot in the directory tree.

Ok then, in short, the first scenario requires the following steps:

  • introduce the new disk to the guest os
  • create a partition on the new disk
  • integrate the new disk into the lvm mount
  • extend the filesystem managed by the lvm mount

I’ll give the necessary commands below but will also provide information for verification purposes. These code boxes will (shall) be closed on page load and will feature an explicit title, indicating an optional step. In this example, I add a second disk to an existing lvm mount of one disk, /dev/sdc, of around 120gb. The new disk is the fourth disk attached to the guest, prospectively /dev/sdd, and has only 16gb for testing.
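
In command form, the four steps boil down to something like the following sketch. It is orientation only, not the exact commands of the walkthrough: the volume group and logical volume names (vg_data, lv_u01) are made up, and /dev/sdd is just the expected name of the new 16gb disk.

# 1. let the guest os discover the newly attached disk (rescan all scsi hosts)
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
# 2. create a single partition spanning the new disk and flag it for lvm
parted -s /dev/sdd mklabel gpt mkpart primary 1MiB 100% set 1 lvm on
# 3. integrate the new partition into the existing volume group
pvcreate /dev/sdd1
vgextend vg_data /dev/sdd1
# 4. extend the logical volume by the added space and grow the filesystem with it
lvextend -r -l +100%FREE /dev/vg_data/lv_u01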

(more…)