linux

Some irritation due to extended output of systemd / systemctl status for sysv services


Bad is an unpleasant word, right? “On” or “off” implies some final statement, and “failed” may signal something wrong but at least terminated. But “bad”… uuuhh, blameworthy, guilty, unaccountable, still being around. Ok, before diving into linguistic depression: the change eventually turned out to be simple and was actually made in good faith, but it nevertheless produced remarkable irritation. You know, systemctl status {service} shows an overview of some systemd unit definition with load state, current activity and so on. The load state, in particular, details, in parentheses, the path of the unit file, the enablement state and the vendor enablement preset, respectively. Original systemd units may give a load state as follows:

Loaded: loaded (/usr/lib/systemd/system/atop.service; enabled; vendor preset: disabled)

However, systemd units that have been derived from SysV init scripts used to print only the init script path until lately:

Loaded: loaded (/etc/rc.d/init.d/sysv-thing)

The new irritating factor is an extension for those derived SysV init scripts to also state the enablement, which, however, shows up as “bad” for the running enablement for whatever weird reason:

Loaded: loaded (/etc/rc.d/init.d/sysv-thing; bad; vendor preset: disabled)
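To cross-check, the enablement state can also be queried on its own. A minimal example, reusing the sysv-thing service from above; “bad” is actually among the documented outputs of systemctl is-enabled (unit file invalid or another error occurred):

# ask systemd directly for the enablement state of the derived unit
systemctl is-enabled sysv-thing.service
bad
# the classic sysv runlevel links can still be inspected the old way
chkconfig --list sysv-thing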

(more…)

Scheduling / descheduling linux host reboots via shutdown


Scheduling and/or descheduling linux host reboots is possible with the shutdown -r command using the time parameter (the reboot command, which I usually prefer for clarity, doesn’t feature a time parameter, so shutdown -r is the only choice here). Aside from discussing the quite straightforward man page of shutdown, there are two points here to register in your knowledge cells.
First, a (scheduled) shutdown -r hh24:mi execution will detach itself into the background, no need to use job tools or an &. shutdown -r hh24:mi actually puts systemd-shutdownd in charge of serving the party, so this is what you’ll want to expect to see in your running process list when looking for some command effect. Also, a running scheduled shutdown may be cancelled using shutdown -c any time before hh24:mi. Note, however, that from around five minutes before hh24:mi, you’ll no longer be allowed to log in to the machine, essentially impeding any further control from your side. A minimal walkthrough follows below.
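For illustration (the 23:30 deadline is just an assumed example):

# schedule a reboot for 23:30, shutdown detaches into the background by itself
shutdown -r 23:30
# look for the process actually serving the schedule
ps -ef | grep -i shutdown
# cancel the scheduled reboot any time before the deadline
shutdown -c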

(more…)

Splitting a btrfs 4.1x root partition with debian live system gparted btrfs tools 3.17-x


This is a short picture log of doing a btrfs 4.1x root partition split on a (down) oracle linux 7.2 using a debian live system, applying gparted based on btrfs tools 3.17-x. Lots of names and version codes, right? But this is what matters. The important message is: it works using these flavours.
Actually, running oracle or redhat linux as the live system may have been much more appropriate for compatibility reasons. The odd thing is, no redhat-based (enterprise) linux system features gparted. Only fedora does, sourcing the epel repository, but it doesn’t offer any kind of live system release like debian does.
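That said, verifying the userspace tool version from the booted debian live session only takes a second (the output shown is what I’d expect here, not a verbatim capture):

# check the btrfs userspace tools on the live system
btrfs --version
# expected along the lines of: btrfs-progs v3.17.x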

(more…)

Log-dedicated loop device throughput and time overhead on btrfs 4.x


This is again about real world numbers (which I like so much for being authentic ;-)). The context is the throughput and time overhead of a loop device, as a poor or late man’s replacement for a real disk partition, yep I know, on the copy-on-write filesystem btrfs, exclusively dedicated to logging, that is, appending over and over. Why? The loop device may overflow with data without affecting the underlying filesystem, see Btrfs subvolume quota still in its infancy with btrfs version 4.2.2 for more whys and for what I tried to get btrfs subvolumes with quota to work. By the way, see debian org Btrfs for a down-to-earth assessment of btrfs to date, even uttering a recommendation on the version number (4.4) to start off with, at the earliest, near production. Anyway, what follows adapts a test setup as in Performance of loopback filesystems, prime credits go there, and expands the layout somewhat for the btrfs C or nodatacow flag. Here we go.
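For orientation, such a log-dedicated loop device setup roughly goes like this (a sketch only: paths, sizes and the ext4 choice are assumptions, and chattr +C must hit the backing file while it is still empty):

# back the loop device by a fixed-size file carrying the btrfs nodatacow flag
touch /srv/logloop.img
chattr +C /srv/logloop.img
dd if=/dev/zero of=/srv/logloop.img bs=1M count=1000
# attach the file to the first free loop device (prints e.g. /dev/loop0)
losetup -f --show /srv/logloop.img
# any filesystem will do for the log area, then mount it where the logs go
mkfs.ext4 /dev/loop0
mount /dev/loop0 /var/log/app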

Have this baseline, if you like, test on the raw iron. Well, it’s not raw iron really, it’s vmware and tons of storage below, and the shown performance is terrible, I know, and I only take one test set of bs / count, but that won’t matter. There’s something to start off with.

mkdir /tmp/loop0
# five runs of writing 1000 synced 1M blocks, one dd summary line per run follows
dd if=/dev/zero bs=1M of=/tmp/loop0/file oflag=sync count=1000
1048576000 bytes (1.0 GB) copied, 8.9605 s, 117 MB/s
1048576000 bytes (1.0 GB) copied, 6.52867 s, 161 MB/s
1048576000 bytes (1.0 GB) copied, 5.35716 s, 196 MB/s
1048576000 bytes (1.0 GB) copied, 5.48745 s, 191 MB/s
1048576000 bytes (1.0 GB) copied, 5.14736 s, 204 MB/s

(more…)

Btrfs subvolume quota still in its infancy with btrfs version 4.2.2


Ever tried to dive into the btrfs subvolume topic, especially in combination with quotas (not snapshots here)? Looks really promising… administering subvolumes in hierarchies automagically offers managing quotas on (summed up) top levels and on (dedicated) sub levels by design, see Btrfs SysadminGuide Subvolumes or Btrfs: Subvolumes and snapshots, for example. With the later 4.x kernels there is btrfs 4.2.2, representing a huge step forward in btrfs development, so I thought to give it another try on a red hat / oracle uek based 7.2 system.

Following, I’m going to show what I attempted to achieve, the how-tos, the workarounds I tried and, intermixed, the quite odd behaviour that I observed. Odd to a magnitude that makes me recommend everyone to stay away from employing this promising but still semifinished (?) technology.

The setup of a subvolume dedicated to quota control is quite easy and takes only a couple of keystrokes, as sketched below. Understand, though, that quota control with btrfs can only be enabled for the filesystem as a whole, but the limits can then be set per subvolume individually.
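A minimal sketch of those couple of keystrokes (mount point and limit value are assumed examples):

# quota accounting can only be switched on for the filesystem as a whole
btrfs quota enable /data
# create a subvolume and give it a dedicated limit of its own
btrfs subvolume create /data/logs
btrfs qgroup limit 1G /data/logs
# inspect referenced / exclusive usage per qgroup
btrfs qgroup show /data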

(more…)