Extending lvm mounts in oracle linux on vmware, part 2: a larger disk partition


This is part #2 of a two-part series about scenarios of extending lvm (logical volume manager 2) mounts in an oel (red hat 7) guest running on vmware. Part #1 (Extending lvm mounts in oracle linux on vmware, part 1: a new disk device) discussed the scenario of a new disk being added to the guest via the guest settings in the vsphere client. This part, in turn, follows up with the case of an existing disk having been resized, that is extended, by touching the underlying disk file in the vsphere client. I already mentioned that the disk file extension case will be much more costly, both in effort and downtime, so try to avoid it: go and ask your admins to always consent to adding new disks instead (or even adopt a smarter storage approach).

However, whatever comes around… The point, or question, finally is: what downtime, aside from a lot more typing, will this scenario take? The (relatively) good news is that only the affected lvm mount (see part #1 for an explanation) will need a short offline period, so any apps accessing the lvm mount will need to be shortly offlined too. No guest bounce or downtime for any third-party apps is necessary.

Again, a summary of the required steps looks like this.

  • introduce the new disk geometry to the guest os
  • extend the partition on the existing disk
  • offline affected apps / the lvm mount
  • notify the kernel about the partition change
  • online affected apps / the lvm mount again
  • integrate the new disk space into the lvm mount
  • extend the filesystem managed by the lvm mount

Yet again, this post will also attempt to gain as much understanding as possible about what’s going on under the covers and therefore supplies a lot of information for verification purposes. The corresponding code boxes feature an explicit title, indicating an optional step. In this example, an existing disk /dev/sdc will be extended by just 10 GiB for testing.

[~]$ df -h | grep logv1
/dev/mapper/volg1-logv1   80G   48G   31G  61% /mnt

[~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdc               8:32   0   80G  0 disk
└─sdc1            8:33   0   80G  0 part
  └─volg1-logv1 252:0    0   80G  0 lvm  /mnt

[~]$ pvdisplay | grep "PV Size"
  PV Size               80.00 GiB / not usable 2.97 MiB
[~]$ vgdisplay | grep "VG Size"
  VG Size               80.00 GiB
[~]$ lvdisplay | grep "LV Size"
  LV Size                80.00 GiB

[~]$ fdisk -l /dev/sdc
Disk /dev/sdc: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63   167772159    83886048+  8e  Linux LVM

[~]$ cat /proc/partitions
major minor  #blocks  name
   8       32   83886080 sdc
   8       33   83886048 sdc1

As in part #1, again get the current scsi host enumeration for the running guest and remember to deduce the ...Channel: xy Id: xy... pair from the settings dialog in the vsphere client. Different this time, however: locate and execute the rescan command specific to this very disk to announce the change in disk geometry.

# find the scsi host used for disk connections (2 here)
[~]$ grep mpt /sys/class/scsi_host/*/proc_name
/sys/class/scsi_host/host2/proc_name:mptspi

# find the rescan command specific for the disk in question residing on :
#   Host : 2, Bus (or Channel) : 0, Target (or Id) : 2, Lun : 0
# excerpt for the third disk at 2:0:2:0
[~]$ find /sys -path "*/host2/*/rescan"
/sys/devices/pci0000:00/0000:00:10.0/host2/target2:0:2/2:0:2:0/rescan

# execute the rescan at the third disk at 2:0:2:0
[~]$ echo "- - -" > /sys/devices/pci0000:00/0000:00:10.0/host2/target2:0:2/2:0:2:0/rescan
[~]$ tail -n 25 /var/log/messages
  Jun 14 14:14:52 xyz kernel: sd 2:0:2:0: [sdc] 188743680 512-byte logical blocks: (96.6 GB/90.0 GiB)
  Jun 14 14:14:52 xyz kernel: sd 2:0:2:0: [sdc] Cache data unavailable
  Jun 14 14:14:52 xyz kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
  Jun 14 14:14:52 xyz kernel: sdc: detected capacity change from 85899345920 to 96636764160
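Optionally, cross-check the kernel's view of the new disk size directly in sysfs instead of grepping the logs; the figure is reported in 512-byte sectors and has to match the capacity change from the messages above.

# optional: the disk size, in 512-byte sectors, as seen by the kernel
[~]$ cat /sys/block/sdc/size
188743680
# 188743680 * 512 = 96636764160 bytes, i.e. 90 GiB, matching the log line above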

The partition resize or extend, which follows next, actually uses a hack (imho): dropping the partition and (re)creating it with exactly the same parameters as before, such that the data will not be modified and will remain usable after the resize. One might employ parted (usually my choice) or fdisk for this (see the sfdisk sketch after this list for a scripted alternative), however:

  • the parted resize operation is not available with rh7 and so far not with oel7, for whatever reason
  • fdisk will always start new partitions at sector 2048, but we need to start at sector 63, since investigation revealed that the original partition had been created with cfdisk (for whatever reason)
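For the record: the delete/(re)create cycle could in principle also be scripted, for example with sfdisk. This is just a minimal, untested sketch, assuming an sfdisk release that accepts its own dump format; the file name sdc.dump is hypothetical and the edit step is manual.

# sketch only: dump the partition table, adjust the size, write it back
[~]$ sfdisk -d /dev/sdc > sdc.dump
# edit the size field of /dev/sdc1 in sdc.dump to the new value, leaving
# the start sector (63) strictly untouched, then rewrite the table
[~]$ sfdisk --force /dev/sdc < sdc.dump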

Lvm mount downtime start

That is, the partition resize here must use cfdisk to keep the start sector at 63, which is absolutely mandatory: a different start sector would leave the existing lvm data unreadable.

# ---
# stop any apps accessing the lvm mount now
# ---

[~]$ umount /mnt
[~]$ cfdisk /dev/sdc
... (foot menu) delete : delete the (only primary) partition, leave the free space partition
    (foot menu) new : (re)create the partition with exactly the same parameters as above
      (primary, maximum size as proposed)
    (foot menu) type : set the partition type to 8e aka linux lvm
    (foot menu) print : verify the start sector and the type (id here)
    (foot menu) write : write the new partition table to disk
    (foot menu) quit
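Optionally, double-check the recreated partition table right away; the start sector must still read 63 and the type (id) 8e.

# optional check: start sector still at 63, id still 8e (linux lvm),
# end sector now at the new disk boundary
[~]$ fdisk -l /dev/sdc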

Now notify the kernel about the change to the partition. This takes partprobe in combination with deactivating the (only) volume group of the lvm mount, which holds a lock that has to be released before writing to /proc/partitions. Otherwise partprobe will error out like this:

Error: Partition(s) 1 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.

Since a reboot is not an option to take here, have a call to vgchange first.

[~]$ vgchange -a n volg1
  0 logical volume(s) in volume group "volg1" now active

[~]$ partprobe /dev/sdc
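
# optional check: the kernel now lists the grown partition in /proc/partitions
# (compare the #blocks column against the values captured before the resize)
[~]$ cat /proc/partitions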

# also automatically (re)mounts the volume (!!)
[~]$ vgchange -a y volg1
  1 logical volume(s) in volume group "volg1" now active
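
# verify that the automatic remount really happened (otherwise mount manually);
# note that the size still reads 80G, the lvm and filesystem resizes only follow below
[~]$ df -h | grep logv1
/dev/mapper/volg1-logv1   80G   48G   31G  61% /mnt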

# ---
# restart any apps accessing the lvm mount
# ---

Lvm mount downtime end

What follows now is the resize at the lvm as well as at the filesystem level. pvresize, though operating on a physical item, already is an online operation.

# online resize the lvm physical volume
[~]$ pvresize -v /dev/sdc1
    Resizing volume "/dev/sdc1" to 188743617 sectors.
    Resizing physical volume /dev/sdc1 from 0 to 23039 extents.
  Physical volume "/dev/sdc1" changed
[~]$ pvdisplay | grep "PV Size"
  PV Size               90.00 GiB / not usable 2.97 MiB
# reflected in the volume group as well
[~]$ vgdisplay | grep "VG Size"
  VG Size               90.00 GiB

# online resize the lvm logical volume
[~]$ lvresize -l +100%FREE /dev/mapper/volg1-logv1
  Size of logical volume volg1/logv1 changed from 80.00 GiB (20479 extents) to 90.00 GiB (23039 extents).
[~]$ lvdisplay | grep "LV Size"
  LV Size                90.00 GiB

Finally, the filesystem has to be extended as well, which can still be accomplished online for both btrfs (where being online is even mandatory) and ext4.

# btrfs
[~]$ btrfs filesystem resize max /mnt
Resize '/mnt' of 'max'

# ext4
[~]$ resize2fs /dev/mapper/volg1-logv1
Filesystem at /dev/mapper/volg1-logv1 is mounted on /mnt; on-line resizing required
The filesystem on /dev/mapper/volg1-logv1 is now 94371808 blocks long.
# recheck with 'df -h' - the reported size has now changed
[~]$ df -h | grep logv1
/dev/mapper/volg1-logv1        90G   48G   41G  54% /mnt
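
Optionally, close the loop with a last lsblk, as at the very beginning; disk, partition and lvm mount should now all line up at the new size.

# optional final check: disk, partition and lvm mount all report 90G now
[~]$ lsblk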

Have fun, Peter
