Extending lvm mounts in oracle linux on vmware, part 1: a new disk device

This is part #1 of a two-part series about scenarios of extending lvm (logical volume manager 2) mounts in an oel 7 (red hat 7 compatible) guest running on vmware. Starting up, the scenario in question is a new disk being added to the guest by means of the guest settings in the vsphere client. Following up then, in part #2 (Extending lvm mounts in oracle linux on vmware, part 2: a larger disk partition), an existing disk has been resized, that is extended, touching the underlying disk file in the vsphere client. Both scenarios are quite common, where the first one is by all means to be preferred over the second one, because it will not trigger any downtime (by an lvm deactivate) for the guest os or the guest apps running io on the lvm mount, and is by far easier to handle. Btw, saying lvm mount does actually mean a dedicated logical volume (on top of volume groups and physical volumes, you know) mounted to some spot in the directory tree.
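
Just for the record, such an lvm mount stack may be reviewed upfront with the lvm reporting commands, a minimal sketch below against the volg1/logv1 names used throughout this example (output abridged, values will differ per system):

# optional: review the lvm stack of the mount in question
[~]$ pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/sdc1  volg1 lvm2 a--  120.00g    0
[~]$ lvs volg1
  LV    VG    Attr       LSize
  logv1 volg1 -wi-ao---- 120.00g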

Ok then, in short, the first scenario requires the following steps:

  • introduce the new disk to the guest os
  • create a partition on the new disk
  • integrate the new disk into the lvm mount
  • extend the filesystem managed by the lvm mount

I’ll give the necessary commands below but will also provide information for verification purposes, the latter in code boxes featuring an explicit title indicating an optional step. In this example, I add a second disk to an existing lvm mount of one disk /dev/sdc of around 120gb. The new disk is the fourth disk attached to the guest, /dev/sdd respectively, and has only 16gb for testing.

[~]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
  Type:   Direct-Access                    ANSI  SCSI revision: 02
Host: scsi2 Channel: 00 Id: 01 Lun: 00
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
  Type:   Direct-Access                    ANSI  SCSI revision: 02
Host: scsi2 Channel: 00 Id: 02 Lun: 00
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
  Type:   Direct-Access                    ANSI  SCSI revision: 02

Introducing the new disk to the guest os takes determining the scsi host enumeration that the attached disks are running under in /sys/class/scsi_host, and then locating plus executing a (re)scan of all attached disks and assigned parameters. Note that the scsi host enumeration may change from guest bounce to bounce, such that this information has to be investigated anew every time. This is different from the bus (or channel), target (or id) and lun parameters, which will stay constant over time and correlate to the disk information in the vsphere client (e.g. a scsi (0:3) in vsphere will show up as ...Channel: 00 Id: 03... in /proc/scsi/scsi).

# find the scsi host used for disk connections (#2 here)
[~]$ grep mpt /sys/class/scsi_host/*/proc_name
/sys/class/scsi_host/host2/proc_name:mptspi

# find the scan command specific for the scsi host enum
[~]$ find /sys -path "*/host2/*/scan"
/sys/devices/pci0000:00/0000:00:10.0/host2/scsi_host/host2/scan

# execute the scan at scsi host2, just scan everything
[~]$ echo "- - -" > /sys/devices/pci0000:00/0000:00:10.0/host2/scsi_host/host2/scan
# recheck - the new disk shows up at scsi2, id 03 (excerpt)
[~]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 03 Lun: 00
  Vendor: VMware   Model: Virtual disk     Rev: 1.0
  Type:   Direct-Access                    ANSI  SCSI revision: 02

# recheck with 'lsblk' - the new disk, still unpartitioned (excerpt)
[~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdd               8:48   0   16G  0 disk
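
Btw, the scan file is also reachable via a shortcut under /sys/class, sparing the find, and, assuming the sg3_utils package is installed, a script to rescan all scsi hosts at once ships with it:

# optional: alternative ways to trigger the rescan
[~]$ echo "- - -" > /sys/class/scsi_host/host2/scan
# or, with sg3_utils installed, rescan all scsi hosts at once
[~]$ rescan-scsi-bus.sh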

Having the new disk around, the next step is creating a partition on it, here with all standard parameters up to the maximum size and a partition type of linux lvm (8e). The interactive fdisk dialog is given below in pseudocode.

# create a partition on the new disk
[~]$ fdisk /dev/sdd
... n : new partition (all defaults "p", "1", "2048", "<max>")
    t : change the partition type to 8e (linux lvm)
    p : print the partition table for review
    w : write the table to disk and exit (a plain "q" would quit without saving)
# recheck with 'fdisk'
[~]$ fdisk -l /dev/sdd
Disk /dev/sdd: 17.2 GB, 17179869184 bytes, 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    33554431    16776192   8e  Linux LVM

# recheck with 'lvmdiskscan' - the partition is not yet initialized for lvm (excerpt)
[~]$ lvmdiskscan
  /dev/volg1/logv1 [     120.00 GiB]
  /dev/sdc1        [     120.00 GiB] LVM physical volume
  /dev/sdd1        [      16.00 GiB]
  5 partitions
  1 LVM physical volumes
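
In case the partitioning is to be scripted, parted may serve as a non-interactive alternative to the fdisk dialog above, a minimal sketch, assuming an msdos disk label to match the 8e partition type:

# optional: non-interactive partitioning with parted
[~]$ parted -s /dev/sdd mklabel msdos mkpart primary 2048s 100% set 1 lvm on
[~]$ parted -s /dev/sdd print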

The integration of the new disk into the lvm mount starts at the physical volume level, i.e. writing the lvm metadata to the given partition.

# write metadata for lvm
[~]$ pvcreate /dev/sdd1
  Physical volume "/dev/sdd1" successfully created
[~]$ lvmdiskscan
  /dev/volg1/logv1 [     120.00 GiB]
  /dev/sdc1        [     120.00 GiB] LVM physical volume
  /dev/sdd1        [      16.00 GiB] LVM physical volume
  4 partitions
  2 LVM physical volumes
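
A recheck with pvs will show the new physical volume, not yet assigned to any volume group at this point, a sketch with abridged output:

# optional: recheck with 'pvs' - /dev/sdd1 not yet in a volume group
[~]$ pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/sdc1  volg1 lvm2 a--  120.00g      0
  /dev/sdd1        lvm2 ---   16.00g  16.00g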

Next at the lvm level, the volume group and then the logical volume of the lvm mount have to be updated (extended). The extension of the logical volume claims the maximum available size by means of the percent syntax (+100%FREE).

# volume group
[~]$ vgextend volg1 /dev/sdd1
  Volume group "volg1" successfully extended

# logical volume
[~]$ lvresize -l +100%FREE /dev/mapper/volg1-logv1
  Size of logical volume volg1/logv1 changed from 120.00 GiB (30719 extents) to 135.99 GiB (34814 extents).
# recheck with 'lsblk' (excerpt)
[~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdd               8:48   0   16G  0 disk
└─sdd1            8:49   0   16G  0 part
  └─volg1-logv1 251:0    0  136G  0 lvm  /usr/local/hugo_srch/solr-mnt
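
A recheck at the lvm level may go through vgs and lvs, now reporting the volume group spanning two physical volumes and the logical volume at the new size, a sketch with abridged output:

# optional: recheck with 'vgs' and 'lvs'
[~]$ vgs volg1
  VG    #PV #LV #SN Attr   VSize   VFree
  volg1   2   1   0 wz--n- 135.99g    0
[~]$ lvs volg1
  LV    VG    Attr       LSize
  logv1 volg1 -wi-ao---- 135.99g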

Finally, the filesystem has to be extended as well, which can still be accomplished online for both btrfs (being mounted is even mandatory here) and ext4.
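
If in doubt which filesystem is actually in place, lsblk may tell upfront, a sketch (the fstype column will read btrfs or ext4 accordingly):

# optional: determine the filesystem type on the logical volume
[~]$ lsblk -f /dev/mapper/volg1-logv1
NAME        FSTYPE LABEL UUID MOUNTPOINT
volg1-logv1 ext4         ...  /mnt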

# btrfs
[~]$ btrfs filesystem resize max /mnt
Resize '/mnt' of 'max'

# ext4
[~]$ resize2fs /dev/mapper/volg1-logv1
Filesystem at /dev/mapper/volg1-logv1 is mounted on /mnt; on-line resizing required
The filesystem on /dev/mapper/volg1-logv1 is now 35649536 blocks long.
[~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/volg1-logv1  134G   60M  128G   1% /mnt

Have fun, Peter
