Monday 20 February 2012

How to migrate data to smaller disks using LVM.

Well, this post could have been given a different title, because I'm really going to write about Physical Extents in LVM, and that can be used for many different purposes.

My particular issue was that if you want to take snapshots of, or clone, a powered-on VM, you must be aware of some overhead for the VMDK files. You can find more details in VMware KB 1012384: "Creating a snapshot for a virtual machine fails with the error: File is larger than maximum file size supported".
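
A quick way to see which limit applies to a given datastore is to check its VMFS block size; on a VMFS3 datastore with a 1 MB block size the maximum file size is roughly 256GB, which is exactly why a 256GB VMDK leaves no room for snapshot overhead. A minimal check from the ESXi console (the datastore name below is just a placeholder):

# vmkfstools -Ph /vmfs/volumes/<datastore_name>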

The problem was that when I tried to clone a VM that had a virtual drive of 256GB (the maximum size for that datastore), I faced the following errors:

Create virtual machine snapshot VIRTUALMACHINE File <unspecified filename> is larger than the maximum size supported by datastore '<unspecified datastore>'
File is larger than the maximum size supported by datastore
That was a production server, so I couldn't just turn it off for 30 minutes. At that moment I realized that I have a number of servers set up like that and could face the same problem sooner or later.

Generally you cannot just decrease the disk size in the VM's configuration. (Even if that were possible, it would be unwise to do so: almost any file system will go crazy after a sudden change like that, and in most cases it will lead to data loss.)

Fortunately, I use LVM on all my servers, mainly for the purpose of extending volumes when needed. This time the task was to decrease the size of the Physical Volumes, and to do it without a second of downtime.

So, the steps were as follows:
  • Check your backups
  • Add the virtual hard disks, keeping in mind the .vmdk size limits specified in VMware KB 1012384.
    Depending on your space allocation policy you can follow one of these approaches:
    a) add 2 disks of 250GB (in my case) with thin provisioning;
    b) add 1 disk of 250GB and 1 disk of 10GB. (I use round numbers to simplify the setup)
  • Re-scan the SCSI bus inside the VM:
# echo "- - -" > /sys/class/scsi_host/host0/scan
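Note that host0 is only the first virtual SCSI adapter; if the VM has several, a small loop (a sketch) rescans them all:
# for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done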
  • Create LVM partitions on all the added devices:
# fdisk /dev/sdX
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-32635, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-32635, default 32635):
Using default value 32635

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
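
If you have to repeat this on several disks, the same single-partition layout can be created non-interactively. A minimal sketch with sfdisk, assuming the new, empty disks really are /dev/sdf and /dev/sdg:

# for d in /dev/sdf /dev/sdg; do echo ',,8e' | sfdisk "$d"; done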

  • Extend the Volume Group by adding the new disks

# lvm vgextend VolGroupXX /dev/sdX1 /dev/sdY1
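Depending on your LVM version, vgextend may refuse to add a partition that has not been initialized as a Physical Volume yet; in that case initialize the new partitions first:
# lvm pvcreate /dev/sdX1 /dev/sdY1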
  • Check the number of Physical Extents to migrate

# lvm pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdf1
  VG Name               VolGroupXX
  PV Size               250.00 GiB / not usable 4.69 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              63998
  Free PE               63998
  Allocated PE          0
  PV UUID               Zp7uJR-YsIQ-AjRP-hdGL-OXSl-XbJG-N1GFn2
  --- Physical volume ---
  PV Name               /dev/sdg1
  VG Name               VolGroupXX
  PV Size               250.00 GiB / not usable 4.69 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              63998
  Free PE               63998
  Allocated PE          0
  PV UUID               eKdiqW-eMjI-ck4a-grM3-ogX3-6BOP-Q1ldlC
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               VolGroupXX
  PV Size               255.99 GiB / not usable 2.72 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              65534
  Free PE               0
  Allocated PE          65534
  PV UUID               PajbnY-65II-Aeyt-WamI-Xzr8-M3b2-kWfy6U

In this example /dev/sdf and /dev/sdg are the newly added hard drives of 250GB each, and /dev/sdb is the old one of 256GB. The important part for us is "Total PE": the old PV holds 65534 allocated extents, while each new PV has only 63998 free, so if we call pvmove towards a single new disk without specifying the range of Physical Extents to move, we will receive an error.
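Before moving anything you can also check how the extents of the old PV are laid out; pvdisplay with the --maps option (-m) shows the allocated segments:
# lvm pvdisplay -m /dev/sdb1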
  • Migrate the Extents to the new hard drives
# lvm pvmove /dev/sdb1:1-63998 /dev/sdf1
# lvm pvmove /dev/sdb1 /dev/sdg1
As you may have noticed, I didn't specify the range of extents to move to the second drive; in that form the command moves only the extents that are still allocated.
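To make sure nothing is left on the old disk after both moves, check that its Allocated PE is back to 0:
# lvm pvdisplay /dev/sdb1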
Now you can extend the Logical Volume to use all of the added space.
  • Extend the Logical Volume
# lvm lvextend -l +100%FREE VolGroupXX/LogVolXX

  • Extend the file system with the proper tool
# <resize2fs|resize_reiserfs|...> /dev/mapper/VolGroupXX-LogVolXX
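On reasonably recent LVM2 versions the last two steps can be combined: lvextend has a --resizefs (-r) flag that calls the matching file system resize tool for you. A sketch with the same (placeholder) VG/LV names:
# lvm lvextend -r -l +100%FREE VolGroupXX/LogVolXX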
  • And, finally, remove the old drive
# lvm vgreduce VolGroupXX /dev/sdb1
# lvm pvremove /dev/sdb1
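Before detaching the old disk from the VM it is worth double-checking that LVM no longer references it:
# lvm pvs
# lvm vgdisplay VolGroupXX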

Now you can remove the old drive from the VM configuration... zero downtime.

Well, many experienced Linux admins might not find anything new in this article. However, I didn't know how this works until I faced the need for such a change. For instance, I have extended volumes so many times that the operation now takes me about a minute (not counting the time spent by the FS resizing tool).

IBM DS3000/DS4000/DS5000 vCenter Management plugin issue

Not long ago I found out that IBM has a vCenter plugin for their storage. In particular, I was interested in managing the DS3400 and DS3524 via the vSphere Client console, so I got the plugin from this page.

However, during the setup of this plugin I faced a couple of issues.

First of all (this is minor actually, but I spent 20 minutes figuring out what was happening), make sure you've chosen free TCP ports for the Jetty service. In my case there was a conflict with the vCenter Update service. A common issue, nothing special... however...

The second issue is rather VMware specific, since I found the solution on a page related to another vCenter plugin. The problem is that you get the following error when you try to select this plugin in the vSphere Client:

User is not authorized to use this plug-in.
Please contact the Administrator and ask for StorageAdmin.readwrite or StorageAdmin.readonly Privileges


However, you are surely in a group with full privileges... and that is exactly the problem. It turns out that your account must be granted those privileges directly, on a per-user basis, not through group membership.

So, as a workaround, just add a separate permission for your account and restart the vSphere Client.