Adding a RAID under LVM

Russell Coker blogs about a problem that concerns me as well: Killing Servers with Virtualisation and Swap, i.e. what happens when one domU happily swaps away the whole day?

Luckily this hasn't happened to me yet, but like Russell I tend to give domUs only a small swap area. If a domU is swapping extensively, something is wrong: most probably a task has a memory leak, or the virtual machine is undersized on RAM.

Anyway, Russell had some thoughts and ideas on how to deal with the problem, such as giving some domUs dedicated swap areas on single disks to separate the filesystem I/O from the swap I/O. He proceeds with:

Now if you have a server with 8 or 12 disks (both of which seem to be reasonably common capacities of modern 2RU servers) and if you decide that RAID is not required for the swap space of DomUs then it would be possible to assign single disks for swap spaces for groups of virtual machines. So if one client had several virtual machines they could have them share the same single disk for the swap, so a thrashing server would only affect the performance of other VMs from the same client. One possible configuration would be a 12 disk server that has a four disk RAID-5 array for main storage and 8 single disks for swap. 8 CPU cores is common for a modern 2RU server, so it would be possible to lock 8 groups of DomUs so that they share CPUs and swap spaces. Another possibility would be to have four groups of DomUs where each group had a RAID-1 array for swap and two CPU cores.

I myself would consider a different approach:

  • make your RAID as usual across your 8 or 12 disks, but don't use the whole disks: use just a partition on each, and leave 10 GB or so free on every disk for later swap purposes.
  • with 10 GB reserved on each disk you get 80 GB of potential swap with 8 disks, or 120 GB with 12 disks, in your 2 RU server.
  • you can then set up your swap areas as needed. Maybe some domUs would benefit from more heads when swapping, some need more space, and another one should survive a disk failure. So you can set up each swap differently, be it a single partition on a single disk, a RAID-1, or a JBOD across several disks for multiple heads.
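The three swap flavours above can be sketched with a few commands. The device names (/dev/sd[a-e]2 as the reserved 10 GB tail partitions), the md device numbers, and the domU labels are all assumptions for illustration, not taken from a real setup; by default the sketch only prints each command instead of running it:

```shell
#!/bin/sh
# Sketch of per-domU swap layouts on the reserved tail partitions.
# All device names and labels are hypothetical. Set RUN=1 to execute
# for real; otherwise every command is only echoed.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi; }

# 1) plain swap on a single disk's reserved partition
run mkswap -L swap-domu1 /dev/sda2

# 2) RAID-1 swap for a domU that should survive a disk failure
run mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
run mkswap -L swap-domu2 /dev/md10

# 3) striped swap across two spindles, for more heads while thrashing
run mdadm --create /dev/md11 --level=0 --raid-devices=2 /dev/sdd2 /dev/sde2
run mkswap -L swap-domu3 /dev/md11
```

Note that for the "more heads" case an md device isn't strictly needed: the kernel itself stripes across several swap areas if they are activated with the same priority (swapon -p), which saves the extra array.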

Of course you can mix both approaches, or even use swap over NFS or NBD/DRBD. There are many ways and possible solutions, but the best way to deal with swapping is, of course, to make sure it doesn't happen in the first place... ;)


Comments

I don't understand RAID or LVM, but why do you create /dev/md3 and use root=/dev/md2?

I mentioned above that moving the host system onto the RAID is fairly standard, that is: you can do it by making a backup, creating your RAID, and restoring the backup. The remaining and main issue of the blog post is the LVM with the Xen domUs, where it's not that simple to backup/restore the virtual disks.
/dev/md2 is therefore part of the host system; namely, it's the rootfs of the dom0. The main focus of this blog post is nevertheless on /dev/md3, though... ;)

Did you create a new initramfs? The recent trend is to not support RAID assembly in the kernel and to have the initramfs do that, so you need to build a new initramfs if you want to use a software RAID for the root filesystem.

Yes, I did create a new initramfs and verified with "-v | grep md" that the modules were included. But still no go with /dev/md2 as rootfs.
On my dedicated server there's no problem with the rootfs on /dev/md2 and booting from it. Maybe this is an etch vs. lenny issue? But currently I have no time to examine this further...

Maybe it was a problem with the mdadm.conf file; if it has the wrong array UUIDs or some other problem, then you end up with an initramfs that can't assemble the array.

An option might be to hack a "mdadm --assemble" command into the initramfs.
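On Debian that hack could look like the following initramfs-tools hook, sketched under the assumption that initramfs-tools is in use; the file name and the blunt --scan fallback are mine:

```shell
#!/bin/sh
# Hypothetical /etc/initramfs-tools/scripts/local-top/assemble-md
# Forces md assembly before the root filesystem is mounted.
# Install it executable, then rebuild the image with: update-initramfs -u
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

# Assemble the arrays listed in mdadm.conf, scanning all members;
# ignore failures so boot can still continue to a rescue shell.
/sbin/mdadm --assemble --scan || true
```

The prereqs boilerplate at the top is what initramfs-tools uses to order its scripts; with an empty PREREQ the hook simply runs when the local-top stage is reached.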

Thanks! This is exactly the process I needed when my workstation turned up with one wrong drive and I installed the OS before waiting for the replacement.

