Posts Tagged ‘kvm’

From Dedicated Server to KVM Virtualized Instance

Sunday, June 23rd, 2013

Recently we’ve had a few clients who wanted to downsize from a dedicated server to a private virtual server running as a virtualized instance. In the past, that has been a very time-consuming process. While it would probably be better to upgrade these clients to 64-bit and take advantage of a fresh OS load, sometimes there are issues that preclude this. One potential problem is not having a kernel recent enough to work with KVM if you are running a kernel that was built specifically for your setup.

What follows is a general recipe for migrating these machines. Most of the process is ‘hurry up and wait’: the bulk of the time is spent waiting for data to be copied rather than doing hands-on work. This guide runs from bare metal through migrating the first machine.

On the KVM box

  • install Linux
  • install kvm, virt-tools, and any base utilities (our minimal install includes libvirt-bin, qemu-kvm, virtinst, rsync, bridge-utils, cgroup-bin, and nvi)
  • Download the .iso for your initial build
  • Determine disk size for each instance
  • Install a base image to a COW file or LVM partition; from this, you’ll clone each new instance (see the sketch after this list). Make sure the COW file is created with the same size as the resulting image; otherwise you’ll need to resize via LVM and the underlying filesystem.
  • pvcreate /dev/sda2 (or /dev/md1 if you use software rather than hardware raid)
  • vgcreate -s 16M vg0 /dev/sda2
  • lvcreate -L 80G -n c1 vg0
  • virt-clone --original base_image --name newvirtualmachine --file=/dev/vg0/c1
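
For the base-image install itself, something along these lines works; a minimal sketch, assuming a Debian ISO at /iso/debian.iso, a bridge named br0, and a base logical volume /dev/vg0/base created ahead of time (all of those names are placeholders):

virt-install --name base_image --ram 1024 --vcpus 1 \
  --disk path=/dev/vg0/base \
  --cdrom /iso/debian.iso \
  --network bridge=br0 \
  --graphics vnc

Once the installer finishes, that base_image domain is what virt-clone copies for each new instance.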

On the machine being moved

  • Install grub2 (if not already running grub2)
  • Edit /etc/default/grub and disable UUIDs (see the sketch after this list)
  • update-grub
  • Install new kernel
  • Paths changing? /etc/fstab should be modified now, or it will need to be fixed via VNC. If done via VNC, note that the machine may come up in single-user mode, since the fsck will fail on devices that are no longer present.
  • rsync the data over (logs needed?)
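
A minimal sketch of those edits, assuming the disk will show up as /dev/vda (virtio) in the guest and that the new instance is reachable at a placeholder address 192.0.2.10; the device names, filesystems, and excludes here are assumptions to adjust:

# /etc/default/grub: boot by device name rather than UUID
GRUB_DISABLE_LINUX_UUID=true

update-grub

# /etc/fstab: the virtio disk appears as /dev/vda
/dev/vda1  /     ext3  errors=remount-ro  0  1
/dev/vda2  none  swap  sw                 0  0

# copy the filesystem across, preserving ownership and skipping pseudo-filesystems
rsync -aHx --numeric-ids --exclude=/proc --exclude=/sys / root@192.0.2.10:/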

Ready for the switch

On dedicated server

  • a secondary network is helpful; IPv6 on the primary interface also works
  • ifconfig the primary interface to a temporary IP and add a default route (see the sketch after this list)
  • restart firewall (if pinned to primary ethernet)
  • log out, log back in using temporary IP
  • remove ipv6
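
A sketch of the IP swap, using documentation addresses as placeholders for the temporary IP and gateway:

ifconfig eth0 192.0.2.50 netmask 255.255.255.0
route add default gw 192.0.2.1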

On KVM Machine

  • virsh start newinstancename
  • connect via vnc
  • clear arp on your routers
  • dpkg-reconfigure grub-pc (sometimes grub is not recognized on the QEMU hard drive)
  • verify swap (double check /etc/fstab)

After grub is reinstalled, reboot just to ensure the machine comes up with no issues.
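
The first boot looks something like this; newinstancename and the address are placeholders, and the ARP flush shown uses the Linux arp tool as a stand-in for whatever your router vendor provides:

virsh start newinstancename
virsh vncdisplay newinstancename   # prints the VNC display, e.g. :0 -> host:5900
arp -d 192.0.2.10                  # on the router: clear the stale ARP entry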

Some Tools

KVM shell scripts we used when migrating a number of machines.

Kernel Config Notes for KVM

If you are building your own kernels, here are some notes.

Make sure the following are enabled in your KVM host kernel (a quick way to check is shown after the list)

  • CONFIG_HIGH_RES_TIMERS (high-resolution timers, required for most of the virtualization options)
  • CPU task/time accounting (if desired)
  • CONFIG_BRIDGE
  • CONFIG_VIRTIO_NET
  • CONFIG_VIRTIO_BLK
  • CONFIG_SCSI_VIRTIO
  • CONFIG_VIRTIO_CONSOLE
  • CONFIG_VIRTIO
  • CONFIG_VIRTIO_PCI
  • CONFIG_VIRTIO_BALLOON
  • CONFIG_VIRTIO_MMIO
  • CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES
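
A quick way to verify these against a running kernel’s shipped config (the path varies by distribution; some kernels expose /proc/config.gz instead):

grep -E 'HIGH_RES_TIMERS|CONFIG_BRIDGE|VIRTIO' /boot/config-$(uname -r)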

Make sure the following are enabled in your guest kernel

  • sym53c8xx
  • virtio-pci
  • PIIX_IDE

Remember that your guest kernel is running on the underlying hardware of your KVM host. The guest kernel should have its CPU type set based on the host’s CPU so it can take advantage of any hardware optimizations the CPU offers; one way to do that is shown below.
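
One way to hand the host CPU straight through to the guest is libvirt’s CPU mode setting; a sketch using virsh (host-model is the more conservative choice if guests migrate between dissimilar hosts):

virsh edit newinstancename
# then, inside the domain XML, set:
#   <cpu mode='host-passthrough'/>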

KVM guest extremely slow, Bug in Host Linux 3.2.2 kernel

Friday, March 22nd, 2013

Client upgraded a KVM instance today, rebooted it, and the machine is extremely slow.

The instance is a Debian system running 3.1.0-1-amd64, which appears to have a bug with timekeeping. This causes the machine to respond to packets very sporadically, which makes everything painfully slow. To make matters worse, the client is using a filesystem that is not supported on the host, so we can’t just mount the LVM partition and drop an older kernel onto the machine.

Transferring the 22MB kernel package stalls at 55%-66%, and using rsync --partial results in timeouts and never gets the file transferred. So, we’re stuck with moving the file over in pieces.

Enter the split command

split -b 1M linux-image-3.2.0-2-amd64_3.2.17-1_amd64.deb

which results in a bunch of files named xaa through xaw. Now we can transfer these 1MB at a time, which takes quite a while, but we get them moved over.
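
The transfer itself can be anything that moves one small file at a time; a sketch using scp to a placeholder address:

for f in xa*; do
  scp "$f" root@192.0.2.10:/root/parts/
done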

cat xa* > linux-image-3.2.0-2-amd64_3.2.17-1_amd64.deb
md5sum linux-image-3.2.0-2-amd64_3.2.17-1_amd64.deb

After verifying the checksum is correct:

dpkg -i linux-image-3.2.0-2-amd64_3.2.17-1_amd64.deb
reboot

However, this didn’t seem to fix the issue. Even a fresh installation didn’t get the network working properly, but I was able to attach the partition to another VM whose kernel supported ext4, which let me mount the ext4 filesystem and copy the data off. For now, I probably need to pull the other VMs off that machine and get to the root of the issue, as I suspect rebooting any of them will result in the same problem.

Networking on the bare metal works fine. Networking on each of the still-running VMs works, but on the VM I restarted and on the one I just created, networking does not work properly, even though both use the same scripts that had been used before.

As it turns out, the kernel issue is on the host. A new kernel was compiled, the instances were moved off, and the host was rebooted into the new kernel. Everything appears to be working fine, and the machine came right up on reboot. I’m not 100% happy with the kernel config, but things are working. It’s amazing that the bug hadn’t been hit in the 480 days the host was up; now that it has been identified and fixed, I was also able to enable some of the enhanced virtio drivers, which should speed things up a bit.

Make sure your KVM host kernel includes the loop device and every filesystem you expect a client might use. While we did have backups that were seven days old, there was still some data worth retrieving.
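
If you do need to get at a guest’s disk from the host, something like this works for an LVM-backed guest; a sketch, assuming the guest’s first partition holds the filesystem (kpartx maps the partitions inside the logical volume):

kpartx -av /dev/vg0/c1            # creates /dev/mapper/vg0-c1p1, ...
mount /dev/mapper/vg0-c1p1 /mnt
# ... retrieve data ...
umount /mnt
kpartx -dv /dev/vg0/c1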

Data Center Hardware Upgrades

Wednesday, July 1st, 2009

Many hosting companies operate on razor-thin margins, trying to capture as much market share as possible. Over the long haul, many $99/month dedicated servers can be absorbed into your existing bandwidth commitments without any incremental cost. Early on, one dedicated hosting provider dumped servers on the market for $99 with 700GB of transfer per month. At the time, they were undercutting other hosting providers, and it was deemed impossible that they could fulfill the hosting world’s needs. In reality, they knew their average client used 2.5GB of transfer per month, so what difference did it make if they handed the average client 700GB? By having an ‘enormous’ cap, the average consumer wasn’t scared off by overage charges, while the companies that knew they would exceed that cap were pushed elsewhere by the penalty rate structure. That hosting provider cherry-picked the clients that would make the most money, even though they were a budget provider.

Later, they offered upgrades to the hardware and bandwidth commitments, leaving many of those initial customers stuck on older hardware. There was no upgrade path from one machine to another except for the client moving the data themselves; the hosting company was only responsible for making sure the machine had power and network. However, there needs to be an upgrade path, and there needs to be enough margin in the equation to facilitate hardware and network upgrades over time.

At some point the useful life of a machine is exceeded and one is faced with upgrading it, or replacing components when it fails. Typically, CPU fans and hard drives fail first since they are moving parts. Other times, the client installs applications that require more CPU horsepower or runs into a situation where the machine needs more RAM. Depending on the age of the machine, those upgrade costs might exceed the cost of installing a new chassis.

With today’s hardware replacing yesterday’s, there is often quite a disparity between the computing power of the existing machine and its replacement. Virtualization can allow you to put in one powerful machine and replace multiple older machines, sometimes at a much lower TCO than maintaining the older machines.

That conversion isn’t without its issues, though. If you are measuring bandwidth, you can no longer use the SNMP statistics from your switch; you must use something that counts flows. Device naming becomes an issue because you need to identify both the virtual machine and the physical chassis it runs on in case there is a hardware issue. Clients don’t always understand virtualization and want a ‘dedicated’ server, even though their CPU core can be pinned to their exclusive use, and if they need extra capacity that is available on the chassis, they can utilize it.

Virtualizing a data center can also significantly decrease power consumption. An older Pentium 4 3.0GHz CPU can easily reside on a single core of a 2.4GHz Xeon with room to spare. Considering the older infrastructure, you could easily fit eight Pentium 4 3.0GHz machines with 2GB of RAM each on a single dual-CPU quad-core Xeon with 16GB of RAM. An 8:1 consolidation of these lower-utilization machines yields a considerable density increase, and the replacement might draw roughly one sixth the power of the previous eight machines, so you can still increase the cores per rack, which can increase profitability. With a mixed infrastructure, where you might be replacing single- and dual-core machines, you may lose some of the economies of scale, but the consolidation will still ultimately increase core density.

Virtualization options include Xen, Citrix, KVM, Virtuozzo, and VMware.

Intel has an interesting blog post about Optimizing Costs within the Data Center that talks about a 10:1 reduction in hardware when replacing single-core machines with virtualized instances.

In addition to the cost and power savings, they saw processor savings as well. If you’re selling dedicated servers, it might be difficult to give someone less than a whole processor when they were sold a single processor, but in a corporate environment, as long as the machine has enough CPU horsepower to do its job, more than one virtual machine can be assigned per core. For example, you can run ten virtual machines on an eight-core machine and probably still have excess CPU.

However, applications are taking more CPU time than they used to, so even if you can only manage a 4:1 consolidation ratio, you’re still ahead of the game.
