To begin with, a bit of background on the environment may be helpful…
The need to virtualize my HP C7000 blade environment came from a requirement to consolidate our comms room estate, retire legacy hardware and achieve as good an occupancy on the remaining hardware as possible. The eventual plan for the left-over kit could be anything from a test rig running Eucalyptus to a VMware ESXi environment running many virtual machines. For now we are keeping it simple with a basic ESXi environment.
Most of my existing hardware is running on G1 or G2 blade kit, and I wanted to be able to simply lift out the existing servers and place them in their new environment with as little disruption (and developer time spent rewriting legacy code) as possible, while I gave some thought to how I would rearrange my estate for maximum efficiency once all of the services running on it had been virtualized and made effectively hardware-independent (within reason).
Here are the steps I went through. I’ve also listed a couple of gotchas that cost me a bit of time, but which I’m glad I’ve noted down so I won’t be wasting time on them again!
I wanted to virtualize a system that was running on an HP BL460c (using its local storage, not SAN or storage blades) and make it run under ESXi. I thought that this would be a simple case of connecting the ESXi cold clone CD to the blade and doing a few mouse clicks.
This was how I proceeded, but initially I couldn’t figure out why the blade was unable to see my ESXi server, even though all the correct routing between networks existed. Then I remembered that I was running 2 x GbE2c network switches in the back of that c7000 chassis and had had to use VLAN tagging on all of the ports. This worked fine when the original blade OS was ‘up’, but the cold clone mini-OS had no knowledge of the VLAN tags, so its traffic never made it through.
(If someone has done a cold clone in an environment where they have needed to tag the packets that are being sent from the cold clone mini-OS then I would love to have some feedback on how you did it.)
In the end I moved the blade from its original chassis and placed it in a c7000 enclosure with VLAN tagging disabled, and this worked great.
So I used the blade ‘SUV’ cable to connect a CD drive, keyboard and VGA screen to the blade, booted from the VMware ESXi cold clone CD, and went through the steps of identifying the ESXi host that I wanted to receive the image that the cold clone CD produced from the blade.
I had a bit of an issue with the fact that parts of the configuration process for the cold clone environment seemed to require a mouse to click ‘Next’, as the Tab key only worked intermittently (this could be a hardware/keyboard issue on my side), but just for reference it’s fine to disconnect the keyboard from the SUV cable and connect a mouse (and vice-versa) as many times as necessary throughout the installation. Another approach, which is probably possible, is to present the cold clone media using HP Virtual Media, but again I went for what was the most straightforward approach at the time.
Once the cloning process was complete I had the virtual version of the blade available on my ESXi host, but at this point it would still not boot successfully, as it was still expecting to see the blade’s Smart Array controller and so was trying to find its boot and root filesystems on /dev/cciss/c0d0pXX.
So from this point forward the files that I needed to edit on the virtual machine image were /etc/fstab, /boot/grub/device.map and /boot/grub/menu.lst.
You need to go through these and replace any reference to /dev/cciss/c0d0 with /dev/sdaX and so on. As an example, here are some of my changes, which I applied by booting a live CD and mounting each partition (a sketch of the live CD session follows the examples):
[/boot/grub/device.map]
(hd0) /dev/cciss/c0d0 --> changes to --> (hd0) /dev/sda (note that there is no partition number specified)
[/boot/grub/menu.lst]
root (hd0,0)
kernel /vmlinuz-version root=/dev/cciss/c0d0p3 resume=/dev/cciss/c0d0p2
initrd /initrd-version
The above three lines changed to:
root (hd0,0)
kernel /vmlinuz-version root=/dev/sda3 resume=/dev/sda2
initrd /initrd-version
[/etc/fstab]
/dev/cciss/c0d0p3 / --> changes to --> /dev/sda3 /
/dev/cciss/c0d0p1 /boot --> changes to --> /dev/sda1 /boot
/dev/cciss/c0d0p2 swap --> changes to --> /dev/sda2 swap
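For reference, here is a minimal sketch of the live CD session mentioned above, assuming the same partition layout as in my examples (sda1 = /boot, sda2 = swap, sda3 = /); adjust the device names and paths to match your own system:

# From the live CD, mount the cloned VM's root and boot partitions
mount /dev/sda3 /mnt
mount /dev/sda1 /mnt/boot

# Swap the Smart Array device names for their sd equivalents
sed -i 's|/dev/cciss/c0d0p|/dev/sda|g' /mnt/etc/fstab /mnt/boot/grub/menu.lst
sed -i 's|/dev/cciss/c0d0|/dev/sda|g' /mnt/boot/grub/device.map

umount /mnt/boot /mnt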
Next, I grabbed the SLES install CD/DVD and booted as if I were going to do an installation. I proceeded through the normal install steps up to where you are asked whether you are doing a new install, an update or ‘other options’. From ‘other options’ you can run the System Repair Tool, which analyses the installed system and advises you of any missing kernel modules, or ones that are now defunct (amongst other things). My CD advised me to disable debugfs and usbfs. I did not select ‘verify packages’, only ‘check partitions’, ‘fstab entries’ and the final step of rewriting the boot loader if needed.
Once the newly imaged server had booted I needed to remove the old network interfaces: I deleted all the entries in /etc/udev/rules.d/30-persistent-net-names.rule and rebooted, which automatically wrote in the MAC address details of the new VMware ethernet adapter, and then re-added the network adapter in YaST (a rough sketch of this follows below).
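As a rough sketch of that udev clean-up, using the rules file path from my SLES system (the exact filename varies between releases, so check what is actually in /etc/udev/rules.d/ on yours):

# Empty the rules file that still maps interface names to the old blade NICs' MACs
> /etc/udev/rules.d/30-persistent-net-names.rule
reboot
# After the reboot the file is regenerated with the VMware adapter's MAC,
# and the interface can then be re-added in YaST (yast2 lan)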
After that I did a final reboot, ejected the install CD, installed VMware Tools on the guest, and I had my newly virtualized system operational again!
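For completeness, installing VMware Tools on a Linux guest of this vintage went roughly as follows (a sketch only; the exact tarball version will differ): in the vSphere client choose ‘Install VMware Tools’ to attach the Tools ISO, then on the guest:

mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
tar -xzf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp
cd /tmp/vmware-tools-distrib
./vmware-install.pl   # accept the defaults when prompted
umount /mnt/cdrom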
Matt Palmer 30-Aug-2011