Thoughts on how to create virtual compute nodes in the HPCC stack; specifically, trying to solve the need for many tiny compute nodes for the nanophysics applications: virtual compute nodes with a single-core CPU and 100 MB or less of memory, running a lean Scientific Linux operating system. Here is a good introduction: http://www.slideshare.net/gpaterno1/comparing-iaas-vmware-vs-openstack-vs-googles-ganeti-28016375
I'll select Ganeti to start with as it appears the simplest to set up. I have no need for services like fail-over or migration, just the ability to rapidly create many tiny nodes from a template. It also appears that Ganeti clusters can be embedded in OpenStack later.
On hold: the Xen tools (virt-manager, virt-clone, virsh) are very nice, so there is no need for Ganeti up front.
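For reference, a few illustrative commands these tools provide (usage sketch only, not part of the setup steps below; vmdemo is the base guest used later on):

virsh list --all         # show running and stopped guests on this Dom0
virsh dominfo vmdemo     # vcpus, memory, and state for one guest
virsh start vmdemo       # boot a guest
virsh shutdown vmdemo    # cleanly stop a guest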
yum install -y --nogpgcheck xen kernel-xen \
    virt-manager libvirt libvirt-python python-virtinst

chkconfig xend on

# disable selinux in /etc/selinux/config

# add some stuff to the xen kernel grub entry:
title CentOS (2.6.18-371.6.1.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-371.6.1.el5 dom0_mem=2048M,max:2048M dom0_max_vcpus=1 dom0_vcpus_pin allow_unsafe loglvl=all guestloglvl=all
        module /vmlinuz-2.6.18-371.6.1.el5xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        module /initrd-2.6.18-371.6.1.el5xen.img

# make the hostname fully qualified (in case we go to Ganeti later)
# edit some settings in /etc/xen/xend-config, consult links below
# turn off some services with chkconfig:
#   cups, iptables, ip6tables, autofs, crond, sysstat, bluetooth,
#   firstboot, ipmi, iscsi, iscsid

reboot
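As a sketch of those last two notes (verify against the links mentioned above before copying anything): the usual bridged-networking settings in /etc/xen/xend-config.sxp are (network-script network-bridge) and (vif-script vif-bridge), and the listed services can be switched off with a small loop:

# turn off the services listed above so Dom0 stays lean
for svc in cups iptables ip6tables autofs crond sysstat bluetooth \
           firstboot ipmi iscsi iscsid; do
    chkconfig $svc off
done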
Then launch virt-manager to look at your Dom0.
Build a base guest to clone from and install what you need (like the Lava scheduler files), then shut it down.
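If you prefer to script the base build instead of doing it through virt-manager, something along these lines should work (a sketch; the RAM, disk size, and install URL are assumptions to adjust for your site):

# create a small paravirtualized base guest to clone from
virt-install --paravirt --name vmdemo --ram 128 --vcpus 1 \
    --file /var/lib/xen/images/vmdemo.img --file-size 4 --nographics \
    --location http://mirror.example.org/scientific/5x/x86_64/os/

# once the scheduler files are installed in the guest, shut it down
virsh shutdown vmdemo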
Now build a script that clones the base image and post-preps each new clone.
#!/bin/bash
# automate this
virt-clone --original vmdemo --name bvm7 --file /var/lib/xen/images/bvm7.img
# wait 3 mins
sleep 180
virsh start bvm7
# wait 2 mins
sleep 120
# clobber the vmdemo rc.local that sets up a static IP you know (rather than DHCP) for automation
cd /root
scp rc.local vmdemo:/etc
ssh vmdemo "cat /etc/sysconfig/network | sed s/vmdemo/bvm7/g > /tmp/network"
ssh vmdemo "cp /tmp/network /etc/sysconfig/"
ssh vmdemo "cat /etc/sysconfig/network-scripts/ifcfg-eth0 | sed s/192.168.150.0/192.168.150.7/g > /tmp/ifcfg-eth0"
ssh vmdemo "cp /tmp/ifcfg-eth0 /etc/sysconfig/network-scripts/"
ssh vmdemo reboot
# wait 2 mins
sleep 120
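Once the single-clone version works, a thin loop around it can stamp out a batch of tiny nodes. A sketch, assuming the bvmN name maps to the address 192.168.150.N as above (the count of eight is just an example):

#!/bin/bash
# sketch: clone the base image N times; each clone boots on vmdemo's known
# static address, is renamed and readdressed, then reboots as bvmN
for n in $(seq 1 8); do
    virt-clone --original vmdemo --name bvm$n --file /var/lib/xen/images/bvm$n.img
    virsh start bvm$n
    sleep 120   # let the clone come up on vmdemo's address
    scp /root/rc.local vmdemo:/etc
    ssh vmdemo "sed s/vmdemo/bvm$n/g /etc/sysconfig/network > /tmp/network; cp /tmp/network /etc/sysconfig/"
    ssh vmdemo "sed s/192.168.150.0/192.168.150.$n/g /etc/sysconfig/network-scripts/ifcfg-eth0 > /tmp/ifcfg-eth0; cp /tmp/ifcfg-eth0 /etc/sysconfig/network-scripts/"
    ssh vmdemo reboot
    sleep 120   # wait for the reboot before starting the next clone
done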
3d Lennard-Jones melt: 10,000 steps with 32,000 atoms

| Queue, node, HT | Jobs per node, loop time (s) | Comment |
|---|---|---|
| hp12, n15, no-HT | 01 jobs, 481 | |
| hp12, n15, no-HT | 07 jobs, 482 | |
| hp12, n2, yes-HT | 01 jobs, 470 | |
| hp12, n2, yes-HT | 16 jobs, 804 | known penalty |
| bss24, many, no-HT | 01 jobs, 844 | equivalent to hp12, yes-HT |
| bss24vm, bvm1, VM | 01 jobs, 776 | 1 vcpu, 100 MB RAM |
| bss24vm, bvm1, VM | 02 jobs, 850 | 2 vcpus, 100 MB RAM |
| bss24vm, bvm1, VM | 04 jobs, 1735 | 4 vcpus, 512 MB RAM |
| bss24vm, bvm2, VM | 08 jobs, 3582 | 8 vcpus, 1024 MB RAM |
| bss24vm, bvm1, VM | 32 jobs, XXXX | 32 vcpus, 1024 MB RAM |
| bss24vm, bvm1-4, VM | 4x01 jobs, 1818 | 4x1 vcpu, 4x128 MB RAM |
| bss24vm, bvm1-8, VM | 8x01 jobs, 3745 | 8x1 vcpu, 8x128 MB RAM |
| optimal physical to virtual CPU ratio for best performance according to Xen | | |
| bss24, many, no-HT | 02 jobs, 826 | equivalent to hp12, yes-HT |
| bss24vm, bvm2-3, VM | 2x02 jobs, 1708 | 2x2 vcpus, 2x128 MB RAM |
| bss24vm, bvm2-5, VM | 4x02 jobs, 3497 | 4x2 vcpus, 4x128 MB RAM, optimal physical-to-virtual CPU ratio |
| bss24vm, bvm1-8, VM | 8x02 jobs, XXXX | 8x2 vcpus, 8x128 MB RAM, optimal physical-to-virtual CPU ratio |
| this is odd | | |
| bss24vm, bvm1, VM | 03 jobs, 1273 | 3 vcpus, 256 MB RAM, make it 128 |
| bss24vm, bvm1, VM | 05 jobs, 2108 | 5 vcpus, 256 MB RAM |
| bss24vm, bvm1, VM | 31 jobs, XXXX | 31 vcpus, 21504 MB RAM |