\\
**[[cluster:0|Back]]**
==== LXC Linux Containers ====
Ok, virtualization again. Trying this approach on a Dell PowerEdge 2950.
* https://linuxcontainers.org/
* http://docs.oracle.com/cd/E37670_01/E37355/html/ol_config_os_containers.html
* http://wiki.centos.org/HowTos/LXC-on-CentOS6
Starting with the latter.
When you get to the SELinux policy step, create the *.te file and then compile and install it as a module:
[root@petaltail ~]# vi lxc.te
[root@petaltail ~]# semodule -l | grep lxc
[root@petaltail ~]# checkmodule -M -m -o lxc.mod lxc.te
checkmodule: loading policy configuration from lxc.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 10) to lxc.mod
[root@petaltail ~]# semodule_package -o lxc.pp -m lxc.mod
[root@petaltail ~]# semodule -i lxc.pp
[root@petaltail ~]# semodule -l | grep ^lx
lxc 1.0
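As an aside, the same module can usually be generated straight from the audit log with ''audit2allow'' instead of hand-writing the *.te file. A sketch; the grep filter is just an example, adjust it to your own denials:
grep lxc /var/log/audit/audit.log | audit2allow -M lxc
semodule -i lxc.pp
''audit2allow -M lxc'' writes both lxc.te and a compiled lxc.pp in the current directory.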
Ah, I'm simply going to move to CentOS 7 because LXC is fully integrated there
* [[http://www.theregister.co.uk/2014/07/07/centos_7_gm/]]
Ah, no. Took a look at v7 and it's very different, with the over-engineered approach to system control (systemd). Will learn that later; back to v6.5. I also want to preserve the ability to run different OS flavors, which is not possible under LXC containers since they share the host kernel.
==== KVM ====
KVM is hardware-level virtualization, as opposed to my previous meanderings into the world of software-level virtualization (Xen, [[cluster:127|Virtual HPCC services]]). There is basically a "dom0"-like domain manager running on the host, keeping track of virtual machines that run from disk images written onto a block device.
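A quick sanity check that the hardware and kernel are ready for KVM (standard commands, nothing specific to this box):
egrep -c '(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm
A non-zero count from the first command means the CPU exposes the VT-x/AMD-V extensions; the second should list kvm plus kvm_intel or kvm_amd.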
* Followed the primary setup on this page
* http://www.howtoforge.com/virtualization-with-kvm-on-a-centos-6.4-server
* Once ''virt-manager'' worked, we followed this doc a bit, setting up bridges etc
* http://linux.dell.com/files/whitepapers/KVM_Virtualization_in_RHEL_6_made_easy.pdf
* for future reference, more details
* http://linux.dell.com/files/whitepapers/KVM_Virtualization_in_RHEL_6_Made_Easy_Part2.pdf
* But once I got to the cloning part, I found this to be super easy and scriptable
* http://rwmj.wordpress.com/tag/virt-clone/
First I create ''v1'', my "base" clone, with virt-install, build it the way I want it, and customize it manually (/etc/fstab, openlava scheduler, /etc/passwd, /etc/shadow, /etc/group, /etc/hosts, etc). I added a second bridge (''br1'' is public/internet; the previously created ''br0'' is the private network for the scheduler). That was done via the virt-manager GUI (View VM Details of the vm, then select Add Hardware). ''br1'' allows me to run yum update before I clone. Power this clone up, add it to a test queue, and submit some jobs to make sure it all works. Then clone. (A sketch of the host-side bridge definitions follows the virt-install example below.)
# create v1, my "base" clone: 8 gb hdd, 1 gb ram, centos 6.5, connect to ethernet bridge ''br0''
virt-install --connect=qemu:///system -n v1 -r 1024 --vcpus=1 \
--disk path=/var/lib/libvirt/images/v1.img,size=8 \
-c /var/lib/libvirt/images/CentOS-6.5-x86_64-bin-DVD1.iso \
--vnc --noautoconsole --os-type linux --os-variant rhel6 --accelerate \
--network=bridge:br0 --hvm
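As mentioned above, ''br0'' and ''br1'' are ordinary Linux bridges on the host. A minimal sketch of what the CentOS 6 ifcfg files can look like (device names and the private address below are assumptions, adjust to your own network):
# /etc/sysconfig/network-scripts/ifcfg-br0 (private scheduler network, example address)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
# /etc/sysconfig/network-scripts/ifcfg-eth0 (physical NIC enslaved to br0)
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0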
Here are the steps for cloning v1 to v5. You'll need ''guestfish'', part of the libguestfs tools, which lets you edit content inside the block-level image. Very nifty.
* yum install libguestfs-tools-c
Next we'll create the v5.img block device, dump the v1 config into v5.xml, and then edit that file. For the UUID and MAC address we change the last two characters to '05' (to make them unique), the name becomes v5 instead of v1, and we similarly adjust the block device location/filename. Then we define this as domain v5.
Launch ''guestfish'' and make the same edits to the MAC address in the files listed below. Then boot the vm.
# cd /var/lib/libvirt/images
# dd if=v1.img of=v5.img bs=1M
# virsh dumpxml v1 > /tmp/v5.xml
# vi /tmp/v5.xml
(change name, uuid, file location, mac address(es))
# virsh define /tmp/v5.xml
Domain v5 defined from /tmp/v5.xml
# guestfish -i -d v5
Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.
Operating system: CentOS release 6.5 (Final)
/dev/vg_v1/lv_root mounted on /
/dev/sda1 mounted on /boot
> edit /etc/sysconfig/network
(change hostname)
> edit /etc/sysconfig/network-scripts/ifcfg-eth0
> edit /etc/sysconfig/network-scripts/ifcfg-eth1
(for nics comment out any UUID lines, change hardware address, change IP)
> edit /etc/udev/rules.d/70-persistent-net.rules
(change hardware address)
> exit
# virsh start v5
Domain v5 started
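The rwmj link above covers ''virt-clone'', which makes the copy / new UUID / new MAC steps scriptable in one go. A sketch, assuming v2 through v5 as the target names (v1 must be shut down, and the guest-side guestfish edits are still needed afterwards):
for i in 2 3 4 5; do
  virt-clone --original v1 --name v$i --file /var/lib/libvirt/images/v$i.img
done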
==== Tests ====
^ Melt Lammps LJ problem, 10000 steps with 32000 atoms: Loop Times (secs) ^^^^^^^^^
| 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | nr of jobs |
| 556 | 560 | 588 | 625 | 760 | 936 | 1122 | 1300 | measured loop time |
| (linear) |||| 781 | 938 | 1094 | 1250 | extrapolated from 8 jobs |
| (n35: load 32) |||| 726 | | | | 32-core reference node |
| (hyperthreading) |||| | | | ???? | 16 logical cores |
16 VMs running in queue ''testvm'' on dual quad core PE2950 (8 cores, 16 GB ram)
* First ran on n35 (a 32-core node under hyperthreading with full load) with an average loop time of 726 secs.
* As I submit jobs to the VMs they perform well up to 8 jobs (one job per core; dual quad core node).
* That is with the KVM overhead included
* Assuming a linear penalty for overcommitting, 16 jobs are expected to take loop times of 1250 secs (see the arithmetic sketch after this list)
* However, beyond 10 jobs we start surpassing that penalty threshold
* And then I was going to turn on hyperthreading, creating 16 logical cores
* To my dismay this chipset does not support that, bummer!
* Was expecting to gain some performance back ...
* Maybe try on newer hardware when idle ...
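A quick sketch of that linear overcommit estimate, using the 8-job loop time of 625 secs from the table:
# T(n) = T(8) * n/8 for n > 8 jobs, with T(8) = 625 secs
for n in 10 12 14 16; do
  awk -v n=$n 'BEGIN { printf "%2d jobs: %.0f secs\n", n, 625 * n / 8 }'
done
That reproduces the 781, 938, 1094 and 1250 secs in the "(linear)" row of the table above.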
But we learned the Xen and KVM setups, and:
- We can now support a heterogeneous environment if we wanted to (SUSE, Scientific Linux, Windows (eh, what?))
- We can use a KVM environment up to the number of cores on a box without penalty
- And change the mix of nodes if needed (more/fewer cores per node, memory size, etc)
- Still, not an answer for my "high core count/low memory footprint" problem, see [[cluster:133|high core count/low memory footprint]]
\\
**[[cluster:0|Back]]**