LXC Linux Containers

Ok, virtualization again. Trying this approach on a Dell PowerEdge 2950.

Starting with the SELinux policy piece.

Once you have the SELinux policy rules, create the *.te file, then build and install the module (transcript below).
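
For reference, the .te file is plain text with a simple shape; a minimal sketch below. The require/allow lines are made-up placeholders, the real rules come from whatever policy you obtained:

module lxc 1.0;

require {
    type unconfined_t;
    class capability { sys_admin };
}

# placeholder rule only; substitute the actual rules from your policy
allow unconfined_t self:capability { sys_admin };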

[root@petaltail ~]# vi lxc.te
[root@petaltail ~]# semodule -l | grep lxc
[root@petaltail ~]# checkmodule -M -m -o lxc.mod lxc.te
checkmodule:  loading policy configuration from lxc.te
checkmodule:  policy configuration loaded
checkmodule:  writing binary representation (version 10) to lxc.mod
[root@petaltail ~]# semodule_package -o lxc.pp -m lxc.mod
[root@petaltail ~]# semodule -i lxc.pp
[root@petaltail ~]# semodule -l | grep ^lx
lxc     1.0
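
For the record, such a module can also be generated straight from the AVC denials with audit2allow; a sketch, assuming the denials landed in the default audit log:

# grep lxc /var/log/audit/audit.log | audit2allow -M lxc
# semodule -i lxc.pp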

Ah, I'm simply going to move to CentOS 7, because LXC is fully integrated there.

Ah, no. Took a look at v7 and it's very different, with the over-engineered systemd approach to system control. Will learn it later; back to v6.5. I also want to preserve the ability to run different OS flavors, which is not possible under LXC containers (they share the host kernel).

KVM

KVM is hardware-level virtualization, as opposed to my previous meanderings into the world of software-level virtualization (Xen Virtual HPCC services). There is basically a “dom0”-style domain manager running that keeps track of the virtual machines, which boot from disk images written onto a block device.
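
Since KVM relies on the hardware extensions, a quick sanity check on the host is worth doing; a sketch using the standard Intel/AMD CPU flags and module names:

# egrep -c '(vmx|svm)' /proc/cpuinfo
(nonzero means the VT-x/AMD-V extensions are present)
# lsmod | grep kvm
(expect kvm plus kvm_intel or kvm_amd)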

First I create my base VM v1 with virt-install, then build it the way I want it and customize it manually (/etc/fstab, openlava scheduler, /etc/passwd, /etc/shadow, /etc/group, /etc/hosts, etc.). Added a second bridge (br1 is public/internet; the previously created br0 is the private network for the scheduler). That was done via the virt-manager GUI (View VM Details of the VM, then select Add Hardware). br1 allows me to run yum update before I clone. Power this VM up, add it to a test queue, and submit some jobs to make sure it all works. Then clone.

# create v1, my "base" clone: 8 GB HDD, 1 GB RAM, CentOS 6.5, connect to ethernet bridge br0

virt-install --connect=qemu:///system -n v1 -r 1024 --vcpus=1 \
--disk path=/var/lib/libvirt/images/v1.img,size=8 \
-c /var/lib/libvirt/images/CentOS-6.5-x86_64-bin-DVD1.iso \
--vnc --noautoconsole --os-type linux --os-variant rhel6 --accelerate \
--network=bridge:br0 --hvm
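
The host-side bridge definitions are not shown above; on CentOS 6 a bridge is just a pair of ifcfg files. A minimal sketch of what br1 could look like (device names and the DHCP choice are assumptions, adjust to taste):

# /etc/sysconfig/network-scripts/ifcfg-br1
DEVICE=br1
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BRIDGE=br1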

Here are the steps for cloning v1 to v5. You'll need guestfish, part of the libguestfs tools, which allows you to edit content inside the block-level image file. Very nifty.

  • yum install libguestfs-tools-c

Next we'll create the v5.img disk image, dump the v1 config into v5.xml, and then edit that file. For the UUID and the MAC address we change the last two characters to '05' to make them unique. The name becomes v5 instead of v1, and we similarly adjust the block device location/filename. Then we define this as domain v5.
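
The relevant elements in /tmp/v5.xml end up looking something like the fragment below; the UUID and MAC shown are placeholders, only the trailing '05' trick matters (52:54:00 is the usual KVM MAC prefix):

<name>v5</name>
<uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx05</uuid>
...
<source file='/var/lib/libvirt/images/v5.img'/>
...
<mac address='52:54:00:xx:xx:05'/>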

Launch guestfish and make the same MAC address edits in the files listed below. Then boot the VM.

# cd /var/lib/libvirt/images
# dd if=v1.img of=v5.img bs=1M

# virsh dumpxml v1 > /tmp/v5.xml
# vi /tmp/v5.xml
(change name, uuid, file location, mac address(es))

# virsh define  /tmp/v5.xml
Domain v5 defined from /tmp/v5.xml

# guestfish -i -d v5

Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.

Operating system: CentOS release 6.5 (Final)
/dev/vg_v1/lv_root mounted on /
/dev/sda1 mounted on /boot

><fs> edit /etc/sysconfig/network
(change hostname)
><fs> edit /etc/sysconfig/network-scripts/ifcfg-eth0
><fs> edit /etc/sysconfig/network-scripts/ifcfg-eth1
(for nics comment out any UUID lines, change hardware address, change IP)
><fs> edit /etc/udev/rules.d/70-persistent-net.rules
(change hardware address)
><fs> exit

# virsh start v5
Domain v5 started
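
For reference, the ifcfg edits made inside the guest boil down to something like this sketch (all addresses are placeholders):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
#UUID=...            (comment out any UUID line)
HWADDR=52:54:00:xx:xx:05
IPADDR=...           (the clone's new address)
ONBOOT=yes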

Tests

Melt LAMMPS LJ problem, 10,000 steps with 32,000 atoms: loop times (secs)

nr of jobs            2     4     6     8     10     12     14     16
loop time           556   560   588   625    760    936   1122   1300
(linear)              -     -     -     -    781    938   1094   1250
(n35: load 32)      726   (average on the 32-core comparison node)
(hyperthreading)   ????   (would need 16 logical cores)

16 VMs running in queue testvm on a dual quad-core PE2950 (8 cores, 16 GB RAM)

  • First ran on n35 (a 32-core node under hyperthreading, with full load), with an average loop time of 726 secs.
  • As I submit jobs to the VMs, they perform well up to 8 jobs (one job per core on this dual quad-core node).
    • That is including the KVM overhead.
  • Assuming a linear penalty for overcommitting (expected loop time = 625 secs at 8 jobs × nr of jobs / 8), 16 jobs should take a loop time of about 1250 secs.
    • However, beyond 10 jobs we start surpassing even that linear penalty.
  • And then I was going to turn on hyperthreading, creating 16 logical cores.
    • To my dismay, this chipset does not support it. Bummer!
    • Was expecting to gain some performance back …
    • Maybe try on newer hardware when idle …

But we learned the Xen and KVM setups along the way, and:

  1. We can now support a heterogeneous environment if we want (SuSE, Scientific Linux, Windows (eh, what?)).
  2. We can run a KVM environment with up to as many VMs as there are cores on a box without penalty.
  3. And we can change the mix of nodes if needed (more/fewer cores per node, different memory sizes, etc.).
  4. Still, this is not an answer for my “high core count/low memory footprint” problem; see the high core count/low memory footprint page.

