Ah, I'm simply going to CentOS-7
  * [[http://...]]
Ah, no. Took a look at v7 and it's very different, with a lot of over-engineering in the system control department.

==== KVM ====

KVM is hardware level virtualization, as opposed to my previous meanderings into the world of software level virtualization (Xen, [[cluster:...]]).

  * Followed the primary setup on this page (in short: see the sketch after this list)
    * http://...
  * Once ''...''
    * http://...
  * For future reference, more details
    * http://...
  * But once I got to the cloning part, I found this to be super easy and scriptable
    * http://...
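
In short, on CentOS 6 that host setup boils down to something like this sketch (a sketch only, not the linked page's exact steps; package names are from the stock repos):

<code>

# check for hardware virtualization support (Intel VT-x / AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo

# install the KVM/libvirt stack and start the daemon
yum install qemu-kvm libvirt virt-install bridge-utils
service libvirtd start

# verify the hypervisor is reachable
virsh -c qemu:///system version

</code>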

First I created v1 with virt-install, then built it out the way I wanted it, and customized it manually (/...).

<code>

# create v1, my "golden image" (the name, ram, vcpus, disk size, iso path
# and bridge below are example values; adjust to the environment)

virt-install --connect=qemu:///system --name=v1 --ram=2048 --vcpus=2 \
  --disk path=/var/lib/libvirt/images/v1.img,size=20 \
  -c /var/lib/libvirt/images/CentOS-6.5-x86_64-bin-DVD1.iso \
  --vnc --noautoconsole --os-type linux --os-variant rhel6 --accelerate \
  --network=bridge:br0

</code>
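
With ''--vnc --noautoconsole'' the installer runs headless; a quick way to attach to it and finish the OS install (assuming a VNC viewer is available on the host):

<code>

# ask libvirt which VNC display the new domain listens on, then connect
virsh vncdisplay v1
vncviewer :0      # substitute whatever display the previous command printed

</code>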

Here are the steps for cloning v1 to v5. You'll need ''guestfish'', which ships with the libguestfs tools:

  * yum install libguestfs-tools-c

Next we'll create the v5.img block device, dump the v1 config into v5.xml, and then edit that file: the name, the disk file location, and the UUID and MAC address(es). For the UUID and MAC we simply change the last two characters to '05'.

Launch ''guestfish'' against the new domain and fix up the network settings inside the image:

<code>

# cd /var/lib/libvirt/images      (example image pool location)
# dd if=v1.img of=v5.img bs=1M

# virsh dumpxml v1 > /tmp/v5.xml
# vi /tmp/v5.xml
(change name, uuid, file location, mac address(es))

# virsh define /tmp/v5.xml
Domain v5 defined from /tmp/v5.xml

# guestfish -i -d v5

Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.

Operating system: CentOS release 6.5 (Final)
/dev/... mounted on /
/dev/sda1 mounted on /boot

><fs> edit /etc/sysconfig/network
(change hostname)
><fs> edit /etc/sysconfig/network-scripts/ifcfg-eth0
><fs> edit /etc/sysconfig/network-scripts/ifcfg-eth1
(for nics comment out any UUID lines, change hardware address, change IP)
><fs> edit /etc/udev/rules.d/70-persistent-net.rules
(change hardware address)
><fs> exit

# virsh start v5
Domain v5 started

</code>
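
Since a clone is just a disk copy plus a handful of edits, the whole thing scripts nicely. A minimal sketch (the image pool path is assumed; rather than hand-editing the last two characters, it drops the UUID and MAC lines so libvirt generates fresh ones, leaving only the in-guest guestfish edits above to do by hand):

<code>

#!/bin/bash
# clone.sh -- clone golden image v1 to a new domain, e.g.: ./clone.sh v5
DST=$1
IMGDIR=/var/lib/libvirt/images    # assumed image pool location

dd if=$IMGDIR/v1.img of=$IMGDIR/$DST.img bs=1M
virsh dumpxml v1 > /tmp/$DST.xml
sed -i -e "s|<name>v1</name>|<name>$DST</name>|" \
       -e "s|v1.img|$DST.img|" \
       -e '/<uuid>/d' \
       -e '/<mac address/d' /tmp/$DST.xml
# with uuid and mac removed, libvirt assigns new unique ones on define
virsh define /tmp/$DST.xml

</code>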

==== Tests ====

^ Melt Lammps LJ problem, 10000 steps with 32000 atoms: Loop Times (secs) ^^^^^^^^^
| 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | nr of jobs |
| 556 | 560 | 588 | 625 | 760 | 936 | 1122 | | loop times in VMs |
| | | | | | | | 1250 | (linear estimate) |
| 726 |||||||| (n35: load 32, average) |
| not supported by this chipset |||||||| (hyperthreading) |


16 VMs running in queue ''...'' on a dual quad core node, so 8 physical cores hosting 16 single-core VMs; one such melt job is sketched after the list below.

  * First ran on n35 (32 core node under hyperthreading, with full load) with an average loop time of 726 secs.
  * As I submit jobs to the VMs they perform well up to 8 jobs (one job per core; dual quad core node).
    * That is, even with the KVM overhead.
  * Assuming a linear penalty for over-committing, 16 jobs is expected to take Loop times of 1250 secs.
    * However, after 10 jobs we're already surpassing that penalty threshold.
  * And then I was going to turn on hyperthreading, creating 16 logical cores.
    * To my dismay this chipset does not support that, bummer!
    * Was expecting to gain some performance back ...
    * Maybe try on newer hardware when idle ...
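
For reference, each of these jobs is essentially the stock LAMMPS melt example scaled up: an fcc box of 20x20x20 unit cells gives 4*20^3 = 32000 atoms, run for 10000 steps. A sketch of one job as it might run inside a VM (the binary name and the exact input file are assumptions, reconstructed from the standard examples/melt input):

<code>

#!/bin/bash
# one melt job inside a VM (reconstructed sketch, not the original script)
cat > in.melt <<'EOF'
# 3d Lennard-Jones melt: 4 atoms/unit cell x 20^3 cells = 32000 atoms
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 20 0 20 0 20
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 3.0 87287 loop geom
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    every 20 delay 0 check no
fix             1 all nve
run             10000
EOF

lmp_serial < in.melt > log.melt
grep 'Loop time' log.melt     # the loop time reported in the table above

</code>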

But we learned Xen and KVM setups, and:

  - We can now support a heterogeneous environment if we wanted to (Suse, Scientific Linux, Windows (eh, what?))
  - Use a KVM environment up to the number of cores on a box without penalty
  - And change the mix of nodes if needed (more/fewer cores per node, memory size, etc)
  - Still, not an answer for my "high core count/low memory footprint" problem
\\