Differences

This shows you the differences between two versions of the page.

cluster:132 [2014/08/07 14:08]
hmeij
cluster:132 [2014/08/11 10:08] (current)
hmeij [Tests]
Line 30: Line 30:
  
  
-Ah, I'm simply going to CentOS7 because LXC is fully integrated
+Ah, I'm simply going to CentOS-7 because LXC is fully integrated
   * [[http://www.theregister.co.uk/2014/07/07/centos_7_gm/]]
  
Line 36: Line 36:
  
 ==== KVM ====
 +
 +KVM is hardware-level virtualization, as opposed to my previous meanderings into the world of software-level virtualization (Xen [[cluster:127|Virtual HPCC services]]).  There is basically a "dom0"-style domain manager running that keeps track of the virtual machines, each running from a disk image written onto a host block device.
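 +
 +For orientation, a couple of stock ''virsh'' commands (the libvirt client used throughout below) show what the domain manager is tracking; a minimal sketch, nothing site-specific:
 +
 +<code>
 +# list all defined domains, running or not
 +virsh list --all
 +
 +# cpu/memory details of the hypervisor host
 +virsh nodeinfo
 +</code>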
  
   * Followed the primary setup on this page
Line 46: Line 48:
     * http://rwmj.wordpress.com/tag/virt-clone/
  
-First I build my v1 clone the way I wanted it from ISO and then customized it manually. Added second bridge (br1public) later via virt-manager GUI (view the details of vm and then select Add Hardware). Power this clone up, add to a test queue, and sbumit some jobs to make it all works.  Then clone.
+First I create my v1 clone with virt-install, build it the way I want it, and then customize it manually (/etc/fstab, openlava scheduler, /etc/passwd, /etc/shadow, /etc/group, /etc/hosts, etc.). Later I added a second bridge (''br1'' is public/internet; the previously created ''br0'' is the private network for the scheduler) via the virt-manager GUI (view the details of the vm, then select Add Hardware). ''br1'' allows me to run yum update before I clone. Power this clone up, add it to a test queue, and submit some jobs to make sure it all works.  Then clone.
  
 <code>
 +
 +# create v1, my "base" clone: 8 GB hdd, 1 GB ram, CentOS 6.5, connect to ethernet bridge br0
  
 virt-install --connect=qemu:///system -n v1 -r 1024 --vcpus=1 \
Line 58: Line 62:
 </code>
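 +
 +The continuation lines of this ''virt-install'' command are elided in the diff view. Purely as an illustration (the disk path, ISO location, and graphics flag below are assumptions, not the original flags), a complete invocation matching the comment above might look like:
 +
 +<code>
 +# sketch only: disk path and ISO location are assumptions
 +virt-install --connect=qemu:///system -n v1 -r 1024 --vcpus=1 \
 +  --disk path=/var/lib/libvirt/images/v1.img,size=8 \
 +  --network bridge=br0 \
 +  --cdrom /isos/CentOS-6.5-x86_64-minimal.iso \
 +  --graphics vnc
 +</code>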
  
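 +Once a clone is up, "submit some jobs" with the openlava scheduler means a ''bsub'' call; a minimal sketch where the queue name comes from the Tests section below and the job script is hypothetical:
 +
 +<code>
 +# submit a hypothetical job script to the test queue
 +bsub -q testvm -o /tmp/test.out ./lammps_melt.sh
 +</code>
 +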
-Here are the steps for cloning vm v1 to v5. You'll need ''guestfish' part of the libguestfs programs
+Here are the steps for cloning v1 to v5. You'll need ''guestfish'', part of the libguestfs tools, which allows you to edit content inside the block-level file.  Very nifty.
  
   * yum install libguestfs-tools-c
  
-Next we'll create the v5.img block device, dump the v1 config into v5.xml and then edit that file. UUID and Mac Address we'll edit and the last 2 characters to '05'. Name becomes v5 from v1 and we'll similarly adjust the block device location-filename. Then we define this v5 vm.
+Next we'll create the v5.img block device, dump the v1 config into v5.xml, and then edit that file. For the UUID and MAC address we change the last 2 characters to '05' (to make them unique). The name becomes v5 instead of v1, and we similarly adjust the block device location/filename. Then we define this as domain v5.
  
-Launch ''guesfish' and make the same edits to Mac Address in files listed below. The boot the vm.
+Launch ''guestfish'' and make the same edits to the MAC address in the files listed below. Then boot the vm.
  
 <code>
Line 91: Line 95:
 ><fs> edit /etc/sysconfig/network-scripts/ifcfg-eth0
 ><fs> edit /etc/sysconfig/network-scripts/ifcfg-eth1
-(for nics comment out any UUID lines, change hardmare address, change IP)
+(for nics comment out any UUID lines, change hardware address, change IP)
 ><fs> edit /etc/udev/rules.d/70-persistent-net.rules
 (change hardware address)
Line 100: Line 104:
  
 </code>
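 +
 +The create/dump/define commands at the top of this code block are not shown in the diff view. A minimal sketch of those steps as described in the text above, assuming the default libvirt image directory (adjust paths to your setup):
 +
 +<code>
 +# create the new block device for the clone
 +qemu-img create -f raw /var/lib/libvirt/images/v5.img 8G
 +
 +# dump the v1 config and edit it: name v1 -> v5, last 2 chars
 +# of uuid and mac address -> 05, disk <source file=...> -> v5.img
 +virsh dumpxml v1 > v5.xml
 +virsh define v5.xml
 +
 +# then launch guestfish against the new domain for the edits above
 +guestfish -d v5 -i
 +</code>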
 +
 +==== Tests ====
 +
 +
 +
 +^ Melt Lammps LJ problem, 10000 steps with 32000 atoms: Loop Times (secs) ^^^^^^^^^
 +|  2  |  4  |  6  |  8  |  10  |  12  |  14  |  16  |  nr of jobs  |
 +|  556  |  560  |  588  |  625  |  760  |  936  |  1122  |  1300  |    |
 +|  (linear)  ||||  781  |  938  |  1094  |  1250  |    |
 +|  (n35:load 32)  ||||  726  |    |    |    |    |
 +|  (hyperthreading)  ||||    |    |    |  ????  |  16 logical cores  |
 +
 +
 +16 VMs running in queue ''testvm'' on a dual quad core PE2950 (8 cores, 16 GB ram)
 +
 +  * First ran on n35 (32 core node under hyperthreading with full load) with an average loop time of 726 secs.
 +  * As I submit jobs to the VMs they perform well up to 8 jobs (one job per core; dual quad core node).
 +    * That is including the KVM overhead.
 +  * Assuming a linear penalty for over-committing, 16 jobs are expected to show loop times of 1250 secs (see the sketch after this list).
 +    * However, after 10 jobs we're already surpassing that penalty threshold.
 +  * And then I was going to turn on Hyperthreading, creating 16 logical cores.
 +      * To my dismay this chipset does not support that, bummer!
 +      * Was expecting to gain some performance back ...
 +      * Maybe try on newer hardware when idle ...
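 +
 +The (linear) row in the table above is just a straight extrapolation of the 8-job loop time, expected(n) = 625 * n / 8; a one-liner to reproduce it:
 +
 +<code>
 +# reproduce the (linear) row by scaling the 8-job loop time
 +awk 'BEGIN { for (n = 10; n <= 16; n += 2) printf "%d jobs: %.0f secs\n", n, 625 * n / 8 }'
 +</code>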
 +
 +But we learned the Xen and KVM setups, and:
 +
 +  - We can now support a heterogeneous environment if we wanted to (Suse, Scientific Linux, Windows (eh, what?))
 +  - We can use a KVM environment up to the number of cores on a box without penalty
 +  - And we can change the mix of nodes if needed (more/fewer cores per node, memory size, etc.)
 +  - Still, this is not an answer for my "high core count/low memory footprint" problem, see [[cluster:133|high core count/low memory footprint]]
  
 \\