    * name: kusu101prov, type: provision
    * eth1: 10.10.101.254/255.255.0.0
    * name: kusu101priv, type: other
  * 4 - gateway & dns: gateway 192.168.101.0 (not used, but a required field), dns server 192.168.101.254 (the installer node)
  * 5 - host: FQDN kusu101, PCD kusu101 (basically we will not provide internet-accessible names)
Ugly step.  If you look at /etc/hosts you'll see what we mean.  All blade host names should be unique, so we're going to fix some files.

  * first, the 'hostname' command on the installer comes back with 'kusu101'
  * copy /etc/hosts to /etc/hosts-good, then edit the copy
  * put the installer lines together, and the 192.168 and 10.10 lines together, for an easier read
  * for the 10.10 lines, remove all short host names like 'kusu101' or 'node00' etc
  * for the 192.168 lines, add the 'kusu101' or 'node00' etc short host names as the first word after the IP on each line
  * leave all the other host names intact (*.kusu101, *-eth0, etc)
  * copy hosts-good over the hosts file (see the sketch after this list)
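A minimal sketch of what /etc/hosts-good might end up looking like; the IPs, node names, and interface suffixes here are illustrative, not taken from a real install:

<code>
# installer lines grouped together
192.168.101.254   kusu101 kusu101.kusu101
10.10.101.254     kusu101-eth1.kusu101 kusu101-eth1

# one 192.168 line and one 10.10 line per blade;
# the short name is the first word after the 192.168 IP only
192.168.101.1     node0000 node0000.kusu101
10.10.101.1       node0000-eth1.kusu101 node0000-eth1
</code>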
  
  * next do the same for hosts.pdsh, but use only short host names, one name per node

  * next do the same for /etc/lava/conf/hosts, using only 192.168 IPs and only one (short) host name

  * next edit /etc/rc.d/rc.local and add these lines
    * cp /etc/hosts-good /etc/hosts
    * cp /etc/hosts.pdsh-good /etc/hosts.pdsh
    * cp /etc/lava/conf/hosts-good /etc/lava/conf/hosts

  * in /etc/cfm/compute-centos5.3-5-x86_64_BSS
    * link in all the *-good files at the appropriate locations (see the sketch after this list)
    * make the rc.d directory at the appropriate level and link in rc.local
  * run 'cfmsync -f'
  * on the installer node run '/etc/init.d/lava stop', then start, and do the same on the nodes via pdsh
  * 'pdsh uptime' should now list the hosts by short name
  * 'bhosts' should, after a little while, show the hosts as available
  * 'lsload' should do the same
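One way the CFM linking might look, assuming the node group directory mirrors the node filesystem layout (check your install's conventions; paths are per this cluster):

<code>
# on the installer node, inside the node group template
cd /etc/cfm/compute-centos5.3-5-x86_64_BSS
mkdir -p etc/lava/conf etc/rc.d
ln -s /etc/hosts-good etc/hosts-good
ln -s /etc/hosts.pdsh-good etc/hosts.pdsh-good
ln -s /etc/lava/conf/hosts-good etc/lava/conf/hosts-good
ln -s /etc/rc.d/rc.local etc/rc.d/rc.local
</code>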

Now reboot the entire cluster and verify that the changes are permanent. Sidebar: for Pace, you can now assign eth1 on the installer node an IP in pace.edu, and have the necessary changes made to the ProCurve switch, so your users can log into the installer/head node.  You still only have 50 GB or so of home directory space, but users can play around.
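On CentOS that public interface could be configured along these lines; the addresses below are placeholders, not real pace.edu IPs:

<code>
# /etc/sysconfig/network-scripts/ifcfg-eth1  (placeholder values)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
ONBOOT=yes
</code>

Then 'service network restart' to bring it up.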

Actually, we had a better idea: create another node group template from your _BSS template and remove eth1, with naming convention login#N and a starting IP of something like 192.168.101.10 ... call this node group _BSS_login or so.  Start addhost and add a new host to this node group.  Next, manually add eth1 with an IP in pace.edu and wire it up via the switch to the outside world.  Next, add this host to LSF_MASTER_LIST.  Now users can log into this node and submit jobs, and stay out of your way on the installer node.
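In lsf.conf that might look like the following; 'login0' is a hypothetical name for the new login host:

<code>
# /etc/lava/conf/lsf.conf -- list the login node after the installer
LSF_MASTER_LIST="kusu101 login0"
</code>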

===== Step 5 =====

Fun step.

  * make a backup copy of /etc/lava/conf/lsbatch/lava/configdir/lsb.queues
  * edit the file and delete everything but the queue 'normal' definition
  * (if you rename queue normal you also need to edit lsb.params and define the default queue)
  * remove most queue definitions and set the following (a sample stanza follows the pre/post exec examples below)
    * QJOB_LIMIT = 4 (assuming you have 2 nodes in the cluster, 6 if you have 3; in other words #nodes * #cores)
    * UJOB_LIMIT = 1000 (users like to write scripts and submit jobs; this protects against runaway scripts)
    * INTERACTIVE = no (only batch jobs are allowed)
    * EXCLUSIVE = Y (allow the bsub -x flag)
    * PRE_EXEC = /home/apps/lava/pre_exec  (these two create/remove the scratch dirs)
    * POST_EXEC = /home/apps/lava/post_exec
  * make the directory /home/apps (for compiled software)
  * make the directories /home/lava and /home/sanscratch
  * be sure /localscratch and /home/sanscratch have permissions like /tmp on all blades
  * create the pre/post exec files (post does an rm -rf against the created directories)
  * for example:
<code>
#!/bin/bash
# pre_exec: create per-job scratch dirs before the job starts
if [ "X$LSB_JOBID" != "X" ]; then
    mkdir -p /home/sanscratch/$LSB_JOBID /localscratch/$LSB_JOBID
    sleep 5; exit 0
else
    echo "LSB_JOBID NOT SET!"
    exit 111
fi
</code>
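
The matching post_exec is not shown above; a minimal sketch, given that it just removes what pre_exec created:

<code>
#!/bin/bash
# post_exec sketch: remove the per-job scratch dirs after the job ends
if [ "X$LSB_JOBID" != "X" ]; then
    rm -rf /home/sanscratch/$LSB_JOBID /localscratch/$LSB_JOBID
    exit 0
else
    echo "LSB_JOBID NOT SET!"
    exit 111
fi
</code>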

  * 'badmin reconfig'
  * 'bqueues' should now show the new configuration
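
For reference, the 'normal' queue stanza might end up looking roughly like this; lsb.queues uses Begin Queue/End Queue sections, and the values are the ones from the list above (the DESCRIPTION text is just a suggestion):

<code>
Begin Queue
QUEUE_NAME   = normal
QJOB_LIMIT   = 4
UJOB_LIMIT   = 1000
INTERACTIVE  = NO
EXCLUSIVE    = Y
PRE_EXEC     = /home/apps/lava/pre_exec
POST_EXEC    = /home/apps/lava/post_exec
DESCRIPTION  = default queue for batch jobs
End Queue
</code>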

Now we're ready to submit a serial job.  As a non-privileged user, create two files:

  * run

<code>
#!/bin/bash
# #BSUB directives must come before the first executable line
#BSUB -q normal
#BSUB -J test
#BSUB -n 1
#BSUB -e err
#BSUB -o out

rm -f out err job3.out

export MYSANSCRATCH=/home/sanscratch/$LSB_JOBID
export MYLOCALSCRATCH=/localscratch/$LSB_JOBID

# run the job in local scratch
cd $MYLOCALSCRATCH
pwd
cp ~/job.sh .
time ./job.sh > job.out

# copy the results to san scratch
cd $MYSANSCRATCH
pwd
cp $MYLOCALSCRATCH/job.out job2.out

# and finally back to the home directory
cd
pwd
cp $MYSANSCRATCH/job2.out job3.out
</code>

  * job.sh

<code>
#!/bin/bash

sleep 10
echo Done sleeping.

for i in `seq 1 100`
do
    date
done
</code>
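
Make sure job.sh is executable before submitting ('chmod +x ~/job.sh'), since the run script invokes it directly.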

  * 'bsub < run' (submits)
  * 'bjobs' (check dispatch)

===== Step 6 =====

More fun. Parallel jobs can be submitted over Ethernet interconnects, but they will of course not achieve the performance of InfiniBand interconnects.  OpenMPI is a nice MPI flavor because software compiled with it automatically detects whether the host has an HCA card and loads the appropriate libraries. So in order to compile, or run, some OpenMPI examples we need the following:

  * yum install libibverbs
  * pdsh yum install libibverbs -q -y
  * yum install gcc-c++

On our Dell cluster we have pre-compiled static builds of MPI and OFED. A tarball of about 200 MB can be found here: [[http://lsfdocs.wesleyan.edu/mpis.tar.gz|http://lsfdocs.wesleyan.edu/mpis.tar.gz]]

  * download the tarball and stage it in /home/apps/src
  * cd /opt; tar zxvf /home/apps/src/mpis.tar.gz
  * pdsh "cd /opt; tar zxvf /home/apps/src/mpis.tar.gz"
  * the examples in /opt/openmpi/gnu/examples have been compiled like so:
    * export PATH=/opt/openmpi/gnu/bin:$PATH
    * export LD_LIBRARY_PATH=/opt/openmpi/gnu/lib:$LD_LIBRARY_PATH
    * cd /opt/openmpi/gnu/examples; make
    * ./ring_c; ./hello_c (to test; they'll complain about the missing HCA card)

Okay, so now we need to write a script to submit a parallel job.  A parallel job is submitted with the command 'mpirun'.  However, that command needs to know which hosts are allocated to the job.  That is done with a wrapper script located at /usr/bin/openmpi-mpirun.

  * irun

<code>
#!/bin/bash
# #BSUB directives must come before the first executable line
#BSUB -e err
#BSUB -o out
#BSUB -n 4
#BSUB -q normal
#BSUB -J ptest

rm -f err out

export PATH=/opt/openmpi/gnu/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/gnu/lib:$LD_LIBRARY_PATH

echo "make sure we have the right mpirun"
which mpirun

# the wrapper passes the allocated hosts to mpirun
/usr/bin/openmpi-mpirun /opt/openmpi/gnu/examples/hello_c

/usr/bin/openmpi-mpirun /opt/openmpi/gnu/examples/ring_c
</code>

  * 'bsub < irun' (submits)
  * 'bjobs' (check status)

===== Step 7 =====

Tools. As you add nodes, they are added to the Ganglia and Cacti monitoring tools.  These are useful to look at.

But first we must fix firefox.  You can download a tarball here: [[http://lsfdocs.wesleyan.edu/firefox.tar.gz|http://lsfdocs.wesleyan.edu/firefox.tar.gz]]. Stage it in /usr/local/src and untar, then link the firefox executable into /usr/local/bin.

  * 'startx' to start the gnome gui
  * 'firefox' to start the browser
  * http://localhost/ganglia
  * http://localhost/cacti (at first login use admin/admin, then set a new password ... admin ;) ... you can let users in via guest/guest)
  * http://localhost:3000 (for ntop)

  * http://localhost/[ cfm | kits | repos ] (the kits one shows all available kusu commands)
  * http://lsfdocs.wesleyan.edu (lsf 6.2 guides, close to lava)
  
\\
**[[cluster:0|Back]]**