cluster:88 [2010/08/03 15:23] hmeij
cluster:88 [2010/08/12 14:55] hmeij
There are very few policies on our clusters. Use disk space as needed and archive data elsewhere.

===== Step 1 =====

Download, MD5 checksum, and burn the following ISOs to disc.

  * Kusu Gong Gong release version 1.1 on the x86_64 architecture.
    * http://
    * It states other kits are included, but we did not find them.

  * Other kits: Lava, Ganglia, NTop, Cacti
    * http://

  * CentOS kit: http://
    * http://
    * we have 5.3; for 5.5 you'll need the 1of8 series ISOs

We recommend checksumming the files; we had trouble getting these files to download cleanly.
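A sketch of the checksum step follows; the ISO name and MD5SUM manifest below are made-up stand-ins (an empty placeholder file is used so the snippet runs anywhere), so substitute the real downloads and the checksum file published with the release.

```shell
# Work in a scratch area with a placeholder "ISO" and a checksum manifest.
# kusu-1.1.x86_64.iso is a made-up name -- substitute the real files.
cd "$(mktemp -d)"
touch kusu-1.1.x86_64.iso     # empty stand-in; its MD5 is the empty-file sum
cat > MD5SUM <<'EOF'
d41d8cd98f00b204e9800998ecf8427e  kusu-1.1.x86_64.iso
EOF
md5sum -c MD5SUM              # a clean download prints "<file>: OK"
```

A download that was corrupted in transit would instead report "FAILED", which is exactly the trouble noted above.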

===== Step 2 =====

  * Select an installer node, insert the Kusu installer disc into the CD/DVD drive, and connect the device via the USB ports.
  * The installer node, and 2-3 compute nodes, must have the purple cable connecting eth0 (bottom port) to the rack ProCurve switch (the top one). If you wish, you can cable the top port (eth1) into the bottom switch for testing, but this is not necessary.
  * Boot the installer node, hit F2 to enter the BIOS, traverse to the Boot menu tab and make sure both CDROM and Removable Device are listed before any other options like hard disks and network cards, then hit F10, save changes, and exit/
  * Next you should see the Project Kusu splash page with the orange lego turtle; when prompted type '
  * Navigation around these screens is via Tab/Enter and the arrow keys.
  * Next come the informational screens, in order:
    * 1 - language: English
    * 2 - keyboard: us
    * 3 - network: configure each interface. Edit and configure two private networks (for Pace we'll reset eth1 on the installer node later on for public access); this is so that the cluster is not accessible from outside and we can separate provisioning from private (NFS data/MPI) traffic. Edit:
      * eth0: 192.168.101.254/
        * name: kusu101prov,
      * eth1: 10.10.101.254/
        * name: kusupriv, type: other
    * 4 - gateway & dns: gateway 192.168.101.0 (not used, but a required field), dns server 192.168.101.254 (the installer node)
    * 5 - host: FQDN kusu101, PCD kusu101 (basically we will not provide internet accessible names)
    * 6 - time: America/
    * 7 - root password: password (keep it simple for now, change it later)
    * 8 - disk partitions: select 'Use Default'
      * edit /home, downsize to 1024 (Pace may want to leave this much larger and create a 1 GB /
      * add a logical volume
        * mount point /
        * label LOCALSCRATCH
        * size: leave blank, see below
        * type ext3, on hda, check "fill remaining space on disk" (!only one partition can have this setting!)
    * 9 - confirm: accept (at this point the disk gets reformatted)
    * 10 - kits: select Add, insert the kit cd, wait, cycle through the discs by kit, then No More Kits, then Finish (the node reboots).

Upon reboot (enter the BIOS and reset boot to hard disk first) check some command output: hostname, route, ifconfig, bhosts, bqueues

===== Step 3 =====

  * first create the network interfaces for the nodes, different from the installer's network interfaces
  * type '
  * '
    * network: 192.168.0.0
    * subnet: 255.255.0.0
    * gateway: 192.168.101.0
    * device: eth0
    * starting IP: 192.168.101.250
    * suffix: -eth0
    * increment: -1 (that'
    * options:
    * description:
  * '
  * '
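The starting IP and increment above drive how addresses are handed out as nodes are added; a small illustration follows (the node#NN host-name format is an assumption here, matching the nodegroup naming chosen in the next section):

```shell
# Illustrate the provisioning addressing: starting at 192.168.101.250
# with increment -1, each newly added node gets the next address downward.
start=250
for n in 0 1 2; do
    printf 'node%02d-eth0  192.168.101.%d\n' "$n" "$((start - n))"
done
```

So the first three nodes land on .250, .249, and .248, counting down away from the installer's .254.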

  * next we are going to create our nodegroup template for the compute nodes, type '
  * use '
    * general: change the name Copy 1 to _BSS with format node#NN (we don't care about rack and like short names)
    * repository: there is only one, select it
    * boot time:
    * components: (check that non-server/
    * networks: here select only the interfaces you created: nodeprov eth0 and nodepriv eth1
    * optional: do select the vim* and emacs* packages (annoying)
    * partition: resize /data to 1024 and add partition /
    * cfmsync: update, no
  * '

  * now we're ready to add compute nodes. type '
  * if you receive an error about MySQLdb not found in 10-cacti.py, it is one of two situations we have encountered:
    * mysql was not installed; add it and initialize the database
      * grep mysql /
      * /
      * 'mysql -u root' should work
    * and/or python is missing a driver
      * yum install MySQL-python
  * when addhost starts, select the *_BSS nodegroup you created, and the eth0 interface
  * make sure the blades have the purple cable in the bottom interface, then turn a blade on
    * if you know the blade will boot off the network, let it go; else hit F2, enter the BIOS, and set the Boot menu to network first
    * once the blade sends its eth0 IP over and receives the kickstart file, move on to the next blade
    * do 2-3 blades this way
  * once the first blade has rebooted, enter the BIOS and set the boot menu to hard disk
  * there'
  * once the last blade has fully booted off the hard disk, quit addhost on the installer node
  * addhost will now push the new files to all the members of the cluster using cfmsync

Issue '

===== Step 4 =====

Ugly step. If you look at /etc/hosts you'll see what we mean. All blade host names should be unique, so we're going to fix some files.

  * first installer '
  * copy /etc/hosts to /
  * put the installer lines together, and put the 192.168 and 10.10 lines together, for an easier read
  * for 10.10 remove all short host names like '
  * for 192.168 add '
  * leave all the other host names intact (*.kusu101, *-eth0, etc)
  * copy hosts-good across the hosts file
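The hosts cleanup can be sketched like this; the entries below are made up stand-ins for the real file, and on the live system you would review the result by hand rather than trust a one-liner:

```shell
# Made-up /etc/hosts entries standing in for the real file.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.168.101.250 node00.kusu101 node00-eth0 node00
10.10.101.250 node00.kusu101 node00-eth1 node00
192.168.101.249 node01.kusu101 node01-eth0 node01
10.10.101.249 node01.kusu101 node01-eth1 node01
EOF
# Drop the trailing short host name on the 10.10 lines only, then sort
# so each network's lines sit together for an easier read.
awk '$1 ~ /^10\.10\./ { NF = NF - 1 } { print }' "$hosts" | sort > "$hosts-good"
cat "$hosts-good"
```

After this pass the short names resolve only on the 192.168 (provisioning) network, while the fully qualified and -eth* names stay intact on both.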

  * next do the same for hosts.pdsh, but only use short host names, one name per node

  * next do the same for /

  * next edit /
    * cp /
    * cp /
    * cp /

  * in /
    * link in all the *-good files at the appropriate locations
    * make the rc.d directory at the appropriate level and link in rc.local
  * run '
  * on the installer node run '/
  * 'pdsh uptime'
  * '
  * '
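The linking step can be sketched as follows. The /etc/cfm/<nodegroup> tree is an assumption about Kusu's CFM file-push layout, the account files are a guess at what the truncated cp lines above refer to, and temp directories stand in for the real paths so the sketch is harmless to run:

```shell
# Temp dirs stand in for /etc/cfm/<nodegroup> (assumed CFM layout) and for
# wherever the curated *-good files were written.
cfm=$(mktemp -d)
good=$(mktemp -d)
mkdir -p "$cfm/etc/rc.d"
# hosts and hosts.pdsh come from the steps above; passwd/shadow/group are
# assumptions, not confirmed by the (truncated) original text.
for f in hosts hosts.pdsh passwd shadow group; do
    touch "$good/$f-good"
    ln -sf "$good/$f-good" "$cfm/etc/$f"
done
touch "$good/rc.local"
ln -sf "$good/rc.local" "$cfm/etc/rc.d/rc.local"
ls "$cfm/etc"
# running cfmsync afterwards is what pushes these files out to the nodes
```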

Now reboot the entire cluster and observe that the changes are permanent. Sidebar: for Pace, you can now assign eth1 on the installer node a pace.edu IP, and have the necessary changes made to the ProCurve switch, so your users can log into the installer/

Actually, we had a better idea: create another node group template from your _BSS template and remove eth1, use naming convention login#N, and set the starting IP to something like 192.168.101.10 ... call this node group _BSS_login or so. Start addhost and add the new host to this node group.

===== Step 5 =====

Fun step.

  * make a backup copy of /
  * edit the file, delete everything but queue '
    * (if you rename queue normal you also need to edit lsb.params and define the default queue)
  * remove most queue definitions and set the following
    * QJOB_LIMIT = 4 (assuming you have 2 nodes in the cluster, 6 if you have 3; iow #nodes * #cores)
    * UJOB_LIMIT = 1000 (users like to write scripts that submit jobs; this protects against runaway scripts)
    * INTERACTIVE = no (only batch is allowed)
    * EXCLUSIVE = Y (allow the bsub -x flag)
    * PRE_EXEC = /
    * POST_EXEC = /
  * make the directory /home/apps (for compiled software)
  * make the directory /home/lava and /
    * be sure /
  * create the pre/post exec files (post does an rm -rf against the created directories)
  * for example:
<code>
#!/bin/bash
# pre-exec: create the per-job scratch directories
# (scratch paths assumed from the LOCALSCRATCH/MYSANSCRATCH names used elsewhere)
if [ -n "$LSB_JOBID" ]; then
    mkdir -p /sanscratch/$LSB_JOBID /localscratch/$LSB_JOBID
    sleep 5; exit 0
else
    echo "LSB_JOBID not set"
    exit 111
fi
</code>
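A matching POST_EXEC can be sketched the same way; the per-job scratch layout is an assumption mirroring the pre-exec, and a temp directory stands in for the real scratch trees so this can be tried safely outside the cluster:

```shell
# Post-exec logic: remove the per-job scratch directory the pre-exec made.
# SCRATCH is a temp stand-in for the real scratch filesystems (assumption).
SCRATCH=$(mktemp -d)
LSB_JOBID=1001                      # Lava sets this for a real job
mkdir -p "$SCRATCH/$LSB_JOBID"      # what the pre-exec would have created
rm -rf "${SCRATCH:?}/$LSB_JOBID"    # the post-exec cleanup
[ -d "$SCRATCH/$LSB_JOBID" ] || echo "scratch for job $LSB_JOBID removed"
```

The `${SCRATCH:?}` guard makes the `rm -rf` abort rather than operate on `/` if the variable is ever empty, a cheap safety net worth keeping in the real script.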

  * '
  * '

Now we're ready to submit a serial job. As a non-privileged user create two files:

  * run

<code>
#!/bin/bash

rm -f out err job3.out

#BSUB -q normal
#BSUB -J test
#BSUB -n 1
#BSUB -e err
#BSUB -o out

export MYSANSCRATCH=/
export MYLOCALSCRATCH=/

cd $MYLOCALSCRATCH
pwd
cp ~/job.sh .
time ./job.sh > job.out

cd $MYSANSCRATCH
pwd
cp $MYLOCALSCRATCH/

cd
pwd
cp $MYSANSCRATCH/
</code>

  * job.sh

<code>
#!/bin/bash

sleep 10
echo Done sleeping.

for i in `seq 1 100`
do
  date
done
</code>

  * 'bsub < run' (submits)
  * '


===== Step 6 =====

More fun. Parallel jobs can be submitted over ethernet interconnects, but will not achieve the performance of Infiniband interconnects, of course.

  * yum install libibverbs
  * pdsh yum install libibverbs -q -y
  * yum install gcc-c++

On our Dell cluster we have static pre-compiled flavors of MPI and OFED. A tarball of 200 MB can be found here [[http://

  * download the tarball, stage it in /
  * cd /opt; tar zxvf /
  * pdsh "cd /opt; tar zxvf /
  * examples in /
  * export PATH=/
  * export LD_LIBRARY_PATH=/
  * cd /
  * ./ring.c; ./hello.c (to test; it'll complain about there being no HCA card)
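Before running the examples it is worth confirming that the exports resolve to the staged mpirun rather than some system copy; /opt/mpi below is a placeholder for wherever your tarball actually unpacked:

```shell
# Sanity check: which mpirun does PATH resolve to after the exports?
# /opt/mpi is a placeholder path -- use the one your tarball created.
export PATH=/opt/mpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/mpi/lib:$LD_LIBRARY_PATH
if command -v mpirun >/dev/null 2>&1; then
    echo "mpirun resolves to: $(command -v mpirun)"
else
    echo "mpirun not found on PATH -- check the export lines"
fi
```

If the wrong mpirun wins, jobs tend to fail in confusing ways at launch time, so this one-liner saves debugging later.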

Ok, so now we need to write a script to submit a parallel job. A parallel job is submitted with the command '

  * irun

<code>
#!/bin/bash

rm -f err out

#BSUB -e err
#BSUB -o out
#BSUB -n 4
#BSUB -q normal
#BSUB -J ptest

export PATH=/
export LD_LIBRARY_PATH=/

echo "make sure we have the right mpirun"
which mpirun

/

/
</code>

  * 'bsub < irun' (submits)
  * '

===== Step 7 =====

Tools. As you add nodes, they are added to the Ganglia and Cacti monitoring tools.

But first we must fix firefox.

  * '
  * '
    * http://
    * http://
    * http://

  * http://
  * http://
\\
**[[cluster: