cluster:79 (hmeij): revised 2009/08/21 15:53; current revision 2013/04/18 19:40
**[[cluster:
Deprecated. We only have one rack running (on demand), offering access to 1.1 TB of memory.

 --- //

==== ====

{{:

Update: 21 Sept 09

Cluster sharptail has undergone some changes.

Cluster sharptail has 129 compute nodes, 258 job slots, and a total memory footprint of 2,328 GB. Cluster swallowtail has 36 compute nodes, 288 job slots, and a total memory footprint of 244 GB.

On cluster sharptail, four queues have been set up. Gaussian optimized for AMD Opteron chips, as well as a Linda version, is also available.

https://

We'll have some documentation soon, but in the meantime there are links to resources on the web.

Queues 'bss12' and 'bss24' comprise nodes with either 12 GB or 24 GB of memory per node (2 single-core CPUs).
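As a sketch of how these queues are used at submit time (only the queue names come from this page; the job name, slot count, and program path are invented for illustration), a Lava/LSF job script might look like:

<code>
#!/bin/sh
# Hypothetical job script targeting the bss24 queue (24 GB nodes).
# Everything except the queue name is made up for this example.
#BSUB -q bss24         # target queue
#BSUB -n 2             # both slots of one node (2 single-core CPUs)
#BSUB -J myjob         # job name
#BSUB -o myjob.%J.out  # output file, %J expands to the job id

./my_program
</code>

Submit it with ''bsub < myjob.sh'' and monitor it with ''bjobs''.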
===== Cluster: sharptail =====
From the Blue Sky Studios hardware donations we have created a high-performance compute cluster, named sharptail.

Because of limitations in the hardware, cluster sharptail can only be reached by first establishing an SSH session with petaltail or swallowtail.
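In practice that means logging in is a two-step hop. A minimal sketch (the username and the fully qualified hostname are assumptions; adjust to your own account):

<code>
# Step 1: ssh to a public head node
# (hostname is assumed here for illustration)
ssh username@petaltail.wesleyan.edu

# Step 2: from the head node, hop to sharptail on the internal network
ssh sharptail
</code>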
+ | |||
+ | The cluster consists entirely of blades, 13 per enclosure. Currently 5 enclosures are powered. | ||
+ | |||
+ | Like petaltail and swallowtail, | ||
+ | |||
+ | The cluster is created, maintained and managed with **[[http:// | ||
+ | |||
+ | The entire cluster is on utility power with no power backup for any blade, or the installer node sharptail. | ||
+ | |||
+ | |||
+ | The operating system is **[[http:// | ||
+ | |||
+ | The scheduler is **[[http:// | ||
+ | |||
+ | All nodes together add 128 job slots to our HPC environment. | ||
+ | |||
+ | As such, we will start to implement a soft policy that the Infiniband switch is dedicated to jobs that invoke MPI parallel programs. | ||
+ | |||
+ | There are still some minor configurations that need to be implemented but you are invited to put cluster sharptail to work. | ||
+ | |||
+ | This page will be updated with solutions to problems found or questions asked. | ||
+ | |||
+ | All your tools will remain on **[[http:// | ||
+ | |||
===== Questions =====

**Are the LSF and Lava schedulers working together?**

No. They are two physically and logically separate clusters.

**How can I determine if my program, or the program I am using, will work on sharptail?**

When programs compile on a certain host, they link themselves against system and sometimes custom libraries. If those libraries are missing on another host, the program will not run. To check, use 'ldd':
<code>
[hmeij@sharptail ~]$ ldd /
libpthread.so.0 => /
libdl.so.2 => /
libutil.so.1 => /
libm.so.6 => /
libc.so.6 => /
/
</code>
+ | |||
+ | If one or more libraries are missing, you could run this command on petaltail or swallowtail, | ||
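That manual check can be scripted. A minimal sketch (the wrapper and its name are my own; only the use of ''ldd'' comes from this page) that flags any unresolved libraries:

<code>
#!/bin/sh
# check_libs.sh: report unresolved shared libraries for an executable.
# Usage: check_libs.sh /path/to/program  (defaults to /bin/ls as a demo)
BIN=${1:-/bin/ls}
# ldd prints "not found" for each library the loader cannot resolve
missing=$(ldd "$BIN" | grep -c 'not found')
if [ "$missing" -eq 0 ]; then
    echo "$BIN: all shared libraries resolved"
else
    echo "$BIN: $missing unresolved libraries"
    ldd "$BIN" | grep 'not found'
fi
</code>

Run it on sharptail against the binary you intend to use; any 'not found' lines name the libraries you would have to copy over or rebuild there.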
+ | |||
+ | ===== Picture ===== | ||
+ | |||
+ | Some organic wiring schema. | ||
+ | {{: | ||
- | ===== | + | ==== ==== |
\\ | \\ | ||
**[[cluster: | **[[cluster: |