cluster:93 [2011/01/11 20:55] (current) hmeij
However, since our fantastic crash of June 2008 ([[cluster:
On greentail, /sanscratch will be a separate logical volume of 5 TB using a different disk set. So I urge those who have very large files, or generate lots of IO, to stage their files in /sanscratch when running their jobs for best performance.
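A minimal sketch of what that staging pattern could look like in a job script. The per-user layout under /sanscratch, the file names, and the "computation" are all assumptions for illustration (the script falls back to a temp directory so it can be dry-run on a machine without /sanscratch):

```shell
#!/bin/sh
# Hypothetical staging pattern: copy inputs to /sanscratch, run there,
# copy results back, clean up. /sanscratch/$USER is an assumed layout.
set -e
SCRATCH_ROOT=/sanscratch
# Fall back to a temp dir for a dry run on machines without /sanscratch.
[ -d "$SCRATCH_ROOT" ] || SCRATCH_ROOT=$(mktemp -d)
WORKDIR="$SCRATCH_ROOT/${USER:-nobody}/job_$$"
mkdir -p "$WORKDIR"

echo "sample data" > input.dat           # stand-in for a large input file
cp input.dat "$WORKDIR/"                 # stage onto the separate disk set
( cd "$WORKDIR" && tr a-z A-Z < input.dat > output.dat )  # stand-in for the job
cp "$WORKDIR/output.dat" .               # copy results back off scratch
rm -rf "$WORKDIR"                        # always clean up scratch when done
```

The point of the pattern is that all heavy IO during the run hits the separate disk set behind /sanscratch rather than the home directory filesystem.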
===== MPI =====
For those of you running MPI or MPI-enabled applications,
Some time ago I wrote some code to detect whether a node is InfiniBand enabled and, based on the result, add command line arguments to the mpirun invocation.
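A minimal sketch of that detection idea, assuming the standard Linux sysfs location for InfiniBand HCAs and Open MPI-style `--mca btl` flags; the flag values and the `my_app` binary are placeholders, not the author's actual code:

```shell
#!/bin/sh
# Probe sysfs for an InfiniBand HCA and pick mpirun arguments to match.
# The --mca btl transport lists are Open MPI conventions, assumed here.
if [ -n "$(ls -A /sys/class/infiniband 2>/dev/null)" ]; then
    MPI_OPTS="--mca btl openib,self,sm"   # node has an IB card: use the fabric
else
    MPI_OPTS="--mca btl tcp,self,sm"      # no IB: fall back to ethernet
fi
echo mpirun $MPI_OPTS -np 8 ./my_app      # my_app is a placeholder binary
```

A wrapper like this can run on the job's first node at submission time, so the same job script works on both InfiniBand and ethernet-only nodes.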
+ | |||
+ | ===== Software ===== | ||
+ | |||
The same /share/apps software directory that was built when we were working on petaltail as a new administrative host has been copied to greentail.
===== ... =====