==== Summary ====

The purpose of this testing is to find out how fast the storage systems respond, either directly attached to compute nodes, or attached via ethernet (gigabit ethernet) or infiniband (SDR via queue imw or QDR via queue hp12).  When using infiniband interconnects we use IPoIB (IP traffic over infiniband), which theoretically might be 3-4 times faster than ethernet.

So, nothing beats directly attached storage, of course (scenario: fastlocal.dell.out below): the attached disk arrays on compute nodes in the ehwfd queue.  Each node is presented with 230 GB of dedicated disk space provided by seven 10K disks using RAID 0 (all drives read and write simultaneously).  The IOZone suite finished in an hour.

However, that queue may be a bottleneck (only 4 compute nodes in the ehwfd queue), or perhaps 230 GB is not enough (for 8 job slots).  So one alternative is to use MYSANSCRATCH in your submit scripts.  MYSANSCRATCH refers to a directory made for you by the scheduler at location /sanscratch/JOBPID, which is a RAID 5 filesystem of 5 TB provided by 5 disks spinning at 7.2K.  The IOZone suite was done in 2 hrs 45 mins (scenario: san.hp.out).

For an example of using MYSANSCRATCH, look at the bottom of this page.  You will have to stage your data in the directory provided and copy the results back to your home directory when finished.  The scheduler will remove the directory.
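The stage-in, run, stage-out pattern described above can be sketched as follows.  This is a minimal sketch, not the page's actual example: the queue name, directive style, and the word-count command standing in for your program are all assumptions.  A fallback scratch directory is included so the script can also be dry-run outside the scheduler (under the scheduler, MYSANSCRATCH is already set to /sanscratch/JOBPID).

```shell
#!/bin/bash
# Hypothetical submit script sketch -- queue name and workload are
# placeholders, not taken from this page.
#BSUB -q hp12
#BSUB -J scratch_demo

# Under the scheduler, MYSANSCRATCH points at /sanscratch/JOBPID and is
# removed when the job ends; fall back to a temp dir for a dry run.
SCRATCH="${MYSANSCRATCH:-$(mktemp -d)}"
SRC="${SRC:-$(mktemp -d)}"      # stand-in for your home project directory

# Create stand-in input data, then stage it into the scratch directory.
printf 'sample data\n' > "$SRC/input.dat"
cp "$SRC/input.dat" "$SCRATCH/"
cd "$SCRATCH"

# Run the computation against the scratch copy (word count as a stand-in
# for your real program).
wc -l < input.dat > output.dat

# Copy results back to the home directory before the job ends, because
# the scheduler deletes the scratch directory afterwards.
cp output.dat "$SRC/"
echo "lines counted: $(cat "$SRC/output.dat")"
```

The key point is the final copy back: anything left in the scratch directory when the job finishes is lost.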
cluster/97.txt · Last modified: 2012/02/16 14:09 by hmeij