Differences

This shows you the differences between two versions of the page.

cluster:93 [2011/01/11 10:59]
hmeij
cluster:93 [2011/01/11 15:55] (current)
hmeij
Line 77:
 ===== SSH Keys =====
  
-You can also log in to host greentail directly (''ssh username@greentail.wesleyan.edu'').
+You can log in to host greentail directly (''ssh username@greentail.wesleyan.edu''). A VPN connection is required for off-campus access.
  
-Within the directory **/home/username/.ssh** there is a file named **kauthorized_keys**.  Within this file are public SSH keys.  Because your home directory contents are copied over to host greentail, you should be able to ssh from host petaltail or swallowtail to host greentail without a password prompt.  If not, your keys are not set up properly.  You may need to add your ''id_rsa.pub'' content to this file.
+Within the directory **/home/username/.ssh** there is a file named **authorized_keys**.  Within this file are public SSH keys.  Because your home directory contents are copied over to host greentail, you should be able to ssh from host petaltail or swallowtail to host greentail without a password prompt.  If not, your keys are not set up properly.  You may need to add your ''id_rsa.pub'' content to this file.
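A minimal sketch of adding your key, assuming the standard OpenSSH file names (adjust if yours differ):

<code bash>
# append your public key to authorized_keys if it is not already there
grep -qF -f ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys || \
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# ssh ignores the file unless permissions are strict
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
</code>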
  
 Note: the software stack on host petaltail (administrative server) creates ssh keys for you automatically upon your first login, so for most of you this is all set.  To set up your private/public ssh keys manually:
Line 88:
   * you can have multiple public ssh key entries in this file
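For example (a hypothetical extra entry, assuming OpenSSH on your own machine), you could also allow password-less logins from your laptop:

<code bash>
# run on your laptop: generate a key pair if you do not have one yet,
# then append the public key as an additional entry in authorized_keys on greentail
ssh-keygen -t rsa
ssh-copy-id username@greentail.wesleyan.edu
# or, if ssh-copy-id is not available:
cat ~/.ssh/id_rsa.pub | ssh username@greentail.wesleyan.edu 'cat >> ~/.ssh/authorized_keys'
</code>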
  
-The file **known_hosts** contains server level ssh keys.  This is necessary for MPI programs to log into compute nodes withouot a password prompt and submit your jobs.  That file has been prepped for you.
+The file **known_hosts** contains server level ssh keys.  This is necessary for MPI programs to log into compute nodes without a password prompt and submit your jobs.  That file has been prepped for you.
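A quick check that both files are in order (the node name below is just a placeholder, use a real compute node):

<code bash>
# should print the node's hostname without any password or host key prompt
ssh compute-1-1 hostname
</code>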
  
  
Line 110:
 Previously there were two scratch areas available to your programs: /localscratch, which is roughly 50 GB on each node's local hard disk, and /sanscratch, a shared scratch area available to all nodes.  /sanscratch allows you to monitor your job's progress by looking in /sanscratch/jobpid. It was also much larger (1 TB).
  
-However, since our fantastic crash of June 2008 ([[cluster:67|The catastrophic crash of June 08]] /snapshot was simply a directory inside /home and thus compete for disk space.
+However, since our fantastic crash of June 2008 ([[cluster:67|The catastrophic crash of June 08]] page), /sanscratch has simply been a directory inside /home and thus competes for disk space and IO.
  
-On greentail, /sanscratch will be a separate logical volume of 5 TB using a different disk set.  SO i urge those that have very large files to stage their files in /sanscratch when running their jobs for best performance.  The scheduler will always create (and delete!) two directories for you.  The JOBPID of your job is used to create /localscratch/jobpid and /sanscratch/jobpid.
+On greentail, /sanscratch will be a separate logical volume of 5 TB using a different disk set.  So I urge those who have very large files, or generate lots of IO, to stage their files in /sanscratch/jobpid when running their jobs for best performance.  The scheduler will always create (and delete!) two directories for you.  The JOBPID of your job is used to create /localscratch/jobpid and /sanscratch/jobpid.
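A minimal staging sketch, assuming the scheduler exports the job id as ''$LSB_JOBID'' (as Lava/LSF schedulers typically do); the program and input file names are placeholders:

<code bash>
#!/bin/bash
# stage large input into the per-job scratch directory, run there, copy results back
MYSANSCRATCH=/sanscratch/$LSB_JOBID

cp ~/project/big_input.dat $MYSANSCRATCH/
cd $MYSANSCRATCH
~/project/my_program big_input.dat > results.out
# copy results home before the scheduler deletes the directory
cp results.out ~/project/
</code>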
  
 ===== MPI =====
Line 118:
 For those of you running MPI or MPI enabled applications, you will need to make some changes to your scripts.  The ''wrapper'' program to use with greentail's Lava scheduler is the same as for cluster sharptail. It can be found here:  /share/apps/bin/lava.openmpi.mpirun.   If other flavors are desired, you can inform me or look at the example scripts lava.//mpi_flavor//.mpi[run|exec].
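A hypothetical submit script using that wrapper; only the wrapper path comes from this page, the ''#BSUB'' directives and names are assumptions:

<code bash>
#!/bin/bash
#BSUB -q somequeue       # placeholder queue name
#BSUB -n 8               # number of MPI tasks
#BSUB -J my_mpi_job
#BSUB -o out.%J
#BSUB -e err.%J

# invoke the MPI program through the wrapper
/share/apps/bin/lava.openmpi.mpirun ~/project/my_mpi_program
</code>

Submitted, as usual with Lava, via ''bsub < myscript.sh''.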
  
-Sometime ago I wrote some code to detect if a node is infiniband enabled or not, and based on the result, add command line arguments to the mpirun invocation.  If you use that code, you will need to change:  the path to obtain the port status (/usr/bin/ibv_devinfo) and in the block specify the interface change eth1 to ib0.
+Some time ago I wrote some code to detect whether a node is infiniband enabled, and based on the result, add command line arguments to the mpirun invocation.  If you use that code, you will need to change the path used to obtain the port status (/usr/bin/ibv_devinfo) and, in the block specifying the interface, change eth1 to ib0.
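A minimal sketch of that kind of check; the OpenMPI MCA options below are illustrative assumptions, not the exact arguments from the original script:

<code bash>
# decide on mpirun arguments based on whether an infiniband port is active
if /usr/bin/ibv_devinfo 2>/dev/null | grep -q 'state:.*PORT_ACTIVE'; then
    # infiniband is up: use the openib transport
    MPIARGS="--mca btl openib,sm,self"
else
    # fall back to TCP; the interface block, per this page, changes eth1 to ib0
    MPIARGS="--mca btl tcp,sm,self --mca btl_tcp_if_include ib0"
fi
# placeholder program; host/slot options are handled elsewhere
mpirun $MPIARGS ~/project/my_mpi_program
</code>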
 + 
 +===== Software ===== 
 + 
 +The same /share/apps software directory we built while setting up petaltail as the new administrative host has been copied to greentail.  Petaltail is redhat linux 5.1 while greentail is redhat linux 5.5.  I anticipate everything will work, but we may encounter missing libraries (major/minor version mismatches).
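A quick way to spot such a missing library for any given application (the binary path is just a placeholder):

<code bash>
# print only the shared libraries the binary needs but the node cannot resolve
ldd /share/apps/some/application | grep "not found"
</code>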
  
 ===== ... =====