
June 2009 … the cluster has been upgraded with a new front end node named petaltail.wesleyan.edu. The old host swallowtail.wesleyan.edu remains available, and you can log in and submit jobs on either host. If you change your password, please do so on host petaltail.
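For example, a password change would be done on petaltail like this (standard OpenSSH and passwd commands; username is a placeholder):

ssh username@petaltail.wesleyan.edu
passwd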

RSA keys

In order to log into the compute nodes without a password prompt, you must set up your RSA keys on first login to the head node (swallowtail). You will be prompted to do so. An empty passphrase is recommended.

⇒ Tip: do not delete your ~/.ssh directory … specifically the files id_rsa and id_rsa.pub

To access the head node, connect via ssh:

ssh username@swallowtail.wesleyan.edu

Setting up your RSA keys on first login goes like this:

It doesn't appear that you have set up your ssh key.
This process will make the files:
     /home/username/.ssh/id_rsa.pub
     /home/username/.ssh/id_rsa
     /home/username/.ssh/authorized_keys

Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa): 
Created directory '/home/username/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
44:e2:4c:03:8e:0f:8e:0d:b5:9a:79:79:a1:e6:cb:da
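Should you ever need to recreate the keys by hand, the equivalent OpenSSH commands are sketched below (this reproduces the prompts shown above; press Enter for an empty passphrase):

# generate the RSA key pair
ssh-keygen -t rsa
# authorize your own public key for password-less logins to the compute nodes
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys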

~/.bashrc

Instead of managing the environment via /etc/profile or /etc/bashrc, your ~/.bashrc looks like this (please add your customizations below the last line; a hypothetical example follows the listing). If you need a csh/tcsh or ksh example, look in /share/apps/scripts.

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# Source cluster definitions
if [ -f /share/apps/scripts/cluster_bashrc ]; then
        . /share/apps/scripts/cluster_bashrc
fi

# User specific aliases and functions
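For example, customizations added below that last line might look like this (both entries are hypothetical; substitute your own):

# hypothetical: add a personal bin directory to the search path
export PATH=$PATH:$HOME/bin
# hypothetical: a common listing alias
alias ll='ls -l'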

Login Nodes

Upon login, you will enter your home directory on the head node swallowtail. The location is always /home/username on any node. Four other compute nodes are available for direct login by users; these are called “login nodes”:

* Two login nodes (hosts: login1 and login2) are on gigabit ethernet only.
* Another two nodes (hosts: ilogin1 and ilogin2) are on the infiniband switch.

These “login nodes” are also members of production queues, so try not to crash them :-? It is advised that long compilations and other taxing activities be executed on these login nodes.
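For example, to run a long compilation on a login node (assuming your RSA keys are set up, so no password prompt appears):

ssh login1
# run the taxing work here; the make invocation is a hypothetical example
make -j4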

X Server

The head node is not running an X server; we may have to run one in the future because of software demands. X11 forwarding by the SSH shell is enabled only on the head node swallowtail. This supports the viewing of graphics on remote consoles.
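For example, to view graphics from the head node, connect with X11 forwarding enabled and launch a graphical client (xclock is a common test program, assuming it is installed):

ssh -X username@swallowtail.wesleyan.edu
xclock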

Debug Queues

Two queues are available for debugging. When you submit jobs to queue debug, your job will run with relatively high priority on the login1/login2 nodes. When you submit jobs to queue idebug, your job will run with relatively high priority on the ilogin1/ilogin2 compute nodes. Please consult the 'Queues' page for detailed information.
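As a sketch, a debug submission might look like one of the following, depending on the scheduler in use (the scheduler type and the job script myjob.sh are assumptions; the 'Queues' page has the actual syntax):

# with a PBS/Torque-style scheduler (assumption), myjob.sh being your job script
qsub -q debug myjob.sh
# with an LSF-style scheduler (assumption), routed to the infiniband login nodes
bsub -q idebug ./myjob.sh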

