General Information | Environment | Executables | Scripts | Note from Peter Green
These are the instructions for running the TWIST software on the Ualberta THOR Linux cluster.
Most of the general information and the PBS instructions are from Peter Green. Many details are from Rob MacDonald.
Main web page: http://thor-gw.phys.ualberta.ca/
How do you log in to the cluster? Use ssh:
e614@thor-gw.phys.ualberta.ca, password: the usual
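For example, from your own machine:

    ssh e614@thor-gw.phys.ualberta.ca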
Which disks should you use? From Peter Green: use /raid3/e614/yourname
Set $CAL_DB to: /raid3/e614/olchansk/caldb_ascii (use
the $HOME/caldb_ascii_update.sh script to get the latest
files from TRIUMF).
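A minimal setup might look like this (assuming a bash-type shell; "yourname" stands in for your own directory):

    # one-time: make a personal work area on the raid disk
    mkdir -p /raid3/e614/yourname

    # in ~/.bashrc, or by hand each login: set the calibration database path
    export CAL_DB=/raid3/e614/olchansk/caldb_ascii

    # refresh the calibration files from TRIUMF when needed
    $HOME/caldb_ascii_update.sh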
analyzegeant_batch.kcm
  You must check this file to make sure the geometry, map, and other auxiliary files match what Geant used to generate the data in the first place; otherwise Mofia gets very confused and usually quits.
The MTIN variable is set in the PBS script below...
analyzegeant_THOR.pbs
  Edit the following before submitting (a rough sketch of an edited
  script follows this list):
     MOFIA_OUTDIR (the directory where Mofia writes most of its output files).
     MTIN (make sure the directory corresponds to wherever you've put
        your Geant data).
     DATAFILE (set this correctly if you're only analyzing one file; if
        you're using the perl script (below) to generate PBS files, this
        can be anything).
     #PBS -M (put your own email address in here!)

  The line #PBS -l nodes=twist restricts the job to the two machines
  that TWIST actually owns, plus the 10 general-use machines.  This
  line was requested by the THOR sysadmin.
  The other #PBS lines should be okay.
  More example PBS scripts are available here:
  /raid3/e614/olchansk/mc/scripts/*.perl
  (but this one should work fine).
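  As a rough sketch only (not the actual file; edit your own copy of
  analyzegeant_THOR.pbs rather than retyping this, and note that the
  paths, data file name, queue, and email address below are just
  placeholders), the edited parts might look something like:

    #PBS -S /bin/bash
    #PBS -q long
    #PBS -l nodes=twist
    #PBS -m be
    #PBS -M your.name@phys.ualberta.ca

    # directory where Mofia writes most of its output files
    MOFIA_OUTDIR=/raid3/e614/yourname/mofia_out

    # directory holding your Geant data
    MTIN=/raid3/e614/yourname/geant

    # file to analyze; writemofiabatch.pl rewrites this line for each run
    DATAFILE=run06100.dat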
writemofiabatch.pl
  Set up analyzegeant_THOR.pbs as above, then edit this perl script
  to set the range of run numbers you want to analyze.  I expect
  everything else should be okay.  Running this script generates
  a PBS script for each run number, replacing the line that sets
  the DATAFILE variable.
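  writemofiabatch.pl itself is the thing to use, but the underlying
  idea is plain template substitution.  Assuming the template sets
  DATAFILE on a single line, a shell sketch of the same trick (the run
  range and file-name pattern are made up) would be:

    # one PBS script per run, with the DATAFILE line rewritten
    for run in $(seq 6100 6199); do
        sed "s|^DATAFILE=.*|DATAFILE=run0${run}.dat|" \
            analyzegeant_THOR.pbs > analyze_${run}.pbs
    done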
manyqsub
  A script (in ~e614/bin) for submitting many
  batch jobs to the PBS queue.  Usage: manyqsub [filelist]
  e.g. manyqsub analyze_61*.pbs to submit my PBS jobs for runs 61xx.
  This script just calls qsub once for each file you list on the
  command line.
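  Since it just calls qsub once per file, the whole script is
  presumably little more than a loop like this (a sketch, not the
  actual file):

    #!/bin/sh
    # submit every PBS script named on the command line
    for f in "$@"; do
        qsub "$f"
    done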
There is a link on the THOR main page to "User Documentation" which contains an
overview of PBS.  It may or may not have much useful information,
though.
The "essence" of PBS is that you submit a shell script with the "qsub"
command.  This puts the job in a queue, from where it gets taken off and
executed as nodes become available.  A typical shell script looks like
this (this is one I used for hermes a while ago):
#PBS -S /bin/bash
#PBS -q extend
#PBS -l nodes=any
#PBS -m be
#PBS -M pewg@phys.ualberta.ca
cd  /shift/shd01/pewg/hmcprod/prod
./startmcprod ../thorlog/bmc1
   
        Lines that start with #PBS simply define PBS characteristics needed for
the job.  The ones above mean:
-S - which shell you use
-q - which queue you want to submit to.  We have effectively 3 queues:
        short - 1 hour max CPU time
        long - 1 day CPU time
        extend - infinite CPU time
-l - list of resources needed.  I have only ever used the "nodes"
resource, and always specify "any".
-m - mail options - "be" means send a message at Beginning and End of
job
-M - list of people to send the mail to
        After you've got your shell script(s) set up the way you want,
qsub shell_script
should do it.
        There are man pages for all of this.  Start with "man qsub" and go from
there.