LSF and PVM Notes




TBD

LSF User Guide (older version 4, but still usable; nice HTML format :) ) [local cache of Platform Computing doc]
TODO: add PDF quick-reference guides ...







bjobs -u all            # list all users' jobs
bjobs -w -u USERNAME    # wide format, without truncating fields
bjobs -l JOBID          # long listing of a given job
 



bhist -l -n 4           # list job history, going back 4 log files
                        # (if the logs are still available!)
bhist -l                # show details of the job history,
                        # including how much time the job spent in each state:
                        # Pending, UserSuspend, SysSuspend, etc.

bjobs -lp               # list all pending jobs and the reason they are
                        # pending (i.e., what they are waiting for)
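To summarize pending jobs at a glance, the `bjobs` output can be tallied per user. A minimal sketch; the here-string stands in for real `bjobs -u all -p` output (no live cluster assumed), and only the standard leading columns JOBID USER STAT QUEUE are relied on:

```shell
#!/bin/sh
# Tally pending (PEND) jobs per user from bjobs-style output.
# The sample text below is illustrative, not captured from a real cluster.
sample='JOBID USER STAT QUEUE
101 alice PEND normal
102 alice PEND normal
103 bob PEND short'
counts=$(printf '%s\n' "$sample" |
    awk 'NR > 1 && $3 == "PEND" { n[$2]++ }
         END { for (u in n) print u, n[u] }' | sort)
printf '%s\n' "$counts"
```

On a real cluster, replace the here-string with a pipe from `bjobs -u all -p`.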

bpeek                   # take a peek at the current output of your job.
                        # LSF essentially captures the whole stdout of the job
                        # and displays it.  The whole buffer is displayed each
                        # time, until the job is dead.





bsub "cmd option"       # submit a job to be run as a batch process.
                        # LSF executes the command on a selected remote host
                        # (LSF client).  It duplicates the environment,
                        # including the current working dir, env vars, etc.
                        # Obviously, this depends on a uniform NFS setup
                        # across all hosts.

                        # Any way to reduce priority w/o creating new queues?
                        # (bsub -sp N reportedly sets job-level priority, but
                        # only if the admin has enabled MAX_USER_PRIORITY.)
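A fuller submission typically names a queue, job, and output files via the standard flags -q, -J, -o, and -e (%J expands to the job ID). A sketch; the queue and file names are made up, and bsub is stubbed out as a shell function so the line can be shown without a live cluster:

```shell
#!/bin/sh
# Stub: echo the command line instead of submitting.  On a real cluster,
# remove this function and the same bsub line dispatches the job.
bsub() { echo "bsub $*"; }

# -q queue, -J job name, -o/-e stdout/stderr files (illustrative names).
line=$(bsub -q normal -J myjob -o myjob.%J.out -e myjob.%J.err "cmd option")
echo "$line"
```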


bstop                   # send a STOP signal to the job.
                        # LSF essentially does kill -STOP on the dispatched
                        # process and all of its children;
                        # ps -efl will show "T" for these processes.
bstop 0                 # stop all jobs owned by the current user

bresume                 # send a CONTinue signal to the job
bresume 0               # resume all jobs
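The mechanics behind bstop/bresume can be reproduced locally with plain signals; `sleep` here is just a stand-in for a dispatched batch job:

```shell
#!/bin/sh
# STOP/CONT demo: the same signals LSF sends on bstop/bresume.
sleep 60 &
pid=$!
kill -STOP "$pid"; sleep 1
stopped=$(ps -o stat= -p "$pid" | tr -d ' ')   # "T" = stopped
kill -CONT "$pid"; sleep 1
running=$(ps -o stat= -p "$pid" | tr -d ' ')   # typically back to "S" (sleeping)
kill "$pid" 2>/dev/null
echo "stopped=$stopped running=$running"
```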

brequeue                # re-queue a job.  LSF essentially kills the job
                        # and re-submits it, which will likely run on a
                        # different host.  Thus it really only works for
                        # processes that can restart gracefully;
                        # e.g., setiathome restarts by reading state saved
                        # in its XML files.


lsload                  # show host load information
bqueues                 # list all queues
lsinfo                  # list LSF resources, host types, and host models
bhosts                  # list batch server hosts, with job slot limits/usage
bparams                 # show LSF batch configuration parameters

lshosts                 # display LSF "server nodes" and clients
lshosts -l | egrep '(ENABLED|HOST_NAME)'
                        # show which hosts have checked out a license.
                        # The master node can act as a proxy that doles out
                        # licenses to clients; all licenses expire at midnight.



ENV setup

Admin/Accounting


bacct                   # display stats on finished LSF jobs
bacct -u USERNAME       # same, for a single user

bacct -u all -C 3/1,3/31                # aggregate job stats for all users for the month of March.
                                        # -C selects jobs Completed (or exited) in the date range.
bacct -u all -S 2007/1/1,2007/3/31      # similar, but keyed on Submit time,
                                        # with the year explicitly given.



PVM

http://www.csm.ornl.gov/pvm/



source pvm.env          # get PVM_ROOT, etc.
pvm                     # starts the monitor, starting the pvmd* daemon if needed.

$PVM_ROOT/lib/pvmd pvmhost.conf
# starts the PVM daemon on the hosts specified in the conf file, one host per line.
# May want to put it in the background; ^C will end everything.
# It uses RSH (or ssh, if so configured) to log in to each remote host and start the process.
# Need to ensure an ssh login sources the environment correctly for pvm/pvmd to run.
# Can be started by any user.  (What about more than one user??)


kill -SIGTERM can be used to kill the daemon.
If you use kill -9 (or another non-catchable signal), be sure to clean up /tmp/pvmd.


pvm> commands:
ps                      # list PVM tasks
conf                    # list hosts in the virtual machine
halt                    # shut down all pvmds and exit
exit                    # leave the console (pvmd keeps running)


To run an OpenEye omega/rocs job, the $PVM_ROOT/bin/$PVM_ARCH dir must have
access to the desired binary (e.g., a symlink to omega).
The PATH from .login will not be sourced.
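The symlink step can be sketched as below. The paths are illustrative: a temp dir stands in for the real PVM_ROOT, and /bin/echo stands in for the omega binary so the sketch runs anywhere:

```shell
#!/bin/sh
# Link a binary into $PVM_ROOT/bin/$PVM_ARCH so PVM-spawned tasks can
# find it (PATH from the login shell is not sourced by pvmd).
PVM_ROOT=$(mktemp -d)               # stand-in for the real install dir
PVM_ARCH=LINUX64
mkdir -p "$PVM_ROOT/bin/$PVM_ARCH"
ln -s /bin/echo "$PVM_ROOT/bin/$PVM_ARCH/omega"   # /bin/echo stands in for omega
ls -l "$PVM_ROOT/bin/$PVM_ARCH/omega"
```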

Run the command as:
omega  -pvmconf omega.pvmconf -in carboxylic_acids_1--100.smi -out carboxylic_acids_1--100.oeb.gz -log omega_pvm.log


Each user that starts pvm gets her own independent instance of pvmd3.
pvm uses rsh/ssh to the remote hosts to start itself, so port numbers are likely not static.
It uses UDP for communication.

										   
From lsof -i4 -n:

COMMAND    PID     USER   FD   TYPE   DEVICE     NODE NAME
pvmd3     27808    tinh    7u  IPv4 17619158      UDP 10.220.3.20:33430
pvmd3     27808    tinh    8u  IPv4 17619159      UDP 10.220.3.20:33431

From ps -ef:
tin     27808     1  0 14:25 pts/29   00:00:00 /app/pvm/pvm345/lib/LINUX/pvmd3

## omega.pvmconf
## host = required keyword
## hostname: may sometimes need to be the FQDN, depending on what the command "hostname" returns
## n = number of PVM instances to run on that host
host  phpc-cn01 1
host  phpc-cn02 2
host  phpc-cn03 2

##/home/common/Environments/pvm.env

# csh environment setup for PVM 3.4.5
# currently only available for LINUX64 (LSF cluster)

setenv PVM_ROOT /app/pvm/pvm345

source ${PVM_ROOT}/lib/cshrc.stub

# http://mail.hudat.com/~ken/help/unix/.cshrc
#alias ins2path  'if ("$path:q" !~ *"\!$"* ) set path=( \!$ $path )'
#alias add2path  'if ("$path:q" !~ *"\!$"* ) set path=( $path \!$ )'
##add2path ${PVM_ROOT}/bin

## : has special meaning in csh, so it must be escaped to be taken verbatim.
## There is no automatic conversion between $manpath and $MANPATH, as there is for $path/$PATH.
## csh is convoluted.
setenv MANPATH $MANPATH\:${PVM_ROOT}/man
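For comparison, the Bourne-shell equivalent sidesteps the csh colon quirk entirely; the ${var:+...} idiom below also avoids a stray leading colon when MANPATH starts out empty (this sh version is an addition for illustration, not part of the original env file):

```shell
#!/bin/sh
# Append the PVM man dir, emitting the ":" separator only when the
# existing MANPATH value (passed as $1) is non-empty.
append_manpath() {
    printf '%s\n' "${1:+$1:}/app/pvm/pvm345/man"
}
empty=$(append_manpath "")
nonempty=$(append_manpath "/usr/share/man")
printf '%s\n%s\n' "$empty" "$nonempty"
```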




[Doc URL: http://www.cs.fiu.edu/~tho01/psg/]
(cc) Tin Ho. See main page for copyright info.


"LYS on the outside, LKS in the inside"
"AUHAUH on the outside, LAPPLAPP in the inside"