The tables below list the most common user commands, environment variables, and job-specification options for the major workload management systems (adapted from http://www.schedmd.com/slurmdocs/rosetta.html).
| User Commands | Slurm (Maxwell) | LSF | SGE | PBS/Torque | LoadLeveler |
|---|---|---|---|---|---|
| Job submission | sbatch [script_file] | bsub [script_file] | qsub [script_file] | qsub [script_file] | llsubmit [script_file] |
| Job deletion | scancel [job_id] | bkill [job_id] | qdel [job_id] | qdel [job_id] | llcancel [job_id] |
| Job status (by job) | squeue [-j job_id] | bjobs [job_id] | qstat -u \* [-j job_id] | qstat [job_id] | llq -u [username] |
| Job status (by user) | squeue [-u user_name] | bjobs -u [user_name] | qstat [-u user_name] | qstat -u [user_name] | llq -u [user_name] |
| Job hold | scontrol hold [job_id] | bstop [job_id] | qhold [job_id] | qhold [job_id] | llhold [job_id] |
| Job release | scontrol release [job_id] | bresume [job_id] | qrls [job_id] | qrls [job_id] | llhold -r [job_id] |
| Queue list | squeue | bqueues | qconf -sql | qstat -Q | llclass |
| Node list | sinfo -N OR scontrol show nodes | bhosts | qhost | pbsnodes -l | llstatus -L machine |
| Cluster status | sinfo | bqueues | qhost -q | qstat -a | llstatus -L cluster |
| GUI | sview | xlsf OR xlsbatch | qmon | xpbsmon | xload |
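The command mappings above can be encoded as a small lookup table, which is handy for site wrapper scripts that must work under more than one scheduler. A minimal sketch in Python: the command strings come from the table, while `COMMANDS` and `build_command` are illustrative names, not part of any scheduler's API.

```python
# Map a generic action to the concrete command for each workload manager.
# Command strings are taken directly from the User Commands table.
COMMANDS = {
    "submit": {"slurm": "sbatch", "lsf": "bsub", "sge": "qsub",
               "pbs": "qsub", "loadleveler": "llsubmit"},
    "delete": {"slurm": "scancel", "lsf": "bkill", "sge": "qdel",
               "pbs": "qdel", "loadleveler": "llcancel"},
    "status": {"slurm": "squeue -j", "lsf": "bjobs", "sge": "qstat -j",
               "pbs": "qstat", "loadleveler": "llq"},
}

def build_command(action: str, scheduler: str, arg: str = "") -> str:
    """Return the scheduler-specific command line for a generic action."""
    base = COMMANDS[action][scheduler]
    return f"{base} {arg}".strip()

print(build_command("submit", "slurm", "job.sh"))  # sbatch job.sh
print(build_command("delete", "pbs", "12345"))     # qdel 12345
```

A real wrapper would pass the resulting string to `subprocess.run` on the login node; here the function only builds the command line.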
| Environment | Slurm | LSF | SGE | PBS/Torque | LoadLeveler |
|---|---|---|---|---|---|
| Job ID | $SLURM_JOBID | $LSB_JOBID | $JOB_ID | $PBS_JOBID | $LOADL_STEP_ID |
| Submit Directory | $SLURM_SUBMIT_DIR | $LSB_SUBCWD | $SGE_O_WORKDIR | $PBS_O_WORKDIR | $LOADL_STEP_INITDIR |
| Submit Host | $SLURM_SUBMIT_HOST | $LSB_SUB_HOST | $SGE_O_HOST | $PBS_O_HOST | |
| Node List | $SLURM_JOB_NODELIST | $LSB_HOSTS / $LSB_MCPU_HOSTS | $PE_HOSTFILE | $PBS_NODEFILE | $LOADL_PROCESSOR_LIST |
| Job Array Index | $SLURM_ARRAY_TASK_ID | $LSB_JOBINDEX | $SGE_TASK_ID | $PBS_ARRAYID | |
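A portable job script can tell which scheduler it is running under by checking which of the job-ID variables above is set. A minimal sketch, assuming only the variable names from the table (`detect_scheduler` is an illustrative helper, not a standard function):

```python
import os

# Job-ID environment variables from the Environment table, checked in order.
JOB_ID_VARS = {
    "slurm": "SLURM_JOBID",
    "lsf": "LSB_JOBID",
    "sge": "JOB_ID",
    "pbs": "PBS_JOBID",
    "loadleveler": "LOADL_STEP_ID",
}

def detect_scheduler(env=None):
    """Return (scheduler, job_id) for the first job-ID variable found."""
    if env is None:
        env = os.environ
    for scheduler, var in JOB_ID_VARS.items():
        if var in env:
            return scheduler, env[var]
    return None, None

# Demo with a fake environment, as if running inside a Slurm job:
print(detect_scheduler({"SLURM_JOBID": "4242"}))  # ('slurm', '4242')
```

Passing the environment as a parameter keeps the helper testable; in a real job script you would call it with no arguments so it reads `os.environ`.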
| Job Specification | Slurm | LSF | SGE | PBS/Torque | LoadLeveler |
|---|---|---|---|---|---|
| Script directive | #SBATCH | #BSUB | #$ | #PBS | #@ |
| Queue | -p [queue] | -q [queue] | -q [queue] | -q [queue] | class=[queue] |
| Node Count | -N [min[-max]] | -n [count] | N/A | -l nodes=[count] | node=[count] |
| CPU Count | -n [count] | -n [count] | -pe [PE] [count] | -l ppn=[count] OR -l mppwidth=[PE_count] | |
| Wall Clock Limit | -t [min] OR -t [days-hh:mm:ss] | -W [hh:mm:ss] | -l h_rt=[seconds] | -l walltime=[hh:mm:ss] | wall_clock_limit=[hh:mm:ss] |
| Standard Output File | -o [file_name] | -o [file_name] | -o [file_name] | -o [file_name] | output=[file_name] |
| Standard Error File | -e [file_name] | -e [file_name] | -e [file_name] | -e [file_name] | error=[file_name] |
| Combine stdout/err | (use -o without -e) | (use -o without -e) | -j yes | -j oe (both to stdout) OR -j eo (both to stderr) | |
| Copy Environment | --export=[ALL \| NONE \| variables] | | -V | -V | environment=COPY_ALL |
| Event Notification | --mail-type=[events] | -B or -N | -m abe | -m abe | notification=start\|error\|complete\|never\|always |
| Email Address | --mail-user=[address] | -u [address] | -M [address] | -M [address] | notify_user=[address] |
| Job Name | --job-name=[name] | -J [name] | -N [name] | -N [name] | job_name=[name] |
| Job Restart | --requeue OR --no-requeue | -r | -r [yes\|no] | -r [y\|n] | restart=[yes\|no] |
| Working Directory | --workdir=[dir_name] | (submission directory) | -wd [directory] | N/A | initialdir=[directory] |
| Resource Sharing | --exclusive OR --shared | -x | -l exclusive | -l naccesspolicy=singlejob | node_usage=not_shared |
| Memory Size | --mem=[mem][M\|G\|T] OR --mem-per-cpu=[mem][M\|G\|T] | -M [MB] | -l mem_free=[memory][K\|M\|G] | -l mem=[MB] | requirements=(Memory >= [MB]) |
| Account to Charge | --account=[account] | -P [account] | -A [account] | -W group_list=[account] | |
| Tasks Per Node | --tasks-per-node=[count] | | (Fixed allocation_rule in PE) | -l mppnppn [PEs_per_node] | tasks_per_node=[count] |
| CPUs Per Task | --cpus-per-task=[count] | | | | |
| Job Dependency | --depend=[state:job_id] | -w [done \| exit \| finish] | -hold_jid [job_id \| job_name] | -d [job_id] | |
| Job Project | --wckey=[name] | -P [name] | -P [name] | | |
| Job Host Preference | --nodelist=[nodes] AND/OR --exclude=[nodes] | -m [nodes] | -q [queue]@[node] OR -q [queue]@@[hostgroup] | | |
| Quality of Service | --qos=[name] | | | -l qos=[name] | |
| Job Arrays | --array=[array_spec] | -J "name[array_spec]" | -t [array_spec] | -t [array_spec] | |
| Generic Resources | --gres=[resource_spec] | | -l [resource]=[value] | -l other=[resource_spec] | |
| Licenses | --licenses=[license_spec] | -R "rusage[license_spec]" | -l [license]=[count] | | |
| Begin Time | --begin=y-m-d[Th:m[:s]] | -b [[y:][m:]d:]h:m | -a [ymdhm] | -a [ymdhm] | |
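The script directives above can likewise be generated mechanically, e.g. when porting job scripts between systems. A minimal sketch covering a few options from the Job Specification table (the directive prefixes and option syntax are from the table; `PREFIX`, `OPTIONS`, and `header` are illustrative names):

```python
# Script-directive prefixes and option syntax from the table above.
PREFIX = {"slurm": "#SBATCH", "lsf": "#BSUB", "sge": "#$", "pbs": "#PBS"}

OPTIONS = {
    "queue":    {"slurm": "-p {}", "lsf": "-q {}", "sge": "-q {}", "pbs": "-q {}"},
    "job_name": {"slurm": "--job-name={}", "lsf": "-J {}",
                 "sge": "-N {}", "pbs": "-N {}"},
    "stdout":   {"slurm": "-o {}", "lsf": "-o {}", "sge": "-o {}", "pbs": "-o {}"},
}

def header(scheduler: str, **opts) -> str:
    """Render the job-script directive lines for the given scheduler."""
    return "\n".join(
        f"{PREFIX[scheduler]} {OPTIONS[key][scheduler].format(value)}"
        for key, value in opts.items()
    )

print(header("slurm", queue="short", job_name="demo"))
# #SBATCH -p short
# #SBATCH --job-name=demo
```

The same call with `scheduler="pbs"` emits `#PBS -q short` and `#PBS -N demo`, which is exactly the row-by-row translation the table expresses.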