The HPC cluster systems use the Sun Grid Engine (SGE) queue scheduler. The feature of a queue scheduler that users interact with most is job submission. The SGE manual pages are very good and should be referred to for details; for this particular topic the qsub manual page is the authoritative source.
```
man qsub
```
This document provides a brief introduction to the most common options used to submit jobs to the SGE system. It focuses on single-processor jobs, which are the most basic case but not necessarily the most common. Submission of parallel jobs is covered in Advanced Job Submission.
...
```
#!/bin/sh
# This is a very simple example job script
/Users/jdoe/my_program
```
Info: The line endings of text files created on Windows systems differ from those used on Linux systems. It is best to create your scripts on the cluster, but if you create them on a Windows machine and copy them over, you will need to convert the line endings. Run the following on your script to convert it:
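A common way to do this is with the dos2unix utility, assuming it is installed on the cluster (the script name below is just a placeholder):

```
# Convert Windows (CRLF) line endings to Linux (LF) in place
dos2unix myjob.sh
```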
Submitting the job
To run the above program on the cluster it must first be submitted to SGE. This is done with the qsub command.
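For example, if the script above were saved as myjob.sh (a placeholder name), it could be submitted with:

```
qsub myjob.sh
```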
...
| qsub option | Description |
| --- | --- |
| -V | Imports your current environment into the job. This is set by default, so it does not have to be specified, but it is good to know about. |
| -N [name] | The name of the job. Make sure this makes sense to you. |
| -l h_rt=hr:min:sec | Maximum walltime for this job. You may want to specify this if you think your job may run out of control. |
| -l h_vmem=bytes | Maximum memory for this job. You can specify a unit as well, for example -l h_vmem=2G. An appropriate value will be set for your job if an entire node is not requested. |
| -r [y,n] | Should this job be re-runnable (default n). |
| -cwd | Execute the job from the current working directory. If not specified, the job will be run from your home directory. |
| -S [shell] | Specify the shell to use when interpreting the job script. |
| -e [path] | Name of a file or directory for standard error. |
| -o [path] | Name of a file or directory for standard output. |
| -j [y,n] | Merge the standard error stream into the standard output stream. |
| -pe [name] [n] | Parallel environment name and number of slots (cores). |
| -M [email address] | Set the email address to receive email about jobs. This will most likely be your University of Iowa email address. Separate multiple addresses with commas. |
| -m b\|e\|a\|s\|n,... | Specify when to send an email message: 'b' at the beginning of the job, 'e' at the end of the job, 'a' when the job is aborted, 's' when the job is suspended, 'n' no mail is sent. |
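As an illustration, several of these options can be combined on a single command line. The job name, resource values, email address, and script name below are arbitrary placeholders, not site requirements:

```
qsub -N myjob -l h_rt=04:00:00 -l h_vmem=2G -cwd -j y -M jdoe@uiowa.edu -m be myjob.sh
```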
...
Where n=the lowest index number, m=the highest index number, and s=the step size. m and s are optional, which means that you could enter a single number (n), a simple range (n-m), or a range with step size (n-m:s).
The index number can be referenced within the script with the variable SGE_TASK_ID. For this to be useful, something must map to the index number. For example, say there is a set of files named input1, input2, input3, input4, input5, input6, input7, input8, and render.job contains the following:
```
#!/bin/sh
# Example showing how task arrays work
render input$SGE_TASK_ID
```
The input file rendered by each task will be the one whose number matches that task's ID. So, for example, if you wanted to run the script render.job 4 times, processing the files input2, input4, input6, and input8, you would enter:
```
qsub -t 2-8:2 render.job
```
render.job would then be run 4 times, each with the default allocation of resources, with the input file corresponding to the base name plus the index number. See Array Jobs for more information.
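In effect, the four tasks run the following commands, shown here only to illustrate how $SGE_TASK_ID expands in each task:

```
render input2   # task ID 2
render input4   # task ID 4
render input6   # task ID 6
render input8   # task ID 8
```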
Exit status
Every process, and therefore every job, has an exit status. If the job completes normally, the exit status is that of the computation process. If the job does not complete normally, a value of 128 is added: when the command exits due to receiving a signal, the exit status is 128 plus the numerical value of the signal. This is common for jobs running in the all.q queue when a job is evicted. A TERM signal is sent to a job when it is evicted; the numerical value of the TERM signal is 15, so the exit status of the job is 128 + 15 = 143. Note that in some cases a TERM signal is not sufficient to remove a job and a follow-up KILL signal must be sent, in which case the job exit status is 128 + 9 = 137. Another case where a job exits with status 143 is when it hits its memory limit. So if your job has an exit status of 143 and it was not running in the all.q queue, it probably hit the memory limit. This can be further confirmed by examining the accounting record of the job with qacct.
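For example, once the job has finished, its accounting record can be inspected with qacct. The job ID 123456 is just a placeholder; exit_status, failed, and maxvmem are fields found in typical SGE accounting output:

```
# Show the full accounting record for the job
qacct -j 123456

# Or pull out just the fields relevant to exit status and memory use
qacct -j 123456 | grep -E 'exit_status|failed|maxvmem'
```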