Table of Contents

The HPC cluster systems use the Sun Grid Engine (SGE) queue scheduler system. The part of a queue scheduler that users interact with most is job submission. The manual pages for SGE are very good and should be referred to for details. For this particular topic the qsub manual page is the authoritative source.

No Format
man qsub

This document provides a brief introduction to the most common options that might be used to submit jobs to the SGE system. It will focus on single processor jobs, as that is the most basic case, though not necessarily the most common. Details on submission of parallel jobs are covered in Advanced Job Submission.

...

Code Block
languagebash
titleExample job script, myscript.job
#!/bin/sh
# This is a very simple example job script
/Users/jdoe/my_program


Info

The line ending convention for text files created on Windows systems is different from that of Linux systems. It is best to create your scripts on the cluster, but if you create them on a Windows machine and copy them over then you will need to convert the line endings. Run the following on your script to convert it:

dos2unix script

Submitting the job

To run the above program on the cluster it must first be submitted to SGE. This is done with the qsub command.

...

That will submit the job with all of the default options. For a parallel job, a parallel environment would need to be specified. This will be covered in more detail in Advanced Job Submission but an example would look like

No Format
qsub -pe smp 12 myscript.job

The default queue is set to be the UI queue, which has a limit of 25 running jobs per user on Helium and 10 running jobs per user on Neon. In addition, Neon has a high memory queue (UI-HM) for large memory jobs. If you have many single processor jobs to run it may be better to submit them to the all.q queue, which has no limit, but is subordinate (/wiki/spaces/hpcdocs/pages/76513448) to the other queues.
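For example, a batch of single processor jobs could be directed to all.q with the standard -q flag (a sketch, assuming the myscript.job script from above):

```shell
# Submit to the unlimited (but subordinate) all.q queue instead of the default UI queue
qsub -q all.q myscript.job
```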

...

The following are some common options for the qsub command. For information on other options for more complex submissions, check the man pages with the command "man qsub".

qsub option          Description

-V                   Imports your current environment into the job. This is set by default, so it does not have to be specified, but it is good to know about.
-N [name]            The name of the job. Make sure this makes sense to you.
-l h_rt=hr:min:sec   Maximum walltime for this job. You may want to specify this if you think your job may run out of control.
-l h_vmem=bytes      Maximum virtual memory for this job. You can specify a unit as well, for example -l h_vmem=2G. An appropriate value will be set for your job if an entire node is not requested.
-r [y|n]             Whether this job should be re-runnable (default n).
-cwd                 Execute the job from the current working directory. If not specified, the job will be run from your home directory.
-S [shell]           Specify the shell to use when interpreting the job script.
-e [path]            Name of a file or directory for standard error.
-o [path]            Name of a file or directory for standard output.
-j [y|n]             Merge the standard error stream into the standard output stream.
-pe [name] [n]       Parallel environment name and number of slots (cores).
-M [address]         Set the email address to receive email about jobs. Separate addresses with a comma if more than one is specified.
-m b|e|a|s|n,...     Specify when to send an email message:
                     'b'  mail is sent at the beginning of the job
                     'e'  mail is sent at the end of the job
                     'a'  mail is sent when the job is aborted
                     's'  mail is sent when the job is suspended
                     'n'  no mail is sent

Info

The "mail when job is suspended" option does not currently work.
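Several of these options are commonly combined on a single command line. A sketch (the job name, walltime, slot count, and email address are illustrative):

```shell
# Name the job, cap walltime at 24 hours, request 4 slots in the smp
# parallel environment, run from the current directory, merge stderr
# into stdout, and send mail at job begin and end.
qsub -N myjob -l h_rt=24:00:00 -pe smp 4 -cwd -j y -M jdoe@example.edu -m be myscript.job
```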


Note

For multi-node jobs, the CPU and memory accounting info in the job email is only for the primary queue host. It does not account for the CPU and memory of the secondary nodes. That information is in the job accounting record and can be obtained via the qacct command.


Memory request

If you need a certain amount of memory to be available for your computation to start, you can request that with a resource request.

...
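For example, to ask SGE for 4 gigabytes of virtual memory per slot using the h_vmem resource described above (a sketch; the resource names actually enforced depend on the cluster configuration):

```shell
qsub -l h_vmem=4G myscript.job
```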

It is often necessary or desired to redirect the standard output and standard error streams to files. This can be accomplished with the typical shell redirection calls, but SGE also provides a mechanism for capturing the stdout and stderr streams. By default, the standard error is written to a file named $JOB_NAME.e$JOB_ID and the standard output is written to a file called $JOB_NAME.o$JOB_ID. These can be set via the -e and -o flags to qsub, respectively.
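For example, to merge both streams into a single, explicitly named log file (a sketch using the flags described here):

```shell
# -j y merges stderr into stdout; -o names the combined output file
qsub -j y -o myjob.log myscript.job
```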

...

Where n=the lowest index number, m=the highest index number, and s=the step size. m and s are optional, which means that you could enter a single number (n), a simple range (n-m), or a range with step size (n-m:s).

The index number can be referenced within the script with the variable SGE_TASK_ID. There must be something mapping to the index number to make this useful. So, for example, say there are a set of files named, input1, input2, input3, input4, input5, input6, input7, input8 and render.job contains the following:

Code Block
languagebash
titleExample referencing task ID
#!/bin/sh
# Example showing how task arrays work
render input$SGE_TASK_ID

The input file rendered will be the one corresponding to what the task ID is for each task created. So, for example, if you wanted to run the script render.job 4 times, processing the files input2, input4, input6, and input8, you would enter:

No Format
qsub -t 2-8:2 render.job

render.job would then be run 4 times, each with the default allocation of resources, with the input file corresponding to the basename plus the index number. See Array Jobs for more information.
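The pattern can also be tried outside of SGE. The sketch below stands in for render.job, substituting echo for the hypothetical render program and defaulting SGE_TASK_ID so the script runs interactively too:

```shell
#!/bin/sh
# Stand-in for render.job: under SGE, SGE_TASK_ID is set automatically
# for each task in the array. Default to 1 here so the script also
# runs outside the scheduler.
: "${SGE_TASK_ID:=1}"
input="input${SGE_TASK_ID}"
echo "processing ${input}"
```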

Exit status

Every process, and therefore every job, has an exit status. If the job completed normally then the exit status is that of the computation process. However, if the job does not complete normally then a value of 128 is added to the exit status of the command. If the command exited due to receiving a signal then the value of the signal is added to 128. This would be common for jobs running in the all.q queue when a job is evicted. A TERM signal is sent to a job when it is evicted. The numerical value of the TERM signal is 15 so the exit status of the job would be 128 + 15 = 143. Note that in some cases a TERM signal is not sufficient to remove a job and a follow up KILL signal will have to be sent. The job exit status would then be 128 + 9 = 137. Another case where a job will exit with status 143 is when memory limits are hit. So if your job has an exit status of 143 and it was not running in the all.q queue then it probably hit the memory limit. This can be further confirmed by examining the accounting record of the job with qacct.
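The arithmetic can be reproduced locally: the sketch below kills a throwaway shell with a TERM signal and then decodes the resulting exit status:

```shell
#!/bin/sh
# Simulate a job evicted with SIGTERM, then decode its exit status.
sh -c 'kill -TERM $$'
status=$?
if [ "$status" -gt 128 ]; then
    signal=$((status - 128))
    # prints: terminated by signal 15 (exit status 143)
    echo "terminated by signal ${signal} (exit status ${status})"
fi
```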