...
General Description
Argon is the University of Iowa's latest HPC system. It consists of 366 compute nodes running CentOS-7.4 Linux and was deployed in February 2017. There are several compute node configurations:
...
40-core 96GB
40-core 192GB
64-core 192GB
64-core 384GB
64-core 768GB
80-core 96GB
80-core 192GB
80-core 384GB
80-core
...
The Argon cluster is split between two data centers,
- ITF → Information Technology Facility
- LC → Lindquist Center
There are 22 machines with Nvidia P100 accelerators, 2 machines with Nvidia K80 accelerators, 2 machines with Nvidia P40 accelerators, 13 machines with 1080Ti accelerators, and 16 machines with Titan V accelerators. Most of the nodes are connected with the OmniPath high speed interconnect fabric.
Info: The Titan V is now considered a supported configuration in GPU-capable compute nodes but is restricted to a single card per node. Staff have completed the qualification process for the 1080 Ti and concluded that it is not a viable solution to add to current Argon compute nodes.
The Rpeak (theoretical peak performance) is 385.0 TFLOPS, not including the accelerators, and the system has 89.7 TB of memory. In addition, there are 2 login nodes of the Broadwell system architecture, with 256GB of memory each.
While on the backend Argon is a completely new architecture, the frontend should be very familiar to those who have used previous generation HPC systems at the University of Iowa. There are, however, a few key differences that will be discussed in this page.
Heterogeneity
While previous HPC cluster systems at UI have been very homogenous, the Argon HPC system has a heterogeneous mix of compute node types. In addition to the variability in the GPU accelerator types listed above, there are also differences in CPU architecture. We generally follow Intel marketing names, with the most important distinction being the AVX (Advanced Vector Extensions) unit on the processor. The following table lists the processors in increasing generational order.
...
Note that code must be optimized during compilation to take advantage of AVX instructions. The CPU architecture is important to keep in mind both in terms of potential performance and compatibility. For instance, code optimized for AVX2 instructions will not run on the Sandybridge/Ivybridge architecture because it only supports AVX, not AVX2. However, each successive generation is backward compatible so code optimized with AVX instructions will run on Haswell/Broadwell systems.
Info: More information on compiling is forthcoming.
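In the meantime, here is a minimal illustration of targeting a specific AVX level with GCC. The flags are standard GCC options, but the compiler choice and file names are only placeholders for your own build:
gcc -O2 -mavx -o mycode.avx mycode.c    # AVX build; runs on Sandybridge/Ivybridge and newer
gcc -O2 -mavx2 -o mycode.avx2 mycode.c  # AVX2 build; requires Haswell/Broadwell or newer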
Hyperthreaded Cores (HT)
One important difference between Argon and previous systems is that Argon has hyperthreaded processor cores turned on. Hyperthreaded cores can be thought of as splitting a single processor into two virtual cores, much as a Linux process can be split into threads. That oversimplifies it, but if your application is multithreaded then hyperthreaded cores can potentially run the application more efficiently. For non-threaded applications you can think of any pair of hyperthreaded cores as roughly equivalent to two cores at half the speed if both cores of the pair are in use. This can help ensure that the physical processor is kept busy for processes that do not always use the full capacity of a core. The reason for enabling HT on Argon is to try to increase system efficiency on the workloads that we have observed. There are some things to keep in mind as you are developing your workflows.
- For high throughput jobs the use of HT can increase overall throughput by keeping cores active as jobs come and go. These jobs can treat each HT core as a processor.
- For multithreaded applications, HT will provide more efficient handling of threads. You must make sure to request the appropriate number of job slots. Generally, the number of job slots requested should equal the number of cores that will be running.
- For non-threaded CPU bound processes that can keep a core busy all of the time, you probably want to only run one process per core, and not run processes on HT cores. This can be accomplished by taking advantage of the Linux kernel's ability to bind processes to cores. In order to minimize processes running on the HT cores of a machine make sure that only half of the total number of cores are used. See below for more details but requesting twice the number of job slots as the number of cores that will be used will accomplish this. A good example of this type of job is non-threaded MPI jobs, but really any non-threaded job.
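For example, if your job script is written in bash syntax you can derive the process count from the $NSLOTS variable that SGE sets, launching one process per physical core while requesting twice as many slots (shown here for an MPI launch):
mpirun -np $(($NSLOTS/2)) ...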
Job Scheduler/Resource Manager
Like previous UI HPC systems, Argon uses SGE, although this version is based off of a slightly different code-base. If anyone is interested in the history of SGE there is an interesting writeup at History of Grid Engine Development. The version of SGE that Argon uses is from the Son of Grid Engine project. For the most part this will be very familiar to people who have used previous generations of UI HPC systems. One thing that will look a little different is the output of the qhost command. This will show the CPU topology.
qhost -h argon-compute-1-01
HOSTNAME ARCH NCPU NSOC NCOR NTHR LOAD MEMTOT MEMUSE SWAPTO SWAPUS
----------------------------------------------------------------------------------------------
global - - - - - - - - - -
argon-compute-1-01 lx-amd64 56 2 28 56 0.03 125.5G 1.1G 2.0G 0.0
As you can see, the output shows the number of CPUs (NCPU), the number of CPU sockets (NSOC), the number of cores (NCOR), and the number of threads (NTHR). This information could be important as you plan jobs, but it essentially reflects what was said in regard to HT cores. Note that all Argon nodes have the same processor topology. SGE uses the concept of job slots, which serve as a proxy for the number of cores as well as the amount of memory on a machine. Job slots are one of the resources requested when submitting a job to the system. As a general rule, the number of job slots requested should be equal to or greater than the number of processes/threads that will actually consume resources. The parallel environment to request an entire node on Argon is called Xcpn, where X is the number of slots. For example, to request one node from the 56-slot machines you would request
qsub -pe 56cpn 56
More nodes would be requested by specifying a slot count that is a multiple of 56. So for 2 nodes
qsub -pe 56cpn 112
and so on.
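As a fuller illustration, here is a minimal job script sketch for a two-node run; the job name, application, and the choice to run one rank per physical core are placeholders for your own decisions:
#!/bin/bash
#$ -N mpi_example
#$ -cwd
#$ -pe 56cpn 112
# 112 slots = 2 nodes; run one MPI rank per physical core (half the slots)
mpirun -np $(($NSLOTS/2)) ./my_mpi_app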
The available Xcpn parallel environments on Argon are:
- 56cpn
- 40cpn
You will need to be aware of the approximate amount of memory per job slot when setting up jobs if your job uses a significant amount of memory. The actual amount will vary due to OS overhead but the values below can be used for planning purposes.
...
...
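Although the exact values vary by node type and OS overhead, a rough rule of thumb is to divide a node's memory by its slot count: on a 56-slot, 256GB node each slot corresponds to roughly 256/56 ≈ 4.5GB, so a job needing about 18GB of memory should request at least 4 slots on such a node.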
Using the Basic Job Submission and Advanced Job Submission pages as a reference, how would one submit jobs taking HT into account? For single process high throughput type jobs it probably does not matter, just request one slot per job. For multithreaded or MPI jobs, request one job slot per thread or process. So if your application runs best with 4 threads then request something like the following.
qsub -pe smp 4
That will run on two physical cores and two HT cores. For non-threaded processes that are also CPU bound you can avoid running on HT cores by requesting 2x the number of slots as cores that will be used. So, if your process is a non-threaded MPI process, and you want to run 4 MPI ranks, your job submission would be something like the following.
qsub -pe smp 8
and your job script would contain an mpirun command similar to
mpirun -np 4 ...
That would run the 4 MPI ranks on physical cores and not HT cores. Note that this will work for non-MPI jobs as well. If you have a non-threaded process that you want to ensure runs on an actual core, you could use the same 2x slot request.
qsub -pe smp 2
Note that if you do not use the above strategy then it is possible that your job process will share cores with other job processes. That may be okay, and preferred for high throughput jobs, but is something to keep in mind. It is especially important to keep this in mind when using the orte parallel environment. There is more discussion of the orte parallel environment on the Advanced Job Submission page. In short, that parallel environment is used in node sharing scenarios, which implies potential core sharing as well. For MPI jobs, that is probably not what you want. As on previous systems, there is a parallel environment (56cpn) for requesting entire nodes. This is especially useful for MPI jobs to ensure the best performance.
For MPI jobs, the system provided openmpi will not bind processes to cores by default, as would be the normal default for openmpi. It is set up this way to avoid inadvertently oversubscribing processes on cores. In addition, the system openmpi settings will map processes by socket. This should give a good process distribution in all cases. However, if you wish to use fewer than 28 processes per node in an MPI job then you may want to map by node to get the most even distribution of processes across nodes. You can do that with the --map-by node option flag to mpirun.
mpirun --map-by node ...
If you wish to control mapping and binding in a more fine-grained manner, the mapping and binding parameters can be overridden with parameters to mpirun. Openmpi provides many options for fine-grained control of process layout. The options that are set by default should be good in most cases but can be overridden with the openmpi options for
- mapping → controls how processes are distributed across processing units
- binding → binds processes to processing units
- ranking → assigns MPI rank values to processes
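For instance, a sketch that overrides all three explicitly; the option names are standard Open MPI options, and the values shown are only an example layout:
mpirun --map-by socket --bind-to core --rank-by core -np 28 ./my_app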
See the mpirun manual page (man mpirun) for more detailed information. The defaults should be fine for most cases, but if you override them keep the topology in mind.
- each node has 2 processor sockets
- each processor socket has 14 processor cores
- each processor core has 2 hardware threads (HT)
If you set your own binding, for instance --bind-to core, be aware that the number of cores is half of the number of total HT processors. Note that core binding in and of itself may not really boost performance much. Generally speaking, if you want to minimize contention with hardware threads then simply request twice as many slots as the number of cores your job will use. Even if the processes are not bound to cores, the OS scheduler will do a good job of minimizing contention.
If your job does not use the system openmpi, or does not use MPI, then any desired core binding will need to be set up with whatever mechanism the software uses. Otherwise, there will be no core binding. Again, that may not be a major issue. If your job does not work well with HT then run on a number of cores equal to half of the number of slots requested and the OS scheduler will minimize contention.
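As one illustration of such a mechanism, an OpenMP code could be pinned through the runtime's own controls; a sketch, assuming the 2x slot request described above and standard OpenMP environment variables:
export OMP_NUM_THREADS=$(($NSLOTS/2))  # one thread per physical core
export OMP_PROC_BIND=true              # ask the OpenMP runtime to bind threads
./my_openmp_app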
new SGE utilities
While SoGE is very similar to previous versions of SGE there are some new utilities that people may find of interest. There are manual pages for each of these.
- qstatus: Reformats output of qstat and can calculate job statistics.
- dead-nodes: This will tell you what nodes are not physically participating in the cluster.
- idle-nodes: This will tell you what nodes do not have any activity on them.
- busy-nodes: This will tell you what nodes are running jobs.
- nodes-in-job: This is probably the most useful. Given a job ID it will list the nodes that are in use for that particular job.
SSH to compute nodes
On previous UI HPC systems it was possible to briefly ssh to any compute node, before getting booted from that node if a registered job was not found. This was sufficient to run an ssh command, for instance, on any node. This is not the case for Argon. SSH connections to compute nodes will only be allowed if you have a registered job on that host. Of course, qlogin sessions will allow you to login to a node directly as well. Again, if you have a job running on a node you can ssh to that node in order to check status, etc. You can find the nodes of a job with the nodes-in-job command mentioned above. We ask that you not do more than observe things while logged into the node as it may have shared jobs on it.
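For example, a hypothetical session (the job ID and node name here are made up):
qstat -u $USER          # find the ID of your running job
nodes-in-job 1234567    # list the nodes assigned to that job
ssh argon-compute-1-01  # connect to one of those nodes to check on the job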
Software Packages
While there are many software applications installed from RPM packages, many commonly used packages, and their dependencies, are built from source. See the Argon Software List to view the packages and versions installed. Note that this list does not include all of the dependencies that are installed, which will consist of newer versions than those installed via RPM. Use of these packages is facilitated through the use of environment modules, which will set up the appropriate environment for the application, including loading required dependencies. Some packages like Perl, Ruby, R and Python, are extendable. We build a set of extensions based on commonly used and requested extensions so loading modules for those will load all of the extensions, and dependencies needed for the core package as well as the extensions. The number of extensions installed, particularly for Python and R is too large to list here. You can use the standard tools of those packages to determine what extensions are installed.
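For example, after loading the relevant module you can list what is installed with each package's own tooling; a sketch, where the exact module names may differ from what is on Argon:
module load python
pip list                                      # Python extensions in the loaded stack
module load R
Rscript -e 'rownames(installed.packages())'   # R extensions in the loaded stack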
Environment Modules
Like previous generation UI HPC systems, Argon uses environment modules for managing the shell environment needed by software packages. Argon uses Lmod rather than the TCL modules used on previous generation UI HPC systems. More information about Lmod can be found in Lmod: A New Environment Module System. Briefly, Lmod provides improvements over TCL modules in some key ways. One is that Lmod will automatically load and/or swap dependent environment modules when higher level modules are changed in the environment. It can also temporarily deactivate modules if a suitable alternative is not found, and can reactivate those modules when the environment changes back. We are not using all of the features that Lmod is capable of, so the module behavior should be very close to previous systems but with a more robust way of handling dependencies.
Lmod provides a mechanism to save a set of modules that can then be restored. For those who wish to load modules at shell startup this provides a better mechanism than calling individual module files. The reasons are that
- Only one command is needed
- The same command can be used at any time
- Restoring a module set runs a module purge, which will ensure that the environment, at least the part controlled by modules, is predictable.
To use this, simply load the modules that you want to have loaded as a set. Then run the following command.
module save
That will save the loaded modules as the default set. To restore that run
module restore
That command could then be put in your shell initialization file. In addition to saving/restoring a default set you can also assign a name to the collection.
module save mymodules
module restore mymodules
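For example, a minimal sketch of a shell startup entry, assuming bash and that you have already saved a default collection with module save:
# in ~/.bashrc
module restore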
There is also a technical reason to use the module save/restore feature as opposed to individual modules that involves how the LD_LIBRARY_PATH environment variable is handled at shell initialization.
Other than the above items, and some other additional features, the environment modules controlled by Lmod should behave very similarly to the TCL modules on previous UI HPC systems.
Setting default shell
Unix attributes are now available in the campus wide Active Directory Service and Argon makes use of those. One of those attributes is the default Unix shell. This can be set via the following tool: Set Login Shell - Conch. Most people will want the shell set to /bin/bash
so that would be a good choice if you are not sure. For reference, previous generation UI HPC systems set the shell to /bin/bash
for everyone, unless requested otherwise. We recommend that you check your shell setting via the Set Login Shell - Conch tool and set it as desired before logging in the first time. Note that changes to the shell setting may take up to 24 hours to become effective on Argon.
Queues and Policies
...
...
ARROMA
...
Katharine Corum
...
BIGREDQ
...
Sara Mason
...
BIOLOGY
...
Matthew Brockman
...
BIOSTAT
...
(1) 56-core 256GB with P100 accelerator
(1) 64-core 192GB, not yet equipped with accelerators
...
120
...
Boyd Knosp
...
Boyd Knosp
...
CGRER + LMOS
...
Jeremie Moen
...
Brad Carson
...
CLAS-INSTR
...
Brad Carson
...
(2) 40-core 192GB with 1080Ti accelerators
(One node with single, one node with two accelerators)
...
Mark Wilson
Brian Miller
...
(10) 56-core 256GB
Note: Users are restricted to no more than
three running jobs in the COE queue.
...
Matt McLaughlin
...
(2) 40-core 192GB with (4) Titan V accelerators
(2) 40-core 192GB with (4) 1080Ti accelerators
...
DARBROB
...
Benjamin Darbro
...
MF
...
Michael Flatte
...
FLUIDSLAB
...
Mark Wilson
Brian Miller
...
GEOPHYSICS
...
(3) 56-core 128GB
...
William Barnhart
...
Mark Wilson
Brian Miller
...
Mark Wilson
Brian Miller
...
Diana Kolbe
...
INFORMATICS
...
INFORMATICS-GPU
...
(2) 56-core 256GB with Titan V accelerators
(2) 40-core 192GB with (3) Titan V accelerators
...
INFORMATICS-HM-GPU
...
Todd Scheetz
...
Mark Wilson
Brian Miller
...
Jake Michaelson
...
Virginia Willour
...
Qihang Lin
...
MORL
...
William (Daniel) Walls
...
Mike Schnieders
...
Mark Wilson
Brian Miller
...
Mark Wilson
Brian Miller
...
Scott Baalrud
...
Mark Wilson
Brian Miller
...
UI-DEVELOP
...
(4) 56-core 256GB with P100 accelerator
(2) 40-core 192GB with (4) 1080Ti accelerators
(4) 40-core 192GB with (4) Titan V accelerators
...
(19) 56-core 256GB
...
(115) 56-core 128GB
(154) 56-core 256GB
(7) 56-core 256GB with (1) P100 accelerator
(5) 56-core 256GB with (2) P100 accelerators
(2) 56-core 256GB with (1) Titan V accelerator
(42) 56-core 512GB
(2) 56-core 512GB with (1) K80 accelerator
(9) 56-core 512GB with (1) P100 accelerator
(1) 56-core 512GB with (2) P100 accelerators
(2) 56-core 512GB with (1) P40 accelerator
(4) 56-core 512GB with (1) Titan V accelerator
(6) 40-core 192GB with (4) Titan V accelerators
(2) 40-core 192GB with (3) Titan V accelerators
(1) 40-core 192GB with (2) Titan V accelerators
(4) 40-core 192GB with (4) 1080Ti accelerators
(1) 40-core 192GB with (2) 1080Ti accelerators
(1) 40-core 192GB with (1) 1080Ti accelerator
(1) 40-core 96GB with (1) Titan V accelerator
(7) 40-core 96GB with (4) 1080Ti accelerators
(1) 64-core 192GB (No accelerators at this time, will be equipped with multiple)
...
Haiming Chen
...
Craig Pryor
...
A significant portion of the HPC cluster systems at UI were funded centrally. These nodes are put into queues named UI or prefixed with UI-.
- UI → Default queue
- UI-HM → 56-core 512GB nodes; request only for jobs that need more memory than can be met with the standard nodes.
- UI-MPI → MPI jobs; request only for jobs that can take advantage of multiple nodes.
- UI-GPU → Contains nodes with GPU accelerators; request only if job can use a GPU accelerator.
- UI-DEVELOP → Meant for small, short running job prototypes and debugging.
These queues are available to everyone who has an account on an HPC system. Since that is a fairly large user base there are limits placed on these shared queues. Also note that there is a limit of 10000 active (running and pending) jobs per user on the system.
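For example, to aim a small test job at the UI-DEVELOP queue, assuming the standard SGE -q option for queue selection and a placeholder script name:
qsub -q UI-DEVELOP -pe smp 4 myscript.sh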
...
(20) 56-core 256GB
...
(5) 56-core 512GB
...
UI-MPI
(56 slot minimum)
...
(20) 56-core 256GB
...
(4) 56-core 256GB with P100 accelerator
(2) 40-core 192GB with (4) 1080Ti accelerators
(4) 40-core 192GB with (4) Titan V accelerators
...
Note that the number of slots available in the UI queue can vary depending on whether anyone has purchased a reservation of nodes. The UI queue is the default queue and will be used if no queue is specified. This queue is available to everyone who has an account on a UI HPC cluster system.
Info: Please use the UI-DEVELOP queue for testing new jobs at a smaller scale before committing many nodes to your job.
In addition to the above, the HPC systems have some nodes that are not part of any investor queue. These are in the /wiki/spaces/hpcdocs/pages/76513448 and are used for node rentals and future purchases. The number of nodes for this purpose varies.
Resource requests
The Argon cluster is a heterogeneous cluster, meaning that it consists of different node types with varying amounts and types of resources. There are many resources that SGE keeps track of and most of them can be used in job submissions. However, the resource designations for machines based on CPU type, memory amount, and GPU are more likely to be used in practice. Note that there can be very different performance characteristics for different types of GPUs and CPUs. As noted above, the Argon cluster is split between two data centers,
- ITF → Information Technology Facility
- LC → Lindquist Center
As we expand the capacity of the cluster the datacenter selection could be important for multi node jobs, such as those that use MPI, that require a high speed interconnect fabric. The compute nodes used in those jobs would need to be in the same data center.
Info: Currently, all nodes with the OmniPath fabric are located in the LC datacenter. All nodes that have the
...
hm
deprecated
...
- broadwell
- skylake_silver
...
- ITF
- LC
...
- none*
- omnipath
* no high speed interconnect fabric
...
p40
...
GPU resources
If you wish to use a compute node that contains a GPU then it must be explicitly requested in some form. The table above lists the Boolean resources for selecting a specific GPU, or any one of the types, with the generic gpu resource.
For example, if you run a job in the all.q queue and want to use a node with a GPU, but do not care which type,
qsub -l ngpus=1
If you specifically wanted to use a node with a P100 GPU,
qsub -l gpu_p100=true
or use the shortcut,
qsub -l p100=true
In all cases, requesting any of the GPU Boolean resources will set the ngpus resource value to 1 to signify to the scheduler that 1 GPU device is required. If your job needs more than one GPU then that can be specified explicitly with the ngpus resource. For example,
qsub -l ngpus=2
...
In addition to the ngpus resource there are some other non-Boolean resources for GPU nodes that could be useful to you. With the exception of requesting free memory on a GPU device these are informational.
...
number of CUDA GPUs on the host
...
number of OpenCL GPUs on the host
...
total number of GPUs on the host
...
free memory on CUDA GPU N
...
number of processes on CUDA GPU N
...
maximum clock speed of CUDA GPU N (in MHz)
...
gpu.cuda.N.util
...
compute utilization of CUDA GPU N (in %)
...
maximum clock speed of OpenCL GPU N (in MHz)
...
global memory of OpenCL GPU N
...
semi-colon-separated list of GPU model names
...
For example, to request a node with at least 2G of memory available on the first GPU device:
qsub -l gpu.cuda.0.mem_free=2G
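Putting these pieces together, a hypothetical GPU job might be submitted and structured as follows; the queue, slot count, script name, and application are placeholders:
qsub -q UI-GPU -l ngpus=1 -pe smp 16 gpu_job.sh
# inside gpu_job.sh
nvidia-smi      # confirm which GPU device the job was given
./my_cuda_app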
...
80-core 768GB
80-core 1.5TB
112-core 256GB
112-core 512GB
112-core 1024GB
112-core 1.5TB
128-core 256GB
128-core 512GB
128-core 1TB
128-core 1.5TB
The Argon cluster is split between two data centers,
ITF → Information Technology Facility
LC → Lindquist Center
Most of the nodes in the LC datacenter are connected with the OmniPath high speed interconnect fabric. The nodes in the ITF data center are connected with a Mellanox Infiniband EDR fabric. There are two separate fabrics at ITF which do not interconnect. We refer to each of these fabrics as an island.
There are many machines with varying types of GPU accelerators:
21 machines with Nvidia P100 accelerators
2 machines with Nvidia K80 accelerators
2 machines with Nvidia P40 accelerators
17 machines with 1080Ti accelerators
19 machines with Titan V accelerators
14 machines with V100 accelerators
38 machines with 2080Ti accelerators
1 machine with RTX8000 accelerators
7 machines with A100 accelerators
5 machines with 4 A40 accelerators each
2 machines with 4 L40S accelerators each
1 machine with 4 L4 accelerators
Heterogeneity
While previous HPC cluster systems at UI have been very homogenous, the Argon HPC system has a heterogeneous mix of compute node types. In addition to the variability in the GPU accelerator types listed above, there are also differences in CPU architecture. We generally follow Intel marketing names, with the most important distinction being the AVX (Advanced Vector Extensions) unit on the processor. The following table lists the processors in increasing generational order.
Architecture | AVX level | Floating Point Operations per cycle |
---|---|---|
Haswell | AVX2 | 16 |
Skylake Silver | AVX512 | 16 (1) AVX unit per processor core |
Skylake Gold | AVX512 | 32 (2) AVX units per processor core |
Cascadelake Gold | AVX512 | 32 |
Sapphire Rapids Gold | AVX512 |
Note that code must be optimized during compilation to take advantage of AVX instructions. The CPU architecture is important to keep in mind both in terms of potential performance and compatibility. For instance, code optimized for AVX512 instructions will not run on the Haswell/Broadwell architecture because it only supports AVX2, not AVX512. However, each successive generation is backward compatible so code optimized with AVX2 instructions will run on Skylake/Cascadelake systems.
Hyper Threaded Cores (HT)
One important difference between Argon and previous systems is that Argon has Hyper Threaded processor cores turned on. Hyper Threaded cores can be thought of as splitting a single processor into two virtual cores, much as a Linux process can be split into threads. That oversimplifies it but if your application is multithreaded then Hyper Threaded cores can potentially run the application more efficiently. For non-threaded applications you can think of any pair of Hyper Threaded cores to be roughly equivalent to two cores at half the speed if both cores of the pair are in use. Again, that is an over simplification, but the main point is that CPU bound processes perform better when not sharing a CPU core. Hyper Threaded cores can help ensure that the physical processor is kept busy for processes that do not always use the full capacity of a core. The reason for enabling HT for Argon is to try to increase system efficiency on the workloads that we have observed. There are some things to keep in mind as you are developing your workflows.
For high throughput jobs the use of HT can increase overall throughput by keeping cores active as jobs come and go. These jobs can treat each HT core as a processor.
For multithreaded applications, HT will provide more efficient handling of threads. You must make sure to request the appropriate number of job slots. Generally, the number of job slots requested should equal the number of cores that will be running.
For non-threaded CPU bound processes that can keep a core busy all of the time, you probably want to only run one process per core, and not run processes on HT cores. This can be accomplished by taking advantage of the Linux kernel's ability to bind processes to cores. In order to minimize processes running on the HT cores of a machine make sure that only half of the total number of cores are used. See below for more details but requesting twice the number of job slots as the number of cores that will be used will accomplish this. A good example of this type of job is non-threaded MPI jobs, but really any non-threaded job. If your job script is written in bash syntax then you can use the $NSLOTS SGE variable as follows, using mpirun as an example:
mpirun -np $(($NSLOTS/2)) ...
Job Scheduler/Resource Manager
Like previous UI HPC systems, Argon uses SGE, although this version is based off of a slightly different code-base. If anyone is interested in the history of SGE there is an interesting write up at History of Grid Engine Development. The version of SGE that Argon uses is from the Son of Grid Engine project. For the most part this will be very familiar to people who have used previous generations of UI HPC systems. One thing that will look a little different is the output of the qhost command. This will show the CPU topology.
qhost -h argon-compute-1-01
HOSTNAME ARCH NCPU NSOC NCOR NTHR LOAD MEMTOT MEMUSE SWAPTO SWAPUS
----------------------------------------------------------------------------------------------
global - - - - - - - - - -
argon-compute-1-01 lx-amd64 56 2 28 56 0.03 125.5G 1.1G 2.0G 0.0
As you can see that shows the number of cpus (NCPU), the number of CPU sockets (NSOC), the number of cores (NCOR) and the number of threads (NTHR). This information could be important as you plan jobs but it essentially reflects what was said in regard to HT cores.
You will need to be aware of the approximate amount of memory per job slot when setting up jobs if your job uses a significant amount of memory. The actual amount will vary due to OS overhead but the values below can be used for planning purposes.
Using the Basic Job Submission and Advanced Job Submission pages as a reference, how would one submit jobs taking HT into account? For single process high throughput type jobs it probably does not matter, just request one slot per job. For multithreaded or MPI jobs, request one job slot per thread or process. So if your application runs best with 4 threads then request something like the following.
qsub -pe smp 4
That will run on two physical cores and two HT cores. For non-threaded processes that are also CPU bound you can avoid running on HT cores by requesting 2x the number of slots as cores that will be used. So, if your process is a non-threaded MPI process, and you want to run 4 MPI ranks, your job submission would be something like the following.
qsub -pe smp 8
and your job script would contain an mpirun command similar to
mpirun -np 4 ...
That would run the 4 MPI ranks on physical cores and not HT cores. Note that this will work for non-MPI jobs as well. If you have a non-threaded process that you want to ensure runs on an actual core, you could use the same 2x slot request.
qsub -pe smp 2
Note that if you do not use the above strategy then it is possible that your job process will share cores with other job processes. That may be okay, and preferred for high throughput jobs, but is something to keep in mind. It is especially important to keep this in mind when using the orte parallel environment. There is more discussion of the orte parallel environment on the Advanced Job Submission page. In short, that parallel environment is used in node sharing scenarios, which implies potential core sharing as well. For MPI jobs, that is probably not what you want. As on previous systems, there is a parallel environment (Xcpn, where X is the number of cores per node) for requesting entire nodes. This is especially useful for MPI jobs to ensure the best performance. Note that there are some additional parallel environments that are akin to the smp and Xcpn ones but are specialized for certain software packages. These are
gaussian-sm and gaussian_lindaX, where X is the number of cores per node
turbomole_mpiX, where X is the number of cores per node
wien2k-sm and wien2k_mpiX, where X is the number of cores per node
For MPI jobs, the system provided openmpi will not bind processes to cores by default, as would be the normal default for openmpi. It is set up this way to avoid inadvertently oversubscribing processes on cores. In addition, the system openmpi settings will map processes by socket. This should give a good process distribution in all cases. However, if you wish to use a number of processes less than half of the slots per node in an MPI job then you may want to map by node to get the most even distribution of processes across nodes. You can do that with the --map-by node option flag to mpirun.
mpirun --map-by node ...
If you wish to control mapping and binding in a more fine-grained manner, the mapping and binding parameters can be overridden with parameters to mpirun. Openmpi provides many options for fine-grained control of process layout. The options that are set by default should be good in most cases but can be overridden with the openmpi options for
mapping → controls how processes are distributed across processing units
binding → binds processes to processing units
ranking → assigns MPI rank values to processes
See the mpirun manual page (man mpirun) for more detailed information. The defaults should be fine for most cases, but if you override them keep the topology in mind.
each node has 2 processor sockets
each processor core has 2 hardware threads (HT)
If you set your own binding, for instance --bind-to core, be aware that the number of cores is half of the number of total HT processors. Note that core binding in and of itself may not really boost performance much. Generally speaking, if you want to minimize contention with hardware threads then simply request twice as many slots as the number of cores your job will use. Even if the processes are not bound to cores, the OS scheduler will do a good job of minimizing contention.
If your job does not use the system openmpi, or does not use MPI, then any desired core binding will need to be set up with whatever mechanism the software uses. Otherwise, there will be no core binding. Again, that may not be a major issue. If your job does not work well with HT then run on a number of cores equal to half of the number of slots requested and the OS scheduler will minimize contention.