...
If your job does not use the system openmpi, or does not use MPI at all, then any desired core binding must be set up with whatever mechanism your software provides; otherwise, there will be no core binding. Again, that may not be a major issue. If your job does not work well with hyperthreading (HT), run on a number of cores equal to half the number of slots requested, and the OS scheduler will minimize contention.
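For example, a minimal job-script sketch of the half-the-slots approach; the "smp" parallel environment name and the program name are assumptions, while NSLOTS is set by SGE in the job environment:

```bash
#!/bin/bash
#$ -pe smp 56   # request 56 slots; the "smp" PE name is an assumption
#$ -cwd

# Use half as many worker threads as slots requested so each thread
# gets a physical core; the OS scheduler will minimize contention.
export OMP_NUM_THREADS=$((NSLOTS / 2))
./my_program    # placeholder for your application
```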
New SGE Utilities
While SoGE (Son of Grid Engine) is very similar to previous versions of SGE, it provides some new utilities that people may find of interest. There is a manual page for each of these.
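For example, to read the documentation for one of them (nodes-in-job is described below):

```bash
# View the manual page for one of the new SoGE utilities.
man nodes-in-job
```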
...
On previous UI HPC systems it was possible to briefly ssh to any compute node before being booted off if no registered job was found; this was sufficient to run a quick command on any node. That is not the case on Argon: SSH connections to compute nodes are allowed only if you have a registered job on that host. Of course, qlogin sessions will also let you log in to a node directly. If you have a job running on a node, you can ssh to that node to check its status, etc. You can find the nodes of a job with the nodes-in-job command mentioned above. We ask that you do no more than observe things while logged in to a node, as it may have shared jobs on it.
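A short sketch of that workflow; the job ID and hostname below are placeholders:

```bash
# List the nodes assigned to one of your running jobs
# (123456 is a placeholder job ID).
nodes-in-job 123456

# Take a one-shot look at a reported node (hostname is a placeholder).
# Observe only; the node may be running other users' jobs.
ssh argon-compute-1-01 top -b -n 1
```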
Queues and Policies
...
...
(4) 56-core 128GB
(2) 32-core 64GB
...
ARROMA
...
Katharine Corum
...
BIGREDQ
...
Sara Mason
...
BIOLOGY
...
Matthew Brockman
...
BIOSTAT
...
(1) 56-core 256GB
(1) 64-core 192GB, not yet equipped with accelerators
...
120
...
Boyd Knosp
...
Boyd Knosp
...
CGRER
...
Jeremie Moen
...
Brad Carson
...
CLAS-INSTR
...
Brad Carson
...
(2) 40-core 192GB with 1080Ti accelerators
(one node with a single accelerator, one node with two)
...
Mark Wilson
Brian Miller
...
CODBCB
...
(1) 40-core 384GB with (1) Titan V accelerator
...
Brad A Amendt
Xian Jin Xie
...
(10) 56-core 256GB
(11) 32-core 256GB
(10) 32-core 64GB
Note: Users are restricted to no more than three running jobs in the COE queue.
...
Matt McLaughlin
...
(2) 40-core 192GB with (4) Titan V accelerators
(2) 40-core 192GB with (4) 1080Ti accelerators
...
DARBROB
...
Benjamin Darbro
...
MF
...
Michael Flatte
...
FLUIDSLAB
...
Mark Wilson
Brian Miller
...
GEOPHYSICS
...
(3) 56-core 128GB
...
William Barnhart
...
Mark Wilson
Brian Miller
...
Mark Wilson
Brian Miller
...
Diana Kolbe
...
INFORMATICS
...
INFORMATICS-GPU
...
(2) 56-core 256GB with Titan V accelerators
(2) 40-core 192GB with (3) Titan V accelerators
...
INFORMATICS-HM-GPU
...
Todd Scheetz
...
Mark Wilson
Brian Miller
...
Jake Michaelson
...
Virginia Willour
...
Qihang Lin
...
MORL
...
William (Daniel) Walls
...
Mike Schnieders
...
Mark Wilson
Brian Miller
...
Mark Wilson
Brian Miller
...
Scott Baalrud
...
Mark Wilson
Brian Miller
...
UI-DEVELOP
...
(10) 32-core 64GB with (1) K20 accelerator
(5) 56-core 256GB with P100 accelerator
(1) 40-core 192GB with (1) 1080Ti accelerator
(2) 40-core 192GB with (4) 1080Ti accelerators
(4) 40-core 192GB with (4) Titan V accelerators
(1) 40-core 192GB with (2) Titan V accelerators
...
(19) 56-core 256GB
...
(174) 32-core 64GB
(115) 56-core 128GB
(2) 64-core 192GB
(49) 32-core 256GB
(154) 56-core 256GB
(7) 24-core 512GB
(2) 32-core 512GB
(42) 56-core 512GB
(10) 32-core 64GB with (1) K20 accelerator
(1) 32-core 256GB with (1) K20 accelerator
(2) 56-core 512GB with (1) K80 accelerator
(6) 56-core 256GB with (1) P100 accelerator
(5) 56-core 256GB with (2) P100 accelerators
(8) 56-core 512GB with (1) P100 accelerator
(2) 56-core 512GB with (2) P100 accelerators
(2) 56-core 512GB with (1) P40 accelerator
(3) 56-core 256GB with (1) Titan V accelerator
(4) 56-core 512GB with (1) Titan V accelerator
(1) 40-core 96GB with (4) Titan V accelerators
(6) 40-core 192GB with (4) Titan V accelerators
(2) 40-core 192GB with (3) Titan V accelerators
(2) 40-core 192GB with (2) Titan V accelerators
(1) 40-core 192GB with (1) Titan V accelerator
(7) 40-core 96GB with (4) 1080Ti accelerators
(4) 40-core 192GB with (4) 1080Ti accelerators
(1) 40-core 192GB with (2) 1080Ti accelerators
(1) 40-core 192GB with (1) 1080Ti accelerator
...
Haiming Chen
...
Craig Pryor
...
(2) 56-core 512GB with P100 accelerator
(1) 32-core 256GB with K20 accelerator
(1) 24-core 512GB
...
Bruce Ayati
...
Shizhong Han
...
A significant portion of the HPC cluster systems at UI was funded centrally. These nodes are placed into queues named UI or prefixed with UI-.
- UI → the default queue.
- UI-HM → high-memory nodes; request only for jobs that need more memory than the standard nodes can provide.
- UI-MPI → MPI jobs; request only for jobs that can take advantage of multiple nodes.
- UI-GPU → nodes with GPU accelerators; request only if the job can use a GPU accelerator.
- UI-DEVELOP → meant for small, short-running job prototypes and debugging.
These queues are available to everyone who has an account on an HPC system. Since that is a fairly large user base, there are limits placed on these shared queues. Also note that there is a limit of 50,000 active (running and pending) jobs per user on the system. Selecting one of these queues at submission time is sketched below.
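A minimal sketch of queue selection with qsub; the script name is a placeholder:

```bash
# No queue specified: the job goes to the default UI queue.
qsub myjob.sh

# Request the UI-GPU queue for a job that can use a GPU accelerator.
qsub -q UI-GPU myjob.sh
```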
...
(20) 56-core 256GB
(66) 32-core 64GB
...
(5) 56-core 512GB
(3) 24-core 512GB
...
UI-MPI
(56-slot minimum)
...
(19) 56-core 256GB
...
(10) 32-core 64GB with (1) K20 accelerator
(5) 56-core 256GB with P100 accelerator
(2) 40-core 192GB with (4) 1080Ti accelerators
(4) 40-core 192GB with (4) Titan V accelerators
(1) 40-core 192GB with (2) Titan V accelerators
...
Note that the number of slots available in the UI queue can vary depending on whether anyone has purchased a reservation of nodes. The UI queue is the default queue and will be used if no queue is specified. This queue is available to everyone who has an account on a UI HPC cluster system.
Info: Please use the UI-DEVELOP queue for testing new jobs at a smaller scale before committing many nodes to your job.
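For instance, an interactive session is a low-cost way to prototype before scaling up:

```bash
# Start an interactive session in the UI-DEVELOP queue to test
# and debug a job at small scale.
qlogin -q UI-DEVELOP
```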
In addition to the above, the HPC systems have some nodes that are not part of any investor queue. These are listed at /wiki/spaces/hpcdocs/pages/76513448 and are used for node rentals and future purchases. The number of nodes for this purpose varies.