Nodes on Argon are separated into three types of queues:
Investor queues: nodes purchased by investors. Access to these queues is managed by the investors and their delegates.
UI queues: centrally funded nodes that are available to everyone who has an HPC account.
all.q queue: a cluster-wide queue spanning all nodes.
To request access to an investor queue, please contact the queue manager listed below.
Queue,Node Description,Queue Manager,Slots,Total memory (GB)
AMCS,(1) 112-core 256G,Laurent Jay,112,256
AML-HM,"(2) 128-core 1024G (2) 112-core 1.5T (1) 80-core 1.5T",Aaron Miller,560,6548.0
ARROMA-80,(6) 80-core 192G,Jun Wang,480,1152
ARROMA-Analysis,(2) 80-core 768G,Jun Wang,160,1536
ARROMA-GPU,"(1) 112-core 1024G w/ (4) NVIDIA A30 (1) 128-core 1024G w/ (4) NVIDIA A40",Jun Wang,240,2048
ARROMA-NEO,(7) 128-core 256G,,896,1792
ARROMA-OPERATION,(1) 80-core 768G,Jun Wang,80,768
BIGREDQ,(3) 80-core 384G,"Sara Mason JJ Urich",240,1152
BIGREDQ-HM,(1) 80-core 768G,Sara Mason,80,768
BIO-INSTR,(1) 128-core 512G,"Bin He Bradley Carson Jan Fassler Dan Holstad Matthew Brockman Hugh Brown JJ Urich Aran Cox",128,512
BIOSTAT,"(1) 128-core 512G (1) 80-core 384G","Daniel Sewell Grant Brown Patrick Breheny Brian Smith Yuan Huang",208,896
BRAINBO-GPU,"(1) 128-core 512G w/ (1) NVIDIA A100 80GB PCIe (1) 112-core 512G w/ (1) NVIDIA A100 80GB PCIe","Rainbo Hultman Benjamin Hing",240,1024
BV,,Bess Vlaisavljevich,0,0
BVHIGH,,Bess Vlaisavljevich,0,0
CASMA,(2) 112-core 512G,Jonathan Templin,224,1024
CBIG,(1) 64-core 192G w/ (1) NVIDIA TITAN V JHH Special Edition,"Mathews Jacob Qing Zou",64,192
CBIG-A100,(1) 112-core 1024G w/ (2) NVIDIA A100 80GB PCIe,"Xiaodong Wu Mathews Jacob",112,1024
CGRER,(4) 80-core 192G,Jeremie Moen,320,768
CLAS-INSTR-GPU,"(1) 40-core 192G w/ (1) NVIDIA GeForce GTX 1080 Ti (1) 40-core 192G w/ (2) NVIDIA GeForce GTX 1080 Ti (One node with single, one node with two accelerators)","Bradley Carson Dan Holstad Matthew Brockman Hugh Brown JJ Urich",80,384
COB,(1) 128-core 1024G,Brian Heil,128,1024
COB-GPU,(1) 40-core 192G w/ (2) NVIDIA TITAN V,Brian Heil,40,192
CODBCB,(1) 64-core 384G w/ (1) NVIDIA TITAN V,"Xian Xie Brad Amendt Erliang Zeng",64,384
COE,"(8) 128-core 512G Note: Users are restricted to no more than three running jobs in the COE queue.",Matt McLaughlin,1024,4096
COE-GPU,"(2) 40-core 192G w/ (4) NVIDIA GeForce GTX 1080 Ti (2) 40-core 192G w/ (4) NVIDIA TITAN V",Matt McLaughlin,160,768
EES,(10) 80-core 192G w/ (1) Tesla V100S-PCIE-32GB,"William Barnhart JJ Urich",800,1920
EHRM,(1) 80-core 192G w/ (2) NVIDIA GeForce RTX 2080 Ti,,80,192
FERBIN,"(14) 80-core 96G w/ (4) NVIDIA GeForce RTX 2080 Ti (4) 112-core 256G w/ (4) NVIDIA A10","Adrian Elcock Robert McDonnell",1568,2368
FOLLAND-LAB,(1) 80-core 384G w/ (1) Tesla V100S-PCIE-32GB,Thomas Folland,80,384
GEOPHYSICS,(2) 80-core 192G,"William Barnhart JJ Urich",160,384
GV,"(7) 128-core 1024G (2) 112-core 512G","Mark Wilson Brian Miller Gabriele Villarini",1120,8192
HARRY,(2) 128-core 1024G,"Claudio Margulis Dishan Das",256,2048
HCCC,(3) 112-core 1024G,"Garay, Raygoza",336,3072
IDOT-FloodPeaks,(4) 80-core 384G,Gabriele Villarini,320,1536
IFC,(10) 112-core 512G,"Mark Wilson Brian Miller",1120,5120
IIHG,(4) 128-core 512G,Michael Chimenti,512,2048
INFORMATICS-GPU,(2) 40-core 192G w/ (3) NVIDIA TITAN V,Research Services,80,384
IRRC,(1) 64-core 768G,Benjamin Walizer,64,768
IVR,(2) 128-core 512G,Todd Scheetz,256,1024
JG,"(8) 80-core 768G (2) 80-core 192G w/ (6) NVIDIA GeForce RTX 2080 Ti (1) 80-core 768G w/ (4) NVIDIA GeForce RTX 2080 Ti (1) 80-core 192G (1) 80-core 192G w/ (8) NVIDIA GeForce RTX 2080 Ti",Joe Gomes,1040,7680
JM,(1) 80-core 384G,"Jacob Michaelson Tanner Koomar Ethan Bahl Leo Brueggeman Taylor Thomas",80,384
JM-GPU,"(1) 80-core 768G w/ (1) Tesla V100-PCIE-32GB (1) 80-core 1.5T w/ (6) NVIDIA GeForce RTX 2080 Ti (1) 128-core 1.5T w/ (1) NVIDIA L40S",Jake Michaelson,288,3768.0
LT,(2) 128-core 512G,Luke Tierney,256,1024
LUNG,"(2) 64-core 768G w/ (2) Tesla V100-PCIE-32GB (2) 112-core 1024G w/ (4) NVIDIA A100 80GB PCIe (1) 80-core 768G w/ (4) Quadro RTX 8000(nvlink) (1) 128-core 1024G","Joseph Reinhardt Bidgoli, Motahari Sarah Gerard",560,5376
MANSCI,"(1) 128-core 1024G (1) 112-core 512G",Qihang Lin,240,1536
MANSCI-GPU,"(2) 64-core 384G w/ (4) NVIDIA GeForce RTX 2080 Ti (1) 80-core 384G w/ (4) NVIDIA GeForce RTX 2080 Ti (1) 128-core 1024G w/ (4) NVIDIA A40",Qihang Lin,336,2176
MF,(4) 128-core 1024G,Michael Flatte,512,4096
MIL,(1) 112-core 1024G,"Merry Mani Melissa Lawrence",112,1024
MILES,(2) 128-core 256G,,256,512
MIRO,,"Ramirez, Miro",0,0
MIROHI,,"Ramirez, Miro",0,0
MORL,"(2) 128-core 1024G (1) 128-core 512G w/ (1) NVIDIA L40S",William Walls,384,2560
MS,"(6) 40-core 96G w/ (4) NVIDIA GeForce GTX 1080 Ti (2) 128-core 256G (2) 80-core 96G w/ (8) NVIDIA GeForce RTX 2080 Ti (1) 80-core 384G w/ (4) NVIDIA GeForce RTX 2080 Ti (1) 112-core 128G w/ (6) NVIDIA A10 (1) 40-core 96G w/ (4) NVIDIA TITAN V",Michael Schnieders,888,1888
NBW,(1) 128-core 512G,Nathan Wikle,128,512
PINC,(6) 128-core 512G,Jason Evans,768,3072
PINC-HM,(3) 80-core 768G,Jason Evans,240,2304
QUANTUM,(1) 112-core 1024G w/ (2) NVIDIA A40(nvlink),"Gary Christensen Fatima Toor",112,1024
RSKZ,(1) 112-core 1024G,"Jim Chaffee Kang Zhao Rong Su",112,1024
SEASHORE,"(2) 80-core 768G (1) 128-core 1024G (1) 112-core 512G (1) 128-core 1024G w/ (1) NVIDIA L40S","Kai Hwang Jiefeng Jiang Dorit Kliemann",528,4096
SEG,(1) 128-core 1024G,Sarah Gerard,128,1024
SGL,(1) 128-core 1024G w/ (4) NVIDIA RTX A6000,Sajan Lingala,128,1024
SHL,"(3) 128-core 1024G (2) 80-core 768G (1) 80-core 384G w/ (1) Tesla V100-PCIE-32GB","Valerie Reeb Alankar Kampoowale Wesley Hottel",624,4992
SYMPT,(1) 128-core 512G w/ (4) NVIDIA L40S,,128,512
TELOMERE2,(1) 128-core 512G,Josep Comeron,128,512
TEMPLIN,(1) 80-core 768G,Jonathan Templin,80,768
UDAY,(7) 128-core 512G,"Mark Wilson Brian Miller H Udaykumar",896,3584
UIOBL,"(2) 80-core 384G w/ (2) Tesla V100-PCIE-32GB (2) 112-core 512G w/ (2) NVIDIA A40(nvlink) (2) 112-core 512G w/ (2) NVIDIA A40","Joshua Johnson Don Anderson Jessica Goetz Jacob Elkins",608,2816
VOSSHBC,(1) 112-core 512G,Michelle Voss,112,512
WEIRANWANG-GROUP,,Weiran Wang,0,0
A significant portion of the HPC cluster systems at UI was funded centrally. These nodes are put into queues named UI or prefixed with UI-.
UI → Default queue.
UI-HM → Request only for jobs that need more memory than the standard nodes can provide.
UI-MPI → For MPI jobs; request only for jobs that can take advantage of multiple nodes.
UI-GPU → Contains nodes with GPU accelerators; request only if your job can use a GPU accelerator.
UI-DEVELOP → Meant for small, short-running job prototypes and debugging.
These queues are available to everyone who has an account on an HPC system. Since that is a fairly large user base, there are limits placed on these shared queues. Also note that there is a limit of 50,000 active (running and pending) jobs per user on the system.
Queue,Node Description,Wall clock limit,Running jobs per user
UI,"(28) 80-core 384G (24) 128-core 512G (1) 112-core 512G (1) 112-core 1024G",None,5
UI-DEVELOP,"(1) 128-core 256G (1) 128-core 256G w/ (1) NVIDIA A40",24 hours,1
UI-GPU,"(8) 80-core 384G w/ (4) NVIDIA GeForce RTX 2080 Ti (4) 128-core 512G w/ (4) NVIDIA A40 (4) 128-core 512G w/ (1) NVIDIA A100 80GB PCIe (3) 40-core 192G w/ (4) NVIDIA TITAN V (2) 112-core 256G w/ (8) NVIDIA A10 (2) 112-core 256G w/ (4) NVIDIA A40(nvlink) (2) 128-core 512G (1) 128-core 512G w/ (4) NVIDIA L40S (1) 80-core 384G w/ (1) Tesla V100-PCIE-32GB (1) 40-core 192G w/ (4) NVIDIA GeForce GTX 1080 Ti (1) 64-core 384G w/ (4) NVIDIA GeForce GTX 1080 Ti (1) 64-core 384G w/ (3) NVIDIA GeForce GTX 1080 Ti (1) 64-core 192G w/ (1) NVIDIA GeForce GTX 1080 Ti (1) 64-core 192G w/ (2) NVIDIA TITAN V (1) 64-core 768G w/ (2) NVIDIA GeForce GTX 1080 Ti (1) 128-core 512G w/ (4) NVIDIA L4",None,1
UI-GPU-HM,"(1) 128-core 1024G w/ (1) NVIDIA A100 80GB PCIe (1) 128-core 1024G (1) 80-core 1.5T w/ (8) NVIDIA GeForce RTX 2080 Ti (1) 128-core 1024G w/ (4) NVIDIA A40",24 hours,1
UI-HM,"(6) 128-core 1024G (2) 64-core 768G (1) 80-core 1.4T (1) 80-core 1.5T",None,1
UI-MPI,(24) 128-core 512G,48 hours,1
Note that the number of slots available in the UI queue can vary depending on whether anyone has purchased a reservation of nodes. The UI queue is the default queue and will be used if no queue is specified. This queue is available to everyone who has an account on a UI HPC cluster system.
Please use the UI-DEVELOP queue for testing new jobs at a smaller scale before committing many nodes to your job.
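As an illustration, a small test job could be submitted to the UI-DEVELOP queue with a script along these lines. The script body and slot count are placeholders, and the "smp" parallel environment name is an assumption; check the available parallel environments on your cluster with `qconf -spl`.

```shell
#!/bin/bash
# Hypothetical test job script; queue name comes from this page,
# other values are placeholders.
#$ -q UI-DEVELOP        # development queue (24-hour wall clock limit)
#$ -cwd                 # run from the submission directory
#$ -pe smp 4            # request 4 slots (assumes an "smp" parallel environment)
./my_test_program       # placeholder for the program being tested
```

The script would then be submitted with `qsub test_job.sh`. Once it behaves as expected at this scale, resubmit to the appropriate production queue with a larger slot count.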
This queue encompasses all of the nodes and contains all of the available job slots. It is available to everyone with an account, and there are no running-job limits. However, it is a low-priority queue instance on the same nodes as the higher-priority investor and UI queue instances. The all.q queue is subordinate to these other queues, and jobs running in it will give up their nodes when jobs in the higher-priority queues need them. The term we use for this is "job eviction". Jobs running in the all.q queue are the only ones subject to it.
Queue,Node Description,Slots,Total Memory (GB)
all.q,"(84) 128-core 512G (40) 80-core 384G (31) 128-core 1024G (20) 112-core 512G (20) 80-core 768G (14) 80-core 96G w/ (4) NVIDIA GeForce RTX 2080 Ti (14) 128-core 256G (13) 80-core 192G (10) 80-core 384G w/ (4) NVIDIA GeForce RTX 2080 Ti (10) 80-core 192G w/ (1) Tesla V100S-PCIE-32GB (6) 112-core 1024G (6) 128-core 512G w/ (1) NVIDIA A100 80GB PCIe (6) 40-core 96G w/ (4) NVIDIA GeForce GTX 1080 Ti (5) 40-core 192G w/ (4) NVIDIA TITAN V (4) 128-core 512G w/ (4) NVIDIA A40 (4) 128-core 512G w/ (4) NVIDIA L40S (4) 112-core 256G w/ (4) NVIDIA A10 (4) 80-core 1.5T (4) 1-core 1.5T (3) 128-core 1024G w/ (4) NVIDIA A40 (3) 64-core 768G (3) 40-core 192G w/ (4) NVIDIA GeForce GTX 1080 Ti (2) 64-core 768G w/ (2) Tesla V100-PCIE-32GB (2) 64-core 384G w/ (4) NVIDIA GeForce RTX 2080 Ti (2) 40-core 192G w/ (3) NVIDIA TITAN V (2) 80-core 192G w/ (6) NVIDIA GeForce RTX 2080 Ti (2) 80-core 768G w/ (4) NVIDIA GeForce RTX 2080 Ti (2) 80-core 192G w/ (2) NVIDIA GeForce RTX 2080 Ti (2) 112-core 1.5T (2) 112-core 512G w/ (2) NVIDIA A40 (2) 112-core 512G w/ (2) NVIDIA A40(nvlink) (2) 112-core 1024G w/ (4) NVIDIA A100 80GB PCIe (2) 112-core 256G w/ (8) NVIDIA A10 (2) 80-core 96G w/ (8) NVIDIA GeForce RTX 2080 Ti (2) 112-core 256G w/ (4) NVIDIA A40 (2) 80-core 384G w/ (1) Tesla V100-PCIE-32GB (2) 128-core 1024G w/ (1) NVIDIA A100 80GB PCIe (2) 80-core 384G w/ (2) Tesla V100-PCIE-32GB (1) 80-core 192G w/ (8) NVIDIA GeForce RTX 2080 Ti (1) 80-core 768G w/ (4) Quadro RTX 8000(nvlink) (1) 112-core 1024G w/ (2) NVIDIA A100 80GB PCIe (1) 112-core 128G w/ (6) NVIDIA A10 (1) 40-core 192G w/ (1) NVIDIA GeForce GTX 1080 Ti (1) 40-core 192G w/ (2) NVIDIA GeForce GTX 1080 Ti (1) 40-core 192G w/ (2) NVIDIA TITAN V (1) 80-core 768G w/ (1) Tesla V100-PCIE-32GB (1) 80-core 1.5T w/ (8) NVIDIA GeForce RTX 2080 Ti (1) 80-core 1.5T w/ (6) NVIDIA GeForce RTX 2080 Ti (1) 40-core 96G w/ (4) NVIDIA TITAN V (1) 64-core 384G w/ (4) NVIDIA GeForce GTX 1080 Ti (1) 64-core 384G w/ (3) NVIDIA GeForce GTX 1080 Ti (1) 64-core 384G w/ (1) NVIDIA TITAN V (1) 64-core 192G w/ (1) NVIDIA TITAN V JHH Special Edition (1) 64-core 192G w/ (1) NVIDIA GeForce GTX 1080 Ti (1) 64-core 192G w/ (2) NVIDIA TITAN V (1) 64-core 768G w/ (2) NVIDIA GeForce GTX 1080 Ti (1) 80-core 384G w/ (1) Tesla V100S-PCIE-32GB (1) 112-core 1024G w/ (2) NVIDIA A40(nvlink) (1) 112-core 256G (1) 112-core 1024G w/ (4) NVIDIA A30 (1) 128-core 1024G w/ (4) NVIDIA RTX A6000 (1) 128-core 1024G w/ (1) NVIDIA L40S (1) 128-core 512G w/ (4) NVIDIA L4 (1) 128-core 1.5T w/ (1) NVIDIA L40S (1) 128-core 512G w/ (1) NVIDIA L40S (1) 112-core 512G w/ (1) NVIDIA A100 80GB PCIe",36980,196428.0
In addition to the above, there are some nodes that are not part of any investor queue. These are only available in the all.q queue and are used for node rentals and future purchases. The number of nodes for this purpose varies.
It may not always be obvious, particularly if you are a member of an investor group, which queue is best to submit a job to. As a guideline: if you are in an investor group and there are enough free slots in your queue for your job(s), use the investor queue. If you are not in an investor group, or there are not enough free slots in your investor queue, submit parallel jobs to the UI queue. Serial jobs not destined for an investor queue should generally be submitted to the all.q queue, unless you have only a small number of jobs and/or cannot risk them being evicted, in which case use the UI queue.
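Following that guideline, the two submissions might look like this. The script names are placeholders, and the "smp" parallel environment name is an assumption; the available parallel environments can be listed with `qconf -spl`.

```shell
# Serial job: all.q offers the most slots, at the cost of possible eviction.
qsub -q all.q -cwd serial_job.sh

# Parallel job: prefer the UI queue so the multi-node job is not evicted mid-run.
# (assumes an "smp" parallel environment; script name is a placeholder)
qsub -q UI -pe smp 56 -cwd parallel_job.sh
```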
To see which investor group you are associated with (if any), use the following command:
whichq
It is anticipated that members of an investment group will have their own system for deciding who runs what on their dedicated resources.
As an example, if you are a member of the CGRER investment group and want to determine how many slots are currently available, the following command can be used:
qstat -g c -q CGRER
This will generate output like the following, which indicates that 464 slots are available out of the 560 total slots allocated to the CGRER queue:
CLUSTER QUEUE                   CQLOAD   USED    RES  AVAIL  TOTAL aoACDS  cdsuE
--------------------------------------------------------------------------------
CGRER                             0.77     96      0    464    560      0      0
While not indicated in the above, a parallel job can be submitted to the all.q queue. However, since a parallel job likely runs on more than one node, the likelihood of eviction is increased. Thus, it is recommended that parallel jobs be submitted to the UI queue in preference to the all.q queue.
For queues that consist entirely of GPU nodes and are split out into a QUEUE-GPU queue, the policy is to set the ngpus resource to 1 if it is not explicitly set. For other queues that contain GPU nodes, the queue owner has set the policy of whether a GPU is requested by default or not.
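Rather than relying on the default, a GPU job can request the ngpus resource explicitly. A sketch of such a job script, where the queue name and ngpus resource come from this page and the script body is a placeholder:

```shell
#!/bin/bash
# Hypothetical GPU job script; UI-GPU and ngpus are from this page,
# the program name is a placeholder.
#$ -q UI-GPU        # a queue containing GPU-accelerated nodes
#$ -l ngpus=1       # explicitly request one GPU accelerator
#$ -cwd             # run from the submission directory
./my_gpu_program    # placeholder for the GPU-enabled program
```

Requesting the resource explicitly documents the job's needs and behaves the same regardless of a queue's default policy.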