
Nodes on Argon are separated into three types of queues:

  • Investor queues: nodes purchased by investors. Access to these queues is managed by the investors and their delegates.

  • UI queues: centrally funded nodes that are available to everyone who has an HPC account.

  • all.q queue: a low-priority, cluster-wide queue that spans all nodes.

Investor Queues

To request access to an investor queue, please contact the queue manager listed below.

| Queue | Node Description | Queue Manager | Slots | Total memory (GB) |
| ----- | ---------------- | ------------- | ----- | ------------------ |
| ACB | (1) 56-core 256G | Adam Dupuy | 56 | 256 |
| AIS | (1) 56-core 256G | Grant Brown | 56 | 256 |
| AML | (1) 56-core 256G | Aaron Miller | 56 | 256 |
| AML-HM | (1) 80-core 1.5T | Aaron Miller | 80 | 1500 |
| ANTH | (4) 56-core 128G | Andrew Kitchen | 224 | 512 |
| ARROMA | (8) 56-core 128G; (1) 80-core 768G | Jun Wang | 528 | 1792 |
| ARROMA-80 | (4) 80-core 192G | Jun Wang | 320 | 768 |
| ARROMA-Analysis | (1) 80-core 768G | Jun Wang | 80 | 768 |
| ARROMA-MAIA | (2) 80-core 192G; (2) 80-core 384G | Jun Wang | 320 | 1152 |
| ARROMA-OPERATION | (1) 80-core 768G | Jun Wang | 80 | 768 |
| AS | (5) 56-core 256G | Katharine Corum | 280 | 1280 |
| AT | (1) 80-core 1.4T | Ashish Towari | 80 | 1400 |
| BH | (1) 56-core 512G | Bin He | 56 | 512 |
| BIGREDQ | (13) 56-core 256G | Sara Mason | 728 | 3328 |
| BIO-INSTR | (3) 56-core 256G | Brad Carson, Bin He, Jan Fassler | 168 | 768 |
| BIOLOGY | (1) 56-core 256G | Matthew Brockman | 56 | 256 |
| BIOSTAT | (2) 56-core 128G | Patrick Breheny, Grant Brown, Yuan Huang, Dan Sewell, Brian Smith | 112 | 256 |
| BLAYES | (1) 56-core 512G | Sanvesh Srivastava | 56 | 512 |
| CBIG | (1) 64-core 192G w/ (1) TITAN V JHH Special Edition; (1) 56-core 256G w/ (1) TITAN V | Mathews Jacob | 120 | 448 |
| CBIG-HM | (1) 56-core 512G w/ (2) Tesla P100-PCIE-16GB | Mathews Jacob | 56 | 512 |
| CCOM | (18) 56-core 512G; limit of 5 running jobs per user | Boyd Knosp | 1008 | 9216 |
| CCOM-GPU | (2) 56-core 512G w/ (1) Tesla P100-PCIE-16GB | Boyd Knosp | 112 | 1024 |
| CGRER | (10) 56-core 128G; (4) 80-core 192G | Jeremie Moen | 880 | 2048 |
| CHEMISTRY | (3) 56-core 256G | Brad Carson | 168 | 768 |
| CLAS-INSTR | (2) 56-core 256G | Brad Carson | 112 | 512 |
| CLAS-INSTR-GPU | (1) 40-core 192G w/ (1) GeForce GTX 1080 Ti; (1) 40-core 192G w/ (2) GeForce GTX 1080 Ti | Brad Carson | 80 | 384 |
| CLL | (5) 56-core 128G | Mark Wilson, Brian Miller | 280 | 640 |
| COB | (2) 56-core 256G | Brian Heil | 112 | 512 |
| COB-GPU | (1) 40-core 192G w/ (2) TITAN V | Brian Heil | 40 | 192 |
| CODBCB | (1) 64-core 384G w/ (1) TITAN V | Brad A Amendt, Xian Jin Xie | 64 | 384 |
| COE | (10) 56-core 256G; limit of 3 running jobs per user | Matt McLaughlin | 560 | 2560 |
| COE-GPU | (2) 40-core 192G w/ (4) GeForce GTX 1080 Ti; (2) 40-core 192G w/ (4) TITAN V | Matt McLaughlin | 160 | 768 |
| COVID19 | (2) 80-core 384G; (2) 80-core 192G w/ (2) GeForce RTX 2080 Ti; (2) 80-core 768G w/ (4) GeForce RTX 2080 Ti; (1) 80-core 384G w/ (4) GeForce RTX 2080 Ti | Research Services | 480 | 2688 |
| DARBROB | (1) 56-core 256G | Benjamin Darbro | 56 | 256 |
| EES | (8) 80-core 192G w/ (1) Tesla V100S-PCIE-32GB | William Barnhart | 640 | 1536 |
| FERBIN | (14) 80-core 96G w/ (4) GeForce RTX 2080 Ti; (13) 56-core 128G | Adrian Elcock | 1848 | 3008 |
| FFME | (16) 56-core 128G | Mark Wilson, Brian Miller | 896 | 2048 |
| FFME-HM | (1) 56-core 512G | Mark Wilson | 56 | 512 |
| FLUIDSLAB | (8) 56-core 128G | Mark Wilson, Brian Miller | 448 | 1024 |
| FOLLAND-LAB | (1) 80-core 384G w/ (1) Tesla V100-PCIE-32GB | Tom Folland | 80 | 384 |
| GEOPHYSICS | (5) 56-core 128G; (2) 80-core 192G | William Barnhart | 440 | 1024 |
| GV | (2) 56-core 256G | Gabriele Villarini, Mark Wilson, Brian Miller | 112 | 512 |
| HJ | (10) 56-core 128G | Hans Johnson | 560 | 1280 |
| HJ-GPU | (1) 56-core 512G w/ (1) Tesla P100-PCIE-16GB | Hans Johnson | 56 | 512 |
| IDOT-FloodPeaks | (4) 80-core 384G | Brian Miller | 320 | 1536 |
| IFC | (10) 56-core 256G | Mark Wilson, Brian Miller | 560 | 2560 |
| IIHG | (10) 56-core 256G | Diana Kolbe | 560 | 2560 |
| INFORMATICS | (12) 56-core 256G | Danny Tang, UI3 Faculty | 672 | 3072 |
| INFORMATICS-GPU | (2) 40-core 192G w/ (3) TITAN V; (2) 56-core 256G w/ (1) TITAN V | Ben Rogers | 192 | 896 |
| INFORMATICS-HM-GPU | (1) 56-core 512G w/ (2) Tesla P100-PCIE-16GB | Ben Rogers | 56 | 512 |
| IRRC | (1) 64-core 768G | Ariel Aloe | 64 | 768 |
| IVR | (4) 56-core 256G; (1) 56-core 512G | Todd Scheetz | 280 | 1536 |
| IVR-GPU | (1) 56-core 512G w/ (2) Tesla K80 | Todd Scheetz | 56 | 512 |
| IVRVOLTA | (4) 56-core 512G w/ (1) TITAN V | Mike Schnieders | 224 | 2048 |
| IWA | (11) 56-core 128G | Mark Wilson, Brian Miller | 616 | 1408 |
| JES | (1) 56-core 512G | Jacob Simmering | 56 | 512 |
| JG | (8) 80-core 768G; (4) 80-core 192G w/ (8) GeForce RTX 2080 Ti | Joe Gomes | 960 | 6912 |
| JM | (3) 56-core 512G; (1) 80-core 384G | Jake Michaelson | 248 | 1920 |
| JM-GPU | (1) 80-core 768G w/ (1) Tesla V100-PCIE-32GB; (1) 80-core 1.5T w/ (6) GeForce RTX 2080 Ti; (1) 56-core 512G w/ (1) Tesla P100-PCIE-16GB | Jake Michaelson | 216 | 2780 |
| JP | (2) 56-core 512G | Virginia Willour | 112 | 1024 |
| JS | (14) 56-core 256G; (1) 56-core 512G | James Shepherd | 840 | 4096 |
| KA | (1) 56-core 512G | Kin Fai Au | 56 | 512 |
| LT | (2) 56-core 512G w/ (1) Tesla P100-PCIE-16GB | Luke Tierney | 112 | 1024 |
| LUNG | (2) 56-core 512G w/ (1) Tesla P40; (2) 64-core 768G w/ (2) Tesla V100-PCIE-32GB; (1) 80-core 768G w/ (4) Quadro RTX 8000 (nvlink) | Joe Reinhardt | 320 | 3328 |
| MANORG | (1) 56-core 128G | Michele Williams, Brian Heil | 56 | 128 |
| MANSCI | (1) 56-core 128G | Qihang Lin | 56 | 128 |
| MANSCI-GPU | (2) 64-core 384G w/ (4) GeForce RTX 2080 Ti; (1) 80-core 384G w/ (4) GeForce RTX 2080 Ti; (1) 56-core 512G w/ (1) Tesla P100-PCIE-16GB | Qihang Lin | 264 | 1664 |
| MF | (6) 56-core 128G | Michael Flatte | 336 | 768 |
| MF-HM | (2) 56-core 512G | Michael Flatte | 112 | 1024 |
| MORL | (5) 56-core 256G | William (Daniel) Walls | 280 | 1280 |
| MS | (7) 40-core 96G w/ (4) GeForce GTX 1080 Ti; (5) 56-core 256G w/ (2) Tesla P100-PCIE-16GB; (2) 80-core 96G w/ (8) GeForce RTX 2080 Ti; (1) 40-core 96G w/ (4) TITAN V | Mike Schnieders | 760 | 2240 |
| NEURO | (1) 56-core 256G | Marie Gaine, Ted Abel | 56 | 256 |
| NEUROSURGERY | (1) 56-core 512G w/ (2) Tesla K80 | Haiming Chen | 56 | 512 |
| NOLA | (1) 56-core 512G | Ed Sander | 56 | 512 |
| PINC | (6) 56-core 256G | Jason Evans | 336 | 1536 |
| REX | (4) 56-core 128G | Mark Wilson, Brian Miller | 224 | 512 |
| REX-HM | (1) 56-core 512G | Mark Wilson, Brian Miller | 56 | 512 |
| RP | (5) 56-core 512G | Robert Philibert | 280 | 2560 |
| SB | (4) 56-core 128G | Scott Baalrud | 224 | 512 |
| SEASHORE | (2) 80-core 768G | Kai Hwang | 160 | 1536 |
| SEMI | (1) 56-core 128G | Craig Pryor | 56 | 128 |
| SLOAN | (1) 56-core 256G | Colleen Mitchel, Alaina Hanson | 56 | 256 |
| STATEPI | (1) 56-core 256G | Linnea Polgreen | 56 | 256 |
| TELOMER | (1) 56-core 512G | Anna Malkova | 56 | 512 |
| TEMPLIN | (1) 56-core 256G | Jonathan Templin | 56 | 256 |
| UDAY | (4) 56-core 128G | Mark Wilson, Brian Miller | 224 | 512 |
| UIOBL | (2) 80-core 384G w/ (2) Tesla V100-PCIE-32GB | Joshua Johnson | 160 | 768 |

The University of Iowa (UI) queues

A significant portion of the HPC cluster systems at UI was funded centrally. These nodes are placed into queues named UI or prefixed with UI-.

  • UI → the default queue.

  • UI-HM → request only for jobs that need more memory than the standard nodes provide.

  • UI-MPI → for MPI jobs; request only for jobs that can take advantage of multiple nodes.

  • UI-GPU → contains nodes with GPU accelerators; request only if your job can use a GPU accelerator.

  • UI-DEVELOP → meant for small, short-running job prototypes and debugging.

These queues are available to everyone who has an account on an HPC system. Since that is a fairly large user base, limits are placed on these shared queues. Also note that there is a limit of 50,000 active (running and pending) jobs per user on the system.
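
For example, you can see your own running and pending jobs (the ones counted against that limit) with the standard Grid Engine status command:

qstat -u $USER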

| Queue | Node Description | Wall clock limit | Running jobs per user |
| ----- | ---------------- | ---------------- | --------------------- |
| UI | (28) 80-core 384G; (14) 56-core 256G | None | 5 |
| UI-DEVELOP | (2) 56-core 256G; (1) 56-core 256G w/ (1) Tesla P100-PCIE-16GB | 24 hours | 1 |
| UI-GPU | (8) 80-core 384G w/ (4) GeForce RTX 2080 Ti; (5) 56-core 256G w/ (1) Tesla P100-PCIE-16GB; (4) 40-core 192G w/ (4) TITAN V; (2) 40-core 192G w/ (4) GeForce GTX 1080 Ti; (1) 64-core 384G w/ (4) GeForce GTX 1080 Ti; (1) 64-core 384G w/ (3) GeForce GTX 1080 Ti; (1) 64-core 192G w/ (1) GeForce GTX 1080 Ti; (1) 64-core 192G w/ (2) TITAN V; (1) 64-core 768G w/ (2) GeForce GTX 1080 Ti; (1) 80-core 384G w/ (1) Tesla V100-PCIE-32GB | None | 1 |
| UI-GPU-HM | (1) 40-core 1.5T w/ (8) GeForce RTX 2080 Ti | 24 hours | 1 |
| UI-HM | (2) 80-core 1.5T; (2) 64-core 768G; (2) 56-core 512G | None | 1 |
| UI-MPI | (25) 56-core 256G | 48 hours | 1 |


Note that the number of slots available in the UI queue can vary depending on whether anyone has purchased a reservation of nodes. The UI queue is the default queue and will be used if no queue is specified. This queue is available to everyone who has an account on a UI HPC cluster system. 

Please use the UI-DEVELOP queue for testing new jobs at a smaller scale before committing many nodes to your job.
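
For example, a job can be directed to a specific queue with the -q option of qsub (the script name below is only a placeholder):

qsub -q UI-DEVELOP myjob.job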

The all.q queue

This queue encompasses all of the nodes and contains all of the available job slots. It is available to everyone with an account, and there are no running-job limits. However, it is a low-priority queue instance on the same nodes as the higher-priority investor and UI queue instances. The all.q queue is subordinate to those queues: jobs running in it give up their nodes when jobs in the higher-priority queues need them. The term we use for this is "job eviction". Jobs running in the all.q queue are the only ones subject to it.

Node Description:

(168) 56-core 256G
(115) 56-core 128G
(60) 56-core 512G
(54) 80-core 384G
(26) 80-core 192G
(20) 80-core 768G
(14) 80-core 96G w/ (4) GeForce RTX 2080 Ti
(10) 80-core 384G w/ (4) GeForce RTX 2080 Ti
(8) 56-core 512G w/ (1) Tesla P100-PCIE-16GB
(8) 80-core 192G w/ (1) Tesla V100S-PCIE-32GB
(7) 40-core 96G w/ (4) GeForce GTX 1080 Ti
(6) 80-core 1.5T
(6) 40-core 192G w/ (4) TITAN V
(6) 64-core 768G
(6) 56-core 256G w/ (1) Tesla P100-PCIE-16GB
(5) 56-core 256G w/ (2) Tesla P100-PCIE-16GB
(4) 80-core 192G w/ (8) GeForce RTX 2080 Ti
(4) 56-core 512G w/ (1) TITAN V
(4) 40-core 192G w/ (4) GeForce GTX 1080 Ti
(3) 56-core 256G w/ (1) TITAN V
(2) 56-core 512G w/ (2) Tesla K80
(2) 56-core 512G w/ (2) Tesla P100-PCIE-16GB
(2) 64-core 384G w/ (4) GeForce RTX 2080 Ti
(2) 80-core 96G w/ (8) GeForce RTX 2080 Ti
(2) 64-core 768G w/ (2) Tesla V100-PCIE-32GB
(2) 80-core 384G w/ (2) Tesla V100-PCIE-32GB
(2) 56-core 512G w/ (1) Tesla P40
(2) 80-core 1.5T w/ (8) GeForce RTX 2080 Ti
(2) 80-core 768G w/ (4) GeForce RTX 2080 Ti
(2) 40-core 192G w/ (3) TITAN V
(2) 80-core 384G w/ (1) Tesla V100-PCIE-32GB
(2) 80-core 192G w/ (2) GeForce RTX 2080 Ti
(1) 64-core 768G w/ (2) GeForce GTX 1080 Ti
(1) 80-core 768G w/ (1) Tesla V100-PCIE-32GB
(1) 40-core 192G w/ (1) GeForce GTX 1080 Ti
(1) 40-core 192G w/ (2) GeForce GTX 1080 Ti
(1) 40-core 192G w/ (2) TITAN V
(1) 80-core 1.5T w/ (6) GeForce RTX 2080 Ti
(1) 80-core 1.4T
(1) 80-core 768G w/ (4) Quadro RTX 8000(nvlink)
(1) 40-core 96G w/ (4) TITAN V
(1) 64-core 384G w/ (4) GeForce GTX 1080 Ti
(1) 64-core 384G w/ (3) GeForce GTX 1080 Ti
(1) 64-core 384G w/ (1) TITAN V
(1) 64-core 192G w/ (1) TITAN V JHH Special Edition
(1) 64-core 192G w/ (1) GeForce GTX 1080 Ti
(1) 64-core 192G w/ (2) TITAN V

Slots: 30864

Total Memory (GB): 152224

In addition to the above, there are some nodes that are not part of any investor queue. These are only available in the all.q queue and are used for node rentals and future purchases. The number of nodes for this purpose varies.

GPU selection policy

For queues whose nodes all contain GPUs and that are split out into a separate QUEUE-GPU queue, the policy is to set the ngpus resource to 1 if it is not explicitly set. For other queues that contain GPU nodes, the queue owner decides whether a GPU is requested by default or not.
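
If a job needs a specific number of GPUs, the ngpus resource can be requested explicitly on the qsub command line (the queue and script names below are placeholders):

qsub -q UI-GPU -l ngpus=2 myjob.job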

Guidelines for selecting a queue

It may not always be obvious, particularly if you are a member of an investor group, which queue is the best one to submit a job to. As a guideline, if you are in an investor group and there are enough free slots in your queue for your job(s), use the investor queue. If you are not in an investor group, or there are not enough free slots in your investor queue, submit parallel jobs to the UI queue. If you are not submitting to an investor queue and your jobs are serial, they should generally be submitted to the all.q queue, unless you have a small number of jobs and/or cannot risk them being evicted, in which case use the UI queue.

To see which investor group you are associated with (if any), use the following command:

whichq

It is anticipated that members of an investment group will have their own system for deciding who runs what on their dedicated resources.

As an example, if you are a member of the CGRER investment group and want to determine how many slots are currently available, the following command can be used:

qstat -g c -q CGRER

This will generate output like the following, which indicates that 464 slots are available out of the 560 total slots allocated to the CGRER queue:

CLUSTER QUEUE                   CQLOAD   USED    RES  AVAIL  TOTAL aoACDS  cdsuE
--------------------------------------------------------------------------------
CGRER                             0.77     96      0    464    560      0      0

[Figure: queue decision flowchart (Queue decision.png)]

While not indicated above, a parallel job can be submitted to the all.q queue. However, since a parallel job likely runs on more than one node, the likelihood of it being evicted is increased. Thus, it is recommended that parallel jobs be submitted to the UI queue in preference to the all.q queue.
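
As an illustrative sketch, a multi-slot job could be directed to the UI queue rather than all.q; the parallel environment name (smp), the slot count, and the script name are assumptions here and should be replaced with whatever is actually defined for your workflow:

qsub -q UI -pe smp 56 myjob.job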
