
The Argon HPC system is the latest HPC system of the University of Iowa. It consists of 365 compute nodes running CentOS-7.4 Linux, most of which contain 28 2.4GHz Intel Broadwell processor cores. There are several compute node configurations:

  1. 56-core 128GB
  2. 56-core 256GB
  3. 56-core 512GB
  4. 40-core 96GB
  5. 40-core 192GB

There are 16 machines with a single Nvidia P100 accelerator, 6 machines with dual Nvidia P100 accelerators, 2 machines with an Nvidia K80 accelerator, 2 machines with an Nvidia P40 accelerator, 12 machines with Nvidia 1080Ti accelerators, and 16 machines with Nvidia Titan V accelerators.

Info

The Titan V is now considered a supported configuration in GPU-capable compute nodes but is restricted to a single card per node. Staff have completed the qualification process for the 1080 Ti and concluded that it is not a viable solution to add to current Argon compute nodes.

The Rpeak (theoretical peak Flops) is 399.77 TFlops, not including the accelerators, with 67.25 TB of memory. In addition, there are 2 login nodes of the same Broadwell system architecture; the login nodes have 256GB of memory.

...

If your job does not use the system OpenMPI, or does not use MPI at all, then any desired core binding will need to be set up with whatever mechanism the software uses; otherwise, there will be no core binding. Again, that may not be a major issue. If your job does not work well with hyperthreading (HT), run on a number of cores equal to half of the number of slots requested and the OS scheduler will minimize contention.
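The half-the-slots approach can be sketched in a job script. This is a minimal illustration only: the "smp" parallel environment and "UI" queue names are assumptions (check the actual PE list on Argon), and OMP_NUM_THREADS is just one example of a per-application thread setting.

```shell
#!/bin/bash
# Request a full hyperthreaded node (56 slots) but run one thread per
# physical core to avoid HT contention. The "smp" PE name and "UI"
# queue are assumptions for illustration.
#$ -pe smp 56
#$ -q UI
#$ -cwd

# SGE sets NSLOTS at run time; default it here so the arithmetic also
# works when the script is tested outside the scheduler.
NSLOTS=${NSLOTS:-56}

# One thread per physical core = half the (hyperthreaded) slot count.
export OMP_NUM_THREADS=$((NSLOTS / 2))
echo "Using $OMP_NUM_THREADS threads on $NSLOTS slots"
```

The OS scheduler will then spread the 28 threads across the 28 physical cores of a standard node.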

New SGE utilities

While SoGE is very similar to previous versions of SGE, there are some new utilities that people may find of interest. There is a manual page for each of these.

...


Queue | Node Description | Queue Manager | Slots | Total memory (GB)
AML | (1) 56-core 256GB | Aaron Miller | 56 | 256
ANTH | (4) 56-core 128GB | Andrew Kitchen | 224 | 512
ARROMA | (8) 56-core 128GB | Jun Wang | 448 | 1024
AS | (5) 56-core 256GB | Katharine Corum | 280 | 1280
BH | (1) 56-core 512GB | Bin He | 56 | 512
BIGREDQ | (13) 56-core 256GB | Sara Mason | 728 | 3328
BIOLOGY | (1) 56-core 256GB | Matthew Brockman | 56 | 256
BIOSTAT | (2) 56-core 128GB | Grant Brown | 112 | 256
BIO-INSTR | (3) 56-core 256GB | JJ Urich, Albert Erives | 168 | 768
CBIG | (1) 56-core 256GB with P100 accelerator | Mathews Jacob | 56 | 256
CBIG-HM | (1) 56-core 512GB with P100 accelerator | Mathews Jacob | 56 | 512
CCOM | (18) 56-core 512GB; 5 running jobs per user | Boyd Knosp | 1008 | 9216
CCOM-GPU | (2) 56-core 512GB with P100 accelerator | Boyd Knosp | 112 | 1024
CGRER + LMOS | (10) 56-core 128GB | Jeremie Moen | 560 | 1280
CHEMISTRY | (3) 56-core 256GB | JJ Urich | 168 | 768
CLAS-INSTR | (2) 56-core 256GB | JJ Urich | 112 | 512
CLL | (5) 56-core 128GB | Mark Wilson, Brian Miller | 280 | 640
COB | (2) 56-core 256GB | Brian Heil | 112 | 512
COE | (10) 56-core 256GB; no more than three running jobs per user | Matt McLaughlin | 560 | 2560
DARBROB | (1) 56-core 256GB | Benjamin Darbro | 56 | 256
FERBIN | (13) 56-core 128GB | Adrian Elcock | 728 | 1664
MF | (6) 56-core 128GB | Michael Flatte | 336 | 768
MF-HM | (2) 56-core 512GB | Michael Flatte | 112 | 1024
FLUIDSLAB | (8) 56-core 128GB | Mark Wilson, Brian Miller | 448 | 1024
AIS | (1) 56-core 256GB | Grant Brown | 56 | 256
GEOPHYSICS | (3) 56-core 128GB | William Barnhart | 168 | 384
GV | (2) 56-core 256GB | Mark Wilson, Brian Miller | 112 | 512
HJ | (10) 56-core 128GB | Hans Johnson | 560 | 1280
HJ-GPU | (1) 56-core 512GB with P100 accelerator | Hans Johnson | 56 | 512
IFC | (10) 56-core 256GB | Mark Wilson, Brian Miller | 560 | 2560
IIHG | (10) 56-core 256GB | Diana Kolbe | 560 | 2560
INFORMATICS | (12) 56-core 256GB | Ben Rogers | 672 | 3072
INFORMATICS-GPU | (2) 56-core 256GB with Titan V accelerators | Ben Rogers | 112 | 512
INFORMATICS-HM-GPU | (1) 56-core 512GB with (2) P100 accelerators | Ben Rogers | 56 | 512
IVR | (4) 56-core 256GB; (1) 56-core 512GB | Todd Scheetz | 280 | 1536
IVR-GPU | (1) 56-core 512GB with K80 accelerator | Todd Scheetz | 56 | 512
IVRVOLTA | (4) 56-core 512GB with Titan V accelerator | Mike Schnieders | 224 | 2048
IWA | (11) 56-core 128GB | Mark Wilson, Brian Miller | 616 | 1408
JM | (3) 56-core 512GB | Jake Michaelson | 168 | 1536
JM-GPU | (1) 56-core 256GB with P100 accelerator | Jake Michaelson | 56 | 256
JP | (2) 56-core 512GB | Virginia Willour | 112 | 1024
JS | (10) 56-core 256GB | James Shepherd | 560 | 2560
LUNG | (2) 56-core 512GB with P40 accelerator | Joe Reinhardt | 112 | 1024
MANSCI | (1) 56-core 128GB | Qihang Lin | 56 | 128
MANSCI-GPU | (1) 56-core 512GB with P100 accelerator | Qihang Lin | 56 | 512
MANORG | (1) 56-core 128GB | Michele Williams/Brian Heil | 56 | 128
MORL | (5) 56-core 256GB | William (Daniel) Walls | 280 | 1280
MS | (5) 56-core 256GB with (2) P100 GPUs; (7) 40-core 96GB with (4) 1080Ti GPUs; (1) 40-core 96GB with (4) Titan V GPUs | Mike Schnieders | 780 | 2048
NEURO | (1) 56-core 256GB | Marie Gaine/Ted Abel | 56 | 256
NOLA | (1) 56-core 512GB | Ed Sander | 56 | 512
PINC | (6) 56-core 256GB | Jason Evans | 336 | 1536
REX | (4) 56-core 128GB | Mark Wilson, Brian Miller | 224 | 512
REX-HM | (1) 56-core 512GB | Mark Wilson, Brian Miller | 56 | 512
SB | (4) 56-core 128GB | Scott Baalrud | 224 | 512
STATEPI | (1) 56-core 256GB | Linnea Polgreen | 56 | 256
UDAY | (4) 56-core 128GB | Mark Wilson, Brian Miller | 224 | 512
UI | (20) 56-core 256GB | | 1120 | 5120
UI-DEVELOP | (1) 56-core 256GB; (1) 56-core 256GB with P100 accelerator | | 112 | 512
UI-GPU | (4) 56-core 256GB with P100 accelerator | | 224 | 1024
UI-HM | (5) 56-core 512GB | | 280 | 2560
UI-MPI | (19) 56-core 256GB | | 1064 | 4864
all.q | (115) 56-core 128GB; (154) 56-core 256GB; (7) 56-core 256GB with (1) P100 accelerator; (5) 56-core 256GB with (2) P100 accelerators; (2) 56-core 256GB with (1) Titan V accelerator; (42) 56-core 512GB; (2) 56-core 512GB with (1) K80 accelerator; (9) 56-core 512GB with (1) P100 accelerator; (1) 56-core 512GB with (2) P100 accelerators; (2) 56-core 512GB with (1) P40 accelerator; (4) 56-core 512GB with (1) Titan V accelerator; (6) 40-core 192GB with (4) Titan V accelerators; (2) 40-core 192GB with (3) Titan V accelerators; (1) 40-core 192GB with (2) Titan V accelerators; (3) 40-core 192GB with (4) 1080Ti accelerators; (1) 40-core 192GB with (2) 1080Ti accelerators; (1) 40-core 192GB with (1) 1080Ti accelerator; (1) 40-core 96GB with (1) Titan V accelerator; (7) 40-core 96GB with (4) 1080Ti accelerators | | 20088 | 91648
NEUROSURGERY | (1) 56-core 512GB with K80 accelerator | Haiming Chen | 56 | 512
SEMI | (1) 56-core 128GB | Craig Pryor | 56 | 128
ACB | (1) 56-core 256GB | Adam Dupuy | 56 | 256
FFME | (16) 56-core 128GB | Mark Wilson | 896 | 2048
FFME-HM | (1) 56-core 512GB | Mark Wilson | 56 | 512
RP | (2) 56-core 512GB | Robert Philibert | 112 | 1024
LT | (2) 56-core 512GB with P100 accelerator | Luke Tierney | 112 | 1024
KA | (1) 56-core 512GB | Kin Fai Au | 56 | 512



The University of Iowa (UI) queue

A significant portion of the HPC cluster systems at UI was funded centrally. These nodes are put into queues named UI or prefixed with UI-.

  • UI → Default queue
  • UI-HM → High memory 56-core 512GB nodes; request only for jobs that need more memory than the standard nodes provide.
  • UI-MPI → MPI jobs; request only for jobs that can take advantage of multiple nodes.
  • UI-GPU → Contains nodes with GPU accelerators; request only if the job can use a GPU accelerator.
  • UI-DEVELOP → Meant for small, short-running job prototypes and debugging.
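The queue choice above can be sketched as simple shell logic. This is an illustration only: the memory threshold, variable names, and the placeholder script name myjob.sh are assumptions; only the queue names come from this page.

```shell
#!/bin/bash
# Sketch: pick a centrally funded queue based on job requirements.
# Thresholds and variables are illustrative, not an official policy.

mem_gb=300        # hypothetical per-job memory requirement
uses_mpi=no       # can the job span multiple nodes?
uses_gpu=no       # can the job use a GPU accelerator?

queue=UI                      # default queue
if [ "$uses_gpu" = yes ]; then
    queue=UI-GPU              # only if the job can use a GPU
elif [ "$uses_mpi" = yes ]; then
    queue=UI-MPI              # 56-slot minimum, 48-hour wall clock limit
elif [ "$mem_gb" -gt 256 ]; then
    queue=UI-HM               # only for jobs exceeding standard-node memory
fi

echo "qsub -q $queue myjob.sh"   # myjob.sh is a placeholder script name
```

With the sample values above (a 300GB single-node, non-GPU job), the logic selects UI-HM, matching the guidance that high-memory nodes are reserved for jobs the standard nodes cannot satisfy.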

...

Centrally funded queue | Node Description | Wall clock limit | Running jobs per user
UI | (20) 56-core 256GB | None | 2
UI-HM | (5) 56-core 512GB | None | 1
UI-MPI (56 slot minimum) | (20) 56-core 256GB | 48 hours | 
UI-GPU | (4) 56-core 256GB with P100 accelerator | None | 1
UI-DEVELOP | (1) 56-core 256GB; (1) 56-core 256GB with P100 accelerator | 24 hours | 1

...