
...

The Argon HPC system is the latest HPC system at the University of Iowa. It consists of 366 compute nodes running CentOS 7.4 Linux. There are several compute node configurations:

  1. 24-core 512GB
  2. 32-core 64GB
  3. 32-core 256GB
  4. 32-core 512GB
  5. 40-core 96GB
  6. 40-core 192GB
  7. 56-core 128GB
  8. 56-core 256GB
  9. 56-core 512GB
  10. 64-core 192GB

The Argon cluster is split between two data centers:

  • ITF → Information Technology Facility
  • LC → Lindquist Center

There are 21 machines with Nvidia P100 accelerators, 2 machines with Nvidia K80 accelerators, 11 machines with Nvidia K20 accelerators, 2 machines with Nvidia P40 accelerators, 13 machines with 1080Ti accelerators, and 18 machines with Titan V accelerators. Most of the nodes in the LC data center are connected with the OmniPath high speed interconnect fabric, while most of those in the ITF data center are connected with the InfiniPath fabric.

Info

The Titan V is now considered a supported configuration in Argon phase 1 GPU-capable compute nodes, but is restricted to a single card per node. Staff have completed the qualification process for the 1080 Ti and concluded that it is not a viable addition to the current phase 1 Argon compute nodes.


Info

The Rpeak and memory numbers need to be updated.

The Rpeak (theoretical peak FLOPS) is 385.0 TFLOPS, not including the accelerators, with 89.7 TB of memory. In addition, there are 2 login nodes of the Broadwell system architecture, with 256GB of memory each.

...

Note that code must be compiled with the appropriate optimization flags to take advantage of AVX instructions. The CPU architecture is important to keep in mind, both for potential performance and for compatibility. For instance, code optimized for AVX2 instructions will not run on the Sandy Bridge/Ivy Bridge architecture, which supports only AVX, not AVX2. However, each successive generation is backward compatible, so code optimized with AVX instructions will run on Haswell/Broadwell systems.
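As a quick sanity check before launching a binary built for a newer instruction set, the CPU feature flags can be inspected. A minimal sketch, assuming the Linux /proc/cpuinfo layout (the helper name is our own; on other platforms it simply reports no flags):

```python
def cpu_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel (Linux)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()  # unknown platform: report no flags

flags = cpu_flags()
# A Haswell/Broadwell node reports both flags; Sandy Bridge/Ivy Bridge only "avx".
print("AVX: ", "avx" in flags)
print("AVX2:", "avx2" in flags)
```

Running this on a login node only tells you about that node's CPU; the compute node a job lands on may differ, so the check belongs in the job script if the binary requires AVX2.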

...


Hyperthreaded Cores (HT)

One important difference between Argon and previous systems is that Argon has hyperthreaded processor cores turned on. A hyperthreaded core can be thought of as splitting a single physical core into two virtual cores, much as a Linux process can be split into threads. That is an oversimplification, but if your application is multithreaded then hyperthreaded cores can potentially run it more efficiently. For non-threaded applications, you can think of any pair of hyperthreaded cores as roughly equivalent to two cores at half the speed when both cores of the pair are in use. This can help keep the physical processor busy for processes that do not always use the full capacity of a core. HT was enabled on Argon to try to increase system efficiency on the workloads we have observed. There are some things to keep in mind as you develop your workflows.

...

If your job does not use the system openmpi, or does not use MPI at all, then any desired core binding will need to be set up with whatever mechanism the software provides; otherwise, there will be no core binding. Again, that may not be a major issue. If your job does not work well with HT, then run on a number of cores equal to half the number of slots requested, and the OS scheduler will minimize contention.
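Inside a job script, the half-the-slots process count can be derived from the NSLOTS environment variable that SGE exports to the job. A sketch (the function name and the default used outside a job are our own):

```python
import os

def ranks_for_non_ht_job(default_slots=1):
    """Half the granted slots: one process per physical core on an HT node."""
    slots = int(os.environ.get("NSLOTS", default_slots))
    return max(1, slots // 2)

# e.g. a 56-slot request on an HT node would launch 28 processes
print(ranks_for_non_ht_job())
```

Launching this many processes leaves one hyperthread sibling of each physical core idle, so each process effectively gets a full core.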

New SGE Utilities

While SoGE is very similar to previous versions of SGE, there are some new utilities that people may find of interest. There are manual pages for each of these.

...