
...

  1. 40-core 96GB
  2. 40-core 192GB
  3. 56-core 128GB
  4. 56-core 256GB
  5. 56-core 512GB
  6. 64-core 192GB
  7. 64-core 384GB
  8. 64-core 768GB
  9. 80-core 96GB
  10. 80-core 192GB
  11. 80-core 384GB
  12. 80-core 768GB
  13. 112-core 256GB
  14. 112-core 512GB
  15. 112-core 1024GB
  16. 80-core 1.5TB
  17. 112-core 1.5TB

The Argon cluster is split between two data centers,

...

Node memory (GB)    Job slots    Memory (GB) per slot
96                  40           2
96                  80           1
128                 56           2
192                 40           5
192                 64           3
192                 80           2
256                 56           4
256                 112          2
384                 64           6
384                 80           5
512                 56           9
512                 112          4
768                 64           12
768                 80           9
1024                112          9
1536                80           19
1536                112          13


Using the Basic Job Submission and Advanced Job Submission pages as a reference, how would one submit jobs taking HT into account? For single-process, high-throughput jobs it probably does not matter; just request one slot per job. For multithreaded or MPI jobs, request one job slot per thread or process. So if your application runs best with 4 threads, request something like the following.
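
For example, a minimal sketch of such a request, assuming SGE-style qsub syntax and the smp parallel environment mentioned on the Advanced Job Submission page (the script and program names here are hypothetical):

  #!/bin/bash
  # Hypothetical job script: request 4 slots, one per thread,
  # via the smp parallel environment (SGE-style directives).
  #$ -pe smp 4
  #$ -cwd
  ./my_threaded_program    # assumed to run with 4 threads

Equivalently, the same request can be made on the command line: qsub -pe smp 4 myjob.sh.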

...

Note that if you do not use the above strategy, it is possible that your job processes will share cores with processes from other jobs. That may be acceptable, and is even preferred for high-throughput jobs, but it is something to keep in mind, especially when using the orte parallel environment. There is more discussion of the orte parallel environment on the Advanced Job Submission page. In short, that parallel environment is used in node-sharing scenarios, which implies potential core sharing as well. For MPI jobs, that is probably not what you want.

As on previous systems, there is a parallel environment (Xcpn, where X is the number of cores per node) for requesting entire nodes; a sketch of such a request appears at the end of this section. This is especially useful for MPI jobs to ensure the best performance. Note that there are some additional parallel environments that are akin to the smp and Xcpn ones but are specialized for certain software packages. These are

...
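
To illustrate the whole-node request described above, here is a hypothetical sketch, again assuming SGE-style syntax; the 56cpn parallel environment name follows the Xcpn pattern for a 56-core node, and the script name is a placeholder:

  # Request one entire 56-core node for an MPI job by asking for all
  # 56 slots in the 56cpn parallel environment.
  qsub -pe 56cpn 56 my_mpi_job.sh

Requesting a multiple of the node's core count (for example, 112 slots with 56cpn) should allocate the corresponding number of whole nodes.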