...
- 40-core 96GB
- 40-core 192GB
- 56-core 128GB
- 56-core 256GB
- 56-core 512GB
- 64-core 192GB
- 64-core 384GB
- 64-core 768GB
- 80-core 96GB
- 80-core 192GB
- 80-core 384GB
- 80-core 768GB
- 112-core 256GB
- 112-core 512GB
- 112-core 1024GB
- 80-core 1.5TB
- 112-core 1.5TB
The Argon cluster is split between two data centers,
...
Using the Basic Job Submission and Advanced Job Submission pages as a reference, how would one submit jobs taking HT into account? For single-process, high-throughput jobs it probably does not matter; just request one slot per job. For multithreaded or MPI jobs, request one job slot per thread or process. So if your application runs best with 4 threads, then request something like the following.
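As an illustration only, a minimal SGE-style job script along these lines could request the 4 slots through the smp parallel environment (described below); the script contents, job name, and application name here are hypothetical:

```
#!/bin/bash
# Request 4 slots in the smp parallel environment so each of the
# application's 4 threads gets its own slot.
#$ -pe smp 4
#$ -N threaded_example

# Match the thread count to the granted slot count (NSLOTS is set by
# the scheduler); OMP_NUM_THREADS applies to OpenMP applications.
export OMP_NUM_THREADS=$NSLOTS

./my_threaded_app
```

The same request can also be made on the command line, e.g. `qsub -pe smp 4 myjob.sh`.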
...
Note that if you do not use the above strategy, then it is possible that your job processes will share cores with other job processes. That may be okay, and even preferred for high-throughput jobs, but it is something to keep in mind. This is especially important when using the orte parallel environment, which is discussed further on the Advanced Job Submission page. In short, that parallel environment is used in node-sharing scenarios, which implies potential core sharing as well. For MPI jobs, that is probably not what you want. As on previous systems, there is a parallel environment (Xcpn, where X is the number of cores per node) for requesting entire nodes. This is especially useful for MPI jobs to ensure the best performance. Note that there are some additional parallel environments that are akin to the smp and Xcpn ones but are specialized for certain software packages. These are
...