...
Unix attributes were recently added to the campus Active Directory Service and Argon will be making use of those. One of those attributes is the default login shell. This can be set via the following HawkID tool: Set Login Shell - Conch. Most people will want the shell set to /bin/bash. For reference, previous generation UI HPC systems set the shell to /bin/bash. We have observed that some people's shells are set to /bin/tcsh, which may not be what you want. We have no way of knowing whether those accounts received an incorrect default or were changed via self-service. We recommend that you check your shell setting via the Set Login Shell - Conch tool and set it as desired before logging in. Note that changes to the shell setting may take up to 24 hours to become effective on Argon.
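If you want to verify which login shell is currently in effect for your account, a quick check from any Linux login session is sketched below. The Set Login Shell - Conch tool remains the place to change the setting; these commands only read the current value.

```bash
# Print the shell of the current login session
echo "$SHELL"

# Query the configured login shell from the passwd/directory database;
# the 7th colon-separated field of the passwd entry is the shell
getent passwd "$USER" | cut -d: -f7
```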
Queues and Policies
The University of Iowa (UI) queue
A significant portion of the HPC cluster systems at UI were funded centrally. These nodes are put into a queue named 'UI'. There are also additional queues for special purposes.
- UI-HM → High memory nodes
- UI-MPI → MPI jobs
- UI-GPU → Contains nodes with GPU accelerators
- UI-DEVELOP → Meant for small, short running job prototypes and debugging
These queues are available to everyone who has an account on an HPC system. Since that is a fairly large user base, there are limits placed on these shared queues.
| Centrally funded nodes | Neon |
|---|---|
| Description | UI: (74) 16-core, 64-GB nodes, 11 with Xeon Phi cards; UI-HM: (6) 24-core, 512-GB nodes |
| UI queue limit | 2 running jobs per user |
| UI-HM queue limit | 1 running job per user |
| UI-MPI queue limit | 8 hour wall clock limit; 1 running job per user; minimum slot request of 56 |
| UI-GPU queue limit | 1 running job per user |
| UI-DEVELOP queue limit | 24 hour wall clock limit; 1 running job per user |
Note that the number of slots available in the UI queue can vary depending on whether anyone has purchased a reservation of nodes. The UI queue is the default queue and will be used if no queue is specified. This queue is available to everyone who has an account on a UI HPC cluster system.
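Because the available slot counts fluctuate, it can be useful to look at current queue capacity before submitting. The sketch below assumes the SGE-style command-line tools used on UI HPC clusters (the scheduler is not named in this section, so treat the exact commands as an assumption).

```bash
# Summarize each cluster queue: total, used, and available slots
# (SGE-style qstat; adjust if your scheduler differs)
qstat -g c

# Show only your own pending and running jobs
qstat -u "$USER"
```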
| Info |
|---|
| Please use the UI-DEVELOP queue for testing new jobs at a smaller scale before committing many nodes to your job. |
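A small test submission might look like the sketch below. It assumes an SGE-style qsub and a hypothetical job script named test_job.sh; the queue is selected with the -q option.

```bash
# Submit a small test run to the development queue
# (SGE-style qsub; test_job.sh is a hypothetical job script)
qsub -q UI-DEVELOP test_job.sh

# Once it behaves as expected, resubmit to the default UI queue
qsub test_job.sh
```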
In addition to the above, the HPC systems have some nodes that are not part of any investor queue. These are in the all.q queue and are used for node rentals and future purchases. The number of nodes for this purpose varies.
Finally, there is a cluster-wide queue called all.q. This queue encompasses all of the nodes and contains all of the available job slots. It is available to everyone with an account, and there are no running job limits. However, it is a low priority queue that spans the same nodes as the higher priority investor and UI queues. The all.q queue is subordinate to those queues, and jobs running in it will give up the nodes they are running on when jobs in the higher priority queues need them. The term we use for this is "job eviction". Jobs running in the all.q queue are the only ones subject to eviction.
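If you do submit to all.q, it may be worth marking the job as re-runnable so the scheduler can restart it after an eviction. This sketch assumes an SGE-style qsub and that evicted jobs are requeued rather than simply killed (both are assumptions, not stated above); long_job.sh is a hypothetical job script.

```bash
# Submit to the low-priority cluster-wide queue and mark the job re-runnable,
# so it can be rescheduled if it is evicted by a higher-priority job
qsub -q all.q -r yes long_job.sh
```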