...
Other factors are also taken into consideration, depending on how the scheduler is configured. For example, a scheduler can weight submissions by how heavily each user has been using the system recently, giving a slightly higher priority to jobs from users who use the system less often. This fair-share approach ensures that the system is not dominated by a few very active users.
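The exact formula varies by scheduler and site configuration, but the fair-share idea can be sketched in a few lines of shell. Everything below is illustrative: the base priority, the weight, and the usage figure are made-up values, not our scheduler's actual settings.

```bash
#!/bin/bash
# Illustrative fair-share sketch (not our scheduler's real formula):
# priority falls as a user's recent usage grows, so jobs from light
# users of the system float toward the front of the queue.
BASE_PRIORITY=1000   # hypothetical starting priority for every job
WEIGHT=2             # hypothetical penalty per unit of recent usage

recent_usage=150     # e.g. CPU-hours this user consumed recently
priority=$(( BASE_PRIORITY - WEIGHT * recent_usage ))
echo "effective priority: $priority"
```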
Helium & Neon: University of Iowa Cluster Systems
The University of Iowa has two shared clusters available for campus researchers to use. The shared systems are run primarily by ITS-Research Services. Our clusters are capable of running both High Performance jobs and High Throughput jobs. Collectively, the two systems comprise over 550 compute nodes with more than 6,500 processor cores. More detailed information on the clusters is available here: /wiki/spaces/hpcdocs/pages/76514722.
What are the differences between High Throughput Computing (Shared Memory) and High Performance Computing (Distributed Memory)?
...
Our shared clusters run CentOS Linux 5 (Helium) and CentOS Linux 6 (Neon), the versions current at the time each system was deployed. To make use of the clusters, you will need a basic understanding of how to interact with a Linux system at the command line. At a minimum, you should know how to move around the filesystem and how to copy and edit files; a few of the most common commands are shown below. There are many resources on the Internet devoted to helping you learn your way around a Linux system. One of the best is a book called The Linux Command Line, which is available as a free PDF download here. For a quicker overview of basic Linux commands, there is a good Linux Cheat Sheet here.
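For example, these are the kinds of commands you will use constantly at the cluster prompt (the file and directory names are placeholders):

```bash
pwd                       # show which directory you are in
ls -l                     # list the files in that directory
cd myproject              # move into a subdirectory
cp input.txt backup.txt   # copy a file
nano input.txt            # edit a file (nano is beginner-friendly)
man cp                    # read the manual page for any command
```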
...
- Will you need to recompile your code to run on our cluster?
If you are bringing code over from another system, you may need to recompile it to work on our systems, especially if you are using MPI (of which we offer a few different varieties); see the compile sketch after this list. We have some additional notes on compiling here: Compiling Software
- What software will your job need, and is it available centrally, or could it be installed in your home directory?
Our list of installed software is here: Software Installations. If you don't see a package you need, please let us know; if it is broadly applicable to a number of users, we may install it centrally, or we will help you install it in your home directory.
- Can you estimate how much memory your job will need?
Knowing approximately how many processes you will need, or how much memory to request, will help ensure you ask for enough resources for your job to complete. One way to discover this is to run a small version of the job, see how much memory it uses, and then calculate how much it would use if you doubled or tripled it in size; a sketch of this approach appears after this list. We also offer a small sandbox queue on each of our HPC clusters, to which you can submit small jobs to see how things go and then tweak your resource requests accordingly.
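As mentioned above, code brought over from another system often needs a recompile. A minimal sketch, assuming an environment-modules setup and an MPI C code; the module name below is hypothetical, so check `module avail` for what is actually installed:

```bash
module avail                   # list the compiler and MPI modules installed
module load openmpi            # hypothetical module name; pick one from the list
mpicc -O2 -o my_app my_app.c   # rebuild your MPI code with the cluster's wrapper
```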
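Here is one way to measure a trial run's memory use and then submit to the sandbox queue. The qsub line assumes an SGE-style scheduler and that the queue is literally named sandbox; adjust both to match your cluster:

```bash
# GNU time reports "Maximum resident set size" (in kilobytes) for the run:
/usr/bin/time -v ./my_app small_input.dat

# If the half-size trial peaked near 2 GB, a full-size run may need ~4 GB;
# request a little more than the estimate when submitting:
qsub -q sandbox -l mem_free=5G my_job.sh
```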
...
If your data is not large, the quickest way to get it onto one of the clusters is to use scp, rsync, or sftp from the command line, or via an application such as Fetch (Mac) or IPSwitch (Windows). If your account is on Helium and you have a larger data set (several gigabytes or more), you can use our Globus Online connection. For transfers to Neon, you must use one of the other methods listed above.
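For example, from a Mac or Linux terminal (the hostname and username below are placeholders; substitute your cluster's login node and your own account):

```bash
# Copy a single archive into your cluster home directory:
scp results.tar.gz username@helium.example.edu:~/

# rsync is better for whole directories, since interrupted transfers can resume:
rsync -av --progress dataset/ username@helium.example.edu:~/dataset/
```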
Storage Options
HPC accounts have a 1TB quota, but there are times when more storage, or a group share, might be required for your work. ITS Research Services has made several options available to meet these needs.
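Before requesting more space, it can help to see how much you are actually using. A quick check from the command line (output details depend on how quotas are configured on the filesystem):

```bash
du -sh ~     # total size of everything in your home directory
quota -s     # per-user quota report, where quotas are enabled
```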
...