...


/nfsscratch_itf

During the last few months, Argon system performance on /nfsscratch has at times been poorer than expected.

The HPC team is in the early stages of exploring alternative storage architectures that may provide improved performance and better scaling. In parallel, the team continues to look for ways to optimize the current architecture.

In the meantime, /nfsscratch_itf has been deployed to help relieve some of the I/O load on /nfsscratch. Please feel free to start using this new file system while we continue to work towards a longer-term solution.
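As a rough illustration of moving job I/O onto /nfsscratch_itf, here is a minimal Python sketch. It assumes the scheduler exports JOB_ID and USER (adjust to your environment), and ~/results stands in for wherever you keep permanent output; the directory layout under /nfsscratch_itf is an assumption for the example, not a site policy.

    import os
    import shutil

    # Per-job directory under /nfsscratch_itf; JOB_ID and USER come from the
    # scheduler environment (an assumption -- adjust to your setup).
    job_id = os.environ.get("JOB_ID", "interactive")
    scratch_dir = os.path.join("/nfsscratch_itf", os.environ["USER"], job_id)
    os.makedirs(scratch_dir, exist_ok=True)

    # ... your computation writes intermediates into scratch_dir; a stand-in:
    with open(os.path.join(scratch_dir, "partial.dat"), "w") as f:
        f.write("partial results\n")

    # Copy anything worth keeping to permanent storage, then clean up scratch.
    keep_dir = os.path.expanduser("~/results")  # illustrative destination
    os.makedirs(keep_dir, exist_ok=True)
    for name in os.listdir(scratch_dir):
        shutil.copy(os.path.join(scratch_dir, name), keep_dir)
    shutil.rmtree(scratch_dir)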

Please contact research-computing@uiowa.edu with any questions or for assistance with this.

Local or Shared Scratch?

  • Multiple jobs might be running on your job's node. These jobs can compete for local storage I/O, causing contention independent of /nfsscratch. Only a job with exclusive access to a node can expect the full performance potential of the node's local storage.
  • A parallel job running on multiple nodes typically shouldn't use filesystems local to any of its nodes. Even if you're writing your own MPI code rather than using an off-the-shelf application, you can expect better performance if you collate results in memory via message passing and write the final result to the shared filesystem (see the sketch after this list). Consider local disk primarily as a structured alternative to swap.
  • If your job places partial results on /localscratch but fails to handle them for any reason (logic error, eviction, crash, etc.), those files won't be accessible from any other node and will be difficult to recover (see the second sketch after this list).
  • As always, please test a few jobs if you are unsure.
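To make the collation point concrete, here is a hedged mpi4py sketch (assuming mpi4py is installed and that you write under your own directory on the shared scratch; the output path is illustrative): each rank keeps its partial result in memory, rank 0 gathers them over MPI, and a single process performs one write to the shared filesystem.

    import os
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank produces its partial result in memory (stand-in computation).
    partial = sum(i * i for i in range(rank * 1000, (rank + 1) * 1000))

    # Collate on rank 0 via message passing instead of per-node local files.
    partials = comm.gather(partial, root=0)

    # Only rank 0 touches the shared filesystem, with a single small write.
    if rank == 0:
        out_dir = os.path.join("/nfsscratch_itf", os.environ["USER"])
        os.makedirs(out_dir, exist_ok=True)
        with open(os.path.join(out_dir, "result.txt"), "w") as f:
            f.write(str(sum(partials)) + "\n")

Launch it with something like mpirun -np 4 python collate.py.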
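For the /localscratch caveat, here is a sketch of one mitigation, under the assumption that your workload can be wrapped this way (run_computation and the directory layout are hypothetical): copy partial results to shared scratch in a finally block so an ordinary failure leaves them somewhere reachable. Note that this does not survive a node crash or a hard kill, so periodic copies during long runs are safer.

    import os
    import shutil

    local_dir = os.path.join("/localscratch", os.environ["USER"], "work")
    shared_dir = os.path.join("/nfsscratch_itf", os.environ["USER"], "salvage")
    os.makedirs(local_dir, exist_ok=True)
    os.makedirs(shared_dir, exist_ok=True)

    def run_computation(workdir):
        # Stand-in for your real workload; writes an intermediate file locally.
        with open(os.path.join(workdir, "partial_0.dat"), "w") as f:
            f.write("partial results\n")

    try:
        run_computation(local_dir)
    finally:
        # Whatever happened above, copy partials somewhere reachable off-node.
        for name in os.listdir(local_dir):
            shutil.copy(os.path.join(local_dir, name), shared_dir)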

...