...
Info |
---|
A scratch filesystem is a place to store intermediate job data that can be destroyed when a job is finished. Its performance is better than that of your home directory or an LSS share, which are meant for long-term data storage. |
Note |
---|
As of February 3, 2023, files on the cluster-wide /nfsscratch filesystem are subject to deletion 40 days after they were created. Policy for node-specific /localscratch filesystems is independent of this. |
User Scratch Space
Each compute node has its own local scratch filesystem. Users may read from and write to this using their own exclusive directory at /localscratch/Users/<HawkID>.
...
Scratch filesystems are a shared resource available for the convenience of all users. Therefore, files on these filesystems are subject to deletion after a certain lifespan as determined by the HPC policy committee. Home account storage and purchased storage are not subject to this policy.
/localscratch
login nodes
On /localscratch, the allowed file lifespan is 30 days after the file was last accessed, where each file's age is the time elapsed since its access timestamp ("atime"). An automated cleanup process runs periodically on each node to delete files whose atime indicates they have reached the maximum lifespan.
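For reference, the following is a minimal Python sketch (not the cluster's own cleanup tool) that reports files under a user's /localscratch directory whose atime is approaching the 30-day limit. The directory path and the 25-day warning threshold are illustrative assumptions.

```python
# Minimal sketch: report files under a user's /localscratch directory whose
# access time ("atime") is approaching the 30-day lifespan.
# The directory path and 25-day warning threshold are illustrative assumptions.
import os
import time

SCRATCH_DIR = "/localscratch/Users/jdoe"   # hypothetical HawkID directory
WARN_AFTER_DAYS = 25                       # warn before the 30-day limit

now = time.time()
for root, dirs, files in os.walk(SCRATCH_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            age_days = (now - os.stat(path).st_atime) / 86400
        except OSError:
            continue  # file may have been removed while walking
        if age_days >= WARN_AFTER_DAYS:
            print(f"{age_days:5.1f} days since last access: {path}")
```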
compute nodes
Cleaning of /localscratch on compute nodes is done opportunistically, when no jobs are running on the node. However, if space becomes limited, the node will go into an alarm state and will then be cleaned.
If your job writes data to /localscratch, please retrieve everything you need and remove unneeded files as the last part of the job, because it's difficult to access that same compute node after a job exits! A compute node can become unavailable if its /localscratch filesystem becomes too full. If that happens, all files will be removed from /localscratch without considering lifespan in order to restore the compute node to service. For more information on using /localscratch see Advanced Job Submission#localscratch.
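As an illustration of that last step, here is a minimal Python sketch of copying results off /localscratch to /nfsscratch and cleaning up before the job exits. All paths and directory names below are hypothetical placeholders; adapt them to your own job layout.

```python
# Minimal end-of-job sketch: copy results from node-local scratch to
# /nfsscratch, then clean up the local job directory.
# All paths below are illustrative assumptions; adapt them to your job.
import os
import shutil

local_job_dir = "/localscratch/Users/jdoe/job_12345"      # hypothetical
results_subdir = os.path.join(local_job_dir, "results")   # hypothetical
dest_dir = "/nfsscratch/jdoe/job_12345_results"           # hypothetical

# Copy only what you need back to shared storage...
shutil.copytree(results_subdir, dest_dir, dirs_exist_ok=True)

# ...then remove the local job directory so the node's /localscratch
# does not fill up after the job exits.
shutil.rmtree(local_job_dir, ignore_errors=True)
```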
/nfsscratch
On /nfsscratch, the allowed file lifespan is 40 days after a file is first written, where each file's age is the time elapsed since its creation timestamp ("crtime"), which is tracked on the fileserver. An automated cleanup process runs periodically on the server to delete files whose crtime indicates they have reached the maximum lifespan. This space is provided by a single ZFS storage server connected via NFS. It is best suited to large streaming I/O: reading or writing large amounts of data sequentially to or from a small number of files.
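To make the access pattern concrete, this is a minimal Python sketch of the kind of I/O /nfsscratch handles best: sequential, large-block writes to a single file rather than many small, scattered files. The output path and sizes are illustrative assumptions.

```python
# Minimal sketch of the access pattern /nfsscratch handles best:
# sequential, large-block writes to a single file rather than many small files.
# The output path and sizes are illustrative assumptions.
out_path = "/nfsscratch/jdoe/intermediate.dat"   # hypothetical
chunk = b"\0" * (8 * 1024 * 1024)                # 8 MiB per write

with open(out_path, "wb") as f:
    for _ in range(128):          # ~1 GiB written sequentially
        f.write(chunk)
```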
Note |
---|
Altering or duplicating files solely to circumvent the scratch cleanup process is against policy. Please make legitimate use of scratch filesystems, then move your intermediate and final results to stable storage in accordance with policy. |
...
- Incorporate the transfer into your job script before or after computation.
- Move data using our data transfer server, data.hpc.uiowa.edu. You can log in with your Argon HawkID credentials and connect to LSS and Argon infrastructure. Only /nfsscratch and /scratch are accessible on the data transfer server; the /localscratch filesystems of compute nodes are not. See the sketch after this list for one way to script such a transfer.
- Use the Research Data Collaboration Service which provides a web-based GUI to manage data transfers.
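As a hedged illustration of the second option, the Python sketch below drives rsync over SSH to push a local results directory to /nfsscratch through the data transfer server. The HawkID, local path, and remote directory layout are assumptions; unless SSH keys are configured, you will be prompted for your Argon HawkID credentials.

```python
# Minimal sketch: push results to /nfsscratch through the data transfer
# server with rsync over SSH. The HawkID, local path, and remote path are
# illustrative assumptions.
import subprocess

hawkid = "jdoe"                     # hypothetical HawkID
local_results = "results/"          # hypothetical local directory
remote = f"{hawkid}@data.hpc.uiowa.edu:/nfsscratch/{hawkid}/"  # assumed layout

# rsync -a preserves permissions/timestamps; -v prints each transferred file.
subprocess.run(["rsync", "-av", local_results, remote], check=True)
```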
...