High throughput jobs are high volume computing jobs whose typical computing duration is similar to, or even shorter than, the time required to receive, schedule, start, and stop them. In addition to the general considerations for high volume jobs, high throughput jobs can strain the capabilities of Argon's home directory (NFS) servers. In some cases, this results in failed jobs among the high throughput jobs on the system, and slow performance for other users.
...
Home directories are stored on separate file servers and connected via network to login and compute nodes using NFS. This design works well for most cluster usage, but when many compute nodes simultaneously try to write, read, or delete files in home directories, access becomes slow for all users and some jobs fail. The job failure errors typically state either that a file or directory does not exist or that access to a file or directory was denied, even though the file/directory does exist and the user has the appropriate permissions to access it. For example:
```
05/02/2012 01:02:47 [21548:26444]: failed changing into working directory
```
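One common mitigation is to do a job's heavy file I/O in node-local scratch space and copy only the final results back to the home directory. A minimal sketch follows; the reliance on a scheduler-provided `$TMPDIR` (falling back to `/tmp`) and the file names are assumptions for illustration, not documented Argon paths.

```python
import os
import tempfile

def run_in_local_scratch(workdir_base=None):
    """Stage a job's working files in node-local scratch instead of NFS.

    workdir_base defaults to $TMPDIR (often node-local on clusters) or /tmp;
    this is an assumption about the local setup.
    """
    base = workdir_base or os.environ.get("TMPDIR", "/tmp")
    with tempfile.TemporaryDirectory(dir=base) as scratch:
        # Do all intermediate reads/writes here, off of NFS.
        out_path = os.path.join(scratch, "result.txt")
        with open(out_path, "w") as f:
            f.write("job output\n")
        # In a real job, copy only the final result back to $HOME here,
        # e.g. shutil.copy(out_path, os.path.expanduser("~")).
        with open(out_path) as f:
            return f.read()

print(run_in_local_scratch(), end="")
```

The scratch directory (and all intermediate files) is removed automatically when the `with` block exits, so only the deliberate copy-back step ever touches the home directory servers.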
Some high throughput workloads generate enough traffic to significantly increase the likelihood of this condition, depending on other user job activity, or even to cause it by themselves. And because a high throughput workload by definition consists of many jobs, it suffers a proportionally higher number of its own job failures when the condition occurs.
...
Other cases involve reading or writing many files (sometimes thousands) at the start or end of each job, or even leaving many files open inside each job so that they are all cleaned up suddenly, at the same time, when the function (or the entire program, or the entire job) ends. A high throughput workload containing many such jobs can easily multiply these effects to problematic proportions.
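The "many files left open" pattern above can be avoided by closing each file as soon as it is written, so the filesystem traffic is spread over the job's runtime instead of arriving as a burst at exit. A minimal sketch, with illustrative file names and counts:

```python
import os
import tempfile

def write_many_files(n, outdir):
    """Write n small files, closing each one immediately.

    A context manager per file means each close happens right away,
    rather than hundreds of handles all being flushed and closed at once
    when the function (or program) exits.
    """
    for i in range(n):
        path = os.path.join(outdir, f"part-{i:04d}.txt")
        with open(path, "w") as f:  # closed as soon as this block exits
            f.write(f"record {i}\n")

# Illustrative usage: write into a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    write_many_files(100, d)
    print(len(os.listdir(d)))  # 100
```

The same idea applies to reads: open, use, and close each file in turn rather than accumulating open handles across the life of the job.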
...