To run R programs on a batch queue system, you must run R non-interactively (in batch mode).
...
To run an R program inside a job script, you must be able to run the script from the command line without using the R prompt interactively. R provides the Rscript
command for this purpose, and it is available in all R modules on Argon. Therefore, if you would normally process a single data set on your Windows or Unix workstation like this (saving the console output into a file):
```
cd path/to/dataSet123
Rscript my/scripts/program.R inputDataSet123.txt > output123.txt
```
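Inside program.R, the file name passed on the command line is available through commandArgs(). A minimal sketch of that pattern follows; the use of read.table and the assumption that the input is a plain table with a header are illustrative only:

```r
# program.R -- minimal sketch; read.table and the file layout are assumptions
args <- commandArgs(trailingOnly = TRUE)  # e.g. c("inputDataSet123.txt")
infile <- args[1]

dat <- read.table(infile, header = TRUE)
print(summary(dat))  # written to stdout, which the job script redirects to output123.txt
```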
You could do the same thing on Argon by composing a job script that first loads the modules for the version of R you want but is otherwise the same; for example:
```
module load stack/2020.1
module load r/3.6.2_intel-19.0.5.281
cd path/to/dataSet123
Rscript my/scripts/program.R inputDataSet123.txt > output123.txt
```
You can easily modify this to make better use of SGE features and to take advantage of Argon's scratch filesystems, following the HTC Best Practices, for better performance. For example, have SGE write all output from the job script to /localscratch, then move the resulting file to your home directory at the end:
```
#$ -j y
#$ -o /localscratch
module load stack/2020.1
module load r/3.6.2_intel-19.0.5.281
cd path/to/dataSet123
Rscript my/scripts/program.R inputDataSet123.txt
mv $SGE_STDOUT_PATH .
```
Note: Some tutorials suggest running R scripts with the older "R CMD BATCH" convention. We advise against using "R CMD BATCH" in job scripts; use Rscript as shown above instead.
Running MPI jobs with R, particularly with R-snow, requires some special handling. If the MPI processes will all be on a single host, you can start R as normal and spawn MPI ranks from within it. However, when running across multiple nodes, the processes must be spawned by mpirun (this also works for the single-node case). Since mpirun will be starting R, we need a wrapper that distinguishes between the primary and secondary processes and starts the respective R sessions accordingly. This wrapper is called RMPISNOW. The launch command in an SGE job script would look something like the following.
```
mpirun -np # RMPISNOW CMD BATCH --slave sample_script.R
```
The slot request for SGE must be 1 greater than the number of workers that will be used. Unless more slots are requested for memory reasons, the mpirun command will use what is in the SGE environment, so the above could be shortened to
```
mpirun RMPISNOW CMD BATCH --slave sample_script.R
```
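For example, to run 4 snow workers you would request 5 slots (4 workers plus 1 master rank). A sketch of the relevant job script lines follows; the parallel environment name smp is a placeholder, so substitute the MPI parallel environment appropriate for your cluster:

```shell
#$ -pe smp 5          # placeholder PE name; 5 slots = 4 workers + 1 master rank

module load stack/2020.1
module load r/3.6.2_intel-19.0.5.281

# mpirun picks up the slot count from the SGE environment
mpirun RMPISNOW CMD BATCH --slave sample_script.R
```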
The above will create the snow cluster, which can then be referenced within the R script.
The total number of MPI processes can be obtained with
```
mpi.universe.size()
```
so the number of workers would be
```
mpi.universe.size()-1
```
A cluster object can be obtained simply by referencing the cluster already set up by mpirun:
```
cl <- getMPIcluster()
```
The cluster should be stopped with
```
stopCluster(cl)
```
and the R script exited with
```
mpi.quit()
```
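Putting these pieces together, a complete sample_script.R might look like the following sketch. The parallel work shown (squaring one number per worker with clusterApply) is purely illustrative, and the script assumes the snow and Rmpi packages are installed:

```r
# sample_script.R -- illustrative snow/Rmpi script, launched via RMPISNOW
library(Rmpi)
library(snow)

nworkers <- mpi.universe.size() - 1  # one rank acts as the master

cl <- getMPIcluster()                # reference the cluster started by mpirun

# Illustrative parallel work: square one number on each worker
results <- clusterApply(cl, seq_len(nworkers), function(x) x^2)
print(unlist(results))

stopCluster(cl)
mpi.quit()
```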
Rscript cannot be used with the RMPISNOW wrapper, because the wrapper calls R directly. However, the command could also be run as
```
mpirun RMPISNOW --slave < sample_script.R
```