To run an R program inside a job script, you must be able to run the program's script from the command line without using the R prompt interactively. R provides the Rscript command for this purpose, and it is available in all R modules on Argon. So if you would normally process a single data set on your Windows or Unix workstation like this (saving the console output into a file):

cd path/to/dataSet123
Rscript my/scripts/program.R inputDataSet123.txt > output123.txt

You could do the same thing on Argon with a job script that first loads the module for the version of R you want and is otherwise the same; for example:

module load stack/2020.1
module load r/3.6.2_intel-19.0.5.281
cd path/to/dataSet123
Rscript my/scripts/program.R inputDataSet123.txt > output123.txt

You can easily modify this to make better use of SGE features and to take advantage of Argon's scratch filesystems, as described in HTC Best Practices, for better performance. For example, have SGE write all output from the entire job script to /localscratch, then move the resulting output file back to the data set directory at the end:

#$ -j y
#$ -o /localscratch

module load stack/2020.1
module load r/3.6.2_intel-19.0.5.281
cd path/to/dataSet123
Rscript my/scripts/program.R inputDataSet123.txt
mv $SGE_STDOUT_PATH .
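
Putting the two previous examples together, a complete job script might look like the following sketch; the queue name is only a placeholder for whichever queue you have access to:

#!/bin/bash
#$ -q UI              # placeholder queue name; use one you have access to
#$ -j y               # merge stderr into stdout
#$ -o /localscratch   # write the job output to local scratch

module load stack/2020.1
module load r/3.6.2_intel-19.0.5.281
cd path/to/dataSet123
Rscript my/scripts/program.R inputDataSet123.txt
mv $SGE_STDOUT_PATH .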

Some tutorials suggest running R scripts with the older convention "R CMD BATCH program.R", but this has several disadvantages compared to "Rscript program.R" in any situation, and particularly on an HPC system:

  • It doesn't merely interpret the script; instead, it simulates running the script in an interactive session and prints the script's output inline with the script's own commands. This makes the output more difficult to read, or to parse with another program later.
  • It doesn't print anything to the display (stdout), so you can't use SGE or redirection (">") to capture and manage the output.
  • It always creates and prints to a file named after the script (e.g. "program.Rout") in the directory where you start the script, which is not necessarily where your script is, where your input or output data is, or where you prefer. HTC Best Practices describes how this can cause performance problems, but the suggested mitigations are difficult to apply.
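
For comparison, here is roughly how the two conventions differ in where their output ends up (the file names are just examples):

# Rscript writes to stdout, so you choose where the output goes
Rscript program.R > results.txt

# R CMD BATCH interleaves the commands and their output in
# ./program.Rout, created in whatever directory you launch from
R CMD BATCH program.R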

Therefore we advise against using "R CMD BATCH".

Running MPI jobs with R, particularly with R-snow, requires some special handling. If the MPI processes will all be on a single host, you can start R as normal and spawn the MPI slave ranks from within R. However, if running across multiple nodes, the processes must be spawned by mpirun (this also works for the single-node case). Since R is then started by mpirun, a wrapper is needed to distinguish between the master and slave processes and start the respective R sessions accordingly. This wrapper is called RMPISNOW. The launch command in an SGE job script would look something like the following.

mpirun -np # RMPISNOW CMD BATCH --slave sample_script.R

The SGE slot request must be 1 greater than the number of workers that will be used. Unless more slots are requested for memory, the mpirun command will use the slot count from the SGE environment, so the above can be shortened to

mpirun RMPISNOW CMD BATCH --slave sample_script.R
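
As a rough sketch, a full SGE job script for such a run might look like the following; the parallel environment name and slot count are placeholders and depend on your site's configuration:

#!/bin/bash
#$ -cwd              # run from the submission directory
#$ -j y              # merge stderr into stdout
#$ -pe orte 8        # placeholder PE name; 8 slots = 1 master + 7 workers

module load stack/2020.1
module load r/3.6.2_intel-19.0.5.281

# mpirun picks up the slot count from the SGE environment
mpirun RMPISNOW CMD BATCH --slave sample_script.R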

The above will create the snow cluster, which can then be referenced within the R script.

The size of the cluster can be obtained with

mpi.universe.size()

so the number of workers would be

mpi.universe.size()-1

A cluster object can be obtained simply by referencing the cluster already set up by mpirun:

cl <- getMPIcluster()

The cluster should be stopped with

stopCluster(cl)

and the R script exited with

mpi.quit()
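
Putting these calls together, a minimal sample_script.R might look like the following sketch. It assumes the RMPISNOW startup profile has already loaded snow and Rmpi and created the cluster; the per-worker work shown here (reporting each host name) is only an illustration.

# The snow cluster was already created by mpirun/RMPISNOW,
# so just get a reference to it.
cl <- getMPIcluster()

# Total MPI ranks minus the master gives the number of workers.
nworkers <- mpi.universe.size() - 1
cat("Running with", nworkers, "workers\n")

# Illustrative work: ask each worker for its host name.
hosts <- clusterCall(cl, function() Sys.info()[["nodename"]])
print(unlist(hosts))

# Shut down the workers and exit.
stopCluster(cl)
mpi.quit()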

Rscript cannot be used with the RMPISNOW wrapper, because Rscript calls R directly; however, the command could also be run as

mpirun RMPISNOW --slave < sample_script.R