# R-MPI

## Introduction

This page is intended to help you write and run parallel R code using the Rmpi package, the MPI interface for R, on the Odyssey cluster. Currently, Rmpi is available with the software module R/3.5.1-fasrc03, built against the OpenMPI and MVAPICH2 MPI libraries and compiled with both the GNU and Intel compilers. To use the Rmpi package, first load an appropriate set of software modules in your user environment, e.g.,

module load gcc/7.1.0-fasrc01 openmpi/2.1.0-fasrc02 R/3.5.1-fasrc03
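
As a quick sanity check, you can start R with these modules loaded and confirm that the Rmpi package is available. The following is a minimal sketch, assuming an interactive R session rather than a batch job:

# Minimal check that the Rmpi package loads and MPI initializes
library(Rmpi)
mpi.universe.size()  # size of the MPI universe (depends on how R was launched)
mpi.quit()           # finalize MPI and exit R cleanly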

## Example Rmpi code

Below is an example R script using Rmpi:

# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
    library("Rmpi")
}

print(mpi.universe.size())
ns <- mpi.universe.size() - 1
mpi.spawn.Rslaves(nslaves=ns)

# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
    if (is.loaded("mpi_initialize")){
        if (mpi.comm.size(1) > 0){
            print("Please use mpi.close.Rslaves() to close slaves.")
            mpi.close.Rslaves()
        }
        print("Please use mpi.quit() to quit R")
        .Call("mpi_finalize")
    }
}

# Tell all slaves to return a message identifying themselves
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size(),system("hostname",intern=T)))

# Test computations
x <- 5
x <- mpi.remote.exec(rnorm, x)
length(x)
x

# Tell all slaves to close down, and exit the program
mpi.close.Rslaves()
mpi.quit()

If you name this code mpi_test.R, for instance, you can submit it to the queue with the following batch-job submission script:

#!/bin/bash
#SBATCH -J mpi_test
#SBATCH -o mpi_test_%j.out
#SBATCH -e mpi_test_%j.err
#SBATCH -p shared
#SBATCH -n 8
#SBATCH -t 30
#SBATCH --mem-per-cpu=4000
module load gcc/7.1.0-fasrc01 openmpi/2.1.0-fasrc02 R/3.5.1-fasrc03
# With MVAPICH2, the program is launched with srun:
#srun -n 8 --mpi=pmi2 R CMD BATCH --no-save --no-restore mpi_test.R mpi_test_${SLURM_JOB_ID}.Rout
# With OpenMPI, mpirun is needed; a single R process is started and
# Rmpi spawns the remaining ranks via mpi.spawn.Rslaves()
mpirun -np 1 --mca mpi_warn_on_fork 0 R CMD BATCH --no-save --no-restore mpi_test.R mpi_test_${SLURM_JOB_ID}.Rout

Assuming the batch-job submission script is named mpi_test.run, the job is submitted to the queue by typing in

sbatch mpi_test.run

Upon completion, the R output is in the file mpi_test_<JOBID>.Rout, where <JOBID> is the Slurm job ID. Among other output, it contains the following:

> ns <- mpi.universe.size() - 1
> mpi.spawn.Rslaves(nslaves=ns)
7 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 8 is running on: holy7c19316
slave1 (rank 1, comm 1) of size 8 is running on: holy7c19316
slave2 (rank 2, comm 1) of size 8 is running on: holy7c19316
slave3 (rank 3, comm 1) of size 8 is running on: holy7c19316
slave4 (rank 4, comm 1) of size 8 is running on: holy7c19316
slave5 (rank 5, comm 1) of size 8 is running on: holy7c19316
slave6 (rank 6, comm 1) of size 8 is running on: holy7c19316
slave7 (rank 7, comm 1) of size 8 is running on: holy7c19316
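
The example above only asks each slave to identify itself and to generate a few random numbers. To distribute real work, Rmpi provides functions such as mpi.bcast.Robj2slave() and the parallel apply family (e.g., mpi.parSapply()), which the example itself does not use. The following self-contained sketch illustrates one common pattern; the function sim_mean and the vector sizes are hypothetical names chosen for illustration:

# Spawn slaves as in the example above
library(Rmpi)
ns <- mpi.universe.size() - 1
mpi.spawn.Rslaves(nslaves=ns)

# A toy worker function: the mean of n random normal draws
sim_mean <- function(n) mean(rnorm(n))

# Ship the function to every slave, then apply it across inputs in parallel
mpi.bcast.Robj2slave(sim_mean)
sizes <- c(1e3, 1e4, 1e5, 1e6)
results <- mpi.parSapply(sizes, sim_mean)
print(results)

# Clean up and exit
mpi.close.Rslaves()
mpi.quit()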

## Resources

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). Permissions beyond the scope of this license may be available at Attribution.