openmpi
Table of Contents
Versions and Availability
About the Software
An open source Message Passing Interface implementation.
Usage
- On clusters using softenv, set up your .soft file to select the library version and compilers you want to use for building your code. Keep in mind that softenv keys take effect in the order they appear. The following shows how to select an OpenMPI library and use it with the GNU gcc compiler:
  +openmpi-1.3.4-gcc-4.3.2
  +gcc-4.3.2
  @default
- On clusters using modules, set up your ~/.modules file to load the openmpi library version you want to use. For example:
  module load openmpi/1.8.1/INTEL-14.0.2
  Do not simply copy these example keys and module names, as they are subject to change; use the softenv or module av command on the cluster to verify them before use.
- The MPI compiler wrapper, mpicc or mpif90, will use the associated compiler and link with OpenMPI with no further ado (a short compile sketch follows the example PBS script below).
- Run with:
mpirun -machinefile $PBS_NODEFILE -np $NPROCS /path/to/executable
- An example PBS script can be viewed below.
This is an example of running a program compiled with the OpenMPI library.
#!/bin/bash
# No shell commands before PBS is set up.
#
# "workq" is the default job queue.
#PBS -q workq
#
# Set the appropriate project allocation code
#PBS -A ALLOCATION_CODE
#
# Set number of nodes and number of processors on each node
# to be used. See cluster user guide for corresponding ppn number
#PBS -l nodes=4:ppn=16
#
# Set time job is allowed to run in hh:mm:ss
#PBS -l walltime=00:15:00
#
# Send stdout to a named file
#PBS -o OUT_NAME
#
# Merge stderr messages with stdout
#PBS -j oe
#
# Give the job a name for easier tracking
#PBS -N JOB_NAME
#
# Shell commands may begin here
export WORK_DIR=/work/$USER/path
cd $WORK_DIR
export NPROCS=`wc -l $PBS_NODEFILE |gawk '//{print $1}'`
mpirun -machinefile $PBS_NODEFILE -np $NPROCS /path/to/your_executable
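Before submitting a full job, it can help to verify the toolchain interactively on a login or interactive node. The lines below are a minimal sketch, not part of the cluster documentation: the module name is the example shown above (verify it with module av first), and hello.c / hello.f90 are placeholder source file names.

$ module load openmpi/1.8.1/INTEL-14.0.2   # or set the equivalent softenv keys
$ mpicc -o hello hello.c                   # C: the wrapper supplies the OpenMPI flags and libraries
$ mpif90 -o hello_f hello.f90              # Fortran: same idea with the Fortran wrapper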
Portable Batch System: qsub
All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS using the qsub command. A PBS job file is basically a shell script which also contains directives for PBS.
Usage
$ qsub job_script
Where job_script is the name of the file containing the script.
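For example (the script name and the returned job ID below are placeholders; the exact ID format depends on the cluster's PBS server):

$ qsub myjob.pbs
12345.pbs-server

The printed identifier is what PBS uses to refer to the job in later commands such as qstat.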
PBS Directives
PBS directives take the form:
#PBS -X value
Where X is one of many single letter options, and value is the desired setting. All PBS directives must appear before any active shell statement.
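For instance, in the sketch below (the queue and walltime values are only illustrative), the first directive is honored, but the second is ignored because it comes after the first active shell statement:

#!/bin/bash
#PBS -q workq                  # read by PBS: appears before any shell command
echo "setting up"              # first active shell statement
#PBS -l walltime=00:10:00      # ignored: treated as an ordinary comment from here on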
Example Job Script
#!/bin/bash
#
# Use "workq" as the job queue, and specify the allocation code.
#
#PBS -q workq
#PBS -A your_allocation_code
#
# Assuming you want to run 16 processes, and each node supports 4 processes,
# you need to ask for a total of 4 nodes. The number of processes per node
# will vary from machine to machine, so double-check that you have the right
# values before submitting the job.
#
#PBS -l nodes=4:ppn=4
#
# Set the maximum wall-clock time. In this case, 10 minutes.
#
#PBS -l walltime=00:10:00
#
# Specify the name of a file which will receive all standard output,
# and merge standard error with standard output.
#
#PBS -o /scratch/myName/parallel/output
#PBS -j oe
#
# Give the job a name so it can be easily tracked with qstat.
#
#PBS -N MyParJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
#   1. Copy the necessary files from your home directory to your scratch directory.
#   2. Execute in your scratch directory.
#   3. Copy any necessary files back to your home directory.

# Let's mark the time things get started.
date

# Set some handy environment variables.
export HOME_DIR=/home/$USER/parallel
export WORK_DIR=/scratch/myName/parallel

# Set a variable that will be used to tell MPI how many processes will be run.
# This makes sure MPI gets the same information provided to PBS above.
export NPROCS=`wc -l $PBS_NODEFILE |gawk '//{print $1}'`

# Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".
cp $HOME_DIR/hydro $WORK_DIR
cd $WORK_DIR
mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

# Mark the time processing ends.
date

# And we're out'a here!
exit 0
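Once the script above is submitted, the job can be followed by the name given with #PBS -N. A short sketch, assuming the script was saved as parallel_job.pbs (a placeholder name):

$ qsub parallel_job.pbs                  # submit; PBS prints the assigned job ID
$ qstat -u $USER                         # the job appears under the name MyParJob
$ cat /scratch/myName/parallel/output    # merged stdout/stderr, per the -o and -j directives above, once the job finishes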
Resources
Last modified: September 10 2020 11:58:50.