MPICH is a collaboratively developed open-source implementation of MPI distributed under the BSD license. MPICH is based on the MPI-1 standard.

Note: MPICH is provided for backwards compatibility. Any project developing new software should consider using MPICH2 or MVAPICH2.

Versions and Availability

Softenv Keys for mpich on supermike2
Machine Version Softenv Key
supermike2 3.0.2 +mpich-3.0.2-Intel-13.0.0
SoftEnv FAQ

The information here is applicable to LSU HPC and LONI systems.


SoftEnv is a utility that helps users manage complex environments with potentially conflicting application versions and libraries.

System Default Path

When a user logs in, the system /etc/profile or /etc/csh.cshrc (depending on the login shell, and mirrored from csm:/cfmroot/etc/profile) calls /usr/local/packages/softenv-1.6.2/bin/ to set up the default path via the SoftEnv database.

SoftEnv looks for a user's ~/.soft file and updates the variables and paths accordingly.
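As one possible layout (key names vary by machine, so verify them with the softenv command first), a minimal ~/.soft might contain the default macro followed by a key from the table above:

```
@default
+mpich-3.0.2-Intel-13.0.0
```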

Viewing Available Packages

Using the softenv command, a user may view the list of available packages. Note that there is currently no guarantee that every package listed is actually installed and working on a particular machine. Every attempt is made to present an identical environment on all of the LONI clusters, but this is not always the case.


$ softenv
These are the macros available:
*   @default
These are the keywords explicitly available:
+amber-8                       Applications: 'Amber', version: 8 Amber is a
+apache-ant-1.6.5              Ant, Java based XML make system version: 1.6.
+charm-5.9                     Applications: 'Charm++', version: 5.9 Charm++
+default                       this is the default environment...nukes /etc/
+essl-4.2                      Libraries: 'ESSL', version: 4.2 ESSL is a sta
+gaussian-03                   Applications: 'Gaussian', version: 03 Gaussia
Listing of Available Packages

See Packages Available via SoftEnv on LSU HPC and LONI.

For a more accurate, up to date list, use the softenv command.
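Since the full listing is long, on a cluster you would typically filter it, e.g. with `softenv | grep -i mpich`. As a self-contained stand-in here, the same pipeline is applied to a small sample of the listing shown above (the key names are illustrative and vary by cluster):

```shell
# Sample of a softenv listing; on a real cluster, pipe the actual
# `softenv` output through grep instead.
LISTING='+amber-8
+apache-ant-1.6.5
+mpich-3.0.2-Intel-13.0.0'
printf '%s\n' "$LISTING" | grep -i mpich
```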


Currently there are some caveats to using this tool.

  1. The list of packages shown may be out of sync with what is actually available on a given machine.
  2. The resoft and soft utilities are not currently functional; to update the environment for now, log out and log back in after modifying the ~/.soft file.

softenv is available to all users on all LSU HPC and LONI clusters, both in interactive login sessions (i.e., just logging into the machine) and in the batch environment created by the PBS job scheduler on Linux clusters and by LoadLeveler on AIX clusters.

Packages Availability

This information can be viewed using the softenv command:

% softenv
Managing Environment with SoftEnv

The file ~/.soft in the user's home directory is where the different packages are managed. Add the +keyword into your .soft file. For instance, if one wants to add the Amber Molecular Dynamics package into their environment, the end of the .soft file should look like this:

+amber-8

To update the environment after modifying this file, one simply uses the resoft command:

% resoft
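The edit-then-resoft cycle can be sketched as follows. A temporary file stands in for the real ~/.soft here, so the sketch is side-effect free; resoft itself only exists on the clusters:

```shell
# Stand-in for ~/.soft; on a cluster the real path is $HOME/.soft.
SOFT_FILE=$(mktemp)
echo '+amber-8' >> "$SOFT_FILE"    # on a cluster: echo '+amber-8' >> ~/.soft
SOFT_CONTENT=$(cat "$SOFT_FILE")   # confirm the key landed in the file
rm -f "$SOFT_FILE"
# on a cluster, follow with: resoft
```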


  1. Set up your .soft file to select the library version and the compilers you want to use for building and executing your code. Keep in mind that keys take effect in the order they appear. The following key selects the MPICH library for use with the Intel C compiler. Do not simply copy it, as keys are subject to change; use the softenv command to verify them before use.

     +mpich-1.2.7-intel-11.1

  2. The compiler wrapper, mpicc, will use icc and link against MPICH automatically.
  3. Run with: mpirun -machinefile $PBS_NODEFILE -np $NPROCS /path/to/executable
  4. An example PBS script:
    # No shell commands until PBS setup is completed!
    # Provide your allocation code.
    # "workq" is the default job queue.
    #PBS -q workq
    # Set to your email address.
    # PPN should be 4, 8, 16, 20, etc., depending on machine/queue you are using.
    #PBS -l nodes=1:ppn=4
    # Set amount of time job may run in hh:mm:ss
    #PBS -l walltime=00:10:00
    # Have PBS pass all shell variables to the job environment
    #PBS -V
    # Send stdout and stderr to named files.
    #PBS -o MPI_test.out
    #PBS -e MPI_test.err
    # Give the job a name to make tracking it easier
    #PBS -N MPI_test 
    # Shell commands may begin here.
    # Your executable should either be in your path, or defined explicitly.
    # Here we'll assume a custom program named "hello" that exists in the
    # work directory:
    export EXEC=hello
    export WORK_DIR=/work/uname/path
    export NPROCS=$(wc -l < $PBS_NODEFILE)
    cd $WORK_DIR 
    # The order in which options are provided is important:
    mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/$EXEC 
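The NPROCS line in the script works because $PBS_NODEFILE lists one hostname per allocated processor slot, so counting its lines gives the number of MPI processes to launch. A minimal sketch with a stand-in node file (the real file is created by PBS at run time):

```shell
# Stand-in for the PBS-provided node file: one hostname per core slot.
PBS_NODEFILE=$(mktemp)
printf 'node01\nnode01\nnode02\nnode02\n' > "$PBS_NODEFILE"

# Count the lines to get the MPI process count (two slots on each of
# two nodes here, matching ppn=2 on 2 nodes).
NPROCS=$(wc -l < "$PBS_NODEFILE")

rm -f "$PBS_NODEFILE"
```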


  • Documentation site. You will have to use the MPICH2 documentation and pay attention only to those features supported by MPI-1.

Last modified: November 11 2014 16:58:42.