namd

About

NAMD is a parallel, object-oriented, molecular dynamics code designed for high-performance simulation of large biomolecular systems.

Versions and Availability

Softenv Keys for namd on all clusters
Machine Version Softenv Key
supermike2 2.9 +NAMD-2.9-Intel-13.0.0-openmpi-1.6.2
supermike2 2.9 +NAMD-2.9-Intel-13.0.0-openmpi-1.6.2-CUDA-4.2.9
Softenv FAQ

The information here is applicable to LSU HPC and LONI systems.

Softenv

SoftEnv is a utility designed to help users manage complex user environments with potentially conflicting application versions and libraries.

System Default Path

When a user logs in, the system /etc/profile or /etc/csh.cshrc (depending on login shell, and mirrored from csm:/cfmroot/etc/profile) calls /usr/local/packages/softenv-1.6.2/bin/use.softenv.sh to set up the default path via the SoftEnv database.

SoftEnv looks for a user's ~/.soft file and updates the variables and paths accordingly.

Viewing Available Packages

Using the softenv command, a user may view the list of available packages. Note that there is currently no guarantee that a listed package is actually available or working on the particular machine. Every attempt is made to present an identical environment on all of the LONI clusters, but sometimes this is not the case.

Example,

$ softenv
These are the macros available:
*   @default
These are the keywords explicitly available:
+amber-8                       Applications: 'Amber', version: 8 Amber is a
+apache-ant-1.6.5              Ant, Java based XML make system version: 1.6.
+charm-5.9                     Applications: 'Charm++', version: 5.9 Charm++
+default                       this is the default environment...nukes /etc/
+essl-4.2                      Libraries: 'ESSL', version: 4.2 ESSL is a sta
+gaussian-03                   Applications: 'Gaussian', version: 03 Gaussia
....
Listing of Available Packages

See Packages Available via SoftEnv on LSU HPC and LONI.

For a more accurate, up-to-date list, use the softenv command.
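
For example, to narrow the listing to NAMD-related keys, the output can be filtered with grep (a minimal sketch; the exact key names vary by cluster):

$ softenv | grep -i namd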

Caveats

Currently there are some caveats to using this tool.

  1. Packages might be out of sync between what is listed and what is actually available.
  2. The resoft and soft utilities are not always functional; to update the environment for now, log out and log back in after modifying the ~/.soft file.
Availability

softenv is available on all LSU HPC and LONI clusters to all users, both in interactive login sessions (i.e., just logging into the machine) and in the batch environment created by the PBS job scheduler on Linux clusters and by LoadLeveler on AIX clusters.

Packages Availability

This information can be viewed using the softenv command:

% softenv
Managing Environment with SoftEnv

The file ~/.soft in the user's home directory is where the different packages are managed. Add the +keyword for a package into your .soft file. For instance, if one wants to add the Amber Molecular Dynamics package to their environment, the end of the .soft file should look like this:

+amber-8

@default

To update the environment after modifying this file, one simply uses the resoft command:

% resoft
Module Names for namd on all clusters
Machine Version Module
qb 2.10b1 namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0
qb 2.9 namd/2.9/INTEL-14.0.2-ibverbs
smic 2.10 namd/2.10/INTEL-14.0.2-ibverbs
smic 2.10 namd/2.10/INTEL-14.0.2-ibverbs-mic
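
On clusters that use modules rather than SoftEnv (e.g. QB2 and SuperMIC), an up-to-date list can be queried directly with the module command; a minimal sketch using one of the module names listed above:

$ module avail namd
$ module add namd/2.10/INTEL-14.0.2-ibverbs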

Usage

Depending on which cluster it is installed on, NAMD may or may not require MPI to run.

Non-MPI

On SuperMIC, use "charmrun" to run NAMD. Below is a sample script that runs NAMD on 4 nodes (80 CPU cores and 8 Xeon Phi co-processors):

#!/bin/bash

#PBS -A hpc_smictest3
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=20
#PBS -q checkpt

cd $PBS_O_WORKDIR
module add namd/2.10/INTEL-14.0.2-ibverbs-mic

# Build a charmrun nodelist file from the PBS node file (one "host <name>" line per node)
for node in `cat $PBS_NODEFILE | uniq`; do echo host $node; done > hostfile

# Launch 80 NAMD processes (4 nodes x 20 cores), starting remote processes over ssh
charmrun ++p 80 ++nodelist ./hostfile ++remote-shell ssh `which namd2` apoa1.namd

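Assuming the script above is saved as namd_mic.pbs (a hypothetical file name), it can be submitted to the scheduler with qsub:

$ qsub namd_mic.pbs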

MPI

Use "mpirun" to run NAMD (e.g. on QB2). Below is a sample script which runs NAMD with 4 nodes (80 CPU cores):

#!/bin/bash

#PBS -A your_allocation_name
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=20
#PBS -q checkpt

cd $PBS_O_WORKDIR
module add namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0

# Launch 80 MPI ranks (4 nodes x 20 cores) using the PBS node file
mpirun -n 80 -f $PBS_NODEFILE `which namd2` apoa1.namd
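
The rank count passed to mpirun must match nodes x ppn (here 4 x 20 = 80). To avoid hard-coding it, the count can also be derived from the PBS node file, as in this sketch:

nprocs=`wc -l < $PBS_NODEFILE`
mpirun -n $nprocs -f $PBS_NODEFILE `which namd2` apoa1.namd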

On Super Mike 2, first make sure that the proper keys are present in your ~/.soft file:

+fftw-3.3.3-Intel-13.0.0-openmpi-1.6.2
+NAMD-2.9-Intel-13.0.0-openmpi-1.6.2
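
After adding these keys, update the environment with resoft (or log out and back in, as described in the SoftEnv notes above):

% resoft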

Then run NAMD using a script similar to this one:

#!/bin/bash

#PBS -A hpc_your_allocation
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=16
#PBS -q checkpt

cd $PBS_O_WORKDIR

# Launch 64 MPI ranks (4 nodes x 16 cores) using the PBS node file
mpirun -n 64 -hostfile $PBS_NODEFILE `which namd2` apoa1.namd

GPU

To run NAMD with GPU support (e.g. on QB2), use the script below as a reference. The example data and detailed instructions can be downloaded from the NAMD tutorial titled "GPU Accelerated Molecular Dynamics Simulation, Visualization, and Analysis".

#!/bin/bash

#PBS -A your_allocation_name
#PBS -l walltime=2:00:00
#PBS -l nodes=4:ppn=20
#PBS -q checkpt

cd $PBS_O_WORKDIR
module add namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0

# Count the total number of allocated cores from the PBS node file
nprocs=`wc -l $PBS_NODEFILE | awk '{print $1}'`
# Launch one NAMD process per core using the CUDA-enabled build
mpirun -n $nprocs -f $PBS_NODEFILE /usr/local/packages/namd/2.10b1/CUDA-65-INTEL-140-MVAPICH2-2.0/namd2 apoa1.namd

Resources
