amber

Versions and Availability

About the Software

Amber is a suite of biomolecular simulation programs distributed together with AmberTools. A manual download is required for Amber. Spack will search your current directory for the download files. Alternatively, add the files to a mirror so that Spack can find them. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html. Note: only certain versions of AmberTools are compatible with Amber, and only the latest compatible AmberTools version for each Amber version is supported.
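
If the download files are in your current directory, a minimal sketch of the install looks like the following (the directory path and the Amber version shown are placeholders, not site defaults):

	# run the install from the directory that holds the manually downloaded tarballs
	cd /path/to/amber-downloads
	spack install amber@18

	# alternatively, register a directory containing the files as a local mirror
	# (the mirror must follow the layout described in the Spack documentation linked above)
	spack mirror add amber-local file:///path/to/amber-mirror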

Usage

Make sure the module key is matched with the corresponding versions of the compiler and MPI library. For instance, on SuperMike2, SuperMIC or QB2:

module load amber/18/INTEL-170-MVAPICH2-2.2
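
To see which Amber module keys are available on the cluster you are logged in to, and to confirm what is currently loaded, the standard Environment Modules queries can be used:

module avail amber
module list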

MPI

Note: the usual executable names are pmemd (serial, not recommended) and pmemd.MPI (parallel).

pmemd and pmemd.MPI in Amber 18 were built with Intel 17.0.0 and mvapich2 2.2. The module key "amber/18/INTEL-170-MVAPICH2-2.2" will load the corresponding versions of the compiler and MPI library as dependencies. Other versions of the Intel compiler and MPI libraries should be removed from the module list before loading the Amber 18 module key, as sketched below.
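
A minimal sketch of cleaning up the environment first (note that module purge unloads everything, including any defaults, so reload anything else your job needs afterwards):

	# unload previously loaded compiler/MPI modules, then load the Amber 18 key
	module purge
	module load amber/18/INTEL-170-MVAPICH2-2.2
	# verify that the Intel 17.0.0 and mvapich2 2.2 dependencies were pulled in
	module list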

On SuperMike2, SuperMIC and QB2, use "pmemd.MPI" to run Amber. Below is a sample script which runs Amber on 2 nodes (40 CPU cores); a submission example follows the script:

	#!/bin/bash
	#PBS -A my_allocation
	#PBS -q checkpt
	#PBS -l nodes=2:ppn=20
	#PBS -l walltime=HH:MM:SS
	#PBS -j oe
	#PBS -N JOB_NAME
	#PBS -V

	cd $PBS_O_WORKDIR
	mpirun -np 40 $AMBERHOME/bin/pmemd.MPI -O -i mdin.CPU -o mdout -p prmtop -c inpcrd
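
To submit the job, save the script to a file (the name below is only an example) and pass it to qsub; qstat can be used to monitor it:

	qsub amber_cpu.pbs
	qstat -u $USER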
    

GPU acceleration

Note: the usual executable names for GPU acceleration in Amber 16 and Amber 18 are pmemd.cuda (serial) and pmemd.cuda.MPI (parallel).

pmemd.cuda and pmemd.cuda.MPI in Amber 16 were built with Intel 15.0.0 and CUDA 7.5. Please load the Intel 15.0.0 compiler and CUDA 7.5 into your user environment in order to run pmemd.cuda.
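
As a sketch, preparing the Amber 16 GPU environment might look like the following; the exact module key names for the Intel 15.0.0 compiler and CUDA 7.5 differ between clusters, so check module avail first and adjust accordingly:

	# find the exact module names on your cluster
	module avail intel cuda
	# illustrative names only, adjust to what module avail reports
	module load intel/15.0.0 cuda/7.5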

pmemd.cuda and pmemd.cuda.MPI in Amber 18 were built with Intel 17.0.0, mvapich2 2.2 and CUDA 9. The module key "amber/18/INTEL-170-MVAPICH2-2.2" will load these dependencies. Other versions of the Intel compiler, MPI libraries and CUDA should be removed from the module list before loading the Amber 18 module key.

Please do not attempt to run regular GPU MD runs across multiple nodes: the InfiniBand interconnect is too slow to keep up with the computation speed of current GPUs.

Using the hybrid or v100 queue is required when running GPU simulations with Amber 18 on SuperMIC.

On SuperMIC and QB2, use "pmemd.cuda" to run Amber 16 with GPU acceleration in serial. Below is a sample script which runs Amber 16 on 1 node:

		#!/bin/bash
		#PBS -A my_allocation
		#PBS -q hybrid
		#PBS -l nodes=1:ppn=20
		#PBS -l walltime=HH:MM:SS
		#PBS -j oe
		#PBS -N JOB_NAME
		#PBS -V

		cd $PBS_O_WORKDIR
		$AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu -p prmtop -c inpcrd
    

On SuperMIC, GPU-accelerated runs must use a hybrid or v100 node. Note that pmemd.cuda is a serial program, so no MPI launcher such as mpirun is required (or use mpirun -np 1, as shown below).
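
If your workflow launches everything through mpirun, the equivalent single-process invocation is simply:

	mpirun -np 1 $AMBERHOME/bin/pmemd.cuda -O -i mdin.GPU -o mdout_gpu -p prmtop -c inpcrd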

On QB2, since each compute node has two GPUs, "pmemd.cuda.MPI" can be used to run Amber 16 with GPU acceleration in parallel. Below is a sample script which runs Amber 16 on 1 node (2 GPUs) on QB2:

  		#!/bin/bash
  		#PBS -A my_allocation
  		#PBS -q hybrid
  		#PBS -l nodes=1:ppn=20
  		#PBS -l walltime=HH:MM:SS
  		#PBS -j oe
  		#PBS -N JOB_NAME
  		#PBS -V

  		cd $PBS_O_WORKDIR
		mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin.GPU -o mdout_2gpu -p prmtop -c inpcrd -ref inpcrd
      

Use -np # where # is the number of GPUs you are requesting, NOT the number of CPU cores. Note that pmemd.cuda.MPI is significantly faster than pmemd.cuda only for production runs of large systems.

Resources

  • The Amber Home Page has a variety of on-line resources available, including manuals and tutorials.

Last modified: September 10 2020 11:58:50.