OpenFOAM
Versions and Availability
Module Names for openfoam on smic
| Machine | Version | Module Name |
|---|---|---|
| smic | 1912 | openfoam/1912/intel-19.0.5-mvapich-2.3.3 |
| smic | v9 | openfoam/v9/gcc-9.3.0-mvapich-2.3.3 |
Module FAQ
The information here is applicable to LSU HPC and LONI systems.
Shells
A user may choose between using /bin/bash and /bin/tcsh. Details about each shell follow.
/bin/bash
System resource file: /etc/profile
When a user accesses the shell, the following user files are read in if they exist (in order):
- ~/.bash_profile (anything sent to STDOUT or STDERR will cause things like rsync to break)
- ~/.bashrc (interactive login only)
- ~/.profile
When a user logs out of an interactive session, the file ~/.bash_logout is executed if it exists.
The default value of the environment variable PATH is set automatically using Modules. See below for more information.
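As a minimal illustration, a hypothetical ~/.bash_profile can guard interactive-only output so that non-interactive programs such as rsync keep working; the welcome message and EDITOR choice below are assumptions for the sketch, not site defaults:

```bash
# Hypothetical ~/.bash_profile sketch.
# Anything printed unconditionally here would break tools like rsync and scp,
# so produce output only when the shell is interactive.
if [ -n "$PS1" ]; then
    echo "Welcome, $USER"      # safe: runs in interactive shells only
fi
export EDITOR=vim              # silent settings are always safe
```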
/bin/tcsh
The file ~/.cshrc is used to customize the user's environment if their login shell is /bin/tcsh.
Modules
Modules is a utility which helps users manage the complex business of setting up their shell environment in the face of potentially conflicting application versions and libraries.
Default Setup
When a user logs in, the system looks for a file named .modules in their home directory. This file contains module commands to set up the initial shell environment.
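For example, a hypothetical ~/.modules file on smic could load one of the OpenFOAM builds listed in the table above at every login:

```bash
# Hypothetical ~/.modules example: module commands run at each login.
module load openfoam/1912/intel-19.0.5-mvapich-2.3.3
```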
Viewing Available Modules
The command
$ module avail
displays a list of all the modules available. The list will look something like:
```
--- some stuff deleted ---
velvet/1.2.10/INTEL-14.0.2    vmatch/2.2.2
---------------- /usr/local/packages/Modules/modulefiles/admin -----------------
EasyBuild/1.11.1              GCC/4.9.0            INTEL-140-MPICH/3.1.1
EasyBuild/1.13.0              INTEL/14.0.2         INTEL-140-MVAPICH2/2.0
--- some stuff deleted ---
```
The module names take the form appname/version/compiler, providing the application name, the version, and information about how it was compiled (if needed).
Managing Modules
Besides avail, there are other basic module commands to use for manipulating the environment. These include:
| Command | Description |
|---|---|
| add/load mod1 mod2 ... modn | Add modules |
| rm/unload mod1 mod2 ... modn | Remove modules |
| switch/swap mod | Switch or swap one module for another |
| display/show | List modules loaded in the environment |
| avail | List available module names |
| whatis mod1 mod2 ... modn | Describe listed modules |
The -h option to module will list all available commands.
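For instance, a session that loads, inspects, swaps, and removes the smic builds listed above might look like this (a sketch; command output is omitted):

```
$ module load openfoam/v9/gcc-9.3.0-mvapich-2.3.3
$ module whatis openfoam/v9/gcc-9.3.0-mvapich-2.3.3
$ module swap openfoam/v9/gcc-9.3.0-mvapich-2.3.3 openfoam/1912/intel-19.0.5-mvapich-2.3.3
$ module unload openfoam/1912/intel-19.0.5-mvapich-2.3.3
```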
Did not find the version you want to use?
If a software package you would like to use for your research is not available on a cluster, you can request that it be installed. Software requests are evaluated by the HPC staff on a case-by-case basis. Before you send in a software request, please go through the information below.
Types of request
Depending on how many users need to use the software, software requests are divided into three types, each of which corresponds to the location where the software is installed:
- The user's home directory
- Software packages installed here will be accessible only to the user.
- It is suitable for software packages that will be used by a single user.
- Python, Perl and R modules should be installed here (see the sketch after this list).
- /project
- Software packages installed in /project can be accessed by a group of users.
- It is suitable for software packages that
- need to be shared by users from the same research group, or
- are bigger than the quota on the home file system.
- This type of request must be sent by the PI of the research group, who may be asked to apply for a storage allocation.
- /usr/local/packages
- Software packages installed under /usr/local/packages can be accessed by all users.
- It is suitable for software packages that will be used by users from multiple research groups.
- This type of request must be sent by the PI of a research group.
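As an illustration of a home-directory installation, Python modules can be installed with pip's --user flag; the package name below is only an example:

```bash
# Hypothetical example: install a Python module into your home directory,
# with no administrator rights required.
pip install --user numpy
```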
How to request
Please send an email to sys-help@loni.org with the following information:
- Your user name
- The name of the cluster where you want to use the requested software
- The name, version and download link of the software
- Specific installation instructions if any (e.g. compiler flags, variants and flavor, etc.)
- Why the software is needed
- Where the software should be installed (locally, /project, or /usr/local/packages), with a justification explaining how many users are expected to use it
Please note that, once the software is installed, testing and validation are the user's responsibility.
About the Software
OpenFOAM is a GPL-opensource C++ CFD-toolbox. This offering is supported by OpenCFD Ltd, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM trademark. OpenCFD Ltd has been developing and releasing OpenFOAM since its debut in 2004.
Usage
Running OpenFOAM through a Singularity container
In the future, OpenFOAM support on LSU and LONI clusters will mainly be provided through Singularity images. OpenFOAM Singularity images are currently built under the /home/admin/singularity directory. Example scripts showing how to run OpenFOAM through Singularity with MPI support are posted at this GitHub link: https://github.com/lsuhpchelp/singularity/tree/main/recipes/openfoam/10/cavity.of10 (this link uses OpenFOAM 10).
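A minimal sketch of such a run, assuming a hypothetical image file name under /home/admin/singularity and using the standard icoFoam solver as an example (check the directory and the GitHub examples above for the actual names):

```bash
# Hypothetical sketch: run an OpenFOAM solver inside a Singularity image with MPI.
SIMG=/home/admin/singularity/openfoam10.sif   # assumed image file name
mpirun -np 16 singularity exec $SIMG icoFoam -parallel
```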
Set up your environment to run OpenFOAM
To load OpenFOAM into your environment on clusters using softenv, follow these two steps:
- Add the corresponding softenv key to your .soft file and run resoft; you can use the "softenv -k OpenFOAM" command to find out what the keys are.
- The openfoam key makes some changes to the default path that override the path set by the @default key, so make sure you put the openfoam key before the @default key in your .soft file (see the sketch after this list).
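A hypothetical ~/.soft file would then look like this, where +openfoam_key is a placeholder for the actual key reported by "softenv -k OpenFOAM":

```
# Hypothetical ~/.soft sketch: the openfoam key must come before @default.
+openfoam_key
@default
```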
To load OpenFOAM into your environment on clusters using Modules, follow these two steps:
- Use the `module load openfoam/2.3.0/INTEL-140-MVAPICH2-2.0` command to load OpenFOAM into your environment; you can also add this command to your ~/.modules file. Use the "module av openfoam" command to find out what the module keys are. To query the openfoam module key, use `module disp openfoam/2.3.0/INTEL-140-MVAPICH2-2.0`.
- Source the OpenFOAM bashrc file with the `source $FOAM_BASH` command.
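Put together, a quick interactive check might look like this (icoFoam is used here only as an example solver):

```
$ module load openfoam/2.3.0/INTEL-140-MVAPICH2-2.0
$ source $FOAM_BASH
$ icoFoam -help    # verify the solver binaries are on your PATH
```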
Sample script
```bash
#!/bin/bash
#PBS -A your_allocation
#PBS -q checkpt
#PBS -l nodes=1:ppn=16
#PBS -l walltime=12:00:00
#PBS -V
#PBS -j oe
#PBS -N openfoam_test

NPROCS=`wc -l < $PBS_NODEFILE`
cd /path/to/your/openfoam/case/dir
mpirun -np $NPROCS -machinefile $PBS_NODEFILE {openfoamSolverName} -parallel
```
The script is then submitted using qsub:
$ qsub job_script
where job_script is the name you gave the script file.
QSub FAQ
Portable Batch System: qsub
All HPC@LSU clusters use the Portable Batch System (PBS) for production processing. Jobs are submitted to PBS using the qsub command. A PBS job file is basically a shell script which also contains directives for PBS.
Usage
$ qsub job_script
Where job_script is the name of the file containing the script.
PBS Directives
PBS directives take the form:
#PBS -X value
Where X is one of many single letter options, and value is the desired setting. All PBS directives must appear before any active shell statement.
Example Job Script
```bash
#!/bin/bash
#
# Use "workq" as the job queue, and specify the allocation code.
#
#PBS -q workq
#PBS -A your_allocation_code
#
# Assuming you want to run 16 processes, and each node supports 4 processes,
# you need to ask for a total of 4 nodes. The number of processes per node
# will vary from machine to machine, so double-check that you have the right
# values before submitting the job.
#
#PBS -l nodes=4:ppn=4
#
# Set the maximum wall-clock time. In this case, 10 minutes.
#
#PBS -l walltime=00:10:00
#
# Specify the name of a file which will receive all standard output,
# and merge standard error with standard output.
#
#PBS -o /scratch/myName/parallel/output
#PBS -j oe
#
# Give the job a name so it can be easily tracked with qstat.
#
#PBS -N MyParJob
#
# That is it for PBS instructions. The rest of the file is a shell script.
#
# PLEASE ADOPT THE EXECUTION SCHEME USED HERE IN YOUR OWN PBS SCRIPTS:
#
#   1. Copy the necessary files from your home directory to your scratch directory.
#   2. Execute in your scratch directory.
#   3. Copy any necessary files back to your home directory.

# Let's mark the time things get started.
date

# Set some handy environment variables.
export HOME_DIR=/home/$USER/parallel
export WORK_DIR=/scratch/myName/parallel

# Set a variable that will be used to tell MPI how many processes will be run.
# This makes sure MPI gets the same information provided to PBS above.
export NPROCS=`wc -l $PBS_NODEFILE | gawk '//{print $1}'`

# Copy the files, jump to WORK_DIR, and execute! The program is named "hydro".
cp $HOME_DIR/hydro $WORK_DIR
cd $WORK_DIR
mpirun -machinefile $PBS_NODEFILE -np $NPROCS $WORK_DIR/hydro

# Mark the time processing ends.
date

# And we're out'a here!
exit 0
```
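After submission, the job can be tracked with qstat, as the script's comments note; for example:

```
$ qsub job_script
$ qstat -u $USER
```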
Last modified: November 30 2023 22:50:15.