NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM and X-PLOR.
NAMD is distributed free of charge with source code. You can build NAMD yourself or download binaries for a wide variety of platforms. Tutorials on the software website show you how to use NAMD and VMD for biomolecular modelling.
More information: NAMD homepage (external site)
Before you begin
Your Pawsey user account must be a member of the namd group in LDAP to access the module.
NAMD is licensed software. Read the NAMD licensing agreement (external site), and then let us know that you agree to the licence. To give your written confirmation, either send an email to email@example.com or open a ticket using the User Support Portal.
NAMD (at least version 2.10) is compiled to support SMP (shared-memory and network-based parallelism). To achieve optimal performance, you must choose the options passed to both srun and NAMD itself carefully. Running NAMD with MPI parallelism alone can result in very poor performance, and you might need to experiment with hybrid OpenMP/MPI options.
For more information, refer to Shared-Memory and Network-Based Parallelism (external site).
How to run NAMD on Setonix
To run NAMD on Setonix, both the GNU Programming Environment and namd modules must be loaded.
$ module load namd/2.14
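Loading both modules might look like the following sketch. The PrgEnv-gnu module name is an assumption; check `module avail PrgEnv` on Setonix for the exact name, and skip the first line if the GNU environment is already the default.

```shell
# Ensure the GNU Programming Environment is active before loading NAMD.
# The PrgEnv-gnu module name is an assumption; verify with `module avail PrgEnv`.
module load PrgEnv-gnu
module load namd/2.14
```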
The NAMD executable is namd2, and multinode NAMD jobs on Setonix require the +ofi_runtime_tcp option to run successfully.
Example: Slurm batch scripts
A problem with a modest number of atoms (say 50,000) can be run in the following way. Use srun to specify 16 MPI tasks distributed across 8 nodes (2 MPI tasks per node, 1 per socket), with each MPI task bound to 1 socket (--cpu-bind=socket).
Note that the following arguments to NAMD itself are essential for optimal performance: +ppn 63 +pemap 1-63,65-127 +commap 0,64. They match the arguments given to srun in that they place the communication thread of each of the 2 MPI tasks on cores 0 and 64 of each node (+commap 0,64), with 63 worker threads placed appropriately within each socket (+pemap 1-63,65-127).
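Put together, the setup above could look like the following batch script. This is a sketch only: the partition, walltime, project account, and the input file name input.namd are placeholder assumptions, while the srun and namd2 options are those described above.

```shell
#!/bin/bash --login
# Sketch of a Slurm batch script for the 16-task, 8-node case described above.
# Partition, walltime, account, and input file name are placeholders.
#SBATCH --nodes=8
#SBATCH --ntasks=16
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=64
#SBATCH --time=1:00:00
#SBATCH --partition=work
#SBATCH --account=your-project

module load namd/2.14

# 2 MPI tasks per node, one bound to each socket; communication threads on
# cores 0 and 64, and 63 worker threads on the remaining cores of each socket.
srun -N 8 -n 16 -c 64 --cpu-bind=socket \
    namd2 +ppn 63 +pemap 1-63,65-127 +commap 0,64 +ofi_runtime_tcp input.namd
```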
Another example assigns one task per NUMA domain, with 8 tasks per node and 16 threads per task, for a total of 32 tasks across 4 nodes.
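That layout might be expressed as in the sketch below, assuming Setonix's 8 NUMA domains of 16 cores per node: the communication thread of each task sits on the first core of its NUMA domain, with 15 worker threads on the remaining cores. The account, partition, walltime, and input file name are again placeholders.

```shell
#!/bin/bash --login
# Sketch: one task per NUMA domain (8 per node), 16 cores each, across 4 nodes.
# Partition, walltime, account, and input file name are placeholders.
#SBATCH --nodes=4
#SBATCH --ntasks=32
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=16
#SBATCH --time=1:00:00
#SBATCH --partition=work
#SBATCH --account=your-project

module load namd/2.14

# Communication thread on the first core of each 16-core NUMA domain,
# 15 worker threads on the remaining cores of that domain.
srun -N 4 -n 32 -c 16 --cpu-bind=numa \
    namd2 +ppn 15 \
    +pemap 1-15,17-31,33-47,49-63,65-79,81-95,97-111,113-127 \
    +commap 0,16,32,48,64,80,96,112 \
    +ofi_runtime_tcp input.namd
```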