AMBER

AMBER is a package of molecular simulation programs which includes source code and demos.

AMBER is not installed for general use on Bridges, but if you have your own license, you can install it yourself.  We may be able to help you install it. Many users run AMBER on the GPU nodes this way.

Sample scripts for AMBER use are available in directory /opt/packages/examples/amber on Bridges.

Installing AMBER 

General installation instructions are given in the Reference Manual available from the AMBER web site.

Step-by-step instructions for installing AMBER successfully on Bridges are given below.  These instructions can also be found on Bridges in directory /opt/packages/examples/amber.

Extract the source code

First, extract the AMBER code into a directory of your choosing.  You can name the directory anything you like; you may want to choose a name that explicitly reflects the build options used (CPU vs. GPU, MPI-enabled or not, which compiler).

When you have downloaded the AMBER tar files, use the following commands to extract the source code. Substitute your user name for username, and use any directory name you choose for new-amber-directory.

tar xvfj AmberTools18.tar.bz2
tar xvfj Amber18.tar.bz2
mv amber18/ /pylon5/username/new-amber-directory
export AMBERHOME=/pylon5/username/new-amber-directory
cd $AMBERHOME
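
Note that the export command above sets AMBERHOME only for your current shell session. If you would like it set automatically in future sessions, one option is to add the same line to your shell startup file, for example:

# in your ~/.bashrc (bash startup file), using the directory you chose above
export AMBERHOME=/pylon5/username/new-amber-directory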

Create your executable

Follow the instructions below for the type of executable you need.

 

Serial CPU installation

To run either serial or parallel codes on CPUs, first you must do the serial CPU installation.

Compile AMBER for serial CPU use by following these steps.  Here we are using the Intel compilers.

Load this or the latest Python module (see available modules by typing module avail python):

module load python3/intel_3.6.3

When you type module list, you should see:

Currently Loaded Modulefiles:
 1) psc_path/1.1   2) slurm/default   3) intel/17.4   4) xdusage/2.0-4   5) python3/intel_3.6.3

Then type these commands:

./configure intel
source /pylon5/username/new-amber-directory/amber.sh
make install
 

Test your installation to be sure it is correct. Type:

make test

You should see something like this:

# Summary of test suite:
177 file comparisons passed
1 file comparisons failed       # looks like precision error; very small difference between expected and received
0 tests experienced errors
Test log file saved as /apps/amber18_cpu_intel/logs/test_amber_serial/2018-07-05_18-51-46.log
Test diffs file saved as /apps/amber18_cpu_intel/logs/test_amber_serial/2018-07-05_18-51-46.diff
# Summary of AmberTools serial tests:
2201 file comparisons passed
3 file comparisons failed       # looks like slight storage amount differences; also completely different output received for one file...
5 tests experienced errors
Test log file saved as /apps/amber18_cpu_intel/logs/test_at_serial/2018-07-05_17-50-41.log
Test diffs file saved as /apps/amber18_cpu_intel/logs/test_at_serial/2018-07-05_17-50-41.diff

The failed comparisons and test errors above can generally be ignored, but check the diffs file to make sure in your particular case.

Parallel CPU installation

To run either serial or parallel codes on CPUs, first you must do the serial CPU installation. When that is completed, follow these steps to create an executable for parallel jobs.

Compile AMBER for CPU use with MPI by following these steps.  Here we are using the Intel compilers and Intel MPI.

./configure -intelmpi intel
make install

Test your installation to be sure it is correct.

export DO_PARALLEL="mpirun -np 8"
make test

You should see something like this:

# Summary of test suite:
249 file comparisons passed
2 file comparisons failed
0 tests experienced an error
Test log file saved as /apps/amber18_cpu_intel/logs/test_amber_parallel/2018-07-05_21-28-37.log
Test diffs file saved as /apps/amber18_cpu_intel/logs/test_amber_parallel/2018-07-05_21-28-37.diff
# Summary of AmberTools parallel tests:
1010 file comparisons passed
30 file comparisons failed
23 tests experienced errors
Test log file saved as /apps/amber18_cpu_intel/logs/test_at_parallel/2018-07-05_21-10-13.log
Test diffs file saved as /apps/amber18_cpu_intel/logs/test_at_parallel/2018-07-05_21-10-13.diff

Serial GPU installation

To run either serial or parallel codes on GPUs, first you must do the serial GPU installation.

To run GPU-accelerated AMBER code, install AMBER in an interactive GPU session started with the egress option, so that the session can access the internet for updates.

First, request an interactive GPU session:

interact -gpu --egress

Once your interact session has started, compile the serial GPU code, using the GNU compilers:

export AMBERHOME=/pylon5/username/new-amber-directory
module load cuda
module list

The module list command confirms that the cuda module is loaded. The output should be similar to:

Currently Loaded Modulefiles:
 1) psc_path/1.1 2) slurm/default 3) intel/17.4 4) cuda/8.0

Then, create the executable:

./configure -cuda gnu
source /pylon5/username/new-amber-directory/amber.sh
make install

Test your installation to be sure it is correct.

make test

You should see something like this:

# Summary of test suite:
204 file comparisons passed
0 file comparisons failed
0 tests experienced errors
Test log file saved as /apps/amber18_gpu_gnu/logs/test_amber_cuda/2018-07-06_17-06-10.log
No test diffs to save!

Parallel GPU installation

To run either serial or parallel codes on GPUs, first you must do the serial GPU installation. When that is completed, follow these steps to create an executable for parallel jobs.

If you are not already in an interactive GPU session, start one:

interact -gpu --egress

Once your interact session has started, compile the parallel GPU code, using the GNU compilers:

./configure -cuda -mpi gnu
source /pylon5/username/new-amber-directory/amber.sh
make install
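
As with the other builds, you can test the parallel GPU installation before using it. A minimal sketch, assuming the cuda module is still loaded and following the same pattern as the parallel CPU tests; the process count shown is only an example, and you should match it to the GPUs you requested:

export DO_PARALLEL="mpirun -np 2"
make test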

Running AMBER

When you have installed AMBER in your defined $AMBERHOME directory, you can run jobs with it by creating batch jobs and submitting them to the GPU or GPU-shared partitions.  See the Running jobs section of the Bridges User Guide for more information on creating and submitting batch jobs.
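
As an illustration, here is a minimal sketch of a batch script for a GPU-shared run. The GPU resource request, walltime, input file names, and use of pmemd.cuda are assumptions for the example; adapt them to your installation, your allocation, and the current Bridges configuration.

#!/bin/bash
#SBATCH --partition=GPU-shared        # GPU-shared partition
#SBATCH --gres=gpu:p100:1             # request one GPU; type is an assumption
#SBATCH --ntasks-per-node=1
#SBATCH --time=01:00:00               # adjust to your job

# point to your own AMBER installation and set up its environment
export AMBERHOME=/pylon5/username/new-amber-directory
source $AMBERHOME/amber.sh
module load cuda

# run the GPU version of pmemd on example input files (mdin, prmtop, inpcrd)
cd $SLURM_SUBMIT_DIR
$AMBERHOME/bin/pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt

Submit the script with sbatch. For an MPI-parallel GPU run you would typically launch pmemd.cuda.MPI with mpirun instead of pmemd.cuda.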

 

 
