Setting Up Queuing Systems for Open MPI Parallel Execution

Some programs distributed with Schrödinger software, such as Piper and Quantum ESPRESSO, can use Open MPI for parallel execution and can operate with a number of queuing systems. Open MPI provides tight integration with many queuing systems via the PLS (Process Launch Subsystem) and RAS (Resource Allocation Subsystem) components. Loose integration, in which the queuing system is only responsible for allocating resources and dispatching the jobs, is also possible.

Instructions and requirements for the supported queuing systems are listed in the topics linked below.

Note: The queues that are set up using the instructions below should only be used for jobs that run under MPI. They should not be used for distributed computing jobs, such as distributed Glide, LigPrep, and Prime jobs.

Open MPI can create large temporary files, which are written to the location defined by TMPDIR, TEMP, or TMP, with a fallback to /tmp. To avoid performance problems, you should ensure that these files are written to a local file system with sufficient space, by setting one of these environment variables in the hosts file. For example:

env: TEMP=/mylocaldisk
env: SCHRODINGER_MPIRUN_OPTIONS=-x TEMP
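
In context, these lines would appear inside a host entry in the hosts file. A sketch of such an entry is shown below; the entry name, host name, and scratch path are placeholders, and the surrounding settings are illustrative rather than required:

```
# Hypothetical hosts file entry (names and paths are placeholders)
name:     cluster-mpi
host:     cluster-head
tmpdir:   /mylocaldisk
env:      TEMP=/mylocaldisk
env:      SCHRODINGER_MPIRUN_OPTIONS=-x TEMP
```

The -x option passed via SCHRODINGER_MPIRUN_OPTIONS exports the TEMP variable to the MPI processes launched on the compute nodes.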

For efficient queuing, it may be necessary to specify the number of processors (cores) per node as well as the total number of processors available. The Qargs setting in the hosts file can define these values with two basic variables:

  • %NPROC%—the total number of CPU cores requested for the job
  • %PPN%—the number of CPU cores (processors) per node

The value of %NPROC% is obtained from the command used to launch the job, whereas %PPN% is obtained from the processors_per_node setting in the hosts file.
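
For example, a hosts file entry for a Grid Engine-style queue might use these variables in its Qargs setting as sketched below. The entry name, host name, and parallel environment name (mpi) are placeholders for site-specific values:

```
# Hypothetical hosts file entry (names are placeholders)
name:                 parallel-queue
host:                 cluster-head
queue:                SGE
processors:           64
processors_per_node:  8
qargs:                -pe mpi %NPROC%
```

When a job is launched requesting 16 processors, %NPROC% expands to 16 and the queue submission requests a 16-slot parallel environment.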

Simple integer arithmetic is supported with these variables. For example:

%NPROC/2%
%NPROC/2+1%

Integer division rounds down, so %NPROC/8% evaluates to 0 if %NPROC% is less than 8. Instructions and examples are given in the sections below for each queuing system.
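
The rounding behavior can be illustrated with a small Python sketch that models how such expressions might be evaluated. This models only the substitution and arithmetic described above, not Schrödinger's actual implementation; the function name is hypothetical:

```python
def evaluate(expr, nproc, ppn):
    """Model %NPROC%/%PPN% substitution and integer arithmetic.

    Illustrative only -- not the actual Schrodinger implementation.
    """
    body = expr.strip("%")
    # Substitute the two variables with their numeric values
    body = body.replace("NPROC", str(nproc)).replace("PPN", str(ppn))
    # Use floor division so results round down, as described above
    body = body.replace("/", "//")
    return eval(body, {"__builtins__": {}})

print(evaluate("%NPROC/2%", 16, 8))    # 8
print(evaluate("%NPROC/2+1%", 16, 8))  # 9
print(evaluate("%NPROC/8%", 4, 8))     # 0: rounds down when NPROC < 8
```

Note that %NPROC/8% yields 0 for a 4-processor job, which would request zero slots; expressions should be chosen so they cannot round down to zero for the smallest jobs you expect to run.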