OpenMPI Configuration for Torque
Open MPI uses the TM API to allocate slots and launch processes under all queuing systems in the PBS family, including Torque. For more information on running jobs under Torque, see the Open MPI FAQ at http://www.open-mpi.org/faq/?category=tm.
Due to a naming conflict, the components for Torque have been placed under $SCHRODINGER/mmshare-vversion/lib/arch/openmpi/disabled_lib/openmpi. To use the Torque queuing system, copy these components to the standard location:
cd $SCHRODINGER/mmshare-vversion/lib/arch/openmpi/disabled_lib/openmpi/
cp -rf mca_plm_tm.so mca_ess_tm.so mca_ras_tm.so \
    $SCHRODINGER/mmshare-vversion/lib/arch/openmpi/lib/openmpi
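The copy step above can be scripted so it is repeatable and verifiable. The sketch below is a dry run under stated assumptions: MMSHARE and ARCH are placeholders for your actual mmshare-vversion tree and platform directory, and a temporary sandbox (with stub files) stands in for the real installation so the script can be exercised safely.

```shell
#!/bin/sh
# Sketch: enable the Torque components by copying them from disabled_lib
# into the live Open MPI component directory.
# MMSHARE and ARCH are hypothetical stand-ins for the real
# $SCHRODINGER/mmshare-vversion tree and platform directory; a temporary
# sandbox is used here so the script can be dry-run without a real install.
MMSHARE=$(mktemp -d)/mmshare-vversion
ARCH=arch
SRC="$MMSHARE/lib/$ARCH/openmpi/disabled_lib/openmpi"
DST="$MMSHARE/lib/$ARCH/openmpi/lib/openmpi"
COMPONENTS="mca_plm_tm.so mca_ess_tm.so mca_ras_tm.so"

mkdir -p "$SRC" "$DST"
# Stub files for the dry run; in a real installation these ship
# with the product under disabled_lib.
for f in $COMPONENTS; do : > "$SRC/$f"; done

# The actual step: copy each Torque component into place.
for f in $COMPONENTS; do cp -f "$SRC/$f" "$DST/"; done
ls "$DST"
```

Scripting the step this way also makes it easy to repeat after an upgrade, since the copied components live inside the versioned mmshare directory and are lost when a new version is installed.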
Note: If you run multiple queuing systems from the same installation, you may need to maintain two installations: one with these changes and one without them.
The bundled Torque components depend on the libtorque.so.2 library from Torque 2.2.1. If a compatible libtorque.so.2 library is not already available on your system, you may also need to copy the bundled one:
cd $SCHRODINGER/mmshare-vversion/lib/arch/openmpi/disabled_lib/
cp libtorque.so.2 $SCHRODINGER/mmshare-vversion/lib/arch/openmpi/lib
</cp>
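To decide whether this copy is needed, you can check for an existing libtorque.so.2 before overriding it with the bundled one. The sketch below is a minimal helper, not part of the product: `find_lib` and the demo paths are assumptions, and a throwaway directory stands in for the system library directories so the script can be exercised anywhere.

```shell
#!/bin/sh
# Sketch: check whether a libtorque.so.2 is already visible before
# copying the bundled one. find_lib searches a colon-separated list of
# directories for a library name and prints the first match.
# (find_lib and the demo paths are illustrative assumptions.)
find_lib() {
    lib=$1
    paths=$2
    for d in $(echo "$paths" | tr ':' ' '); do
        if [ -e "$d/$lib" ]; then
            echo "$d/$lib"
            return 0
        fi
    done
    return 1
}

# Demo against a throwaway directory standing in for /usr/lib etc.
demo=$(mktemp -d)
: > "$demo/libtorque.so.2"
if hit=$(find_lib libtorque.so.2 "/nonexistent:$demo"); then
    echo "found: $hit"   # a copy exists; verify its Torque version
else
    echo "not found: copy the bundled libtorque.so.2 into the lib directory"
fi
```

In a real check you would pass your actual library directories (and LD_LIBRARY_PATH entries) to `find_lib`; note that finding a file only shows presence, not that it is compatible with the Torque 2.2.1 interface the bundled components expect.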
Here is an example of a hosts file entry:
# Request %NPROC% processors on a single node
Name: label
Queue: Torque
Qargs: -q Torque-queue -l nodes=1:ppn=%NPROC%
Host: submit-node
Processors: processors-in-queue