deltah.py Command Help
Command: $SCHRODINGER/jaguar run deltah.py
usage: $SCHRODINGER/jaguar run deltah.py [-h] [-method_sp METHOD_SP]
                                         [-basis_sp BASIS_SP]
                                         [-method_opt METHOD_OPT]
                                         [-basis_opt BASIS_OPT] [-zpe] [-ps]
                                         [-k KEYWORDS] [-opt] [-scalfr SCALFR]
                                         [-multip MULTIP] [-WAIT] [-DEBUG]
                                         [-jobname <name>] [-subdir]
                                         [-recover] [-no_subjob_files]
                                         [-scr <absolute path>]
                                         [-PARALLEL <N>] [-max_threads <T>]
                                         [-procs_per_node <N>]
                                         [-use_one_node | -use_multiple_nodes]
                                         [-HOST <host>:<M>]
                                         [-SUBHOST <host>:<M>] [-SAVE]
                                         [-NOJOBID] [-OPLSDIR <oplsdir>]
                                         infile
Compute delta H of formation at 298 K and atomization energies.
positional arguments:
  infile                Please specify a single .mae file. It can contain
                        multiple structures.
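For illustration, a minimal invocation with all defaults might look like the
following, where molecules.mae is a hypothetical input file containing one or
more structures:

  $SCHRODINGER/jaguar run deltah.py molecules.mae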
options:
  -h, --help            show this help message and exit
  -method_sp METHOD_SP  Method for computation of delta H of formation and
                        atomization energy
  -basis_sp BASIS_SP    Basis for computation of delta H of formation and
                        atomization energy
  -method_opt METHOD_OPT
                        Method to optimize geometry with
  -basis_opt BASIS_OPT  Basis to optimize geometry with
  -zpe                  Include ZPE in atomization energy and delta H of
                        formation
  -ps                   Run all single-point calculations with the
                        pseudospectral method
  -k KEYWORDS, -keyword KEYWORDS, --keyword KEYWORDS
                        Set Jaguar &gen section keywords
  -opt                  Optimize geometry
  -scalfr SCALFR        Scaling factor for ZPE
  -multip MULTIP        Multiplicity
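Combining these options, a run that first optimizes each geometry and then
computes delta H of formation and the atomization energy at a different level,
with the ZPE included, might look like the sketch below; the method and basis
values are illustrative choices, not defaults stated by this help text:

  $SCHRODINGER/jaguar run deltah.py -opt -method_opt B3LYP -basis_opt 6-31G** \
      -method_sp B3LYP -basis_sp cc-pVTZ -zpe molecules.mae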
other options:
  -WAIT                 Wait for the job to finish before returning the
                        prompt.
  -DEBUG, -D            Print detailed information about job launch.
  -jobname <name>       Set the job name.
  -subdir               Run Jaguar in a sub-directory.
  -recover              Manually re-run a job using the recover mechanism.
                        NOTE: this option is not recommended for default
                        recovery jobs. Use "jaguar run <filename>.recover"
                        instead (see the documentation for more details).
  -no_subjob_files      Do not return subjob output files to the launch
                        directory.
  -scr <absolute path>  Specify a scratch directory (must not already exist),
                        given as an absolute path. Note that this directory is
                        used by the Fortran backend and is independent of the
                        specification of -TMPDIR.
  -PARALLEL <N>         Use up to <N> CPUs simultaneously for the whole
                        workflow, automatically allocated among subjobs,
                        including threaded subjobs.
  -max_threads <T>      Use no more than <T> OpenMP threads for each Jaguar
                        subjob. Default: 8.
  -procs_per_node <N>   Use no more than <N> CPUs per node. The default is
                        taken from the schrodinger.hosts file; if undefined
                        there, 8.
  -use_one_node         Force CPU resources to be requested upfront on one
                        node. This pool of CPUs is used for the duration of
                        the job instead of resubmitting to the queue.
  -use_multiple_nodes   Force CPU resources to be requested dynamically from
                        the queue (if available) instead of upfront on one
                        node.
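As a sketch of how these job-control options compose, the following
hypothetical invocation names the job, allows up to 16 CPUs for the workflow
as a whole, and caps each subjob at 4 OpenMP threads:

  $SCHRODINGER/jaguar run deltah.py -jobname deltah_test -PARALLEL 16 \
      -max_threads 4 molecules.mae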
commonly used Schrodinger Suite options:
  -HOST <host>:<M>      Run the job remotely on host <host>. The optional
                        :<M> defines the maximum number of simultaneous
                        subjobs. May be combined with -PARALLEL <N>.
  -SUBHOST <host>:<M>   Run any subjobs remotely on subhost <host>. The
                        optional :<M> defines the maximum number of
                        simultaneous subjobs. May be combined with
                        -PARALLEL <N>.
  -SAVE                 Return a .zip file of the scratch directory.
  -NOJOBID              Run Jaguar interactively without the job server (not
                        available with Python workflows).
  -OPLSDIR <oplsdir>    Use custom FF parameters from the specified directory
                        for workflows that support it.
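Putting it together, a remote run that waits for completion might look like
this, where cpu_cluster is a placeholder for an entry in your
schrodinger.hosts file and :4 caps the number of simultaneous subjobs at four:

  $SCHRODINGER/jaguar run deltah.py -HOST cpu_cluster:4 -PARALLEL 16 -WAIT \
      molecules.mae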