Job Settings Dialog Box
The Job Settings dialog box is used to specify settings for jobs started from Maestro. Settings are stored for each application (with a few exceptions).
To open this dialog box, click the Job Settings button in the Job Toolbar,
or click the arrow next to this button and choose Job Settings.
- Using
- Features
- Additional Resources
Using the Job Settings Dialog Box
For jobs that can be distributed over multiple CPUs (and multiple hosts), the job can be split into subjobs. The number of subjobs can be larger than the number of processors used, and it is often a good idea to choose a number that is several times the number of processors for optimal load balancing. The number of subjobs should not in general be smaller than the number of processors: if it is, it may be reset to the total number of processors. See Running Distributed Schrödinger Jobs for more information. Some job drivers optimize the number of subjobs, so the number actually used might not be exactly the number given.
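The guidance above can be expressed in a few lines. This is an illustrative sketch, not Maestro's actual algorithm; the factor of 3 and the function names are assumptions:

```python
def recommended_subjobs(n_processors, factor=3):
    """A subjob count several times the processor count often gives
    good load balancing (the factor of 3 is an illustrative choice)."""
    return factor * n_processors

def effective_subjobs(requested, n_processors):
    """Model the reset rule described above: a request smaller than
    the number of processors may be raised to the processor count."""
    return max(requested, n_processors)

print(recommended_subjobs(8))   # 24
print(effective_subjobs(4, 8))  # 8 (reset up to the processor count)
```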
Job Settings Dialog Box Features
The features present in the Job Settings dialog box depend on the calculation to be carried out. Each instance of this dialog box has a Job section and a set of action buttons. If the job produces structural output and properties, the dialog box can also have an Output section. Other, job-specific controls are made available as needed. The controls that are common to many instances of the Job Settings dialog box are described below.
- Output section
- Job/Master job/Subjob section
- Title text box
- Name text box
- Processing unit option menu
- Host option menu
- CPU Host or GPU Host option menu
- GPU subhost option menu
- CPUs text box
- Total N processors or GPUs text box
- Maximum # processors text box
- Limit number of concurrent subjobs option and Max text box
- Distribute subjobs across options and text boxes
- Scratch directory option menu
- Host list table
- Separate job into options
- Separate into N subjobs text box
- Action buttons
Output section
The Output section is not always present. When it is, it can contain the following features:
This section provides options for handling the output of the job.
- Incorporate option menu
-
Choose the manner in which the structural and property results of the calculation are incorporated into the project. The menu can have one or more of the following options:
- Append new entries as a new group
-
Each structure in the output file is added to the project as a new entry, and the entries are grouped. The group name is set to the name of the file from which the entries were read, minus the extension. This is the default option.
- Append new entries in place
-
Each new entry is added to the project immediately below its source entry, i.e. the entry that was used as input for the job. If there is no source entry, the new entry is added to the end of the entry list in the project as an individual entry.
- Replace existing entries
-
Any entries that served as input for the job are replaced with the new structures returned from the calculation. This option is most useful when the job simply produces properties that are added to the input structure, or where there is a single output structure for each input structure.
- Do not incorporate
-
No change is made to the project when the job is complete. This option is always present on the menu.
The choice that you make from this menu is persistent for a given job type: the next time you run a job of that type, the incorporation mode that you last used is the default mode. The incorporation mode is stored as a preference by Maestro, so the choice persists across Maestro sessions.
- Output format options
-
Choose the format of the output file. The availability of the formats may depend on the choice made from the Incorporate option menu.
The choice that you make here is also persistent for a given job type: the next time you run a job of that type, the output format that you last used is the default. The format is stored as a preference by Maestro, so the choice persists across Maestro sessions.
Some instances have other features in this section, such as a text box and Browse button for specifying a directory.
Job/Master job/Subjob section
For a given instance of the dialog box, the controls in this section are chosen according to the requirements of the job. Jobs that cannot be distributed usually have only a Host option menu. Jobs that can be divided into a specified number of independent subjobs and distributed over multiple hosts have a text box for the number of subjobs and a host list table for the hosts, whereas jobs that can be run in parallel or can only be distributed on a single host have a Host option menu and a CPUs text box.
Some jobs have a master job that runs on one host and subjobs that run on another host. These jobs have a Master job and a Subjob section, where the hosts and other relevant information are specified.
As terminology has changed with the introduction of multicore CPUs, in the descriptions below, both "processor" and "CPU" are taken to mean a single processing unit, such as a core.
- Title text box
-
Specify the title to be used in the input structure file. The default is the entry title.
- Name text box
-
Enter a name for the job. Job names cannot contain spaces or nonprinting characters.
When a job is started, a subdirectory of the working directory may be created using the job name (depending on the application), and the job files are written there. Job files are named with the job name as the first part (stem or prefix) of the file name.
The initial name shown is a standard name for the application, which might include calculation settings. A standard name that contains settings is updated when the settings change. You can modify a standard name; if the modified name still retains some settings, it continues to be updated when the settings change. You can also replace the entire job name.
When a standard name or modified standard name is used, the job name is made unique by appending an integer. This is done by checking for job directories or files in the current working directory. However, if you replace the job name to create a custom job name, the name might no longer be unique, and it is not automatically made unique. In that case, a warning is posted before any files are overwritten.
After a job is submitted, a new job name is automatically created for the next job from the current job name, by appending an integer or incrementing the integer. This is done for custom job names as well as standard or modified names.
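The append-or-increment behavior described above can be sketched as follows. This is a hypothetical illustration, not Maestro's actual code; the starting suffix is an assumption:

```python
import re

def next_job_name(name):
    """Return the name for the next job: increment a trailing integer
    if one exists, otherwise append one (the starting suffix "_2" is
    an assumption for illustration)."""
    match = re.search(r"^(.*?)(\d+)$", name)
    if match:
        stem, number = match.group(1), int(match.group(2))
        return f"{stem}{number + 1}"
    return f"{name}_2"

print(next_job_name("glide-dock_3"))  # glide-dock_4
print(next_job_name("myjob"))         # myjob_2
```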
- Processing unit option menu
-
Choose the type of processing unit to use, from CPU or GPU. The items on the Host option menu are filtered to show only those that have the chosen type of processing unit available. The label after the Total text box shows the processing unit type. This option menu is only available for jobs that can be run on GPUs. For jobs or job stages that can only run on GPUs, only GPU items are shown on the Host option menu.
- Host option menu
- CPU Host or GPU Host or Driver/GPU host option menu
-
The Host menu displays all the hosts defined in the $SCHRODINGER/schrodinger.hosts file, with the number of CPUs or GPUs available on the host in parentheses. To run the selected job on a remote host, choose the host from this menu. The list is filtered if you have chosen a type of processing unit, to show only those with CPUs or those with GPUs.
For jobs that run the master job on a CPU and the subjobs on a GPU, this option menu is labeled CPU Host or GPU Host, depending on which section the option menu is in. For jobs where the choice of processing unit is made in the panel for the job rather than in this dialog box, the menu is labeled CPU Host or GPU Host, according to the choice made.
If a host has more than one GPU, the GPUs are assigned automatically by the scheduler. You cannot assign individual GPUs to run a job.
The host localhost means the host on which you are running Maestro. If you run a job locally, Maestro automatically reduces the priority of the job so that it does not compete with Maestro for resources. The exceptions are Hydrophobic/philic map jobs and structure cleanup jobs. To change this behavior, set the SCHRODINGER_NICE environment variable (see Job Control Environment Variables (JOB CONTROL IS DEPRECATED)).

Jobs that run multiple stages can have a Host option menu for each stage. The settings on each menu may be tailored to the application run at that stage.
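For reference, a host entry in the hosts file pairs keyword and value lines. The entry below is a hypothetical example: the host name, paths, and processor count are placeholders, and the available keywords should be checked against the Schrödinger installation documentation:

```
# Hypothetical schrodinger.hosts entry (values are placeholders)
name:        localhost
schrodinger: /opt/schrodinger
tmpdir:      /scratch
processors:  4
```

In this sketch, the number shown in parentheses on the Host menu would come from the processors line, and the choices on the Scratch directory option menu from the tmpdir line.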
- GPU subhost option menu
-
If a CPU host is chosen as the Driver/GPU host to run the main job, use this option menu to specify the GPU host on which to run the subjobs. This menu is only available when a CPU host is selected as the Driver/GPU host.
- CPUs text box
- Total N processors or GPUs text box
- Maximum # processors text box
-
Specify the maximum number of processing units (CPUs or GPUs) to use to run the job. The maximum number is limited by the number of processing units on the host you choose. The number actually used is limited by the number of licenses available for the type of job, and for some jobs might vary during the course of the job. For some Materials Science applications, the number is applied for each structure, so the box is labeled Total N processors per structure or Total N GPUs per structure. For Jaguar, this is the maximum number of processors available for OpenMP parallel execution across all subjobs (not the number per subjob).
This feature is absent if the job cannot be distributed or run in parallel.
- Limit number of concurrent subjobs option and Max text box
-
Limit the number of subjobs that can be run at the same time to the number specified in the Max text box. The number cannot be more than the number of processors specified. If the option is not selected, subjobs may run concurrently on the available processors. This option applies to Jaguar jobs. If the number is less than the number of processors, the subjobs may be able to run with multiple OpenMP threads.
- Distribute subjobs across options and text boxes
-
Choose an option for how to distribute the subjobs for parallel execution with OpenMP threads. These options apply to some Materials Science jobs that use Jaguar and to Quantum ESPRESSO jobs.
- CPUs option and text box—Specify only the number of processors to use. For some job types the number of threads is determined automatically; for others, the subjobs are distributed among the processors, one per processor.
- Maximum simultaneous serial subjobs text box—Specify the maximum number of subjobs that can be run at any one time; the subjobs only use a single processor. Available only from the QM Multistage Workflow Panel.
- Threads and Maximum simultaneous subjobs text boxes—Specify the number of threads per process to use, and the maximum number of subjobs that can be run at any one time; each subjob uses the number of threads specified in the Threads text box. Available for Quantum ESPRESSO jobs.
- Threads and subjobs option and text boxes—Specify the number of threads per process to use, and the maximum number of subjobs to run simultaneously. The total number of processors requested is the product of these numbers, as each subjob runs with the specified number of threads. When this option is selected, text boxes for the numbers are displayed, and the total number of CPUs that result from the values in the text boxes is reported below.
- QE manual parallel options option and text box—Manually specify parallel options for Quantum ESPRESSO. The supported options are -nimage, -npools, -ntg, and -nband. If this option is not selected, the job is automatically parallelized. For more information on parallelization in Quantum ESPRESSO, see Chapter 3 of the Quantum ESPRESSO User's Guide.
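For the Threads and subjobs option above, the total processor request is simply the product of the two values. A minimal illustration (the function name is an assumption):

```python
def total_processors(threads_per_subjob, max_simultaneous_subjobs):
    """Each concurrent subjob runs with the given number of OpenMP
    threads, so the total request is the product of the two values."""
    return threads_per_subjob * max_simultaneous_subjobs

print(total_processors(4, 6))  # 24
```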
- Scratch directory option menu
-
Select the scratch directory (used for temporary storage during the running of the job). The available scratch directories are taken from the tmpdir settings in the $SCHRODINGER/schrodinger.hosts file.
- Host list table
-
The Host list table is available for applications that can distribute jobs over multiple hosts. The table displays all the hosts defined in the $SCHRODINGER/schrodinger.hosts file, with the number of processors available on the host in the Processors column. The Use column specifies the number of processors to use from the given host. The default number is the number available, or * if it is a queue host. You can edit this column to set the number of processors, and you can reset the values to the default by clicking Reset All.
To specify the hosts, select the host rows in the table. To set the number of processors on the host, edit the values in the Use column as needed. The total number of processors for the job is reported in the Total to use text box (which is noneditable).
When you select hosts for the job, you can only select one queue host, and you cannot select both queue and non-queue hosts. If you select a queue host, all other table rows are deselected.
To reset the Host list table to the default values (use all processors on the local host), click Reset All.
- Separate job into options
-
These options are available for Glide and WScore docking jobs, and provide several options for dividing a job into subjobs.
- Recommended number of subjobs—Automatically determine an optimal subjob size for the type of job.
- Exactly N subjobs—Split the job into exactly the number of subjobs specified. (Not available for WScore.)
- Subjobs with no more than N ligands each—Split the job into subjobs with no more than the specified number of structures in each subjob.
- Separate into N subjobs text box
-
For jobs that can be distributed over multiple CPUs (and multiple hosts), specify the number of subjobs to split the job into. Some job drivers optimize the number of subjobs, so the number actually used might not be exactly the number given. For VSW, there is also an Adjust option that adjusts the number of subjobs so that the number of structures per subjob falls in an optimum range.
Action buttons
There are four actions you can take after making settings, by clicking one of these buttons:
- Write—Save the settings for this job type and write the input files, but do not start the current job. If a job or stage requires a GPU host and none are available, this button is enabled, and the job files are written with a dummy GPU host.
- Run—Save the settings for this job type, and start the current job. If a job requires a GPU host but no GPU host is available in the hosts file, this button is disabled.
- OK—Save the settings for this job type, but do not start the job.
- Cancel—Discard the changes made to the settings.
The Write button is only available for some applications.