Architecture Overview
This Architecture Overview describes the components of the new Job Server infrastructure. Job Server enables users to submit scientific workflows (jobs) to local and remote resources, ranging from laptops and workstations to high-performance computing clusters. For instructions on how to set up Job Server, refer to the Administrator Guide.
Note
In a nutshell, the Job Server infrastructure performs many tasks in the background when a user clicks Run in Maestro, the primary graphical user interface of the Schrödinger Suite. A Schrödinger Suite application typically needs an input file that controls the application, as well as one or more structure files. These input files need to be copied to a remote cluster, and resources need to be negotiated with a job scheduling system. Finally, the results need to be copied back and imported into Maestro. More complicated workflows can consist of many steps, each depending on the results of previous steps.
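The lifecycle described above can be sketched at a high level in Python. All names here (`Job`, `run_job`, the paths, and the status strings) are illustrative only and are not part of the actual Job Server API:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the background job lifecycle described above.
# None of these names or paths belong to the real Job Server API.

@dataclass
class Job:
    input_files: list[str]                              # control file + structure files
    staged: list[str] = field(default_factory=list)     # copies on the remote cluster
    outputs: list[str] = field(default_factory=list)    # result files
    status: str = "pending"

def run_job(job: Job) -> Job:
    # 1. Copy the input files to the remote cluster.
    job.staged = [f"remote:/scratch/{name}" for name in job.input_files]
    # 2. Negotiate resources with the job scheduling system.
    job.status = "queued"
    # 3. The scheduler runs the job; results are produced remotely.
    job.status = "running"
    job.outputs = ["remote:/scratch/job.out"]
    # 4. Copy the results back so they can be imported into Maestro.
    job.outputs = [p.replace("remote:/scratch/", "local:/jobs/") for p in job.outputs]
    job.status = "completed"
    return job

job = run_job(Job(input_files=["job.in", "structure.mae"]))
print(job.status)  # completed
```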
Job Server supports the following features:
- All data transfer is encrypted via TLS. All processes are authenticated via server- and client-side certificates.
- All components of the Job Server infrastructure communicate on two configurable ports.
- Job metadata is stored in a relational database, supporting a large number of jobs per user.
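The mutual (server- and client-certificate) TLS authentication in the first bullet can be illustrated with Python's standard `ssl` module. The commented-out certificate paths are placeholders, not actual Job Server file locations:

```python
import ssl

# Client side: verify the server against a trusted CA and present a
# client certificate so the server can authenticate the client in turn.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# client_ctx.load_verify_locations("ca.pem")              # placeholder path
# client_ctx.load_cert_chain("client.pem", "client.key")  # placeholder paths

# Server side: require (not merely allow) a client certificate.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # mutual TLS: reject clients without a cert
# server_ctx.load_verify_locations("ca.pem")              # placeholder path
# server_ctx.load_cert_chain("server.pem", "server.key")  # placeholder paths

# PROTOCOL_TLS_CLIENT verifies server certificates and hostnames by default.
print(client_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```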
The most notable differences compared to the legacy Job Control system are:
- Users do not require SSH access to any execution host on an HPC cluster. Users only need to pass an authentication challenge once to obtain a client certificate that is used to authenticate all subsequent connections.
- After submitting a job, users can disconnect or shut down their client system (laptop or workstation) without disrupting running remote jobs. Output files can be downloaded by the user after the job is completed.
- Job Server runs as a dedicated server process outside of the control of regular users.
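The relational job-metadata store mentioned earlier can be sketched with an in-memory SQLite database. The schema below is purely illustrative and is not the actual Job Server schema:

```python
import sqlite3

# Illustrative job-metadata schema; not the real Job Server schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        job_id    TEXT PRIMARY KEY,
        owner     TEXT NOT NULL,
        status    TEXT NOT NULL,  -- e.g. queued / running / completed
        host      TEXT,           -- execution host on the cluster
        submitted TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO jobs (job_id, owner, status, host) VALUES (?, ?, ?, ?)",
    ("job-0001", "alice", "running", "hpc-node-17"),
)

# Because metadata lives in the server's database rather than on the user's
# client system, job state survives client disconnects.
status, = conn.execute(
    "SELECT status FROM jobs WHERE job_id = ?", ("job-0001",)
).fetchone()
print(status)  # running
```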