SPIM Workflow Manager For HPC

726 bytes added, 04:01, 12 December 2018
= Usage =
Now you should see the plugin under {{bc | Plugins | Multiview Reconstruction | SPIM Workflow Manager for HPC}}. Upon invoking the plugin from the application menu, you are prompted for your HEAppE credentials, an e-mail address, and your working directory. Following a successful login, the main window is displayed, containing all jobs arranged in a table. In this context, the term ''job'' refers to a single pipeline run with specified parameters. The plugin actively queries HEAppE for information on the created jobs and updates the table accordingly.
To create a new job, right click in the main window and choose ''Create a new job''. A window for specifying the input and output data location will pop up. You have the option to use demonstration data on the Salomon cluster or to specify your own input data location. Alternatively, you may choose your working directory (specified during login) as both your input and output data location. The plugin provides a wizard allowing you to set up a configuration file, ''config.yaml'', which effectively characterizes the dataset and defines settings for the individual workflow tasks.

Once a new job is configured, you can upload your own data (if you chose this option in the previous step) by right clicking on the job line and choosing ''Upload data''; the plugin uploads local input image data to the remote HPC resource, providing information on the progress and the estimated remaining time. When ''Done'' appears in the Upload column, you can start the job via {{bc | right click | Start job}}. The status of your job changes to ''Queued'', then to ''Running'', and finally to ''Finished''.
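To give a feel for what such a dataset description can contain, here is a minimal sketch of a ''config.yaml''-style file. Note that the actual file is generated by the plugin's wizard, and all field names below are purely illustrative assumptions, not the real schema:

```yaml
# Hypothetical sketch only - the real config.yaml is produced by the
# plugin's wizard; these keys are illustrative, not the actual schema.
dataset:
  name: "my-spim-dataset"          # label for this pipeline run
  timepoints: "1-10"               # which timepoints to process
  angles: [0, 72, 144, 216, 288]   # acquisition angles of the views
tasks:
  registration:
    enabled: true                  # per-task settings for the workflow
  fusion:
    enabled: true
    downsampling: 2
```

Conceptually, one section characterizes the dataset itself while another defines the settings for the individual workflow tasks, which is the split the wizard walks you through.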
Once you start a job execution, the configuration file is sent to the cluster via HEAppE, which is responsible for the job life cycle from that point on. For the selected job, you can display a detailed progress dashboard showing the current states of all individual computational tasks, as well as output logs useful for debugging.