SPIM Workflow Manager For HPC

SPIM Workflow Manager for HPC (Fiji)
Author: Jan Kožusznik, Petr Bainar, Jana Klímová, Michal Krumnikl, Pavel Moravec, Václav Svatoň, Pavel Tomancak
Maintainer: jan.kozusznik@vsb.cz
Source: [1]
Initial release: August 2018
Latest version: August 2018
Category: Transform, Registration, Deconvolution



Introduction

Imaging techniques have emerged as a crucial means of understanding the structure and function of living organisms in primary research as well as in medical diagnostics. To maximize information gain, spatial and temporal resolution as high as practically achievable is desired. However, long-term time-lapse recordings at the single-cell level produce vast amounts of multidimensional image data, which cannot be processed on a personal computer in a timely manner and therefore require the use of high-performance computing (HPC) clusters. For example, processing a 2.2 TB dataset of Drosophila embryonic development, which takes a week on a single computer, was brought down to 13 hours by employing an HPC cluster supporting parallel execution of individual tasks [2]. Unfortunately, life scientists often lack access to such infrastructure.

Addressing this issue is particularly challenging because Fiji is an extraordinarily extensible platform and new plugins emerge incessantly. So far, plugin developers have typically implemented task parallelization within a particular plugin, but no universal approach has yet been incorporated into the SciJava architecture. Here we propose the concept of integrating parallelization support into one of the SciJava libraries, thereby enabling developers to access remote resources (e.g., HPC infrastructure) and delegate plugin-specific tasks to its compute nodes. As the cluster-specific details are hidden in the respective interface implementations, the plugins can remain extensible and technology-agnostic. In addition, the proposed solution is highly scalable, meaning that any additional resources can be utilized efficiently.
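To illustrate the concept, the sketch below shows how a plugin could hand independent per-time-point tasks to a generic parallelization interface and remain agnostic of where they actually run. The interface and names used here (RemoteParadigm, submit, registerTimePoint) are hypothetical and do not reflect the actual SciJava API:

// Minimal sketch (hypothetical interface, not the actual SciJava API):
// the plugin submits independent per-time-point tasks and lets the
// paradigm decide whether they run locally or on remote compute nodes.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

interface RemoteParadigm {
    // Submits one named task for a given time point; the result arrives asynchronously.
    CompletableFuture<String> submit(String taskName, int timePoint);
}

public class ParallelPluginSketch {
    public static void runAll(RemoteParadigm paradigm, int timePoints) {
        // One independent task per time point of the acquisition.
        List<CompletableFuture<String>> jobs = IntStream.range(0, timePoints)
                .mapToObj(t -> paradigm.submit("registerTimePoint", t))
                .collect(Collectors.toList());
        jobs.forEach(CompletableFuture::join);  // wait for all tasks to finish
    }
}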

Description

SPIM data processing pipeline

HEAppE middleware

Accessing a remote HPC cluster is often burdened by administrative overhead due to the more or less complex security policies enforced by HPC centers. This barrier can be substantially lowered by employing a middleware tool based on the HPC-as-a-Service concept. To facilitate access to HPC resources from the Fiji environment, we utilize the in-house HEAppE middleware framework [3], which allows end users to access an HPC system through web services and remotely execute pre-defined tasks. Furthermore, HEAppE is designed to be universal and applicable to various HPC architectures.
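As a rough illustration only, the following sketch submits a pre-defined task to a HEAppE-like web service over HTTP. The endpoint URL, payload fields, and session handling are placeholders and do not reflect the actual HEAppE API:

// Minimal sketch, assuming a HEAppE-like REST endpoint; all names below are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RemoteTaskSubmissionSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder payload: a pre-defined job template name plus a session token.
        String payload = "{\"templateName\": \"SPIM_pipeline\", \"sessionCode\": \"<session>\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://middleware.example.org/api/CreateJob"))  // placeholder URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        // Send the request and print whatever the middleware returns (e.g., a job identifier).
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Middleware response: " + response.body());
    }
}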

We developed a Fiji plugin built on top of HEAppE that enables users to steer workflows running on a remote HPC resource. As a representative workflow we use a Snakemake-based SPIM data processing pipeline operating on large image datasets. The Snakemake workflow engine resolves dependencies between subsequent steps and executes independent tasks in parallel, such as the processing of individual time points of a time-lapse acquisition.
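For illustration, a pre-defined task of this kind could invoke Snakemake on the cluster roughly as sketched below; the Snakefile name, job limit, and scheduler command are assumptions rather than the pipeline's actual configuration:

// Minimal sketch (assumed configuration, not the plugin's actual code):
// launch Snakemake so that it can schedule independent rules in parallel.
import java.io.IOException;

public class SnakemakePipelineLauncherSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "snakemake",
                "--snakefile", "Snakefile",  // pipeline definition with one rule per step
                "--jobs", "90",              // allow up to 90 independent jobs at once
                "--cluster", "qsub");        // hand each job to the batch scheduler
        pb.inheritIO();                      // forward Snakemake output to this process
        int exitCode = pb.start().waitFor();
        System.out.println("Snakemake finished with exit code " + exitCode);
    }
}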

Installation

Usage

HPC Cluster

Execution of the Snakemake pipeline from the implemented Fiji plugin was tested on the Salomon supercomputer, which consists of 1 008 compute nodes, each equipped with two 12-core Intel Haswell processors and 128 GB RAM, providing a total of 24 192 compute cores of the x86-64 architecture and 129 TB RAM. Furthermore, 432 nodes are accelerated by two Intel Xeon Phi 7110P accelerators with 16 GB RAM each, providing an additional 52 704 cores and 15 TB RAM. The total theoretical peak performance reaches 2 000 TFLOPS. The system runs Red Hat Linux.

The pipeline was tested on a dataset used in experiments run on the Madmax cluster at MPI-CBG [4]. The Madmax cluster had 44 nodes, each with two 6-core Intel Xeon E5-2640 2.5 GHz CPUs (average CPU PassMark 9 498). In comparison, Salomon nodes are equipped with two 12-core Intel Xeon E5-2680v3 2.5 GHz CPUs (average CPU PassMark 18 626). Salomon thus runs a newer generation of Xeon processors (Haswell), providing roughly double the performance of the Sandy Bridge architecture used on Madmax.

Using the developed plugin, we executed the pipeline on the Salomon supercomputer at IT4Innovations in Ostrava, Czech Republic. As the test dataset we used a 90-time-point SPIM acquisition of a Drosophila melanogaster embryo expressing a fluorescent FlyFos GFP fusion reporter for the nrv2 gene. The embryo was imaged with a Lightsheet Z.1 SPIM microscope (Carl Zeiss Microscopy) from 5 views every 15 minutes, from the cellular blastoderm stage until late stages of fruit fly embryogenesis. The data transfer and pipeline execution on Salomon using 90 nodes took 6 hours and 37 minutes. For comparison, processing the same dataset on a common PC took 44 hours and 8 minutes.

Citation

Please note that the SPIM Workflow Manager for HPC plugin available through Fiji is based on a publication. If you use it successfully in your research, please be so kind as to cite our work: