SPIM Workflow Manager For HPC

SPIM Workflow Manager for HPC (Fiji)
Author Jan Kožusznik, Petr Bainar, Jana Klímová, Michal Krumnikl, Pavel Moravec, Václav Svatoň, Pavel Tomancak
Maintainer jan.kozusznik@vsb.cz
Source [1]
Initial release August 2018
Latest version August 2018
Category Transform, Registration, Deconvolution


Introduction

Imaging techniques have emerged as a crucial means of understanding the structure and function of living organisms, in both primary research and medical diagnostics. To maximize information gain, spatial and temporal resolution should be as high as practically possible. However, long-term time-lapse recordings at the single-cell level produce vast amounts of multidimensional image data, which cannot be processed on a personal computer in a timely manner and therefore require high-performance computing (HPC) clusters. For example, processing a 2.2 TB dataset of Drosophila embryonic development, which takes a week on a single computer, was brought down to 13 hours by employing an HPC cluster supporting parallel execution of individual tasks [2]. Unfortunately, life scientists often lack access to such infrastructure.

Addressing this issue is particularly challenging as Fiji is an extraordinarily extensible platform and new plugins emerge incessantly. So far, plugin developers have typically implemented task parallelization within a particular plugin, but no universal approach has yet been incorporated into the SciJava architecture. Here we propose the concept of integrating parallelization support into one of the SciJava libraries, thereby enabling developers to access remote resources (e.g., an HPC infrastructure) and delegate plugin-specific tasks to its compute nodes. As the cluster-specific details are hidden in the respective interface implementations, plugins can remain extensible and technology-agnostic. In addition, the proposed solution is highly scalable, meaning that any additional resources can be efficiently utilized.
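The idea of hiding cluster-specific details behind an interface can be illustrated with a minimal Java sketch. Note that the interface and class names below (ParallelizationParadigm, LocalParadigm) are hypothetical illustrations, not the actual SciJava API: a plugin hands its independent tasks to a paradigm object, and whether they run on local threads or on remote HPC nodes is decided by the implementation alone.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Hypothetical abstraction: the plugin sees only this interface and stays
// technology-agnostic; an HPC-backed implementation would ship the same
// tasks to remote compute nodes instead of a local thread pool.
interface ParallelizationParadigm extends AutoCloseable {
    <I, O> List<O> runAll(Function<I, O> task, List<I> inputs) throws Exception;

    @Override
    void close();
}

// Illustrative local implementation backed by a fixed thread pool.
class LocalParadigm implements ParallelizationParadigm {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    @Override
    public <I, O> List<O> runAll(Function<I, O> task, List<I> inputs)
            throws Exception {
        // Submit every input as an independent task...
        List<Future<O>> futures = new ArrayList<>();
        for (I input : inputs) {
            futures.add(pool.submit(() -> task.apply(input)));
        }
        // ...then collect the results in input order.
        List<O> results = new ArrayList<>();
        for (Future<O> future : futures) {
            results.add(future.get());
        }
        return results;
    }

    @Override
    public void close() {
        pool.shutdown();
    }
}

public class ParadigmDemo {
    public static void main(String[] args) throws Exception {
        try (LocalParadigm paradigm = new LocalParadigm()) {
            // A plugin could delegate per-timepoint processing like this,
            // without knowing where the tasks actually execute.
            List<Integer> timepoints = List.of(1, 2, 3, 4);
            List<String> results = paradigm.runAll(
                    tp -> "processed timepoint " + tp, timepoints);
            results.forEach(System.out::println);
        }
    }
}
```

Swapping LocalParadigm for a cluster-backed implementation would leave the plugin code unchanged, which is the extensibility property the paragraph above describes.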

Description

Installation

Usage

HPC Cluster

Execution of the Snakemake pipeline from the implemented Fiji plugin was tested on the Salomon supercomputer, which consists of 1,008 compute nodes, each equipped with two 12-core Intel Haswell processors and 128 GB RAM, providing a total of 24,192 compute cores of x86-64 architecture and 129 TB RAM. Furthermore, 432 nodes are accelerated by two Intel Xeon Phi 7110P accelerators with 16 GB RAM each, providing an additional 52,704 cores and 15 TB RAM. The total theoretical peak performance reaches 2,000 TFLOPS. The system runs Red Hat Linux.

The pipeline was tested on a dataset used in experiments run on the Madmax cluster at MPI-CBG (Schmied et al., 2016). The Madmax cluster had 44 nodes, each with two 6-core Intel Xeon E5-2640 2.5 GHz CPUs (average CPU PassMark 9,498). In comparison, Salomon nodes are equipped with two 12-core Intel Xeon E5-2680v3 2.5 GHz CPUs (average CPU PassMark 18,626). Salomon thus runs a newer generation of Xeon processors (Haswell), providing roughly double the performance of the Sandy Bridge architecture used on Madmax.

Citation

Please note that the SPIM Workflow Manager for HPC plugin available through Fiji is based on a publication. If you use it successfully in your research, please be so kind as to cite our work: