SPIM Workflow Manager For HPC

= HPC Cluster =
Execution of the Snakemake pipeline from the implemented Fiji plugin was tested on the Salomon supercomputer, which consists of 1,008 compute nodes, each equipped with two 12-core Intel Haswell processors and 128 GB RAM, providing a total of 24,192 x86-64 compute cores and 129 TB RAM. Furthermore, 432 nodes are accelerated by two Intel Xeon Phi 7120P accelerators with 16 GB RAM each, providing an additional 52,704 cores and 15 TB RAM. The total theoretical peak performance reaches 2,000 TFLOPS. The system runs Red Hat Linux.
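The headline totals above follow from the per-node figures; a minimal sanity check (the 61 cores per Xeon Phi card is derived from the quoted totals, not stated in the text):

```python
# Re-derive Salomon's headline totals from the per-node figures above.
nodes = 1008
cores_per_node = 2 * 12            # two 12-core Haswell CPUs per node
print(nodes * cores_per_node)      # 24192 compute cores

ram_per_node_gb = 128
print(nodes * ram_per_node_gb)     # 129024 GB, i.e. ~129 TB

accel_nodes = 432
phis_per_node = 2
cores_per_phi = 61                 # implied: 52704 / (432 * 2)
print(accel_nodes * phis_per_node * cores_per_phi)  # 52704 accelerator cores
```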
The pipeline was tested on a dataset used in experiments run on the Madmax cluster at MPI-CBG (Schmied et al. 2016, [https://imagej.net/Automated_workflow_for_parallel_Multiview_Reconstruction]). The Madmax cluster had 44 nodes, each with two 6-core Intel Xeon E5-2640 2.5 GHz CPUs (average CPU PassMark 9,498). In comparison, Salomon nodes are equipped with two 12-core Intel Xeon E5-2680v3 2.5 GHz CPUs (average CPU PassMark 18,626). Salomon thus runs a newer generation of Xeon processors (Haswell) that provides roughly double the per-CPU performance of the Sandy Bridge architecture used on Madmax.
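The "double the performance" claim can be checked directly from the two PassMark scores quoted above:

```python
# Per-CPU comparison using the average PassMark scores quoted in the text.
madmax_passmark = 9498     # 2x Intel Xeon E5-2640 (Sandy Bridge)
salomon_passmark = 18626   # 2x Intel Xeon E5-2680v3 (Haswell)

ratio = salomon_passmark / madmax_passmark
print(f"Salomon/Madmax PassMark ratio: {ratio:.2f}")  # ~1.96, i.e. roughly 2x
```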
= Citation =
Please note that the SPIM Workflow Manager for HPC plugin available through Fiji is based on a publication. If you use it successfully in your research, please be so kind as to cite our work: