= HPC Cluster =
Execution of the ''Snakemake'' pipeline from the implemented Fiji plugin was tested on the [https://docs.it4i.cz/salomon/introduction/ Salomon] supercomputer, which consists of 1 008 compute nodes, each equipped with two 12-core Intel Haswell processors and 128 GB RAM, providing a total of 24 192 x86-64 compute cores and 129 TB RAM. Furthermore, 432 of these nodes are accelerated by two Intel Xeon Phi 7120P accelerators with 16 GB RAM each, providing an additional 52 704 cores and 15 TB RAM. The total theoretical peak performance reaches 2 000 TFLOPS. The system runs Red Hat Linux.
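In cluster mode, ''Snakemake'' submits each rule instance of the workflow as a separate job to the cluster scheduler. As a minimal sketch only, assuming a PBS-managed system such as Salomon, a rule sized to occupy one full 24-core node and a typical cluster invocation might look as follows; the rule name, file patterns, Fiji macro call, queue name, and project ID are illustrative placeholders, not the actual pipeline definition.

<syntaxhighlight lang="python">
# Illustrative Snakemake rule only; names, paths, and the headless Fiji
# call are placeholders, not the actual SPIM pipeline definition.
rule register_timepoint:
    input:
        "raw/spim_TL{tp}_Angle{angle}.czi"
    output:
        "registered/spim_TL{tp}_Angle{angle}.xml"
    threads: 24  # one full Salomon node: 2 x 12-core Haswell CPUs
    shell:
        # hypothetical per-timepoint processing step run via headless Fiji
        "fiji --headless -macro registration.ijm '{input} {output}'"

# A typical way to dispatch such rules to a PBS-managed cluster
# (project ID, queue name, and walltime are placeholders):
#   snakemake --jobs 100 \
#       --cluster "qsub -A PROJECT-ID -q qprod -l select=1:ncpus=24 -l walltime=02:00:00"
</syntaxhighlight>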
The pipeline was tested on a dataset used in experiments run on the Madmax cluster at MPI-CBG [https://imagej.net/Automated_workflow_for_parallel_Multiview_Reconstruction]. The Madmax cluster had 44 nodes, each with two 6-core Intel Xeon E5-2640 CPUs at 2.5 GHz (average CPU PassMark 9 498). In comparison, Salomon nodes are equipped with two 12-core Intel Xeon E5-2680 v3 CPUs at 2.5 GHz (average CPU PassMark 18 626). Salomon thus runs a newer generation of Xeon processors (Haswell) that delivers roughly double the per-CPU performance of the Sandy Bridge processors used on Madmax, as the PassMark figures indicate (18 626 / 9 498 ≈ 2).