SPIM Workflow Manager For HPC
= SPIM data processing pipeline =
Pipeline input parameters are entered by the user into a ''config.yaml'' configuration file. In the first step, the raw .czi data are resaved in parallel on the cluster into an HDF5 container. Similarly, the individual time points are registered in parallel on the cluster, using fluorescent beads as fiducial markers. Subsequently, a non-parallel job executed by ''Snakemake'' consolidates the registration XML files into a single file, followed by time-lapse registration using the beads segmented during the spatial registration step. After this, the pipeline diverges into either parallel content-based fusion or parallel multi-view deconvolution. To achieve this divergence in practice, the ''Snakemake'' pipeline is launched from the Fiji plugin as two separate jobs with two different ''config.yaml'' files, set to execute content-based fusion and deconvolution, respectively. In the final stage of the pipeline, the fusion/deconvolution output is saved into a new HDF5 container. The figure below shows the results of registration, fusion, and deconvolution at different time points.
[[File:Drosophila.PNG|1000px]]
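As an illustration, a minimal sketch of what such a ''config.yaml'' might contain is shown below; the key names (''czi_path'', ''hdf5_output'', ''fusion_method'', and so on) are illustrative assumptions rather than the exact schema used by the plugin.

<pre>
# Illustrative config.yaml sketch (assumed key names, not the plugin's exact schema)
czi_path: "/data/spim/drosophila.czi"       # raw .czi input, resaved to HDF5 in parallel
hdf5_output: "/data/spim/dataset.h5"        # HDF5 container written in the first step
bead_segmentation:                          # fluorescent beads used as fiducial markers
  threshold: 0.005
timelapse_registration: "all-to-reference"  # runs after the registration XMLs are consolidated
fusion_method: "content-based"              # set to "deconvolution" in the second config.yaml
deconvolution_iterations: 10                # only relevant when fusion_method is "deconvolution"
cluster_jobs: 64                            # number of parallel jobs submitted to the cluster
</pre>

Under this assumed layout, the two jobs launched from the Fiji plugin would differ only in the ''fusion_method'' entry (and its related parameters), while the input, registration, and output settings stay the same.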
= Installation =