[ImageJ-devel] [fiji-devel] Imglib2: using threadpools in core algorithms
Lee Kamentsky
leek at broadinstitute.org
Fri Dec 6 07:27:50 CST 2013
Hi all,
On Fri, Dec 6, 2013 at 3:29 AM, Jean-Yves Tinevez <
jean-yves.tinevez at pasteur.fr> wrote:
> Hi all
>
> Each frame of a movie can be processed concurrently: I generate a task for
> each frame and can feed these tasks to any of the interfaces we are
> discussing right now. For this, I do not need to know how many threads are
> allocated to the service: it will decide how to process the tasks I generated.
>
> But there are many algorithms in imglib2 that can process a single image in
> a multithreaded way, by splitting it into small pieces. For instance, you
> can compute the Gauss convolution on 2 threads, and it will split the
> source image into 2 pieces. For this, the algorithm needs to have some info
> on the "multitasking resources" available, right? If you have 24 workers and
> each worker receives one frame to segment, the segmenter needs to know it
> is unwise to split the source image over several extra workers, no?
>
> How is this achieved in real world applications?
>
I think we're in a lucky sweet spot where the size of our data is big and
the number of processors is small. I think you want to amortize the cost of
enqueueing the work, thread signaling, and thread context switching over a
pretty big chunk of data - and that's operating-system and
processor-specific in my experience, so it's best to benchmark. As an
example, if I remember correctly, Ilastik 0.5 breaks its images into
chunks of 10K pixels and runs perhaps 100 filters on each, plus a
random-forest evaluation. In that case, the large-grain operation makes the
optimization question pretty moot since the computation is so expensive
(and doing the chunking at the top level might be a good design choice).
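
Just to make "doing the chunking at the top level" concrete, here is a rough
sketch using plain java.util.concurrent - nothing imglib2-specific, and
processFrame is a made-up stand-in for whatever per-frame work you actually
do - that submits one coarse-grained task per frame to a shared pool:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FrameLevelChunking {

    // Made-up stand-in for the real per-frame work (segmentation, filtering, ...).
    static double processFrame(final int t) {
        double sum = 0;
        for (int i = 0; i < 1000000; i++)
            sum += Math.sin(t + i);
        return sum;
    }

    public static void main(final String[] args) throws Exception {
        final int nFrames = 100;
        final int nThreads = Runtime.getRuntime().availableProcessors();
        final ExecutorService service = Executors.newFixedThreadPool(nThreads);

        // One coarse-grained task per frame; each task runs single-threaded,
        // so the pool is never oversubscribed by nested parallelism.
        final List<Future<Double>> futures = new ArrayList<Future<Double>>();
        for (int t = 0; t < nFrames; t++) {
            final int frame = t;
            futures.add(service.submit(new Callable<Double>() {
                public Double call() {
                    return processFrame(frame);
                }
            }));
        }

        // Wait for everything to finish (and surface any exceptions).
        for (final Future<Double> f : futures)
            f.get();
        service.shutdown();
    }
}

Because each task is single-threaded, the 24 workers Jean-Yves mentions never
end up fighting over extra, nested splits of the same frame.
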
It would be kind of cool if the service could give you an idea of the cost
of running one future and of the size of the thread pool. This is
notoriously difficult to measure, but a rough cut might be to find the
minimum time from enqueuing a future until its execution and multiply that
by 2. If you knew the cost of running your algorithm on N pixels, you could
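
For what it's worth, a crude way to get at that minimum enqueue-to-execution
time with a plain ExecutorService would be something like the sketch below
(the numbers are very machine- and warm-up-dependent, so treat it as a rough
calibration, not a spec):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DispatchOverhead {

    public static void main(final String[] args) throws Exception {
        final ExecutorService service = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Minimum over many trials of (execution start time - submit time);
        // taking the minimum discounts pool warm-up and scheduling noise.
        long minLatency = Long.MAX_VALUE;
        for (int i = 0; i < 1000; i++) {
            final long submitted = System.nanoTime();
            final Future<Long> started = service.submit(new Callable<Long>() {
                public Long call() {
                    return System.nanoTime();
                }
            });
            minLatency = Math.min(minLatency, started.get() - submitted);
        }
        service.shutdown();

        // The "multiply by 2" rough cut for the per-task overhead.
        System.out.println("estimated per-task overhead: "
                + 2 * minLatency + " ns");
    }
}
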
figure out how to slice it. It would also be kind of cool if your imglib
data container (or some helper class that encapsulated this expertise)
could give you a collection of iterators appropriate for the cost of your
operation and the thread service it would be run on.
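
As far as I know no such helper exists, but the arithmetic it would have to
do is simple enough. A hypothetical sketch - the names, the 1% overhead
budget, and the four-chunks-per-thread cap are all inventions of mine, not
imglib2 API - might look like this:

import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: given the total element count, a cost-per-element
// estimate and the measured per-task overhead, return [start, end) intervals
// that are big enough to amortize that overhead, capped at roughly four
// chunks per thread.
public class CostAwareChunker {

    public static List<long[]> chunks(final long nElements,
            final double nanosPerElement, final long overheadNanos,
            final int nThreads) {
        // Smallest chunk for which the dispatch overhead is at most ~1% of
        // the per-chunk compute time.
        long chunkSize = (long) Math.ceil(100.0 * overheadNanos / nanosPerElement);
        // But no more than ~4 chunks per thread, so a tiny overhead estimate
        // does not produce an absurd number of tasks.
        chunkSize = Math.max(chunkSize,
                (nElements + 4L * nThreads - 1) / (4L * nThreads));
        chunkSize = Math.max(chunkSize, 1);

        final List<long[]> intervals = new ArrayList<long[]>();
        for (long start = 0; start < nElements; start += chunkSize)
            intervals.add(new long[] { start,
                    Math.min(start + chunkSize, nElements) });
        return intervals;
    }
}

Each interval would then become one Callable, exactly as in the per-frame
sketch above, and the container (or a cursor factory) only needs to know how
to hand out a cursor over a given [start, end) range.
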
> best
> jy
>