Hi Lee,

On Fri, Jun 3, 2011 at 1:44 PM, Lee Kamentsky <leek@broadinstitute.org> wrote:

> I'm running into a number of problems with overlays that are caused
> by the fact that the EuclideanSpace is now defined on the Dataset
> and not on the Display used to composite datasets and overlays.
> There are more than a few issues:

Over the past few weeks, and during the Dresden hackathon, I finally
found time to pursue a lot of these issues, and have made changes to
the codebase to improve the data/display hierarchy:

- The ImageDisplay interface now implements CalibratedInterval, which
  extends not only EuclideanSpace but also Interval. Further, unlike
  Data objects, ImageDisplay now also implements PositionableByAxis
  (which extends Localizable and Positionable). So an ImageDisplay
  always has a current position in the N-dimensional structure, and
  can report what that is.

- Data objects (Datasets and Overlays) now implement
  CalibratedInterval, but not PositionableByAxis, since it makes no
  sense to ask a Data object what its current position is.

- However, the DataView object, which wraps a Data and provides
  visualization-specific metadata about how that Data is currently
  being visualized, *does* implement PositionableByAxis.

- An ImageDisplay is still a collection of DataViews as before, but
  uses the CombinedInterval class to combine the N-dimensional spaces
  of its constituent views into a single aggregate space.

Hopefully these changes are along the lines of what we discussed when
I visited Broad all those months ago.
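To make the resulting relationships concrete, here is a minimal,
self-contained sketch of who implements what. The stubbed interfaces
at the top and the accessor names are simplified stand-ins for
illustration only, not the actual ImageJ or ImgLib2 declarations:

import java.util.List;

// --- simplified stand-ins for the real space/interval interfaces ---
interface EuclideanSpace { int numDimensions(); }
interface Interval extends EuclideanSpace { long min(int d); long max(int d); }
interface CalibratedSpace extends EuclideanSpace { double calibration(int d); }
interface CalibratedInterval extends Interval, CalibratedSpace { }
interface PositionableByAxis { /* Localizable + Positionable, per axis */ }

// --- the data/display hierarchy described above ---
interface Data extends CalibratedInterval { }      // Dataset, Overlay extend this
interface DataView extends PositionableByAxis {    // wraps a Data object
    Data getData();                                // hypothetical accessor
}
interface ImageDisplay extends CalibratedInterval, PositionableByAxis {
    List<DataView> getViews();                     // hypothetical accessor
    // backed by a CombinedInterval that merges the views' individual
    // spaces into a single aggregate space
}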
<br></div><div class="im"><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div bgcolor="#ffffff">
<ul><li>Trac issue ImageJ 558 - the user takes a Z-stack, multichannel
image and thresholds it to get a BinaryMaskOverlay and
associated ROI. The ROI is properly defined in a 4-dimensional
space. However, when displayed, a single X/Y plane should be
sampled and displayed, but there is no mechanism to retrieve the
plane being displayed or the color display mode. The code
iterates through the entire stack and that's what causes it to
update slowly.</li></ul></div></blockquote></div><div><br>I haven't had time to examine this issue in much detail, but the fix you implemented long ago seems fine for the time being.<br><br></div><div class="im">
<blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#ffffff">
<ul><li>Datasets have axes which capture the metadata of what a
particular dimension means. Overlays don't have that.<br></li></ul></div></blockquote></div><div><br>As mentioned above, things have now been unified so that all Data objects (including both Datasets and Overlays) are N-dimensional with axes, by extending the CalibratedInterval interface, which in turn extends Interval and CalibratedSpace (which extends EuclideanSpace).<br>
> - Channels are really categorical, not ordinal - there's no reason
>   the red channel is zero, the green is one and the blue is two and,
>   with a channel-stack, the DAPI channel in one image might be the PI
>   channel in a second. You can properly relate one channel to the
>   other through metadata, but overlays don't have that.

I think channels are sometimes categorical, and sometimes ordinal.
For example, a hyperspectral dataset that provides 31 channels
ranging from 400 nm through 550 nm, sampled every 5 nm, would be a
continuous domain, similar to other dimensions. But if you have two
channels, one transmitted channel captured using a grayscale camera
and another channel detected from fluorescence caused by laser
scanning, those are definitely categorical.

Still, ultimately they become ordinal in that you must provide an
index for each channel. That doesn't mean you should assume
continuity though, of course.
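Purely as an illustration of the two readings of a channel axis (the
wavelengths and channel names below are invented for the example and
not tied to any ImageJ API):

import java.util.LinkedHashMap;
import java.util.Map;

public class ChannelExamples {
    public static void main(String[] args) {
        // Ordinal/continuous case: a hyperspectral stack where channel
        // index i maps directly onto a wavelength (400 nm + 5 nm * i).
        int channels = 31; // 400..550 nm sampled every 5 nm
        for (int i = 0; i < channels; i++) {
            double wavelengthNm = 400 + 5.0 * i;
            System.out.println("channel " + i + " = " + wavelengthNm + " nm");
        }

        // Categorical case: the index is just a slot; the meaning lives
        // in per-channel metadata such as a name.
        Map<String, Integer> channelIndex = new LinkedHashMap<String, Integer>();
        channelIndex.put("Transmitted", 0);
        channelIndex.put("Fluorescence", 1);
        // Match by name rather than assuming a fixed ordering.
        System.out.println("Fluorescence is channel " + channelIndex.get("Fluorescence"));
    }
}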
<div bgcolor="#ffffff"><ul><li>You might want to composite datasets - take two images and
place them adjacently. What happens if they have different axes?
What happens if they have different channel layouts?</li></ul></div></blockquote></div><div><br>In general, some math needs to happen. The new <a href="http://fiji.sc/cgi-bin/gitweb.cgi?p=imagej2/.git;a=blob;f=core/data/src/main/java/imagej/data/CombinedInterval.java;h=24aec4a1305a28aaead11709733ef96e677a1672;hb=refs/heads/svn/trunk" target="_blank">CombinedInterval</a> class handles the union of multiple CalibratedInterval objects, fairly naively at the moment, but with the potential to improve the logic as you suggest in the future. We are also planning to tap into Unidata's <a href="https://github.com/ctrueden/ucar-units" target="_blank">Ucar Units</a> project to assist with this.<br>
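To make the "some math" a bit more concrete, here is a rough sketch of
the kind of naive union CombinedInterval currently performs: match
dimensions by axis type and take the enclosing min/max along each
shared axis. This is a simplification of the idea, not the actual code
of the class linked above:

import java.util.LinkedHashMap;
import java.util.Map;

public class IntervalUnion {

    /** Minimal stand-in for a calibrated interval: per-axis min/max. */
    public static class SimpleInterval {
        final Map<String, long[]> extents = new LinkedHashMap<String, long[]>();
        public SimpleInterval axis(String type, long min, long max) {
            extents.put(type, new long[] { min, max });
            return this;
        }
    }

    /** Combine intervals: union of axes, enclosing bounds per shared axis. */
    public static SimpleInterval union(SimpleInterval... intervals) {
        SimpleInterval combined = new SimpleInterval();
        for (SimpleInterval interval : intervals) {
            for (Map.Entry<String, long[]> e : interval.extents.entrySet()) {
                long[] bounds = combined.extents.get(e.getKey());
                if (bounds == null) {
                    combined.axis(e.getKey(), e.getValue()[0], e.getValue()[1]);
                }
                else {
                    bounds[0] = Math.min(bounds[0], e.getValue()[0]);
                    bounds[1] = Math.max(bounds[1], e.getValue()[1]);
                }
            }
        }
        return combined;
    }

    public static void main(String[] args) {
        SimpleInterval a = new SimpleInterval().axis("X", 0, 511).axis("Y", 0, 511).axis("Z", 0, 9);
        SimpleInterval b = new SimpleInterval().axis("X", 0, 1023).axis("Y", 0, 767).axis("Time", 0, 4);
        SimpleInterval c = union(a, b);
        for (Map.Entry<String, long[]> e : c.extents.entrySet()) {
            System.out.println(e.getKey() + ": [" + e.getValue()[0] + ", " + e.getValue()[1] + "]");
        }
        // Prints X: [0, 1023], Y: [0, 767], Z: [0, 9], Time: [0, 4]
    }
}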
> So, what I'm thinking is the following:
>
> - A display represents the EuclideanSpace that's used to composite
>   overlays and datasets. Think of the display as a big N-dimensional
>   container and the overlays and datasets float inside that
>   container.
>
>   - The display has a set of axes that are the union of all of the
>     axes that appear in the view data objects.
>
>   - There are two behaviors for a view object that does not have an
>     axis in the space. Perhaps the user selects the behavior?:
>
>     - The value in space outside of a single plane defined by a
>       coordinate location along the axis is NaN or null.
>
>     - The value in space along the missing axis is the same at all
>       locations along that axis.

Yes, this is as we discussed at Broad a few months back. It is really
working now! :-)

Currently, if a constituent DataView lacks an axis from the aggregate
space, the value for that axis can be specified explicitly, and
defaults to 0 otherwise. So, for example, all Overlays currently have
the X and Y axes, and no other axes. Embedding a rectangle overlay in
a 4D dataset (with, say, XYZT) causes that rectangle to be visible at
Z=0, T=0, and at no other Z and T positions, unless the
DataView.setPosition(long, AxisType) method is used to specify a
different position for that dimension.

In the future, we could enable the user to specify an alternative
behavior as you suggest, such that the overlay becomes visible at
*all* positions of that axis. But for now it is always a constant
value.
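As a rough sketch of those semantics: only setPosition(long, AxisType)
is the real method mentioned above; the tiny DataView stand-in, the
axis enum, and the default-to-0 bookkeeping here are invented for the
example.

import java.util.HashMap;
import java.util.Map;

public class OverlayPositionDemo {
    enum AxisType { X, Y, Z, TIME }

    /** Minimal stand-in for a DataView's per-axis position state. */
    static class DataViewStub {
        private final Map<AxisType, Long> position = new HashMap<AxisType, Long>();
        long getLongPosition(AxisType axis) {
            Long p = position.get(axis);
            return p == null ? 0L : p; // axes without an explicit value default to 0
        }
        void setPosition(long pos, AxisType axis) { position.put(axis, pos); }
    }

    public static void main(String[] args) {
        // A rectangle overlay view inside an XYZT display: with no
        // explicit position it sits at Z=0, T=0.
        DataViewStub rectangleView = new DataViewStub();
        System.out.println("Z=" + rectangleView.getLongPosition(AxisType.Z)
            + ", T=" + rectangleView.getLongPosition(AxisType.TIME));

        // Override the position so the rectangle shows at Z=3, T=7 instead.
        rectangleView.setPosition(3, AxisType.Z);
        rectangleView.setPosition(7, AxisType.TIME);
        System.out.println("Z=" + rectangleView.getLongPosition(AxisType.Z)
            + ", T=" + rectangleView.getLongPosition(AxisType.TIME));
    }
}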
> - A display window holds the information about what plane is being
>   displayed and how the channels are composited.
>
>   - The views are asked to display according to the plane
>     information. A plane is an interval where the min=max for all
>     dimensions but X and Y. The views could construct appropriate
>     projectors based on the interval. The display window should also
>     control the channel compositing information and the views should
>     reference the display window's compositing information instead of
>     maintaining their own copies.

Right, as things stand now, the ImageDisplay tracks its current
position (i.e., it implements PositionableByAxis) and the DisplayPanel
provides the GUI that actually presents the information onscreen.
However, one vital remaining task is to finish decoupling these two
classes. Currently there is still some unfortunate coupling between
ImageDisplay, ImageCanvas, DisplayPanel and DisplayWindow, which needs
to change. The ImageDisplay and DataViews should be completely
UI-agnostic, with the active user interface components subscribing to
display events and updating themselves accordingly. This will help
eliminate some problematic sections of code, particularly
SwingDatasetView and SwingOverlayView, which we discourage ImageJ
plugins from using due to their Swing-specific nature.
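As a rough illustration of the direction we want to go, a Swing panel
would subscribe to display events rather than being driven by the
ImageDisplay directly. Every name in this sketch is a hypothetical
stand-in, not an existing ImageJ class:

// A display-related event, carrying a reference to the display that changed.
class DisplayPositionChangedEvent {
    private final Object display; // stand-in for the ImageDisplay
    DisplayPositionChangedEvent(Object display) { this.display = display; }
    Object getDisplay() { return display; }
}

// Generic subscriber contract: UI components implement this and register
// with an event service, instead of being called by ImageDisplay directly.
interface EventSubscriber<E> {
    void onEvent(E event);
}

// The Swing layer reacts to events; ImageDisplay and DataView remain
// completely unaware of Swing.
class SwingDisplayPanel implements EventSubscriber<DisplayPositionChangedEvent> {
    @Override
    public void onEvent(DisplayPositionChangedEvent event) {
        // repaint the canvas and update dimensional sliders for
        // event.getDisplay() here
    }
}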
> - Perhaps there is a SpatialDataObject or perhaps the DataObject is
>   always spatial. In any case, the axis information and channel
>   metadata should be pushed up the inheritance hierarchy so that
>   Overlay has a first-class representation of that and operations
>   that generate overlays from datasets copy the information to the
>   overlays they create.

We went the route of "data objects are always spatial." And yeah, a
lot of functionality was pushed up into AbstractData, AbstractDataView
and AbstractImageDisplay. Hopefully this will reduce the amount of
dataset- and overlay-specific code throughout the system.
> - There is a spectral dimension, but the channel axis doesn't capture
>   that. Channels are categorical and should have names so they can be
>   matched in the same way that axes are matched.

It is a good point that we should add some support for categorical
dimensions in general. It is not enough to have calibrations; it
should be possible to label each axis position individually, so that,
for example, the DAPI channel of one image can be matched to the DAPI
channel of another regardless of index. We will need to revisit this
idea later, when time permits.
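One possible shape for that, purely as a hypothetical sketch (nothing
like this exists in the codebase yet):

import java.util.Arrays;
import java.util.List;

// Hypothetical: a typed axis that also carries a label per position,
// so categorical dimensions (like channels) can be matched by name
// across images rather than by index.
public class LabeledAxis {
    private final String type;          // e.g. "Channel"
    private final List<String> labels;  // one label per axis position

    public LabeledAxis(String type, List<String> labels) {
        this.type = type;
        this.labels = labels;
    }

    public String getType() { return type; }

    public String label(int position) { return labels.get(position); }

    /** Find the position whose label matches, or -1 if absent. */
    public int indexOf(String label) { return labels.indexOf(label); }

    public static void main(String[] args) {
        LabeledAxis channelsA = new LabeledAxis("Channel", Arrays.asList("DAPI", "GFP", "PI"));
        LabeledAxis channelsB = new LabeledAxis("Channel", Arrays.asList("PI", "DAPI"));
        // Match the DAPI channel across two images by label, not by index.
        System.out.println("DAPI in A: " + channelsA.indexOf("DAPI")); // 0
        System.out.println("DAPI in B: " + channelsB.indexOf("DAPI")); // 1
    }
}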
> It's some work to refactor these things, but as the code base grows
> and becomes more entrenched it will become increasingly difficult to
> refactor, so we should do it sooner rather than later. I don't think
> this is much coding at this point and if you look at the code, there
> are places where this organization clarifies some things.

The coding work has been nontrivial, but certainly doable. There is
more left to do as well, but I think we are in reasonable shape.

Thanks very much for your comments! I really appreciate how much
thought and effort you have put into the ImageJ design over the past
year.

Regards,
Curtis