TrackMate Algorithms

!! THIS IS A DRAFT !!

The documentation on this page is very incomplete!



____

This page documents the current components of TrackMate. TrackMate has a modular design, meaning that it is made of different modules that each have a specific role. Developers can build their own module, reuse the existing ones together with the GUI, and thereby achieve quick development. The module types are (in the order you meet them when executing the plugin):

  1. Spot detectors. Taking your image data, they detect spots in it.
  2. Spot analyzers. Each spot can receive a wide range of features, calculated from its location, radius and the image data. For instance: max intensity in the spot, rough morphology, etc. They are then used to filter out spurious spots and retain only good ones for the subsequent tracking step.
  3. Views. Display the segmentation and tracking results overlaid on your image data.
  4. Spot trackers. Take the filtered spots and link them together to build tracks.
  5. Track analyzers. Like spot analyzers, but they operate on whole tracks. They can be used to report track mean velocity, displacement, etc. They are also used to filter out spurious tracks.
  6. Actions. Miscellaneous actions you can take on the complete result of the tracking process. They can be used to copy the track overlay to another image, launch a 3D viewer, export the results to a simple format, generate a track stack, etc.

We describe here, as best we can, the current modules that are shipped with TrackMate.


Spot detectors

Behind this barbaric name stands the part responsible for spot detection in your image. The algorithms currently implemented are very generic and naive. They will most likely fail for complicated cases such as touching objects, very weak SNR, etc. The three detectors currently available are all based on Laplacian of Gaussian filtering, which we describe below.


Difference of Gaussian particle detection (DoG detector)

Given d, an approximate expected particle diameter determined upon inspection, two Gaussian filters are produced with standard deviations σ₁ and σ₂:

  • σ₁ = 1 / (1 + √2 ) × d
  • σ₂ = √2 × σ₁

The image is filtered using these two Gaussians, and the result of the second filter (largest sigma) is subtracted from the result of the first filter (smallest sigma). This yields a smoothed image with sharp local maxima at particle locations. A detection spot is then created for each of these maxima, and an arbitrary quality feature is assigned to the new spot by taking the smoothed image value at the maximum. If two spots are found to be closer than the expected radius d/2, the one with the lower quality is discarded.

To improve the localization accuracy, an extra step is taken to yield a sub-pixel localization of the spots. The position of each spot is recalculated using a simple parabolic interpolation scheme, as in [Lowe]. The quality feature is also interpolated using this scheme.

A large number of spurious spots are created by finding local maxima. These spurious spots are discarded in an extra step, by applying a threshold on the quality feature computed during segmentation. The value of this threshold is set manually, to match the SNR of the input image. The spots that pass this threshold are then retained for subsequent particle linking.
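
The following is a minimal Python/NumPy sketch of the pipeline just described, given purely as an illustration: it assumes a 2D image, omits the d/2 proximity pruning and the sub-pixel refinement, and dog_detect is a hypothetical helper, not TrackMate's actual Java code.

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_detect(image, diameter, quality_threshold):
    """Toy DoG detector: returns (row, col, quality) for retained local maxima."""
    image = image.astype(float)                  # avoid integer truncation
    sigma1 = diameter / (1.0 + np.sqrt(2.0))     # sigma_1 = d / (1 + sqrt(2))
    sigma2 = np.sqrt(2.0) * sigma1               # sigma_2 = sqrt(2) x sigma_1
    # Subtract the coarser filter from the finer one: sharp peaks at particles.
    dog = gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)
    # Local maxima above the manually chosen quality threshold become spots.
    peaks = (dog == maximum_filter(dog, size=3)) & (dog > quality_threshold)
    rows, cols = np.nonzero(peaks)
    return [(r, c, dog[r, c]) for r, c in zip(rows, cols)]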



Spot analyzers

Spot features, such as 'Max intensity', 'Estimated diameter', etc., are calculated for all spots just after the initial filtering step. They are then used to select spots, based on filters set to retain only spots with a given feature below or above a specified threshold.

Incidentally, the initial filtering is a good way to limit the spot feature calculation time, by discarding spots you know to be spurious beforehand. With the current features this is not much of a concern: most feature analyzers are computationally cheap.


Mean, Median, Min, Max, Total intensity and its Standard Deviation

The plain statistical estimates are simply calculated from all the values for pixels within the physical radius from the spot center.
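
As an illustration only, a per-spot computation of these statistics could look like the following NumPy sketch (assumptions: 2D image, center and radius given in pixels; spot_intensity_stats is a hypothetical helper, whereas TrackMate works in calibrated physical units).

import numpy as np

def spot_intensity_stats(image, center, radius):
    # Collect all pixel values within the given radius of the spot center.
    rr, cc = np.indices(image.shape)
    inside = (rr - center[0]) ** 2 + (cc - center[1]) ** 2 <= radius ** 2
    values = image[inside]
    return {
        "mean": values.mean(), "median": np.median(values),
        "min": values.min(), "max": values.max(),
        "total": values.sum(), "std": values.std(),
    }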


Contrast & Signal/Noise ratio

This contrast follows the Michelson contrast definition: C = (I_in - I_out) / (I_in + I_out), where I_in is the mean intensity inside the spot volume (using the physical radius), and I_out is the mean intensity in a ring ranging from the spot radius to twice its radius.

The spot's SNR is computed as (I_in - I_out) / std_in, where std_in is the standard deviation of the pixel intensities within the spot.

These two values depend greatly on the correctness of the radius of each spot. Negative values might be caused by an incorrect radius.
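
A hedged NumPy sketch of these two definitions, under the same simplifying assumptions as above (2D image, pixel units, hypothetical helper name):

import numpy as np

def contrast_and_snr(image, center, radius):
    rr, cc = np.indices(image.shape)
    dist2 = (rr - center[0]) ** 2 + (cc - center[1]) ** 2
    inside = dist2 <= radius ** 2                                # spot volume
    ring = (dist2 > radius ** 2) & (dist2 <= (2 * radius) ** 2)  # radius .. 2 x radius
    i_in, i_out = image[inside].mean(), image[ring].mean()
    contrast = (i_in - i_out) / (i_in + i_out)                   # Michelson contrast
    snr = (i_in - i_out) / image[inside].std()                   # spot SNR
    return contrast, snr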


Estimated diameter

This feature estimates an optimal diameter for each spot, based on contrast calculation.

The mean pixel intensity is calculated in 20 concentric, tangent rings, centered at the target spot location, with radii ranging from a tenth of the spot radius to twice its radius. The contrast at a given radius is defined as the difference between the mean intensity of the ring whose inner radius is the radius considered and that of the previous ring.

The estimated diameter is then defined as the radius that maximizes this contrast. A quadratic interpolation is performed to improve diameter accuracy.
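
The sketch below illustrates this procedure under stated assumptions (2D image, pixel units, evenly spaced rings, the diameter taken as twice the best radius, and a simple parabolic refinement); estimate_diameter is a hypothetical helper, and details may differ from TrackMate's implementation.

import numpy as np

def estimate_diameter(image, center, radius, n_rings=20):
    rr, cc = np.indices(image.shape)
    dist = np.sqrt((rr - center[0]) ** 2 + (cc - center[1]) ** 2)
    # Ring boundaries from radius/10 to 2 x radius.
    bounds = np.linspace(radius / 10.0, 2.0 * radius, n_rings + 1)
    means = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        ring = image[(dist >= lo) & (dist < hi)]
        means.append(ring.mean() if ring.size else 0.0)
    means = np.array(means)
    # Contrast at a boundary: drop in mean intensity relative to the previous ring.
    contrast = means[:-1] - means[1:]
    i = int(np.argmax(contrast))
    best_radius = bounds[i + 1]
    # Parabolic (quadratic) interpolation around the maximum for extra accuracy.
    if 0 < i < len(contrast) - 1:
        y0, y1, y2 = contrast[i - 1], contrast[i], contrast[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            best_radius += 0.5 * (bounds[1] - bounds[0]) * (y0 - y2) / denom
    return 2.0 * best_radius   # diameter taken as 2 x best radius (assumption)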





Spot trackers

LAP trackers

We implemented the renowned Linear Assignment Problem (LAP) method contributed by Jaqaman and colleagues[1], with minor modifications to the linking cost calculations. We repeat here the ideas found in the reference paper.

Particle linking happens in two steps: track segment creation from frame-to-frame particle linking, then track segment linking to achieve gap closing. The mathematical formulation used for both steps is the linear assignment problem (LAP): a cost matrix is assembled, containing all possible assignment costs. Actual assignments are retrieved by solving this matrix for the minimal total cost. We describe first how cost matrices are arranged, then how individual costs are calculated.


Cost matrix for frame-to-frame linking

In the first step, two consecutive frames are inspected for linking. Each spot of the first frame is offered to link to any spot in the next frame, or not to link at all. This takes the shape of a (n+m) x (n+m) matrix (n is the number of spots in frame t, m is the number of spots in frame t+1), that can be divided into 4 quadrants (a schematic assembly of this matrix is sketched right after the list below).

  • The top-left quadrant (size n x m) contains the costs for linking a spot i in frame t to any spot j in frame t+1.
  • The top-right quadrant (size n x n) contains the costs for a spot i in frame t not to create a link with the next frame (yielding a segment stop).
  • The bottom-left quadrant (size m x m) contains the costs for a spot j in frame t+1 not to have any link with the previous frame (yielding a segment start).
  • The bottom-right quadrant (size m x n) is the auxiliary block mathematically required by the LAP formalism. A detailed explanation for its existence is given in the supplementary note 3 of [1]. This quadrant is built by taking the transpose of the top-left quadrant, and replacing all non-blocking costs by the minimal cost.
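
To make this layout concrete, here is a hedged NumPy sketch that assembles such a matrix from a pre-computed n x m block of linking costs (frame_to_frame_matrix is a hypothetical helper; the alternative cost placed on the diagonals follows the 1.05 × max rule described further down):

import numpy as np

def frame_to_frame_matrix(link_costs, blocking=1e9):
    """Assemble the (n+m) x (n+m) LAP matrix from an n x m block of linking costs."""
    n, m = link_costs.shape
    finite = link_costs[link_costs < blocking]
    alt = 1.05 * finite.max()                 # alternative (non-linking) cost
    top_right = np.full((n, n), blocking)     # segment stops, alt on the diagonal
    np.fill_diagonal(top_right, alt)
    bottom_left = np.full((m, m), blocking)   # segment starts, alt on the diagonal
    np.fill_diagonal(bottom_left, alt)
    bottom_right = link_costs.T.copy()        # auxiliary block: transposed top-left
    bottom_right[bottom_right < blocking] = finite.min()
    return np.block([[link_costs, top_right],
                     [bottom_left, bottom_right]])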


Solving LAP

To solve this LAP, we rely on the Munkres & Kuhn algorithm[2], which solves the problem in polynomial time (O(n³)). The algorithm returns the assignment list that minimizes the sum of the assignment costs.
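
For illustration only, the same kind of problem can be solved in Python with SciPy's linear_sum_assignment on a toy matrix; TrackMate itself relies on a Java implementation of the Munkres & Kuhn algorithm.

import numpy as np
from scipy.optimize import linear_sum_assignment

costs = np.array([[ 4.0 , 25.0 , 26.25],
                  [ 9.0 , 16.0 , 26.25],
                  [26.25, 26.25,  0.0 ]])       # toy cost matrix
rows, cols = linear_sum_assignment(costs)       # minimal total-cost assignment
pairs = [(int(r), int(c)) for r, c in zip(rows, cols)]
print(pairs, costs[rows, cols].sum())           # [(0, 0), (1, 1), (2, 2)] 20.0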

The frame-to-frame linking described above is first repeated for all frame pairs. This yields a series of non-branching track segments. A track segment may start or stop because of a missing detection, or because of a merge or split event, which is not taken into account at this stage. A second step, in which track segments themselves (and not only spots) are offered to link to each other, is needed, and is described further down.


Calculating linking costs

When calculating costs, we deviate slightly from the original paper by Jaqaman et al.[1]. In the paper, costs depend solely on the spot-to-spot distance, possibly weighted by the difference in spot intensity. Here, we let the user tune the costs by adding penalties on spot features, as explained below.

The user is asked for a maximal allowed linking distance (entered in physical units), and for a series of spot features along with penalty weights. These parameters are used to tune the cost matrices. For two spots that may link, the linking cost is calculated as follows:

  1. The distance D between the two spots is calculated.
  2. If the spots are separated by more than the maximal allowed distance, the link is forbidden, and the cost is set to infinity (i.e. the blocking value). If not,
  3. For each feature in the map, a penalty p is calculated as
     p = 3 × W × |f₁ - f₂| / (f₁ + f₂)
     where W is the weight associated with the feature in the map. This expression is such that:
     • there is no penalty if the two feature values f₁ and f₂ are the same;
     • with a weight of 1, the penalty is 1 if one value is the double of the other;
     • the penalty is 2 if one value is 5 times the other.
  4. All penalties are summed, to form P = (1 + ∑ p).
  5. The cost is set to the square of the product: C = ( D × P )²

If the user feeds no penalty, the costs are simply the distances squared.
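
A short Python sketch of this recipe, offered as an assumption-laden illustration rather than TrackMate's actual code (linking_cost is a hypothetical helper; positions are plain coordinate tuples and features plain dictionaries):

import math

def linking_cost(pos1, pos2, feats1, feats2, weights, max_distance,
                 blocking=float("inf")):
    d = math.dist(pos1, pos2)                  # 1. distance D between the spots
    if d > max_distance:                       # 2. too far apart: forbidden link
        return blocking
    penalties = 0.0
    for name, w in weights.items():            # 3. one penalty per weighted feature
        f1, f2 = feats1[name], feats2[name]
        if f1 + f2 != 0:
            penalties += 3.0 * w * abs(f1 - f2) / (f1 + f2)
    return (d * (1.0 + penalties)) ** 2        # 4-5. C = (D x P)^2 with P = 1 + sum(p)

With an empty weights dictionary, the cost reduces to the squared distance, as stated above.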


Calculating non-linking costs

The top-right and bottom-left quadrants of the frame-to-frame linking matrix contain the costs associated with track segment termination or initiation (a spot not linking to any spot in the next or previous frame). Each of these two blocks is a square matrix with the blocking value everywhere, except along the diagonal, for which an alternative cost is computed. Following Jaqaman[1], this cost is set to be

C_alt = 1.05 × max( C )

where C denotes the costs of the top-left quadrant.
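
For example, if the largest cost found in the top-left quadrant is 25, the alternative cost placed on these diagonals is 1.05 × 25 = 26.25.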

Cost calculation & Brownian motion

Without penalties and with a maximal allowed linking distance, the returned solution is the one that minimizes the sum of squared distances. This actually corresponds to the case where the motion of the spots is governed by Brownian motion. See for instance Crocker and Grier[3].

By adding feature penalties, we aim at favoring links between particles that "resemble" each other. In plain single-particle linking problems, spots are generally all alike, and they differ only by position. However, there is a variety of problems for which these feature penalties can add robustness to the tracking process.

For instance, we originally developed TrackMate for the semi-automated lineaging of C. elegans embryos, using a strain with fluorescent nuclei. Cells that are dividing have a fluorescence distribution which is very different from that of non-dividing cells, and this can be exploited for robust tracking.


Track segment linking

In a second step, the track segments built above are offered to link to each other. Jaqaman and colleagues propose to exploit the LAP framework again for this step. A new cost matrix is generated, but this time the following events are considered:

  • The end of a track segment is offered to link to any other track segment start. This corresponds to gap-closing events, where a link is created typically over two spots separated by a missed detection.
  • The start of a track segment is offered to link to the spots in the central part (not start, not end) of any other track segment. This corresponds to splitting events, where a track branches into two sub-tracks.
  • The end of a track segment is offered to link to the spots in the central part of any other track segment. This corresponds to merging events, where two tracks merge into one. We have not yet encountered this case in the life sciences.
  • A spot part of any track segment is offered not to create any link.

The second cost matrix has a shape that resembles that of the first cost matrix, calculated for frame-to-frame linking, and is best described in the original article.

As before, we modified the way costs are calculated, and re-used the feature penalty framework described above. In addition, the user must provide a maximal time difference for linking; candidate links spanning a larger time difference are forbidden. Careful: this maximal time is expressed in physical units, not in numbers of frames.
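
As a rough illustration of one entry of this second matrix, the gap-closing cost between the end of one segment and the start of another could be sketched as below, re-using the hypothetical linking_cost helper introduced earlier and representing spots as plain dictionaries (an assumption for the example, not TrackMate's data model).

def gap_closing_cost(end_spot, start_spot, weights, max_distance, max_time_gap,
                     blocking=float("inf")):
    dt = start_spot["t"] - end_spot["t"]       # time difference, in physical units
    if dt <= 0 or dt > max_time_gap:           # only forward links within the allowed gap
        return blocking
    return linking_cost(end_spot["pos"], start_spot["pos"],
                        end_spot["features"], start_spot["features"],
                        weights, max_distance, blocking)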




  1. Jaqaman et al., "Robust single-particle tracking in live-cell time-lapse sequences", Nat Methods. 2008 Aug;5(8):695-702.
  2. J. Munkres, "Algorithms for the Assignment and Transportation Problems", Journal of the Society for Industrial and Applied Mathematics, 5(1):32–38, March 1957.
  3. Crocker and Grier, "Methods of Digital Video Microscopy for Colloidal Studies", J Colloid Interf Sci (1996) vol. 179 (1) pp. 298-310.