Getting started with TrackMate

Revision as of 06:56, 14 January 2012 by JeanYvesTinevez (talk | contribs) (WIP)


This tutorial is the starting point for TrackMate users. It explains how the plugin works by walking you through a simple case, using an easy image.

The TrackMate plugin provides a way to semi-automatically segment spots or roughly spherical objects in a 2D or 3D image, and track them over time. It follows the classical scheme in which the segmentation step and the particle-linking step are separated. Each step is therefore handled in the user interface by a specific panel, and you will go back and forth through them. Also, TrackMate is like a fishing net with small holes: it will find as many spots as it can, even ones you are not interested in. So there is a step to filter them out before tracking. In these respects, TrackMate resembles somewhat the Spot Segmentation Wizard of Imaris™.

The test image

Grab the image we will use for this tutorial here, and open it in Fiji.

TrackMate FakeTracks.png

This is a 128×128 stack of 50 frames, uncalibrated. It is noisy, but it is still a very easy use case: there are at most 4 spots per frame, they are well separated, they are about the same size, and the background is uniform. It is such an ideal case that you would not need TrackMate to deal with it. But for this first tutorial, it will help us get through TrackMate without being bothered by difficulties.

Also, if you look carefully, you will see that there are two splitting events - where a spot seems to divide into two spots in the next frame, one merging event - the opposite, and a gap-closing event - where a spot disappears for one frame then reappears a bit further away. TrackMate is made to handle these events, and we will see how.

Starting TrackMate

With this image selected, launch TrackMate from the menu Plugins > Tracking > Track Mate or from the Command Launcher. The TrackMate GUI appears next to the image, displaying the starting dialog panel.

This first panel allows you to check the spatial and temporal calibration of your data. It is very important to get it right, since everything afterwards will be based on physical units, not pixel units (for instance μm and minutes, not pixels and frames). In our case it actually does not matter, since our test image has a dummy calibration (1 pixel = 1 pixel).
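To make the role of calibration concrete, here is a minimal Python sketch (with made-up calibration values, since the test image is uncalibrated) of how a measurement in pixels and frames converts into physical units:

```python
# Hypothetical calibration values, for illustration only - the tutorial
# image has a dummy calibration (1 pixel = 1 pixel, 1 frame = 1 frame).
pixel_size_um = 0.2       # micrometers per pixel
frame_interval_min = 2.0  # minutes per frame

# A displacement measured in pixels between two consecutive frames:
displacement_px = 5.0

# Everything downstream (distances, velocities) is expressed in
# physical units, so a wrong calibration skews all of it.
displacement_um = displacement_px * pixel_size_um
speed_um_per_min = displacement_um / frame_interval_min

print(displacement_um)    # 1.0 (um)
print(speed_um_per_min)   # 0.5 (um/min)
```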

It is also critical to check the dimensionality of the image. In our case, we have a 2D time-lapse of 50 frames. If metadata are missing or cannot be read, ImageJ tends to assume that stacks are always Z-stacks over a single time-point.

If the calibration or dimensionality of your data is not right, I recommend changing it in the image metadata itself, using Image > Properties (Ctrl+Shift+P). Then press the 'Refresh' button on the TrackMate start panel to grab the changes.

The segmentation step identifies where the objects are in the time-lapse image. All objects in all frames are identified in this first step (segmentation and tracking are not performed simultaneously). The method used here is the Laplacian of Gaussian (LoG), followed by identification of regional maxima: connected components of pixels with intensity strictly higher than all bordering pixels.
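The detection scheme above can be sketched in a few lines. The following Python snippet is a toy 1D illustration, not TrackMate's implementation: it convolves a signal containing two blobs with a sampled LoG kernel, then keeps the regional maxima of the (negated) response.

```python
import math

def log_kernel(sigma, radius):
    # 1D Laplacian-of-Gaussian kernel, sampled at integer offsets.
    return [((x * x / sigma ** 4) - 1.0 / sigma ** 2)
            * math.exp(-x * x / (2 * sigma ** 2))
            for x in range(-radius, radius + 1)]

def convolve(signal, kernel):
    # Plain direct convolution, zero-padded at the borders.
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

# Synthetic 1D "image": two bright blobs (centred at x=20 and x=60)
# on a flat background.
signal = [math.exp(-((x - 20) ** 2) / 18.0)
          + math.exp(-((x - 60) ** 2) / 18.0)
          for x in range(80)]

# The LoG response is most negative at blob centres, so negate it and
# keep regional maxima: samples strictly higher than both neighbours,
# above a small quality threshold that rejects the flat background.
response = [-v for v in convolve(signal, log_kernel(sigma=3.0, radius=9))]
peaks = [i for i in range(1, len(response) - 1)
         if response[i] > response[i - 1]
         and response[i] > response[i + 1]
         and response[i] > 0.1]
print(peaks)  # the two blob centres, 20 and 60
```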

The result is a candidate list of objects identified within the image. Since this approach to segmentation usually results in many false positives, a thresholding step is incorporated to separate true positives from false positives. The objects can be thresholded on many features (the LoG value at the center of the object, the average brightness within the expected volume, etc.), and automatic thresholding can be performed for any feature based on its histogram. However, the user retains ultimate control and can threshold objects on any combination of features.

Following thresholding, the candidate list of objects is refined to a finalized list of objects, which is used as the input for the tracking step.


  • Tracking: The tracking algorithm is an implementation of the algorithm described by Jaqaman, K. et al. in the paper 'Robust single-particle tracking in live-cell time-lapse sequences' (Nature Methods, 2008). In short, this algorithm turns tracking into two separate steps, each of which is solved as a Linear Assignment Problem (LAP).

    The two steps in this tracking method are:

    1. Links are made between objects in consecutive frames to create track segments (one-to-one)
    2. Links are made between track segments (merge, split, gap closure).

    The tracking algorithm allows for the following, biologically defined events:

    • merging
    • splitting
    • disappearance
    • appearance
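To give a feel for the first of the two steps, here is a toy Python sketch. It is not TrackMate's actual solver (real LAP solvers scale to thousands of spots; brute-force enumeration only works for a handful), but the quantity being minimised is the same: the total squared displacement of all accepted links, subject to a maximum linking distance.

```python
from itertools import permutations

def link_frames(spots_a, spots_b, max_dist=15.0):
    """Optimal one-to-one links between two consecutive frames,
    found by brute-force enumeration of all assignments."""
    def cost(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    best, best_cost = None, float("inf")
    for perm in permutations(range(len(spots_b))):
        # Only accept links shorter than the maximum linking distance;
        # spots left unlinked become candidate 'appearance' and
        # 'disappearance' events for the second step.
        links = [(i, j) for i, j in enumerate(perm)
                 if cost(spots_a[i], spots_b[j]) <= max_dist ** 2]
        total = sum(cost(spots_a[i], spots_b[j]) for i, j in links)
        # Prefer assignments with more links, then with lower total cost.
        if best is None or (-len(links), total) < (-len(best), best_cost):
            best, best_cost = links, total
    return best

frame1 = [(10.0, 10.0), (50.0, 20.0), (30.0, 70.0)]
frame2 = [(52.0, 21.0), (12.0, 11.0), (31.0, 69.0)]
print(link_frames(frame1, frame2))  # [(0, 1), (1, 0), (2, 2)]
```

Note how spot 0 of the first frame is matched to spot 1 of the second frame: the assignment is global, so the nearest candidates are paired up even though the spot lists are in a different order.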




    Before starting the plugin, be sure to have the 2D or 3D image (over time, if tracking) selected. This image should contain roughly "spherical" blobs (for example: nuclei, bacteria, particles) that are to be segmented and tracked. Then, open the TrackMate plugin.

    The only required parameter for the plugin is the estimated diameter of the object in physical units. Once this has been entered, adjust calibration settings as needed (if the image is properly calibrated, you do not have to change the calibration settings). You may also crop the image in the x, y, z, and t dimensions. You may want to do this if the image contains objects you do not wish to track, large regions of noise, or if you do not want to track over all frames.

    Once all of the parameters have been set, select 'Next.' This initializes TrackMate's segmentation step, which is then executed.


    The segmentation step identifies all potential objects in the image (over all frames). The actual segmentation is performed automatically, and requires no input from the user.

    Thresholding Segmentation Results

    You can now threshold the identified objects based on a number of features, the full list of which can be seen in the drop-down menu. The features can be used in combination with one another to refine the results. (Currently, only the logical operator 'and' is supported.)

    Select which features you would like to threshold the objects with (LoG is typically a good feature to threshold), and use the histograms to help determine good threshold values. Features can be added or removed using the '+' and '-' buttons, respectively. You may also choose whether to threshold objects above or below the value you pick.

For many features, the 'noisy' objects (false positives) will be clumped together at the right or left end of the histogram in a pseudo-normal distribution. Typically, you will want to choose threshold values that exclude these regions of the histogram (we want only the few objects that meet our criteria).

    Note that each feature contains an 'auto' button, which will perform automatic thresholding on that feature (Otsu threshold).
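For the curious, Otsu's method itself is simple: it scans a histogram of the feature values and keeps the cut that best separates it into two classes. A minimal Python sketch, not the exact code behind the 'auto' button:

```python
def otsu_threshold(values, n_bins=64):
    """Otsu's method: pick the threshold that maximises the
    between-class variance of a histogram of feature values."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins

    # Build the histogram of feature values.
    hist = [0] * n_bins
    for v in values:
        b = min(int((v - lo) / width), n_bins - 1)
        hist[b] += 1

    total = len(values)
    total_sum = sum((lo + (i + 0.5) * width) * c for i, c in enumerate(hist))

    # Sweep every possible cut; keep the one with the largest
    # between-class variance  w0 * w1 * (mean0 - mean1)^2.
    w0 = sum0 = 0.0
    best_t, best_var = lo, -1.0
    for i, c in enumerate(hist):
        w0 += c
        sum0 += (lo + (i + 0.5) * width) * c
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, lo + (i + 1) * width
    return best_t

# Two well-separated populations: dim false positives and bright spots.
values = [1.0, 1.2, 0.9, 1.1, 1.05, 9.8, 10.2, 10.0, 9.9]
t = otsu_threshold(values)
print(t)  # falls between the two clusters
```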

    As a concrete example, suppose we want to select all objects based on the LoG value and the mean intensity within their estimated volume. To accomplish this, first select 'LoG Value' from the drop-down, and either press 'auto' or select the value manually using the histogram. Then select the '+' button to add another feature to threshold, and choose 'Mean Intensity' from the drop-down. Again, select the value to threshold on with either the 'auto' button or by choosing the value manually on the histogram.
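This two-feature selection amounts to a logical AND over per-spot feature values. A minimal Python sketch, with illustrative feature names and threshold values (not TrackMate's internal identifiers):

```python
# Each candidate spot carries the feature values computed for it.
# Names and numbers below are made up for illustration.
spots = [
    {"log_value": 12.0, "mean_intensity": 140.0},  # bright, sharp: keep
    {"log_value": 11.5, "mean_intensity": 150.0},  # keep
    {"log_value": 2.0,  "mean_intensity": 135.0},  # weak LoG response: reject
    {"log_value": 13.0, "mean_intensity": 40.0},   # dim interior: reject
]

log_threshold = 10.0
intensity_threshold = 100.0

# Thresholds combine with a logical AND: a spot survives only if it
# passes *every* active threshold.
kept = [s for s in spots
        if s["log_value"] > log_threshold
        and s["mean_intensity"] > intensity_threshold]
print(len(kept))  # 2
```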

    Once you are finished thresholding, select 'Next' to track the thresholded objects over time.
