

Getting started with TrackMate

== The test image ==
The test image we will use for this tutorial now has a direct link in Fiji. You can find it via {{bc | File | Open Samples | Tracks for TrackMate (807K)}}, at the bottom of the list.
[[Image:TrackMate FakeTracks.png|right|border|]]
Also, if you look carefully, you will see that there are two splitting events - where a spot seems to divide into two spots in the next frame - one merging event - the opposite - and a gap-closing event - where a spot disappears for one frame then reappears a bit further away. TrackMate is made to handle these events, and we will see how.
{{Clear}}
== Starting TrackMate ==
[[Image:TrackMate MainButtons.png|right|border|]]
With this image selected, launch TrackMate from the menu {{bc | Plugins | Tracking | TrackMate}} or from the [[Using the Command Launcher|Command launcher]]. The TrackMate GUI appears next to the image, displaying the starting dialog panel.
But first, just a few words about its layout. The user interface is a single resizable frame, divided into a main panel that displays context-dependent dialogs and a permanent bottom panel containing the four main buttons depicted on the right.
The '''Next''' button steps through the tracking process. It may be disabled depending on the current panel content; for instance, if you do not select a valid image in the first panel, it is disabled. The '''Previous''' button steps back in the process without executing any action. For instance, if you step back to the segmentation panel, segmentation is not re-run.
The '''Save''' button creates an XML file that contains all of the data you have generated at the moment you click it. Since you can save at any time, the resulting file may lack tracks, spots, etc. You can load the saved file using the menu item {{bc | Plugins | Tracking | Load a TrackMate file}}. It will restore the session just where you saved it.
Now is a good time to talk strategy when it comes to saving and restoring. You can save at any time in TrackMate. If you save just before the tracking process, you will be taken there, with the data you generated so far, upon loading. TrackMate saves a <u>link to the image file</u> (as an absolute file path) but not the image itself. When loading a TrackMate file, it first tries to retrieve and open this file in ImageJ. So it is a good idea to pre-process, crop, edit metadata and otherwise massage the target image in Fiji first, then save it as a .tif, then launch TrackMate. This is particularly true if you deal with a multi-series file, such as a Leica .lif file.
The advantage of this approach is that when you load the file in TrackMate, everything you need is loaded and displayed. However, if you need to change the target image, or if it cannot be retrieved, you will have to open the TrackMate XML file and edit its 4th line.
{{Clear}}
== The start panel ==
It is also critical to check the dimensionality of the image. In our case, we have a 2D time-lapse of 50 frames. If metadata are not present or cannot be read, ImageJ tends to assume that stacks are always Z-stacks on a single time-point.
If the calibration or dimensionality of your data is not right, I recommend changing it in the image metadata itself, using {{bc | Image | Properties}} ({{key|Ctrl}}+{{key|Shift}}+{{key|P}}). Then press the '''Refresh source''' button on the TrackMate start panel to grab the changes.
You can also define a sub-region for processing: if you are only interested in finding spots in a defined region of the image, you can use any of the ROI tools of ImageJ to draw a closed area on the image. Once you are happy with it, press the '''Refresh source''' button on the panel to pass it to TrackMate. You should see that the '''X''' and '''Y''' start and end values change to reflect the bounding box of the ROI you defined. The ROI need not be rectangular; it can be any closed shape.
Defining a smaller area to analyze can be very beneficial for testing and tuning parameters, particularly for the segmentation step. In this tutorial, the image is so small and sparse that we need not worry about it. Press the '''Next''' button to step forward.
{{Clear}}
== Choosing a detector ==
You can now choose a detection algorithm ("detector") among the currently implemented ones.
The choice is actually quite limited. Apart from '''Manual annotation''', you will find 3 detectors, but they are all based on [[wikipedia:Blob detection#The_Laplacian_of_Gaussian|LoG (Laplacian of Gaussian) segmentation]]. They are described in detail elsewhere, but here is what you need to know.
* The '''LoG detector''' applies a plain LoG segmentation on the image. All calculations are made in the Fourier space, which makes it optimal for intermediate spot sizes, between ≈5 and ≈20 pixels in diameter.
* The '''DoG detector''' uses the [[wikipedia:Blob detection#The_difference_of_Gaussians_approach|difference of Gaussians]] approach to approximate a LoG filter by the difference of 2 Gaussians. Calculations are made in the direct space, and it is optimal for small spot sizes, below ≈5 pixels.
* The '''Downsample LoG detector''' uses the LoG detector, but downsizes the image by an integer factor before filtering. This makes it optimal for large spot sizes, above ≈20 pixels in diameter, at the cost of localization precision.
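To see concretely what the DoG approximation does, here is a minimal NumPy/SciPy sketch. This is an illustration of the general technique, not TrackMate's actual implementation; the diameter-to-sigma conversion and the 1.6 sigma ratio are common conventions assumed here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, diameter):
    """Difference-of-Gaussians approximation of a LoG blob filter.

    Assumes the common convention sigma ~ diameter / (2 * sqrt(2)),
    with the second Gaussian wider by the classic 1.6 ratio.
    """
    sigma1 = diameter / (2.0 * np.sqrt(2.0))
    sigma2 = 1.6 * sigma1
    # Narrow minus wide Gaussian: peaks at blob centers of matching size.
    return gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)

# A single bright blob on a dark background:
img = np.zeros((64, 64))
img[32, 32] = 1.0
img = gaussian_filter(img, 2.5)  # blob of roughly 5 px diameter

response = dog_response(img, diameter=5.0)
peak = np.unravel_index(np.argmax(response), response.shape)
# The DoG response is maximal at the blob center (32, 32).
```

Spot candidates are then local maxima of this response, which is why the estimated diameter you enter matters: it sets the filter scale.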
In our case, let us just use the '''DoG detector'''.
{{Clear}}
== The detector configuration panel ==
The LoG-based detectors fortunately demand very few parameters. The only really important one is the '''Estimated blob diameter'''. Just enter the approximate size of the spots you are looking to track. Careful: you are expected to enter it in <u>physical units</u>. In our dummy example there is no calibration (<tt>1 pixel = 1 pixel</tt>), so physical units and pixels coincide here.
There are extra fields that you can configure as well. The '''Threshold''' numerical value helps deal with situations where a gigantic number of spots can be found. Every spot with a quality value below this threshold will not be retained, which can help saving memory. You can set this field manually, or by adjusting the threshold using ImageJ: call the {{bc | Image | Adjust | Threshold}} menu item ({{key|Ctrl}}+{{key|Shift}}+{{key|T}}), adjust the upper threshold to your liking, then press the '''Refresh''' button on the panel. This will grab the value you just set.
You can check '''Use median filter''': this will apply a 3x3 median filter prior to any processing. This can help deal with images that have marked salt-and-pepper noise, which generates spurious spots.
We hope that TrackMate will be used in experiments requiring '''Sub-pixel localization''', such as following motor proteins in biophysical experiments, so we added schemes to achieve this. The one currently implemented uses a quadratic fitting scheme (made by Stephan Saalfeld and Stephan Preibisch) based on David Lowe's SIFT work<ref>David G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, 60, 2 (2004), pp. 91-110.</ref>. It has the advantage of being very quick compared to the segmentation itself.
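The idea behind quadratic peak refinement is simple: fit a parabola through the detected peak sample and its immediate neighbors, then take the parabola's vertex as the refined position. A minimal 1D sketch (an illustration of the principle, not the actual Saalfeld/Preibisch code, which works in N dimensions):

```python
def subpixel_offset(y_minus, y0, y_plus):
    """Refine a peak position by fitting a parabola through the peak
    sample y0 and its two neighbors (1D case for clarity).

    Returns the vertex offset from the integer peak position; for a
    genuine local maximum it lies in (-0.5, 0.5).
    """
    denom = y_minus - 2.0 * y0 + y_plus
    if denom == 0.0:
        return 0.0
    return 0.5 * (y_minus - y_plus) / denom

# Samples of a parabola whose true maximum is at x = 10.3,
# evaluated at the integer positions 9, 10, 11:
f = lambda x: -(x - 10.3) ** 2
offset = subpixel_offset(f(9), f(10), f(11))
refined = 10 + offset  # recovers the true peak position 10.3
```

Because it only needs three samples per dimension, the refinement cost is negligible compared to the detection filtering itself, which is why leaving the option on is cheap.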
Finally, there is a '''Preview''' button that lets you quickly check your parameters on the current data. To use it, just draw a ROI in the raw image, and click preview. After some computation, you can check the results overlaid on the image. Most likely, you will see plenty of spurious spots that you will be tempted to discard by adjusting the '''Threshold''' value. This is a very good approach for large problems. Here, we care little about that; just leave the threshold at 0.
In our case, the spots we want to track are about 5 pixels in diameter, so this is what we enter in the corresponding field. We don't need anything else. The '''Sub-pixel localization''' option adds very little overhead, so we can leave it on.
{{Clear}}
== The detection process ==
{{Clear}}
== Initial spot filtering ==
In our case, we can see from the histogram that this step could make sense. There is a big peak at low quality (around a value of 1.2) that contains the majority of spots and most likely represents spurious detections. We could put the threshold at, say, 5.5, and would probably end up with only relevant spots. But with fewer than 10,000 spots, we are very far from 1 million, so we need not use this trick. Leave the threshold bar close to 0 and proceed to the next step.
{{Clear}}
Honestly, choose the HyperStack displayer. Unless you have a very specific and complicated case that requires inspecting results in 3D immediately, you do not need the 3D viewer. The HyperStack displayer is simpler, lighter, allows manual editing of spots, and you will still be able to launch a 3D viewer at the end of the process and get its benefits.
When you press the '''Next''' button, two processes start:
* the features of all spots (well, those you left after the initial filtering step) are calculated;
* the selected displayer prepares everything it needs to display them.
So nothing much. Let's carry on.
{{Clear}}
== Spot filtering ==
A filter can be set to retain spots above or below the given threshold. You change this behavior using the radio buttons below the histogram window. In our case, we want it to be above, of course. The '''Auto''' button uses [[wikipedia:Otsu's method|Otsu's method]] to automatically determine a threshold. In our case, we will set it manually to around 33.
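What the '''Auto''' button does is easy to sketch: Otsu's method picks the threshold that best separates a bimodal histogram into two classes, by maximizing the between-class variance. A minimal NumPy illustration of the principle (not TrackMate's code; the synthetic "quality" values and bin count are assumptions for the demo):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: choose the threshold maximizing the
    between-class variance of the value histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    weights = hist / hist.sum()

    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = weights[:i].sum(), weights[i:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class would be empty at this split
        mu0 = (weights[:i] * centers[:i]).sum() / w0
        mu1 = (weights[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, centers[i - 1]
    return best_t

# Two well-separated populations of spot "quality" values:
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(5, 1, 500), rng.normal(30, 3, 200)])
t = otsu_threshold(values)
# t falls in the gap between the two populations.
```

This works well when the histogram is clearly bimodal, as with spurious versus genuine spots here; when it is not, setting the threshold by eye, as we do, is the safer choice.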
You can inspect the data by scrolling through the hyperstack window and check that mostly good spots are retained. This is an easy image. The spots you have filtered out are not discarded; they are simply not shown, and they will not be taken into account during particle linking. At a later stage, you can step back to this panel and retrieve the filtered-out spots by removing or changing the filters.
Press '''Next''' when you are ready to build tracks with these spots.
{{Clear}}
== Selecting a simple tracker ==
* The non-simple one can detect any kind of event, so if you need to build tracks that split or merge, you must go for this one. If you want to forbid the detection of gap-closing events, you want to use it as well. Also, you can alter the cost calculation to disfavor the linking of spots that have very different feature values.
There is also a 3rd tracker, the [[wikipedia:Nearest neighbor search|'''Nearest neighbor search''']] tracker. This is the simplest tracker you can build, and it is mostly here for demonstration purposes. Each spot in one frame is linked to the closest spot in the next frame, disregarding any other spot, thus achieving only a very local optimum. You can set a maximal linking distance to prevent the results from being totally pathological, but this is as far as it goes. It may be of use for very large and easy datasets: its memory consumption is very limited (at most two frames need to be in memory) and it is quick (the nearest neighbor search is performed using [[wikipedia:K-d tree|k-d trees]]).
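The nearest-neighbor strategy fits in a few lines. Here is an illustrative SciPy sketch (not TrackMate's Java implementation) using a k-d tree, including the maximal linking distance cut-off:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_links(spots_a, spots_b, max_distance):
    """Link each spot in frame A to its nearest neighbor in frame B,
    ignoring all other spots (hence only a very local optimum)."""
    tree = cKDTree(spots_b)
    dist, idx = tree.query(spots_a, distance_upper_bound=max_distance)
    # Spots with no neighbor within max_distance come back with
    # dist == inf and an out-of-range index; drop them.
    return [(i, j) for i, (d, j) in enumerate(zip(dist, idx))
            if np.isfinite(d)]

frame1 = np.array([[0.0, 0.0], [10.0, 10.0]])
frame2 = np.array([[0.5, 0.1], [10.2, 9.8], [50.0, 50.0]])
links = nearest_neighbor_links(frame1, frame2, max_distance=2.0)
# Both frame-1 spots link to their obvious partners; the far-away
# frame-2 spot stays unlinked.
```

Note the weakness: two frame-A spots may grab the same frame-B spot, since each query is independent. The LAP trackers avoid this by solving a global assignment instead.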
Then of course, there is the option to skip the automated tracking using '''Manual tracking'''.
The first one deals with <u>frame-to-frame linking</u>. It consists of creating small track segments by linking spots in one frame to spots in the frame just after, not minding anything else. That is of course not enough to make us happy: some spots might be missing, and failed detections might have broken your tracks. But let us focus on this part for now.
Linking is done by minimizing a global cost (from one frame to the next only). A short word on the linking logic: the base cost of linking a particle to another one is simply the squared distance between them<ref>There are some theoretical grounds for this, if you are investigating Brownian motion. See the [[TrackMate_algorithms#Cost_calculation_.26_Brownian_motion|page]] that details the segmenters and trackers for more information.</ref>. Following the proposal of Jaqaman ''et al.''<ref name="Jaqaman"/>, we also consider the possibility for a particle ''not'' to make any link, if it is advantageous for the global cost. The sum of all costs is minimized to find the set of links for this pair of frames, and then we move to the next one.
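The logic above can be sketched as a linear assignment problem. This is an illustration of the Jaqaman-style cost-matrix structure, not TrackMate's actual code; the choice of `max_distance ** 2` as the "no link" alternative cost is an assumption made here for simplicity:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frame_to_frame_links(spots_a, spots_b, max_distance):
    """Frame-to-frame linking as a linear assignment problem.

    The cost of a link is the squared distance; links beyond
    max_distance are forbidden. Each spot may also stay unlinked,
    at an alternative cost (here: max_distance ** 2)."""
    n_a, n_b = len(spots_a), len(spots_b)
    alt = max_distance ** 2
    big = 1e9  # effectively forbids an assignment

    # Augmented (n_a + n_b) x (n_b + n_a) matrix: extra rows/columns
    # implement the "no link" alternative for every spot.
    cost = np.full((n_a + n_b, n_b + n_a), big)
    for i, a in enumerate(spots_a):
        for j, b in enumerate(spots_b):
            d2 = np.sum((np.asarray(a) - np.asarray(b)) ** 2)
            if d2 <= max_distance ** 2:
                cost[i, j] = d2
    cost[np.arange(n_a), n_b + np.arange(n_a)] = alt  # A-spot unlinked
    cost[n_a + np.arange(n_b), np.arange(n_b)] = alt  # B-spot unlinked
    cost[n_a:, n_b:] = 0.0  # dummy-to-dummy pairings are free

    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if i < n_a and j < n_b]

frame1 = [(0.0, 0.0), (5.0, 5.0)]
frame2 = [(0.4, 0.0), (5.0, 5.3), (40.0, 40.0)]
links = frame_to_frame_links(frame1, frame2, max_distance=2.0)
# The two close pairs are linked; the distant frame-2 spot stays
# unlinked because any link would exceed max_distance.
```

Because the assignment is solved globally, two spots can never grab the same partner, and a spot goes unlinked whenever that lowers the total cost.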
As with the simple tracker, the '''Max distance''' field helps prevent irrelevant links: two spots separated by more than this distance will never be considered for linking. This also saves some computation time.
{{Person|JeanYvesTinevez}} ([[User talk:JeanYvesTinevez|talk]]) 04:18, 1 August 2013 (CDT)