

TrakEM2 Scripting

Open the "Plugins - Scripting - Jython Interpreter" (see [[Scripting Help]]) and make sure there is a TrakEM2 project open, with a display open. Then type or paste the examples below.
Or open a new [[Script Editor]] window with "File - New - Script", then paste the example, select "Language - Python", and push the "Run" button.
= Introduction to scripting TrakEM2 =
See also the complete TrakEM2 API documentation.
To run a script, follow the instructions in the [[Scripting Help]].
=== Get the instance of a selected image ===
<source lang="python">
for d in Display.getFront().getSelected(Patch):
  print d.title
</source>
The above is a static call that retrieves the list for whichever Display window happens to be activated, in front of all others. If you have a Display instance, perform the same operation via the Display's Selection:
<source lang="python">
front = Display.getFront()
selection = front.getSelection()
for d in selection.get(Patch):
  print d.title
</source>
=== Find the file path of images that lie under a specific floating text label ===
The idea is to add floating text labels over images (using the Text Tool), and then to search for all the images that lie under the X,Y coordinate of each label. Then we print the file path of each of those images.
<source lang="python">
regularExpression = ".*fold.*"
for layer in Display.getFront().getLayerSet().getLayers():
  for label in layer.getDisplayables(DLabel):
    if label.getTitle().matches(regularExpression):
      tx = label.getAffineTransform().getTranslateX()
      ty = label.getAffineTransform().getTranslateY()
      patches = layer.find(Patch, tx, ty)
      for patch in patches:
        print patch.getImageFilePath()
</source>
=== Setting and getting member objects in jython ===
If you change the affine transform of a Displayable directly (by calling <i>getAffineTransform()</i> and then manipulating it), keep in mind that you will most likely screw up the internal cached maps for fast location of the Displayable object. To solve that, be sure to call <i>updateBucket()</i> on the affected Displayable object.
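As a minimal sketch (assuming a Patch is selected in the front Display; the 100 px offset is arbitrary):

```python
# Shift the selected Patch 100 px to the right, then refresh its
# internal bucket so hit-testing and repaints find it at the new location.
patch = Display.getFront().getSelected(Patch)[0]
patch.getAffineTransform().translate(100, 0)
patch.updateBucket()
Display.repaint(patch.getLayer())
```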
=== Import images, montage them, blend them and save as .xml ===
What follows is a small script that imports images from a single folder, sorting out which images go to what layer (section) by matching a regular expression pattern on the file name.
Then the images are montaged layer-wise, and blended together (the borders of the overlapping images are faded out).
Notice that, for this script to work for you, you will have to edit two lines:
1. The source <i><b>folder</b></i> where images are to be found.
2. The <i><b>pattern</b></i> to match, which dictates which image goes to which layer.
Be sure as well to create as many layers as you need. If you don't know, use the <i>getLayer</i> method on the <i>layerset</i> variable, which has the ability to create a new layer when asked to get one for a Z for which a layer doesn't exist yet.
Documentation you may want to look at:
<i>Project.newFSProject</i>, <i>Patch.createPatch</i>, <i>Layer.add</i>, <i>Align</i>, <i>AlignTask</i>.
<source lang="python">
# Albert Cardona 2011-06-05
# Script for Colenso Speer
import os, re
from ini.trakem2 import Project
from ini.trakem2.display import Display, Patch
from ini.trakem2.imaging import Blending
from mpicbg.trakem2.align import Align, AlignTask

#folder = "/path/to/folder/with/all/images/"
folder = "/home/albert/Desktop/t2/example-data/images/2043_5_6_7"

# 1. Create a TrakEM2 project
project = Project.newFSProject("blank", None, folder)
# OR: get the first open project
# project = Project.getProjects().get(0)

layerset = project.getRootLayerSet()

# 2. Create 10 layers (or as many as you need)
for i in range(10):
  layerset.getLayer(i, 1, True)
# ... and update the LayerTree:
project.getLayerTree().updateList(layerset)
# ... and the display slider
Display.updateLayerScroller(layerset)

# 3. To each layer, add images that have "_zN_" in the name,
# where N is the index of the layer,
# and also end with ".tif"
filenames = sorted(os.listdir(folder))
for i, layer in enumerate(layerset.getLayers()):
  # EDIT the following pattern to match the filename of the images
  # that must be inserted into section at index i:
  pattern = re.compile(".*_z" + str(i) + "_.*\.tif")
  for filename in filter(pattern.match, filenames):
    filepath = os.path.join(folder, filename)
    patch = Patch.createPatch(project, filepath)
    layer.add(patch)
  # Update internal quadtree of the layer
  layer.recreateBuckets()

# 4. Montage each layer independently
param = Align.ParamOptimize() # which extends Align.Param
param.sift.maxOctaveSize = 512
# ... above, adjust other parameters as necessary
AlignTask.montageLayers(param, layerset.getLayers(), False, False, False, False)

# 5. Resize width and height of the world to fit the montages
layerset.setMinimumDimensions()

# 6. Blend images of each layer
Blending.blendLayerWise(layerset.getLayers(), True, None)

# 7. Save the project
project.saveAs(os.path.join(folder, "montages.xml"), False)

print "Done!"
</source>
= Manipulating Displayable objects =
The bucket is the region of the 2D world where the Patch lives. Picture the world as a checkerboard, where a given image, wrapped in a Patch object, belongs to each of the squares that it intersects. Failing to update the bucket will result in improper canvas repaints--the Patch cannot be found.
=== Adding areas to an AreaList by scanning pixel values in the slices of a stack ===
The script below is the same as the command "Import - Import labels as arealists".
<source lang="python">
from ini.trakem2 import Project
from ini.trakem2.utils import AreaUtils
from ini.trakem2.display import AreaList
from java.awt import Color
from ij import WindowManager

# Obtain an image stack
#imp = IJ.getImage()
imp = WindowManager.getImage("0_5_filtered.tif")

# Obtain the opened TrakEM2 project
p = Project.getProjects()[0]

# Obtain the LayerSet
layerset = p.getRootLayerSet()

# Create a new AreaList, named "synapses"
ali = AreaList(p, "synapses", 0, 0)

# Add the AreaList to the datastructures:
layerset.add(ali)
p.getProjectTree().insertSegmentations([ali])

# Obtain the image stack
stack = imp.getImageStack()

# Iterate every slice of the stack
for i in range(1, imp.getNSlices() + 1):
  ip = stack.getProcessor(i) # 1-based
  # Extract all areas (except background) into a map of value vs. java.awt.geom.Area
  m = AreaUtils.extractAreas(ip)
  # Report progress
  print i, ":", len(m)
  # Get the Layer instance at the corresponding index
  layer = layerset.getLayers().get(i-1) # 0-based
  # Add the first Area instance to the AreaList at the proper Layer
  ali.addArea(layer.getId(), m.values().iterator().next())

# Change the color of the AreaList
ali.setColor(Color.magenta)
# Ensure bounds are as constrained as possible
ali.calculateBoundingBox(None)
</source>
=== Extract areas from an arealist and put them as ROIs in ImageJ's ROI Manager ===
<source lang="python">
# Albert Cardona 2012-06-19
# Obtain an arealist and add all its areas as ROIs in the ROI Manager
from ini.trakem2.display import Display, AreaList
from ij import IJ
from ij.gui import ShapeRoi
from ij.plugin.frame import RoiManager

def getRoiManager():
  """ Obtain a valid instance of the ROI Manager.
  Notice that it could still be null if its window is closed."""
  if RoiManager.getInstance() is None:
    RoiManager() # opens a new ROI Manager window
  return RoiManager.getInstance()

def putAreas(arealist):
  """ Take all areas of an AreaList and put them in the ROI Manager."""
  manager = getRoiManager()
  for layer in arealist.getLayerRange():
    area = arealist.getAreaAt(layer)
    if area is not None and not area.isEmpty():
      roi = ShapeRoi(area)
      manager.addRoi(roi)

def run():
  front = Display.getFront()
  arealists = front.getSelection().getSelected(AreaList)
  if arealists.isEmpty():
    IJ.log("No arealists selected!")
    return
  # Extract areas as ROIs for the first one:
  putAreas(arealists[0])

run()
</source>
Notice that python (and jython) lets you use object instance methods as first-class functions, and constructors as well. This enables us to rewrite the "putAreas" function in a functional way, without using any temporary variables and without any if/else logic:
<source lang="python">
def putAreas(arealist):
  """ Take all areas of an AreaList and put them in the ROI Manager."""
  map(getRoiManager().addRoi,
      map(ShapeRoi,
          filter(lambda area: area is not None and not area.isEmpty(),
                 map(arealist.getAreaAt, arealist.getLayerRange()))))
</source>
= Adding images =
A similar measurement may be obtained as follows, if you don't mind typing in the IDs of the Ball (vesicles) and AreaList (the synaptic surface), and getting the results summarized into mean, standard deviation and median (of the distances of each vesicle to the mesh):
<source lang="python">
from ini.trakem2 import Project
from java.lang import Math

# The IDs of the Ball and AreaList instances
vesiclesID = 1543
synapticSurfaceID = 1541

# Obtain the two TrakEM2 instances
project = Project.getProjects()[0]
vesicles = project.findById(vesiclesID)
synapticSurface = project.findById(synapticSurfaceID)

# A set of unique vertices defining the synaptic surface
vertices = set(synapticSurface.generateTriangles(1, 2))

# For every vesicle, measure its shortest distance to a vertex
distances = [reduce(min, map(lambda v: p.distance(v), vertices))
             for p in vesicles.asWorldPoints()]

# Compute average, median and standard deviation
mean = sum(distances) / len(distances)
stdDev = Math.sqrt(reduce(lambda s, e: s + pow(e - mean, 2),
                          distances, 0) / len(distances))
median = sorted(distances)[len(distances)/2]

print mean, stdDev, median
</source>
= Interacting with Layers (Sections) =
=== Calibrating and setting the Z dimension ===
Each <i>Layer</i> stores a Z coordinate and a thickness value with <i>double</i> precision. The Z coordinate is in pixels.
How to compute the Z coordinate of a <i>Layer</i>: suppose that the calibration specifies 4x4x50 nm. This means 4 nm/px in the X axis, 4 nm/px in the Y axis, and 50 nm/px in the Z axis. It is assumed that you set these values by right-clicking on the canvas window and choosing "Display - Calibration...", which opens the familiar ImageJ dialog for image calibration.
Then you have to compute the thickness of a section relative to X axis coordinates. To do so:
layer thickness = (Z calibrated thickness) / (X calibrated thickness)
In our example of 4x4x50 nm/px:
layer thickness = 50 / 4 = 12.5
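The same ratio can be computed from the LayerSet's calibration rather than typed by hand. A minimal sketch (<i>section_thickness_px</i> is an illustrative helper; <i>pixelWidth</i> and <i>pixelDepth</i> are the standard ImageJ Calibration fields for the X and Z dimensions):

```python
# Compute the layer thickness in pixels from the calibrated
# voxel dimensions: Z step divided by X pixel size.
def section_thickness_px(z_step, x_pixel_size):
  return z_step / float(x_pixel_size)

# For the 4x4x50 nm example above:
print(section_thickness_px(50, 4))  # 12.5

# In TrakEM2, read the values from the LayerSet's calibration:
# cal = Display.getFront().getLayerSet().getCalibration()
# thickness = section_thickness_px(cal.pixelDepth, cal.pixelWidth)
```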
Then we must set this thickness to every section. This consists of the following steps, to be done on the <i>Layer Tree</i> (the tree that lists the layers in the TrakEM2 window):
1. Right-click on the "Top Level [Layer Set]" node of the <i>Layer Tree</i>.
Then choose "Reset layer Z and thickness".
2. Click on the first layer node, then {{key|Shift}}+{{key|click}} on the last layer node.
All nodes will be selected.
3. Right-click on the selected nodes and choose "Scale...".
4. In the dialog, type in "12.5"--the value we computed above.
To accomplish the same programmatically, do the following:
<source lang="python">
z = 0
thickness = 12.5
# Obtain the LayerSet instance:
layerset = Display.getFront().getLayerSet()
for layer in layerset.getLayers():
  layer.setZ(z)
  layer.setThickness(thickness)
  z += thickness
# Update the GUI
Display.updateLayerScroller(layerset)
</source>
= Interacting with Treeline, AreaTree and Connector =
All three types: "treeline", "areatree", and "connector" are expressed by homonymous classes that inherit from the abstract class <i>ini.trakem2.display.Tree</i>.
A <i>Tree</i> is a <i>Displayable</i> and hence presents properties such as title, alpha, color, locked, visible ... which are accessible with their homonymous set and get methods (e.g. <i>setAlpha(0.8f)</i>, <i>getAlpha()</i> etc.)
The <i>Tree</i> consists of a root <i>Node</i> and public methods to access and modify it.
The root <i>Node</i> gives access to the rest of the nodes of the <i>Tree</i>. From the canvas, a user would push 'r' on a selected Treeline, AreaTree or Connector to bring the field of view to where the root node is. From code, we would call:
<source lang="python">
# The active object of the front Display: a Treeline, AreaTree or Connector
tree = Display.getFront().getActive()
root = tree.getRoot()
</source>
Now that we have a reference to the root <i>Node</i>, we'll ask it to give us the entire collection of subtree nodes: all the nodes in the <i>Tree</i>:
<source lang="python">
nodes = root.getSubtreeNodes()
</source>
The <i>Node.NodeCollection</i> is lazy and doesn't do caching. If you are planning on calling size() on it and then iterating its nodes, you would end up iterating the whole sequence twice. So let's start by duplicating it:
<source lang="python">
nodes = list(root.getSubtreeNodes()) # duplicated: the lazy collection is iterated only once
</source>
Each <i>Node</i> has:
<li>X, Y coordinates, relative to the local coordinate system of the Tree that contains the <i>Node</i>.</li> <li>A reference to a layer (get it with nd.getLayer()). The <i>Layer</i> has a getZ() method to get the Z coordinate (in pixels).</li>
<li>A data field, which can be a radius or a java.awt.geom.Area (see below).</li>
Each <i>Node</i> has a public <i>getData()</i> method to acquire whatever it is that it holds:
<li>Treeline and Connector: their nodes' <i>getData()</i> returns a radius. The default value is zero.</li> <li>AreaTree: its nodes' <i>getData()</i> returns a <i>java.awt.geom.Area</i> instance, or null if none has been assigned to it yet.</li>
The method we use is Ulrik Brandes' fast algorithm for computing betweenness centrality (see the paper).
The method <i>computeCentrality()</i> of class <i>Tree</i> returns a <i>Map</i> of <i>Node</i> instances vs. their centrality values:
<source lang="python">
centrality = tree.computeCentrality() # a Map of Node vs. centrality value
</source>
== Compute the degree of every node ==
The degree of a node is the number of parent nodes that separate it from the root node. It's a built-in function in <i>Tree</i> (and also in <i>Node</i>).
For instance, one could colorize the tree based on the degree of each node: the closer to the root, the hotter the color.
== Find branch nodes or end nodes ==
The <i>Tree</i> class offers methods to obtain the list of all branch nodes, end nodes, or both:
<source lang="python">
# Method names as in the Tree API:
branch_nodes = tree.getBranchNodes()
end_nodes = tree.getEndNodes()
branch_and_end_nodes = tree.getBranchAndEndNodes()
</source>
Similarly, we could compute the incoming connections. There is a convenience method <i>findConnectors()</i> in class <i>Tree</i> that returns two lists: that of the outgoing and that of the incoming Connector instances. From these, one can easily derive the connectivity graph, which you may also get by right-clicking on a Display and choosing "Export - Connectivity graph...".
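A minimal sketch (assuming <i>tree</i> is a Treeline or AreaTree instance, e.g. the active object of the front Display):

```python
# findConnectors() returns two lists: outgoing and incoming Connector instances
lists = tree.findConnectors()
outgoing = lists[0]
incoming = lists[1]
print len(outgoing), "outgoing and", len(incoming), "incoming connectors"
```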
== How to find out the network of all arbors, related via Connector instances ==
The easiest way is to iterate all connectors and find out which objects they relate. A <i>Connector</i> object has an origin (the root node) and any number of targets (all children nodes of the root node). Each node has a radius; any other object in the TrakEM2 project that intersects the world coordinates of that radius will be considered associated as an origin or a target.
<source lang="python">
layerset = Display.getFront().getLayerSet()
for connector in layerset.getZDisplayables(Connector):
  origins = connector.getOrigins(Tree)
  targets = connector.getTargets(Tree)
  print connector, origins, targets
</source>
Notice how above we called <i>getOrigins(Tree)</i> and <i>getTargets(Tree)</i>, which filter all potential origins and targets (Patch--an image--, AreaList, etc.) so that only Tree instances will be present in the lists.
'''NOTE''': you may also want to use the "Export - NeuroML" menu command, in the right-click popup menu.
== Measure all spine necks in a neuronal arbor ==
'''UPDATE''': as of version 0.8n, this functionality is included in TrakEM2. Right-click on a selected treeline or areatree and choose "Measure - Shortest distances between all pairs of nodes tagged as..."
The idea is to label the beginning of a spine neck with the tag "neck start" and the end of the spine neck with the tag "neck end". It is assumed that the "neck end" will always be in the subtree of the "neck start" node; in other words, that the direction of the tree is from "neck start" to "neck end".
Then, we iterate all nodes of the arbor looking for nodes that have the "neck start" tag and measure the calibrated length of the neck. All measurements for all spine necks are printed out.
<source lang="python">
# 2011-03-13 Albert Cardona for Nuno da Costa
# For a given Treeline or AreaTree that represents a neuronal arbor,
# find all nodes that contain the tag "neck start"
# and for each of those find the distance to a node
# in their subtree that contains the tag "neck end".
# In short, measure the lengths of all spine necks
# labeled as such in the arbor.

from ini.trakem2.display import Display, AreaTree, Treeline

def findNeck(startNode):
  """ Assumes necks are not branched. """
  neck = []
  for node in startNode.getSubtreeNodes():
    tags = getTagsAsStrings(node)
    if tags is None or not "neck end" in tags:
      neck.append(node) # growing the neck
    else:
      # End of neck:
      neck.append(node)
      return neck
  print "Did not find a node with an end tag, for parent node", startNode
  return None # end tag not found!

def getTagsAsStrings(node):
  found = set()
  tags = node.getTags()
  if tags is None or 0 == len(tags):
    return found
  for tag in tags:
    found.add(tag.toString())
  return found

def measureSpineNecks(neuron):
  """ Expects an AreaTree or a Treeline for neuron.
  Assumes that nodes with a tag "neck start" are parents or superparents of nodes with tags of "neck end". """
  print "Measurements for neuron '" + str(neuron) + "':"
  for node in neuron.getRoot().getSubtreeNodes():
    # Check if the node has the start tag
    tags = getTagsAsStrings(node)
    if tags is None or not "neck start" in tags:
      continue
    # Find its child node that has an end tag
    neck = findNeck(node)
    if neck is None:
      continue
    distance = neuron.measurePathDistance(neck[0], neck[-1])
    print " id:", neuron.getId(), "-- neck length: ", distance

def isTree(x):
  return isinstance(x, Treeline) or isinstance(x, AreaTree)

# Measure in all treelines or areatrees:
#trees = filter(isTree, Display.getFront().getLayerSet().getZDisplayables())

# Measure only in the selected treelines or areatrees:
trees = filter(isTree, Display.getSelected())

if 0 == len(trees):
  print "No trees found!"
for neuron in trees:
  measureSpineNecks(neuron)
</source>
= Interact with a Ball object =
== Set the radius of all balls of all Ball objects in a project ==
<source lang="python">
# Set a specific radius to all individual spheres
# of all Ball objects of a TrakEM2 project.
from ini.trakem2.display import Display, Ball

calibrated_radius = 40 # in microns, nm, whatever the calibration unit is

display = Display.getFront()
layerset = display.getLayerSet()
cal = layerset.getCalibration()

# bring radius to pixels
new_radius = calibrated_radius / cal.pixelWidth

for ballOb in layerset.getZDisplayables(Ball):
  for i in range(ballOb.getCount()):
    ballOb.setRadius(i, new_radius)
  ballOb.repaint(True, None)
</source>
== Export all Ball objects as a CSV file ==
<source lang="python">
# Open a text window containing all Ball objects as a CSV file,
# in calibrated coordinates.
# The text window has a "File - Save" menu for saving to a file.
# Albert Cardona 2015-07-02 for Jemima Burden at UCL.
# See also the API of the Ball class:
from ini.trakem2.display import Display, Ball
from ij.text import TextWindow
ball_obs = Display.getFront().getLayerSet().getZDisplayables(Ball)
# One entry for each id,x,y,z,r
rows = []
# Iterate every Ball instance, which contains one or more x,y,z,r balls
for ball_ob in ball_obs:
  id = ball_ob.getId()
  # Iterate every x,y,z,r ball of a Ball instance, calibrated
  wbs = ball_ob.getWorldBalls()
  for ball_coords in wbs:
    # Store every ball as a row with id, x, y, z, r
    rows.append(str(id) + "," + ",".join(str(c) for c in ball_coords))

csv = "\n".join(rows)
t = TextWindow("Balls CSV", csv, 400, 400)
</source>
= Generate 3D meshes =
In TrakEM2, 3D meshes are generated as a list of <i>Point3f</i> for each object. Then the list is wrapped into any of the subclasses of <i>CustomMesh</i> of the 3D Viewer library, such as a <i>CustomTriangleMesh</i> or a <i>CustomLineMesh</i>. Then these mesh objects are encapsulated into a <i>Content</i> object and added to an instance of the <i>Image3DUniverse</i>, which is the main window of the 3D Viewer.
Of course, via scripting many of these steps may be skipped. Below are several examples on how to generate meshes programmatically and save them in Wavefront format.
=== Generate a 3D mesh for an AreaList ===
This script illustrates how to bypass the 3D Viewer to generate meshes from an AreaList and then export the data in Wavefront format. The script exports an AreaList that has been selected in the front Display.
To export all selected objects, loop through the <i>Display.getSelected()</i>.
To export all arealists, loop through <i>Display.getFront().getLayerSet().getZDisplayables(AreaList)</i>.
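For instance, a minimal sketch of that second loop (printing each AreaList rather than exporting it):

```python
# Iterate every AreaList in the project of the front Display
from ini.trakem2.display import Display, AreaList

for arealist in Display.getFront().getLayerSet().getZDisplayables(AreaList):
  print arealist.getId(), arealist.getTitle()
```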
<source lang="python">
from ini.trakem2.display import Display
from org.scijava.vecmath import Color3f
from customnode import WavefrontExporter, CustomTriangleMesh
from import StringWriter
from ij.text import TextWindow

# Get the selected AreaList
arealist = Display.getSelected()[0]

# Create the triangle mesh with resample of 1 (no resampling)
# CAUTION: may take a long time. Try first with a resampling of at least 10.
resample = 1
triangles = arealist.generateTriangles(1, resample)

# Prepare a 3D Viewer object to provide interpretation
color = Color3f(1.0, 1.0, 0.0)
transparency = 0.0
mesh = CustomTriangleMesh(triangles, color, transparency)

# Write the mesh as Wavefront
name = "arealist-" + str(arealist.getId())
m = {name : mesh}
meshData = StringWriter()
materialData = StringWriter()
materialFileName = name + ".mtl", materialFileName, meshData, materialData)

# Show the text of the files in a window
# then you save it with "File - Save"
TextWindow(name + ".obj", meshData.toString(), 400, 400)
TextWindow(materialFileName, materialData.toString(), 400, 400)
</source>
=== Generate a 3D mesh for an AreaTree ===
Just like for an AreaList (see above), but extract the triangles with:
<source lang="python">
triangles = areatree.generateMesh(1, resample).verts
</source>
The <i>AreaTree</i>'s generateMesh returns a <i>MeshData</i> object with the list of vertices and the list of colors of each vertex. The <i>generateTriangles</i> method of an <i>AreaTree</i> returns a list of <i>Point3f</i> that are ready for creating a <i>CustomLineMesh</i> (in PAIRWISE mode) to represent the skeleton.
= Export flat images =
From the right-click menu, one may choose "Export - Make flat image", which opens a dialog that lets one choose between 8-bit and RGB. These snapshots are created from the mipmaps, which are all 8-bit or RGB images.
On occasion, one wants to create a flattened montage of images in their original bit depth, such as 16-bit or 32-bit. For this purpose, the static function <i>Patch.makeFlatImage</i> exists.
Here is an example that, for a given Layer and a set of selected Patch instances (image tiles) in it, makes a 16-bit flat montage image at 50% of the original scale and returns it as an ImageJ ImageProcessor.
For other output types, use ImagePlus.GRAY8, .GRAY16, .GRAY32 or .COLOR_RGB, as listed in the documentation for the <i>ImagePlus</i> class.
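The example code itself did not survive on this page; below is a minimal sketch following the <i>Patch.makeFlatImage</i> signature quoted above (int type, Layer, Rectangle, double scale, Collection of Patch, Color background, boolean flag). Treat the exact argument list as an assumption to check against the javadoc:

```python
# Sketch: flatten the selected Patch instances of the current layer
# into a single 16-bit image at 50% scale.
from ini.trakem2.display import Display, Patch
from ij import ImagePlus
from java.awt import Color

front = Display.getFront()
layer = front.getLayer()
patches = front.getSelection().getSelected(Patch)
bounds = layer.getParent().get2DBounds() # the bounds of the whole world

ip = Patch.makeFlatImage(ImagePlus.GRAY16, layer, bounds, 0.5,
                         patches, Color.black, True)
ImagePlus("flat montage", ip).show()
```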
= Enrich the GUI of TrakEM2 =
<source lang="python">
def addReconstructToolkit(display):
  frame = display.getFrame()
  split = frame.getContentPane().getComponent(0)
  left = split.getComponent(0)
  tabs = left.getComponent(2) # the tabbed pane
  # Check that it's not there already
  title = "Reconstruct toolbar"
</source>
*[[Jython Scripting]] in Fiji.
*The Jython webpage.
*The Fiji scripting tutorial, in Jython.
== Jython scripts for TrakEM2 ==
All the following are included in Fiji's plugins/Examples/TrakEM2_Example_Scripts/ folder:
*{{GitHub|repo=fiji|path=plugins/Examples/TrakEM2_Example_Scripts/|label=Extract stack under AreaList}} in TrakEM2.
*{{GitHub|repo=fiji|path=plugins/Examples/TrakEM2_Example_Scripts/|label=Set all transforms to identity}} for TrakEM2 objects.
*{{GitHub|repo=fiji|path=plugins/Examples/TrakEM2_Example_Scripts/|label=Select All}} objects in TrakEM2.
*{{GitHub|repo=fiji|path=plugins/Examples/TrakEM2_Example_Scripts/|label=Measure AreaList}} in TrakEM2.
*A [ collection of scripts for TrakEM2], hosted on github. Mostly related to inspecting and analyzing Treeline, AreaTree and Connector instances, when used for neural circuit reconstruction.