Automatically segment the boundary of a nucleus or cell starting from an approximate ROI. It supports 2D and 3D processing (3D requires a 3D starter ROI) and tracking of slowly moving cells. Ideal for studying cell morphodynamics.
Institution: Institut Pasteur
This page describes the “Active Contours” plug-in, a segmentation technique able to extract the outline of objects in 2D or 3D images, and also track these outlines over time in a 2D or 3D time-lapse sequence. In a nutshell, an initial contour is drawn (or generated) around an object of interest – note that the initial contour needs to be in 3D if we want to do 3D processing – and will then snap onto the object’s border automatically. In a tracking scenario, the final object border is taken as the initial position to segment the following frame, and so forth (NB: tracking applications therefore require a good time resolution during acquisition, such that an overlap exists between successive positions of the tracked object).
This documentation starts with a pseudo-exhaustive (partly historical) description of the method, its principle and its numerous implementations, before actually presenting the Icy plug-in in more details, including the choice of implementation and the available parameters.
If you are familiar with the approach, you may skip directly to the “Plug-in description” section, otherwise you are invited to read on (NB: the introduction is definitely more technical than I would hope, but this should be a good starting point for anyone interested in the underlying idea).
If you use this plug-in in your work, please cite the following reference: “Dufour et al., IEEE Transactions on Image Processing, 2011” (Note that this reference describes the 3D version of the algorithm, but the 2D version works exactly the same).
The principle of deformable models is to define an initial curve (2D or 3D, open or closed) in the vicinity of an object of interest, and let this curve deform until it reaches a steady-state when it fits the boundary of the target. Although this looks like black magic at times, there are sound underlying mathematics involved. The curve and its deformation can be expressed in many different ways, which I describe below.
Curve representations are presented here in (more or less) chronological order (rather than by group) to give a coherent view of the evolution in the field over the last decades. This review is however not exhaustive and only gives the main lines of research for the general reader, directed toward the purpose of the plug-in (cell segmentation and tracking).
- Explicit parametric representation The original scheme (originally developed by Kass & Terzopoulos in 1987) is to represent the curve by a parametric equation. The parameters of this curve are adjusted during the deformation, and the curve is regularly reparameterized to ensure global smoothness. This model is known as the “Snake” model, since the evolution of an open curve may mimic the undulations of a snake. Such models are fast and computationally efficient, although usually criticized for two main reasons: 1) their lack of topological flexibility (a parametric curve cannot be split), therefore a single curve must be created for each object of interest to detect; 2) the extension to 3D is quite delicate in terms of curve manipulation and reparameterization.
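To make the snake idea concrete, here is a minimal sketch (in Python with numpy, not the plug-in’s actual code): the curve is a list of 2D points, each evolution step adds an internal elastic term (a discrete second derivative that smooths the curve) plus an external, image-derived force. The function names and parameters (`snake_step`, `alpha`, `dt`) are illustrative assumptions, not the plug-in’s API.

```python
import numpy as np

def snake_step(pts, external_force, alpha=0.1, dt=0.2):
    """One explicit evolution step of a closed parametric snake.

    pts: (N, 2) array of contour points; external_force: a callable
    returning an (N, 2) image-derived force sampled at the points.
    """
    # Internal (elastic) term: the discrete second derivative pulls each
    # point toward the midpoint of its neighbours, keeping the curve smooth.
    internal = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    return pts + dt * (alpha * internal + external_force(pts))

# Toy run: with no external force, a wobbly circle relaxes toward smoothness.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
noisy = (1 + 0.05 * np.sin(7 * theta))[:, None] * np.c_[np.cos(theta), np.sin(theta)]
pts = noisy.copy()
for _ in range(100):
    pts = snake_step(pts, lambda p: np.zeros_like(p))
```

In a real snake, `external_force` would sample the image gradient so the elastic curve is attracted to edges while staying smooth.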
- Implicit level set representation Almost coincidentally, other researchers from the field of fluid dynamics (namely S. Osher and J. Sethian in 1988) derived an alternative representation of the curve, by defining it as the zero-level of a higher-dimensional Lipschitz function. One metaphor of this representation is a flame (the fluid) burning a hole through a piece of paper. The boundary (2D) of the hole is the zero-level of the flame function (3D), and the deformation of the hole’s boundary is implicitly controlled by the motion of the flame. For this reason this model is usually termed “implicit model”. This model has received extensive attention from the community, since it solved most of the drawbacks of the Snake model: 1) the model is topology-independent (the fluid function may evolve such that the zero-level actually appears as two distinct contours), hence a single level set function may be used to detect multiple non-touching objects in the image; 2) the formalism naturally works in any dimension. Yet, these advantages also come at a significant computational cost (notably in 3D, where the level set function is 4D). Moreover, for the purpose of cell segmentation and tracking, the topological flexibility can become a drawback: indeed, since topology is uncontrolled by nature, objects moving and eventually touching over time will see their zero-levels merged together, such that their identity is lost. Many methods have therefore focused on re-inserting topology control in these approaches.
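The following toy sketch (again illustrative Python, not the plug-in) shows the two key level-set properties on a synthetic grid: a single function phi encodes two disjoint contours at once, and adding a constant to a signed-distance-like phi uniformly shrinks the contours — no explicit curve bookkeeping needed.

```python
import numpy as np

# Two disjoint disks encoded in ONE level-set function: phi < 0 inside.
# phi is the signed distance to the nearer of the two disk boundaries.
y, x = np.mgrid[0:100, 0:100]
d1 = np.hypot(x - 30, y - 50) - 12.0   # disk 1, radius 12, centred at (30, 50)
d2 = np.hypot(x - 70, y - 50) - 12.0   # disk 2, radius 12, centred at (70, 50)
phi = np.minimum(d1, d2)

inside = phi < 0                       # the segmentation mask (both disks)

# "Evolving" phi by adding a constant erodes the zero-level inward by
# that many pixels -- both contours shrink, still via a single function.
shrunk = (phi + 5.0) < 0
```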
- Explicit discrete representation With the improvement in computer graphics technology and in the Computer Aided Design (CAD) industry, extensive development has been conducted on discrete curve representations. The goal of these representations is to take the best of both worlds with 3 key features: 1) inherit the speed and computational efficiency of the parametric representation; 2) incorporate the topological flexibility of implicit representation; 3) rely on a discrete data structure in order to benefit from the power of graphics processing units for computation and/or rendering purposes. As a result, the computational cost in both time and memory load can be decreased by up to several orders of magnitude as compared to an equivalent level set representation.
- Graph-cut representation Graph-cut representations rely on the theory of minimal path finding in a connected graph, and have only recently been applied in the context of deformable model-based segmentation. This representation is usually compared to the level-set approach, but considers the curve as the minimum cut of a flow that connects every image pixel to either an imaginary source or sink node. The major difference with the level set models lies in the mathematical description of the curve evolution, which is discussed later below.
Once the contour representation is fixed, one must choose a mathematical framework to actually drive the deformation of the curve from its initial position toward the boundary of the object of interest. Here again, several solutions are available, the most popular of which are cited below in arbitrary order.
- Dynamic mass-spring systems. Such systems consider the curve to be discretized into a set of physical nodes connected with springs, and that the entire evolution of this physical system follows Newton’s laws of motion. By applying attracting and repelling forces on the various nodes of the system (using the image data to guide them in the correct direction), the curve deforms until it reaches a physical steady-state, assumed to correspond to the solution, although there is no guarantee on its exactness.
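A minimal sketch of such a system (hypothetical Python, with made-up parameters `k`, `damping`, `dt`): unit-mass nodes in a closed chain feel Hooke forces from their neighbours, velocities are damped, and the loop integrates F = ma until the chain settles. In a real deformable model, extra image-derived forces would be added to the spring forces.

```python
import numpy as np

def mass_spring_relax(pts, rest_len, k=1.0, damping=0.8, dt=0.1, steps=500):
    """Relax a closed chain of unit-mass nodes joined by springs of
    rest length `rest_len`, using damped Newtonian integration."""
    vel = np.zeros_like(pts)
    for _ in range(steps):
        force = np.zeros_like(pts)
        for nb in (np.roll(pts, -1, axis=0), np.roll(pts, 1, axis=0)):
            delta = nb - pts
            dist = np.linalg.norm(delta, axis=1, keepdims=True)
            force += k * (dist - rest_len) * delta / dist  # Hooke's law
        vel = damping * (vel + dt * force)  # damped velocity update
        pts = pts + dt * vel
    return pts

# A unit square whose springs want length 2 expands toward a 2x2 square.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
relaxed = mass_spring_relax(square, rest_len=2.0)
```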
- Energy-minimizing frameworks. These frameworks rely on the definition of an energy functional that combines terms related either to the image data (usually referred to as “data attachment” terms), to geometrical properties of the curve (usually termed “regularization” terms), or to prior estimates of the solution when available. The key idea is to define the functional such that the solution of the segmentation problem (i.e. the optimal position of the curve) is a minimizer of this functional. These frameworks are particularly popular thanks to their flexibility, while minimization per se can be conducted using a wide variety of numerical schemes, one of the most popular being the Euler-Lagrange steepest gradient descent, which guarantees convergence to at least a local minimum of the functional.
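The gradient-descent idea behind these frameworks can be shown on a one-dimensional toy energy (purely illustrative — the plug-in’s actual functional lives on contours, not scalars): a “data attachment” term (x − 3)² plus a “regularization” term λx², minimized by stepping against the gradient until successive iterates barely move.

```python
def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Steepest descent: x_{k+1} = x_k - step * dE/dx, stopping when the
    iterates stabilise (i.e. a local minimum of the energy is reached)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy energy: E(x) = (x - 3)^2 (data term) + lam * x^2 (regularisation).
lam = 0.5
grad_E = lambda x: 2 * (x - 3) + 2 * lam * x
x_star = gradient_descent(grad_E, x0=0.0)
# Analytic minimiser: 3 / (1 + lam) = 2.0
```

Note how the regularization term biases the minimizer away from the pure data optimum (x = 3), just as the smoothness term biases a contour away from a noisy image boundary.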
- Statistical frameworks. These frameworks are very similar to the energy-minimizing ones, however the optimal solution is defined by maximizing the probability of a given gain function, which is formed of terms analogous to those of the previous case. The advantage here is that the incorporation of shape priors is facilitated by the statistical formulation of the problem. The notion of convergence to a local minimum is complemented by a statistical measure of the correctness of fit of the original terms.
- Energy-minimizing graph-cuts. The use of graph-cuts for deformable models has mostly been motivated by the high computational cost and convergence time of energy-minimizing (or statistical) frameworks, based on the observation that steepest gradient descent approaches may take a very high number of iterations to converge, yielding substantial computational costs, notably in the case of level set approaches. The idea here is to minimize an energy functional defined on a graph constructed from the image grid, such that each image pixel (or voxel in 3D) is connected to its neighbors by an edge with a cost, while one set of nodes is connected to an additional imaginary “source” node, and the remaining nodes to a “target” node. This graph is then cut with minimal cost using a maximum-flow algorithm. The advantage is a substantial gain in convergence time (the number of iterations is up to several orders of magnitude lower than in Euler-Lagrange minimization), at the cost of a more complex incorporation of energy terms, which need to be expressed in terms of graph edge costs.
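A self-contained toy example of this construction (all capacities below are invented for illustration): a 1D “image” of four pixels, source links weighted by intensity (foreground likelihood), sink links by the complement, and small neighbour links acting as the smoothness cost. A textbook Edmonds–Karp max-flow then yields the minimum cut, whose source side is the segmentation.

```python
from collections import deque

def max_flow_min_cut(capacity, source, sink):
    """Edmonds-Karp max-flow on a dense capacity matrix.
    Returns (max flow value, set of nodes on the source side of the min cut)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs():  # shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    while True:
        parent = bfs()
        if parent[sink] == -1:
            break
        path, v = [], sink           # recover the augmenting path ...
        while v != source:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:            # ... and push the bottleneck flow
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
    reachable = {i for i, p in enumerate(bfs()) if p != -1}
    return sum(flow[source][v] for v in range(n)), reachable

# Toy 1D image: 4 pixels with intensities [9, 8, 1, 2]; nodes 4 = source, 5 = sink.
S, T = 4, 5
cap = [[0] * 6 for _ in range(6)]
for p, i in enumerate([9, 8, 1, 2]):
    cap[S][p] = i        # bright pixel -> strong link to the source (object)
    cap[p][T] = 10 - i   # dark pixel -> strong link to the sink (background)
for a, b in [(0, 1), (1, 2), (2, 3)]:
    cap[a][b] = cap[b][a] = 2   # smoothness cost between neighbours

flow_value, source_side = max_flow_min_cut(cap, S, T)
segmented_pixels = source_side & {0, 1, 2, 3}   # the two bright pixels
```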
This plug-in implements Multiple Coupled Active Contours in 2D and 3D, according to an energy-minimizing framework with a discrete explicit representation of the contours (polygons in 2D, and triangular meshes in 3D, both with self-parametrisation and topology control). If you have no idea what this last sentence means, you probably have skipped the explanation above (and if you couldn’t understand the description above, the take-home message is: it works in 2D & 3D, tracks objects over time, handles multiple contours simultaneously, detects divisions, and is pretty damn fast!)
The contour evolution exploits multiple cues to achieve optimal segmentation (both based on the image and on semantic information). Just like you would perfect a cooking recipe by adjusting the nature and quantity of ingredients, the best segmentation is achieved by a careful adjustment of the various components, which are detailed below (and correspond to the plug-in parameters):
Major sources of information:
- Edge-based information: common fluorescent markers used in microscopy are specific to membrane structures (actually, phase contrast microscopy behaves the same way, though without specific markers). The contour may exploit this information to “look” for bright (or dark) edges of a structure, and stop when these structures are reached. The caveat here is that the contour must be already close to its target in order to “snap” to the border correctly (otherwise the contour won’t “see” it).
- Region-based information: when using a “full” fluorescent stain of the structures of interest (cytoplasm, nucleus, or both), region-based information is a very powerful asset for segmentation and tracking. This plug-in exploits the well-known Mumford-Shah functional to estimate the optimal frontier that segments the regions of interest by maximising the difference between their average intensity and that of the background. Here this also works with multiple regions that do not share the same average intensity (convenient if the staining is not constant from cell to cell). The advantage here is that the contour does not need to be very close to the target to segment it (although being close makes the whole process faster!), and does not need to see “sharp” edges to find the boundary.
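The region term can be illustrated with a Chan–Vese-style speed (a popular two-phase special case of Mumford–Shah; the helper below is a hypothetical sketch, not the plug-in’s code): a pixel attracts the contour outward when its intensity is closer to the mean inside the contour than to the mean outside, even with no sharp edge in sight.

```python
import numpy as np

# Synthetic image: a bright 20x20 square (intensity 1.0) on a dark background.
img = np.zeros((60, 60))
img[20:40, 20:40] = 1.0

def region_speed(img, mask, px):
    """Chan-Vese-like speed at pixel px = (y, x) for segmentation `mask`.
    Positive -> the contour should expand to include the pixel."""
    c_in = img[mask].mean()     # average intensity inside the contour
    c_out = img[~mask].mean()   # average intensity outside
    i = img[px]
    return (i - c_out) ** 2 - (i - c_in) ** 2

# Current (too small) segmentation: a 10x10 square inside the true object.
mask = np.zeros_like(img, dtype=bool)
mask[25:35, 25:35] = True

expand_here = region_speed(img, mask, (25, 22))  # bright pixel outside mask: > 0
stay_put = region_speed(img, mask, (5, 5))       # dark background pixel: < 0
```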
- Contour smoothness: as the image data may be quite noisy (especially in low-light conditions), a geometrical constraint is usually imposed to maintain a certain level of smoothness of the deforming contour. NB: pushed to the extreme, this smoothness constraint will cause the contour to progressively shrink and eventually disappear, so use wisely! (hint: use less smoothness on clean data, more on noisy data).
- Multi-contour coupling: each region of interest being segmented (and tracked) by its own contour offers a considerable advantage when dealing with objects in contact. Indeed, this plug-in will exploit semantic information and prevent contours from overlapping or fusing when they come into contact, allowing to segregate touching objects, even during prolonged contacts over time. Caveat: extensive prolonged contacts (twisting, swirling) may not be handled properly, as the actual boundary between objects is not necessarily clear, even for the human eye!
Other sources of information (less fundamental, but still useful):
- Inflation (a.k.a. “balloon force”): should you opt for edge information, and if your contour is too far away from the target, an arbitrary inflation force can be applied to the contour such that it will permanently shrink or inflate with a certain speed. This should be used minimally and with care, as this should only help the contour find real edges, and not prevent it from staying there.
- Volume(Area) conservation: in some cases (particularly in contact situations, as described above), the separation between touching objects is challenging. This plug-in has the option to enforce that each tracked object preserves a “roughly” constant volume over time (fluctuations are allowed though), in order to improve tracking (this assumption is highly debatable from a biological perspective, and definitely wrong in 2D, but might solve tricky situations in 3D). This parameter is weighted between 0 and 1. A value of 0 means no control over the volume of the contours. A value of 1 means that between two timepoints of the sequence the contours should keep the same volume. Note: this parameter is highly sensitive and should only be used when contours disappear or get reduced due to contour contact. It is disabled by default.
- Axis constraint: to further assist in separating objects in contact, this plug-in may impose an additional constraint, imposing that contours preferentially deform along their major axis (if there is one).
Finally, the evolution of the contour itself (i.e. the minimisation of the underlying mathematical functional) also has its set of parameters which can be adjusted. These have more to do with the total execution speed of the segmentation / tracking process, here again with pros & cons:
- Evolution time-step: although the contour may seem to move magically and smoothly, it actually is advancing by very small jumps in the image. The size of this jump can be adjusted to move faster to the solution, with the immediate caveat that moving too fast may cause instabilities in the contour (it may oscillate around the optimal solution, and sometimes have issues when dealing with other touching objects)
- Contour sampling: contours used in this plug-in are represented as a set of geometrical primitives (segments in 2D, triangles in 3D) connecting so-called “control points”. The average separation between these control points can be adjusted, noting that a smaller distance results in more precise (but slower) computations, while a larger distance results in faster computation, with the risk of losing structures smaller than this sampling step.
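The sampling idea can be sketched as a resampling routine (illustrative Python; the function name and signature are assumptions): points are redistributed along the polygon at roughly equal arc-length intervals, so a smaller spacing yields more control points and finer detail.

```python
import numpy as np

def resample_closed(pts, spacing):
    """Redistribute points along a closed 2D polygon at (roughly) equal
    arc-length intervals of `spacing`."""
    closed = np.vstack([pts, pts[:1]])                       # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)    # edge lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
    total = arc[-1]
    n_new = max(3, int(round(total / spacing)))
    targets = np.linspace(0, total, n_new, endpoint=False)
    x = np.interp(targets, arc, closed[:, 0])
    y = np.interp(targets, arc, closed[:, 1])
    return np.c_[x, y]

# A unit square (perimeter 4) sampled every 0.5 units -> 8 control points.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
fine = resample_closed(square, 0.5)
```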
- Convergence criterion: once the contours have found their target, in most cases they will (or rather, seem to) naturally stop moving. Mathematically speaking, this means that a steady-state solution to the problem has been found, but this state is only steady “in theory”. In practice, there are always minor oscillations around this solution, and they are detected by measuring the overall movement of each contour, stopping them if they fall under a reasonably low value.
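A minimal version of such a stopping test (an illustrative sketch; the threshold value is arbitrary): compare each contour's points between two iterations and stop once the mean displacement falls below a small threshold, even though the iterates never become exactly stationary.

```python
import numpy as np

def has_converged(prev_pts, pts, threshold=1e-3):
    """Stop when the mean point displacement falls below a small threshold."""
    return np.linalg.norm(pts - prev_pts, axis=1).mean() < threshold

# Toy evolution: a single point creeps toward its target, halving its
# remaining displacement at every iteration (so it never exactly stops).
pts = np.array([[1.0, 0.0]])
target = np.array([[0.0, 0.0]])
it = 0
while True:
    new = target + 0.5 * (pts - target)
    it += 1
    if has_converged(pts, new) or it > 100:
        break
    pts = new
# The displacement at step k is 0.5**k, so the loop stops at k = 10
# (0.5**10 ~ 0.00098 < 1e-3) instead of running forever.
```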
1) Open an image or video sequence
2) Draw one (or more) region(s) of interest around or across the object(s) of interest. Draw a 3D ROI for 3D processing. You may also generate one or more ROIs automatically using other plug-ins such as the “Thresholder” or the “HK-Means”, which is a good step towards full automation of the workflow (and is the preferred way of initialising the method in 3D, until a good 3D ROI editor is implemented in Icy).
3) Adjust the various parameters of the method (see the explanation above, and the example below)
4) Select the desired output (none, labeled sequence, copy of original sequence with detected objects as ROI)
5) For tracking: first leave the tracking option unchecked; adjust the parameters on the first image; only then check the “tracking” option to process the entire sequence.
6) To access advanced parameters, check the corresponding box and the interface will update automatically.
To run this example, you may download the following video sequence of a moving cell via this link (you may want to right-click and save it on your disk).
Open the active contours plug-in, follow the instructions above, and refer to the screenshot below for the parameters. If all goes well, you should obtain the exact same results as shown below.
Note that a small tooltip message will appear under the mouse as you hover over each parameter, indicating how to adjust the parameters for each scenario.