Image Analyst MKII: Image Processing Principles

Basic image processing tools

In place and not in place operations. An image processing function can either produce a new image while leaving the source image unaltered (a not in place operation) or replace the source image with the result (an in place operation). To reduce memory usage, most functions of Image Analyst MKII operate in place.
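The distinction can be sketched in NumPy (illustrative only, not Image Analyst MKII code):

```python
import numpy as np

img = np.array([[1.0, 2.0], [3.0, 4.0]])

# Not in place: a new array is allocated, the source stays unchanged.
result = img * 2.0

# In place: the source buffer itself is overwritten, saving memory.
img *= 2.0
```

After both lines run, `result` and `img` hold the same values, but only the in place form avoided allocating a second image.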

 

Image arithmetic. Adding, subtracting, multiplying or dividing image sequences results in an image sequence of the same x,y size and number of frames, where the gray value of each pixel equals the result of the calculation performed on the pixels at the same (x,y,t) position in the original image sequences. Arithmetic can also be performed between an image and a number; in this case the operation is applied independently to each pixel of the image. In addition to the four basic operators, Image Analyst MKII provides power functions (including square and square root), exponential and logarithm calculations.
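A minimal NumPy sketch of these operations, treating an image sequence as a 3D array of shape (frames, y, x) with made-up pixel values:

```python
import numpy as np

# Two hypothetical one-frame image sequences of the same size.
a = np.array([[[1.0, 4.0], [9.0, 16.0]]])
b = np.full_like(a, 2.0)

summed = a + b        # pixel-wise arithmetic between two sequences
scaled = a * 10.0     # arithmetic between an image and a number
roots = np.sqrt(a)    # square root applied to each pixel
logs = np.log(a)      # natural logarithm applied to each pixel
```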

 

Masking. Mathematical operations on images do not always yield real values (e.g. division by zero or the square root of a negative value). Therefore, in Image Analyst MKII a pixel can take the value MASK, which means that the given pixel has no value. Masked pixels appear black, and the value ‘MASK’ is indicated. Masking is also useful for excluding unwanted information from further processing, including scaling of images or ROI mean calculations. In the procedures described below we often use binarized images (see below) to mask other, gray value images. Because division by zero simply results in MASK values, masking is performed by multiplying and then dividing an image by the binarized image. Masks can also be generated by the threshold function to exclude saturated or background regions of the image.
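The multiply-then-divide trick can be sketched in NumPy, using NaN as an analogue of the MASK value (illustrative only; Image Analyst MKII handles MASK internally):

```python
import numpy as np

img = np.array([[4.0, 9.0], [16.0, 25.0]])
binary = np.array([[1.0, 0.0], [1.0, 1.0]])  # binarized mask image

# Multiply then divide by the binary image: where binary == 0,
# the result is 0/0, i.e. an invalid value (NaN, the MASK analogue).
with np.errstate(invalid="ignore"):
    masked = img * binary / binary
```

Pixels under ones keep their gray value; pixels under zeros are excluded from any further calculation.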

 

Spatial filtering with kernel convolution (linear filtering). Spatial filtering is a linear transformation that acts in the x,y dimensions of each frame of an image sequence. Kernel convolution calculates a weighted sum over the neighborhood of each pixel; this neighborhood is also called the window. The sum is calculated individually for each pixel by placing the window over that pixel, and the result is placed into the resultant image. Depending on the actual weights, the typical tasks performed by spatial filtering are smoothing, sharpening or differentiation. While for sharpening we suggest the more versatile and biologically meaningful spatial filtering in the Fourier domain, kernel convolution is a fast and simple way of smoothing and spatial differentiation 1.
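A smoothing convolution can be sketched with SciPy (illustrative only; the kernel and test image are made up):

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical frame: a single bright pixel on a dark background.
img = np.zeros((5, 5))
img[2, 2] = 9.0

# 3x3 averaging kernel: each output pixel becomes the mean of its window.
kernel = np.ones((3, 3)) / 9.0
smoothed = convolve(img, kernel, mode="nearest")
```

The bright pixel's intensity is spread evenly over its 3x3 neighborhood, which is exactly the smoothing effect described above.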

 

Spatial filtering in the Fourier domain. Gray scale images can be considered a superposition of two-dimensional (2D) sine waves (see details). The frequency information encoded in a gray scale image is accessed using the discrete Fourier transform, and is manipulated by multiplication with a filter function in the Fourier domain. After the inverse discrete Fourier transform, the resultant image is enriched or depleted in certain spatial frequencies. Spatial frequencies carry biological information: the fluorescence intensity carried by a given spatial frequency originates from objects of a corresponding size. Additionally, the distribution of these frequencies (e.g. the ratio of two specific frequencies) provides information about the size of the objects.
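A low-pass filter in the Fourier domain can be sketched with NumPy's FFT (a simplified illustration; the image, cutoff radius and filter shape are arbitrary choices, not Image Analyst MKII's filter functions):

```python
import numpy as np

# Hypothetical image: a coarse gradient plus fine high-frequency stripes.
y, x = np.mgrid[0:64, 0:64]
img = x / 64.0 + 0.5 * np.sin(2 * np.pi * x / 4)

# Forward FFT, multiply by a filter function, inverse FFT.
F = np.fft.fftshift(np.fft.fft2(img))
fy, fx = np.mgrid[-32:32, -32:32]
lowpass = (fy**2 + fx**2) < 8**2   # keep only low spatial frequencies
filtered = np.fft.ifft2(np.fft.ifftshift(F * lowpass)).real
```

The fine stripes (small objects, high spatial frequency) are removed, while the coarse gradient (large structure, low spatial frequency) survives.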

 

Nonlinear filtering. As with kernel convolution, the neighborhood of each pixel is considered; here, however, the resultant pixel value is the minimum, maximum, or median intensity in the neighborhood. The width (or window) of a nonlinear filter means that for each pixel a width × width square neighborhood is taken, with the pixel in the middle. A border of width/2 is usually discarded from the image.
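The median variant, a common way to remove shot noise, can be sketched with SciPy (illustrative data):

```python
import numpy as np
from scipy.ndimage import median_filter

img = np.ones((5, 5))
img[2, 2] = 100.0   # a single-pixel "shot noise" outlier

# 3x3 median filter: each pixel becomes the median of its neighborhood,
# removing isolated outliers while preserving edges.
denoised = median_filter(img, size=3)
```

Unlike the averaging convolution above, the outlier is eliminated rather than smeared into its neighbors.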

 

Projection. To compose a single image from a z-stack of fluorescence images, maximum or mean intensity projections are used. Maximum intensity projection selects the brightest intensity for each (x,y) pixel across the consecutive frames of the z-stack, and gives the (unfiltered) projection image a natural, less hazy look. Analogously, minimum intensity projection is used to flatten z-stacks of transmitted light images (where dark carries the information). Maximum intensity projection is a nonlinear filter; therefore combining it with other image processing functions may yield different results depending on the order in which the procedures are performed.
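Projections reduce the z axis of the stack, which in NumPy terms is a reduction along the first axis of a (z, y, x) array (hypothetical two-slice stack for illustration):

```python
import numpy as np

# Hypothetical z-stack: two 2x2 frames.
stack = np.array([[[1.0, 5.0], [2.0, 0.0]],
                  [[3.0, 1.0], [0.0, 4.0]]])

mip = stack.max(axis=0)         # maximum intensity projection
mean_proj = stack.mean(axis=0)  # mean intensity projection
min_proj = stack.min(axis=0)    # for transmitted light stacks
```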

 

Binarization. Binarization is the conversion of a gray scale image into a black-or-white, so-called binary image. The simplest way to binarize is thresholding: setting pixels to white (or 1) if the gray value is greater than or equal to the threshold, and to black (0) otherwise.
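Thresholding is a one-line comparison in NumPy (illustrative values):

```python
import numpy as np

img = np.array([[10.0, 200.0], [50.0, 120.0]])
threshold = 100.0

# Pixels at or above the threshold become 1 (white), the rest 0 (black).
binary = (img >= threshold).astype(np.uint8)
```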

 

Adaptive thresholding. To perform unbiased quantification using binarization and segmentation, and also to speed up image analysis, the threshold level needs to be determined automatically. A well-established method of automatic threshold determination is Otsu’s method 2. Using Otsu’s method or a given percentile of the intensity histogram binarizes the image at one specific intensity value. In these cases very careful background subtraction is required before binarization, otherwise an uneven background will distort the shape of the objects, or render proper binarization impossible. Locally adaptive thresholding can recognize details, e.g. bright spots or shapes, over a varying background. To this end Image Analyst MKII uses ‘morphological reconstruction’ to detect local maxima in fluorescence images. In practice, while thresholding at a given value detects objects with intensities a certain level above the background, local maxima detection finds objects that are brighter than a certain level below their local maximum. Image Analyst MKII uses a combination of local maximum search and flood filling for locally adaptive thresholding.
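Otsu’s method picks the threshold that maximizes the between-class variance of the intensity histogram. A minimal sketch of the idea (a simplified reimplementation for illustration, not Image Analyst MKII’s code; the test image is an artificial bimodal distribution):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the threshold maximizing between-class variance (Otsu, 1979)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # weight of the background class
    w1 = 1.0 - w0                      # weight of the foreground class
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Bimodal test image: dark background pixels and bright foreground pixels.
img = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = otsu_threshold(img)
```

The returned threshold falls between the two intensity populations, separating background from foreground without any manual tuning.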

 

Morphological operators. Morphological operators are primarily used to manipulate binarized images, e.g. to remove noise. They act analogously to kernel convolution, but at the binary level. The kernel (of ones and zeros) is here called the structuring element. Each pixel in the image is affected based on the neighboring pixels under the structuring element placed on that pixel. Thus Erode and Dilate shrink and grow objects (of ones) by the radius of the structuring element. Open and Close split objects (enlarging holes) and fill holes, respectively, that are smaller than the structuring element. The white top-hat transform yields those objects that are smaller than the structuring element.
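These operators can be sketched with SciPy (illustrative binary image; the structuring element is an arbitrary 3x3 square):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation, binary_opening

binary = np.zeros((7, 7), dtype=bool)
binary[2:5, 2:5] = True   # a 3x3 square object
binary[0, 0] = True       # a single-pixel noise speck

se = np.ones((3, 3), dtype=bool)  # structuring element

eroded = binary_erosion(binary, structure=se)    # shrinks objects
dilated = binary_dilation(binary, structure=se)  # grows objects
opened = binary_opening(binary, structure=se)    # removes the speck,
                                                 # restores the square
```

Opening (erosion followed by dilation) removes objects smaller than the structuring element, such as the noise speck, while larger objects regain their original size.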

 

Skeletonization. Skeletonization thins the shapes in a binary image until their width is one pixel. An originally circular object will be represented by a single pixel.

 

Segmentation and morphological measurements. Segmentation identifies individual objects and provides information on the size and morphological (shape) parameters of each object. In the basic case, binarized images are used for segmentation: shapes in the image that are contiguous ones (over a background of zeros) and do not touch each other are defined as objects. Advanced image segmentation also considers gray value (intensity) information when separating objects, e.g. the watershed algorithm.

The objects resulting from segmentation are described by a set of parameters, mostly derived from the area (A) and perimeter (P). Object Classifiers are used to constrain the analysis to a subset of the detected objects, e.g. to avoid counting objects that are too small or too large.
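Basic binary segmentation and an area measurement can be sketched with SciPy's connected-component labeling (illustrative binary image; object classification is reduced here to a simple area count):

```python
import numpy as np
from scipy.ndimage import label

binary = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 1]])

# Contiguous runs of ones that do not touch become separate objects.
labels, n_objects = label(binary)

# Area (A) of each object = its pixel count.
areas = np.bincount(labels.ravel())[1:]
```

Classifiers such as a minimum-area criterion would then simply discard labels whose area falls outside the accepted range.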

 

 

References

 

   1.   Gerencser, A. A.; Adam-Vizi, V. Biophys. J. 2005, 88, 698-714.

   2.   Otsu, N. IEEE Transactions on Systems, Man, and Cybernetics 1979, 9, 62-66.