Once the optimal photomicrograph has been produced, computer software packages allow a range of further image manipulation. The superimposition of, for example, a fluorescence image and a DIC illuminated view of the same specimen, which might normally be achieved by a photographic double exposure, can be routinely composed within software programs, such as Adobe Photoshop (Adobe Systems Inc., Mountain View, CA). Further manipulation can involve spatial filtering to reduce noise or sharpen contrast, and perhaps most frequently, the pseudocoloring of black-and-white digital images. These techniques also raise the possibility of fairly sophisticated "touching up" of data.
Even with a picture that has been optimized for contrast and illumination, it is sometimes desirable to sharpen local contrast gradients or smooth an image to remove random noise. A common filtering strategy, called a kernel operation, is to compare the intensity value of a pixel with that of its immediate neighbors, and use any differences to increase or reduce the intensity of this central pixel by a computed factor. The degree of filtering depends both on the algorithm used by the kernel operation and the area over which pixel values are sampled.
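As a concrete illustration (a minimal sketch in Python with NumPy, not drawn from this chapter), a kernel operation can be written as a weighted sum over each pixel's 3 × 3 neighborhood; the sharpening kernel shown implements the "original minus Laplacian" scheme described below:

```python
import numpy as np

def apply_kernel(image, kernel):
    """Recompute each interior pixel as a weighted sum of itself and its
    eight immediate neighbors; border pixels are left unchanged here
    for brevity."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = image[y - 1:y + 2, x - 1:x + 2].astype(float)
            out[y, x] = float(np.sum(block * kernel))
    return out

# A sharpening kernel: the original image minus its Laplacian, computed
# in a single pass. Its weights sum to 1, so uniform regions (where the
# neighbors equal the center) are unchanged, while intensity steps are
# exaggerated.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)
```

Substituting a 3 × 3 block of 1/9 values for `sharpen` would instead average each pixel with its neighbors, smoothing rather than sharpening — the choice of kernel, and the area it samples, determine the degree and kind of filtering.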
1. Sharpening filters (see Fig. 4) produce a crisper image by accentuating differences in intensity between a given pixel and its neighbors. For example, a
Laplacian filter equalizes the intensity over areas of low contrast and accentuates the intensity changes where gradients in pixel intensity are sharp. This produces an image composed largely of edge information (effectively a second-order derivative of the original), which, when subtracted from the original pixel values, produces an image in which boundaries and the contrast of fine structures are enhanced.
2. Noise reduction filters remove the random pixel values that may be generated when, for example, a digital camera is working at maximum gain. Such noise is usually reduced during digital image collection by averaging successive images of the same field. If such real-time filtering is not practicable, or still leaves a noisy image, random intensity fluctuations can be filtered post hoc. Noise reduction filters take a block of pixels of a predetermined size and replace the intensity value of the central pixel with a value based on the average or median pixel intensity within the block. This has the effect of eliminating rogue high or low single-pixel values. Algorithms based on averages may result in an unacceptable loss of contrast or even the appearance of pseudoresolution artifacts (3). However, such artifacts can be avoided using filters that "rank" rather than average pixel values, such as the "median" filter included in many software packages.
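The rank-based approach can be sketched as follows (an illustrative Python/NumPy implementation, assumed for demonstration rather than taken from any particular package):

```python
import numpy as np

def median_filter(image, size=3):
    """Rank-based noise reduction: replace each interior pixel with the
    median of the size x size block centered on it. Unlike a mean
    filter, a single rogue pixel cannot drag the result toward an
    extreme value. Border pixels are left unchanged for brevity."""
    out = image.astype(float).copy()
    r = size // 2
    h, w = image.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y, x] = np.median(image[y - r:y + r + 1,
                                        x - r:x + r + 1])
    return out
```

A single saturated pixel in an otherwise dark field is simply discarded: the median of eight zeros and one bright value is zero, whereas a mean filter would smear the noise across the block.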
Pseudocolor is used to accentuate the information in black-and-white images by translating differences in intensity into differences in color. Three examples of its use in confocal microscopy are given in Fig. 4. In the first, a confocal image of cells within a chick hindbrain stained with the fluorescent dye DiI is first sharpened and then intensity values converted into color differences. The color conversion is achieved by differentially adjusting the relationship between the original intensity of the black-and-white image (input) and the red, green, and blue output intensities for a given input value. This relationship is shown on the graph to the left, where "input" is on the x-axis and "output" is on the y-axis. This kind of plot is known as a color or output look-up table (LUT) and can usually be user-defined within a given software package.
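In code, a LUT is simply a table mapping each of the 256 possible input intensities to an RGB triplet. The sketch below (Python/NumPy; the particular blue-to-green-to-red ramp is a hypothetical LUT chosen for illustration, not the one used in Fig. 4) shows how such a table is built and applied:

```python
import numpy as np

# Build a hypothetical "heat" LUT: dark pixels map to blue, mid-range
# to green, bright pixels to red. Each row of the 256 x 3 table is the
# RGB output for one input intensity.
x = np.arange(256)
lut = np.zeros((256, 3), dtype=np.uint8)
lut[:, 0] = np.clip((x - 128) * 2, 0, 255)            # red: ramps up in top half
lut[:, 1] = np.clip(255 - np.abs(x - 128) * 2, 0, 255)  # green: peaks mid-range
lut[:, 2] = np.clip((128 - x) * 2, 0, 255)            # blue: ramps down from dark

def apply_lut(gray, lut):
    """Convert an 8-bit grayscale image to RGB by indexing the LUT
    with each pixel's intensity."""
    return lut[gray]
```

Plotting each column of `lut` against `x` reproduces exactly the kind of input/output graph described above; editing those curves is what "user-defining" a LUT amounts to.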
In the second example, an aggregate of heterogeneously labeled chick hindbrain cells was scanned with two different excitation wavelengths (488 nm for green/yellow and 568 nm for red/yellow fluorescence). This reveals two populations of cells. The black-and-white images have been pseudocolored and then recombined. In the 488 nm image, red and blue were completely removed from the image, leaving a green intensity spectrum. In the 568 nm image, all green and blue were removed from the image, leaving red. The combined green and red fluorescence images were then superimposed onto a DIC filtered view of the same preparation.
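The same channel-isolation and recombination steps can be sketched in a few lines (Python/NumPy; the maximum-projection merge is one simple way to superimpose the two pseudocolored images, assumed here for illustration):

```python
import numpy as np

def to_channel(gray, channel):
    """Pseudocolor a grayscale image into a single RGB channel
    (0 = red, 1 = green, 2 = blue), zeroing the other two channels
    exactly as described for the 488 nm and 568 nm images."""
    rgb = np.zeros(gray.shape + (3,), dtype=gray.dtype)
    rgb[..., channel] = gray
    return rgb

def superimpose(a, b):
    """Recombine two pseudocolored images by keeping, per channel, the
    brighter of the two values (a simple screen-like merge)."""
    return np.maximum(a, b)
```

Because each scan occupies its own channel, no information is lost in the merge: green intensities still report the 488 nm signal and red intensities the 568 nm signal. The depth-coding scheme of the third example below uses exactly the same steps, with the two optical sections assigned to red and green instead of the two excitation wavelengths.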
In the third example, pseudocolor coding is used to identify populations of similarly labeled cells at different depths through a preparation. A cluster of cells, all labeled with DiI (which emits a red/orange fluorescence), were scanned at two different depths through the thickness of a chick hindbrain using a confocal microscope. The two black-and-white optical sections were then color-coded in red and green according to depth and recombined. This allows the relative dispersal of labeled cells at different layers of the developing brain to be contrasted. This kind of image could not be produced by conventional double-exposure film photomicrography.
Information in a digitized image can be altered, almost seamlessly, to produce "cleaner" results. A digital paintbox can remove dust and scratches from a scanned 35-mm transparency, compensate for uneven illumination, or produce a uniform background color. Although this may appear similar to the traditional "dodging and burning" used during black-and-white printing, the ease and extent to which a digital image can be altered are considerably greater. Similarly, since in many laboratories images are routinely generated on computer for publication, the opportunities for excessive image manipulation have substantially increased. Ultimately, in such an environment, producing reliable data is a matter of personal responsibility. Some self-evident principles are:
1. Apply sharpening, edge detection, or noise reduction filters globally, across the entire image, never selectively to chosen regions.
2. Preserve the data intact. Never digitally paint over areas that show data in an attempt to "clean up," for example, staining patterns. Never move or "clone" areas of the image where data are represented.