
Deconvolution questions


For the ICTM method: none! However, this holds only when the a priori knowledge that the object is non-negative is strictly true, as is the case for fluorescence emission.

Yes. Huygens Essential treats the image as the only known plane of a 3D stack and proceeds as usual. Set the z-sampling distance to the Nyquist rate as explained in 'sampling densities' (Huygens User Guide).
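As a rough sketch of the arithmetic behind the Nyquist rate, here are the textbook widefield cutoff formulas in Python. The function name and parameters are illustrative only; the Nyquist Calculator on the SVI site is the authoritative source, and practical guidelines elsewhere in this FAQ are deliberately less strict.

```python
import math

def nyquist_widefield(wavelength_nm, na, n_medium):
    """Approximate Nyquist sampling distances (nm) for a widefield microscope.

    wavelength_nm: emission wavelength in nm
    na:            numerical aperture of the objective
    n_medium:      refractive index of the lens immersion medium
    """
    alpha = math.asin(na / n_medium)                 # half-aperture angle
    dx = wavelength_nm / (4.0 * na)                  # lateral Nyquist distance
    dz = wavelength_nm / (4.0 * n_medium * (1.0 - math.cos(alpha)))  # axial
    return dx, dz

# Example: 500 nm emission, NA 1.4 oil objective (n = 1.515)
dx, dz = nyquist_widefield(500.0, 1.4, 1.515)
```

For these example settings the lateral distance comes out near 90 nm and the axial distance near 135 nm.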

The deconvolution process enhances the fine structure of the cells, but unfortunately if the image contains artefacts (because of unstable laser power etc.) they will also become more apparent in the deconvolved image. Scanner instability is largely a non-reproducible phenomenon, so deconvolution is not the solution. In some cases the instability IS reproducible, such as slow thermal drift. If this instability is also present in the PSF measurement, the deconvolution can reconstruct the image.

The absolute value of the final Quality factor depends strongly on the data, the microscope type, and the background. It is a global value computed over the entire image, so the contribution of a local resolution increase can be small.

For example, suppose you have a large featureless image with one tiny object. While the tiny object may be restored very well, the change in the featureless part is negligible. The quality factor will therefore hardly change, though the restoration is successful.

Usually, widefield images show a much higher quality increase than, for example, confocal images.

TIRF (Total Internal Reflection Fluorescence) deals mostly with 2D images. Huygens can handle 2D images by internally treating them as part of a 3D stack from which most planes happen to be missing.

Strictly speaking Huygens does not generate TIRF Theoretical PSFs, but customers report good results with high NA confocal PSFs. Proceed by setting the Microscope type of your image to confocal and NA ~ 1.4.

An experimental PSF, if available, can of course be used. See also the wiki article Total Internal Reflection.

Yes, Huygens will certainly improve the image.

The first test should be done using a Wide Field Microscope Theoretical Psf with the actual parameters used for the acquisition, possibly with a slightly lower Numerical Aperture to enlarge the PSF a little.

Still, remember that deconvolution assumes that the Image Formation is linear, and transmission is not, due to possible interference effects. These effects are weaker for thicker samples, but they can create restoration artifacts if they are noticeable. These can be balanced by properly tuning the Signal To Noise Ratio and the Max Num Of Iterations.

The ideal particle to distill an Experimental Psf with the Psf Distiller is probably a sub-resolution gold particle. Note that all the images should be inverted (negative) in Huygens before distilling a PSF and deconvolution: high intensities (gray values closer to white) should describe high object density, but raw transmission images provide the opposite. In Huygens Essential, this can be done with Tools > Invert image.
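Outside Huygens the inversion itself is a one-liner. A minimal numpy sketch (the array values are made up for illustration):

```python
import numpy as np

# Raw transmission image: dense objects appear DARK (low gray values).
raw = np.array([[200, 200, 200],
                [200,  40, 200],
                [200, 200, 200]], dtype=np.uint16)

# Invert so that high gray values describe high object density,
# mirroring Tools > Invert image in Huygens Essential.
inverted = raw.max() - raw
```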

The Ideal Sampling constraints are about half as stringent as for the confocal imaging mode. For an objective with high N.A. (1.4) the voxel sizes should be in the range of 100-150 nm laterally and 350-500 nm axially.

This strongly depends on the refractive indexes (lens immersion medium and specimen embedding medium). If there is no refractive index mismatch then restoration up to around 200 microns can be expected.

The restoration improves the resolution in all directions, but more so in the z-direction. In typical confocal images it is easy to increase the z-resolution by a factor 2. With a measured PSF a factor 4 is attainable.

However, the z-resolution in the measured image is often 4x worse than the lateral resolution. So at best you can compensate for that, but due to the lateral resolution gain the result will still be non-spherical. The gain in lateral resolution can be undone by applying a Gaussian filter (Operation window -> Restoration -> Gaussian filter/Quick Gaussian filter) to decrease the lateral resolution again in a controlled manner. Because, in our experience, practically no one wishes to reduce resolution, the restoration tools do not do that automatically.
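Such a controlled lateral-only blur can be sketched with numpy; this is not the Huygens implementation, and the shapes and sigma are illustrative:

```python
import numpy as np

def gaussian_kernel(sigma_px):
    """Normalized 1D Gaussian kernel with a 3-sigma radius."""
    radius = int(3 * sigma_px) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma_px**2))
    return k / k.sum()

def blur_lateral(stack, sigma_px):
    """Blur a (z, y, x) stack in x and y only; the z direction is untouched."""
    smooth = lambda v: np.convolve(v, gaussian_kernel(sigma_px), mode='same')
    out = np.apply_along_axis(smooth, 2, stack)   # along x
    return np.apply_along_axis(smooth, 1, out)    # along y

# A single bright voxel spreads laterally but stays confined to its z-plane.
stack = np.zeros((5, 11, 11))
stack[2, 5, 5] = 1.0
out = blur_lateral(stack, 1.0)
```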

Still, if the sphere is physically small enough (of the order of a voxel) the restoration can reduce it to close to a single voxel, provided that the z-sampling is fine enough. This situation occurs easily in the deconvolution of widefield images with 100nm x 100nm x 100nm voxel size.

For fluorescence and from a theoretical viewpoint MLE restoration is the only proper choice. However, for high SNR's as in widefield images this becomes rather academic. Since the ICTM method in Huygens Pro is computationally more efficient than the MLE implementation we suggest using ICTM for WF images. As to 'quality', an examination based on subjective criteria such as visual inspection might lead to ICTM winning in low noise cases; an examination based on mathematically sound criteria has shown MLE to be the best choice. For noisy images (most confocal ones) the MLE algorithm is not only scientifically more correct, it also produces visually more pleasing images: far less background noise artifacts than ICTM.

The computation time depends on a large number of factors:
  • Microscope type: WF microscopes require more iterations than confocal or 2-photon microscopes.
  • Object type: sparse objects can be restored more effectively than dense objects. The more gain is possible, the more iterations are needed, even if the iterations themselves become also more effective ('bigger steps').
  • Noise: low noise makes a large gain possible: more iterations are needed.
  • Algorithm: Our ICTM (Iterative Constrained Tikhonov-Miller) iterations take less time per iteration than our MLE (Maximum Likelihood Estimation) algorithm.
  • Hardware: faster and more numerous processors speed things up; insufficient memory is problematic, as processing speed then depends on disk I/O performance.
    • Example: restoring a moderately noisy confocal 256x256x64 image, including starting the software, loading the image, generating a PSF, and an MLE run, takes altogether 4:23 minutes on an SGI Octane with 2xR10000@225MHz.

The deconvolution itself is done frame by frame, but the pre- and post-processing need the whole time sequence. The preprocessing consists of bleaching correction and background estimation. In Huygens Pro there is also the possibility to apply a time or full 4D prefilter. In the postprocessing the Z-drift is corrected over time (see Zdrift Correction). Most of these operations work on multiple adjacent frames, so working on a file-by-file basis would limit the current and future processing possibilities.

This does not imply that the 4D stack needs to be loaded into RAM. Provided the swap space is large enough, most of it can be swapped out. In fact, paging out means that the processed data is written in raw form back to disk. This will slow down the preprocessing operations, but the deconvolution speed would hardly be impeded, since paging is only necessary between frames.

A second possibility is to use a script to process the frames individually. Because the CMLE and QMLE deconvolution methods allow you to specify the background level as a percentage of the estimated background, this would also handle varying backgrounds nicely.

There can be two major reasons for this:
  • There is a problem with the measured PSF:
    • bead images were saturated or undersampled.
    • beads have formed aggregates.
    • beads were moving while being imaged.
    • insufficient signal from beads causing inaccuracies in the averaging procedure.
    • strongly varying background.
  • The conditions under which the PSF was measured do not match the imaging conditions.
    • The most important parameter is the medium refractive index. To exclude magnification calibration problems, it is best to record the bead images at the same sampling density as the specimen. To check whether there is a matching problem between the measured PSF and the specimen data, deconvolve the specimen with a theoretical PSF. If the result is better, there is likely a matching problem.
As a preliminary check on the PSF quality, proceed to deconvolve one or more of the bead images with it. This should result in a strong gain in resolution.

When you deconvolve a 2D-time image as a 3D image, you force the software to assume that the relationships between the 2D slices, if any, are due to the axial imaging properties of the microscope, but this is obviously not the case! The results will therefore be incorrect.

Instead, when you correctly process the 2D-Time Series as such with the Time option, the software will, after correcting for bleaching and variable background, properly deconvolve each 2D image as a 3D data set with one plane recorded and the rest missing, as explained in 'Is deconvolution on 2D or 2D-time images possible?'.

Some File Formats having indexes in the file names are interpreted as 3D stacks by default: you may need to convert the dataset from XYZ to XYT once opened.

As an alternative you could write a Tcl script for Huygens Scripting or Huygens Professional to deconvolve the 2D-time series frame by frame. Still, your images would not be corrected for bleaching and varying background, a task automatically performed if you have the Time option.

In case of a 3D-time series, with the Time option your data will also be corrected for axial drift.

Yes. The results are especially remarkable for Widefield Microscopes. A 2D image recorded with a microscope can be considered as a slice from a 3D image. In this case the Huygens Software treats the data as a (severely) truncated 3D image, but 3D nonetheless. When deconvolving it Huygens attempts to reconstruct blur sources outside the slice and remove blur from it, regardless of the time frames.

Proper parameters

When doing 2D deconvolution of widefield images, set the Z Sample Size to the ideal Nyquist value. You can calculate this by using the Nyquist Calculator.

If you have a 2D time series, make sure Huygens interprets the series as such. Some File Formats having indexes in the file name are interpreted as 3D stacks by default: you may need to convert the dataset once opened from XYZ to XYT. See Convert The Data Set. The restoration might not be optimal otherwise, as explained in Can a 2D-time series be deconvolved as 3D stack?.

Yes, this is certainly possible in Huygens.

This can be done in Huygens Essential by choosing the option 'invert image' from the 'Tools' menu before starting the deconvolution. In Huygens Professional select the image and go to 'Deconvolution', 'Operations window', 'Arithmetic', 'One image', and then 'Invert'.

Then proceed to deconvolve the image. For brightfield images we strongly advise using the linear Tikhonov Miller algorithm (available in Huygens Professional) as this algorithm does not amplify the background noise.

Brightfield imaging is not a 'linear imaging' process. In a linear imaging process the image formation can be described as the linear convolution of the object distribution and the point spread function (PSF), hence the name deconvolution for the reverse process. So in principle one cannot apply deconvolution based on linear imaging to non-linear imaging modes like brightfield and reflection. Fortunately, in the brightfield case the detected light is to a significant degree incoherent. Because in that case there are few phase relations, the image formation process is largely governed by the addition of intensities, especially if one is dealing with a high-contrast image, 'linearizing' the problem. In short, a Bright Field Microscope is not exactly a linear imaging device, but can be made to behave almost like one.

Dr. Marcel Oberlaender et al. from the MPI of Neurobiology in Martinsried proved the validity of the Huygens deconvolution for brightfield data with the linear Tikhonov Miller algorithm.

In practice one goes about deconvolving brightfield images by inverting them and processing them further as incoherent fluorescence widefield images. Still, one should watch out for interference patterns (periodic rings and fringes around objects) in the measured image. These could become pronounced in low contrast images.

Using the Operations Window in Huygens Professional all channels are processed in one run. To work with only one channel proceed to split the multichannel image (Select Edit->Split or ALT-S) into single channel images. The single channel images can be deconvolved individually with specific parameter values. For example, the SNR can be edited in the "Signal/Noise per channel" input line of the Operations Window. The Join operation can be used after the deconvolution of single channel images to combine them into a multichannel image.


If using the Deconvolution Wizard of Huygens Professional or Essential, you can choose which channel you want to process and skip those you are not interested in.
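The split/process/join flow can be sketched with numpy arrays standing in for Huygens images; the deconvolve stand-in and the SNR values here are hypothetical:

```python
import numpy as np

# Hypothetical two-channel image, axes ordered (channel, z, y, x).
multi = np.zeros((2, 4, 8, 8))

def deconvolve(channel, snr):
    """Stand-in for a single-channel restoration with its own SNR setting."""
    return channel  # a real restoration would return the deconvolved channel

# Split, process each channel with channel-specific parameters, then join.
restored = np.stack([deconvolve(ch, snr)
                     for ch, snr in zip(multi, (20, 35))])
```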

Besides the Huygens deconvolution FAQ there is plenty of information on image acquisition and restoration, deconvolution, and the way the Huygens Software works available at the SVI site.
It can be rigorously proven that when dealing with non-negative objects (fluorescing objects) the I-divergence criterion (MLE) is the only consistent choice, whereas for objects which can be both positive and negative (e.g. sound) least squares (ICTM) is the best choice. Another viewpoint is that I-divergence incorporates the Poisson nature of the emitted fluorescence light whereas least squares incorporates Gaussian noise. With this in mind one could expect that for low noise levels where the differences in the Poisson and Gaussian distributions are small there is no preference between the methods other than their computational efficiency. One reason why we still have a preference for the MLE algorithm is that it handles noisy backgrounds much better than the ICTM. A disadvantage of MLE is that it easily overemphasizes small structures, but we constrain this.
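The two criteria can be written down in a few lines. This is a numpy sketch of the textbook formulas, valid for strictly positive images; it is not the Huygens implementation:

```python
import numpy as np

def i_divergence(measured, estimate):
    """Csiszar I-divergence (the MLE criterion) between positive images."""
    m = np.asarray(measured, dtype=float)
    e = np.asarray(estimate, dtype=float)
    return float(np.sum(m * np.log(m / e) - m + e))

def least_squares(measured, estimate):
    """Least-squares criterion (the ICTM viewpoint)."""
    d = np.asarray(measured, dtype=float) - np.asarray(estimate, dtype=float)
    return float(np.sum(d * d))
```

Both criteria vanish when the estimate equals the measurement; the I-divergence weights deviations in accordance with Poisson statistics, which is what makes it the natural choice for photon-limited data.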

See also: MLE versus ICTM - Will one method be faster or more rigorous or give a better quality result?.

As a rule of thumb one easily gains a factor 2 in Z and a bit less in XY, even for noisy images. To gain more resolution a measured PSF is necessary. Some published half intensity widths (HIW), in nm (Bioimaging 4, 1996, pp. 187-197):

  HIW (nm)                 z     x     y
  raw bead image         790   270   265
  restored               221   116    93
  bead object function    83    83    83   (i.e. the `true' bead)
  difference             138    33    10

Conclusion: nearly 4x in Z, more than 2x in XY.

The quality factor (QF) can only be used in a relative way, to compare between iterations, and is used for the stop criterion. The comparison of these values makes no sense if applied to deconvolutions of different images. The Quality Factor of the MLE algorithm is directly derived from the I-divergence; in the ICTM algorithm it is derived from the Tikhonov-Miller functional as described in the literature.

Sometimes one channel of the dataset contains Reflected light signal, i.e., the light which passes straight through various beam splitters and on into the Reflection Detector.
  1. Is there any validity in deconvolving this Reflected light component?
  2. If so, what emission / excitation wavelengths would be applicable?
This is very tricky since reflected light is coherent and full of interference effects. On top of that, quite a few microscopes have serious trouble with interference between the signal and stray light. In short, deconvolving reflection images is not a good idea. If you still want to proceed, set the excitation and emission values to the wavelength of the reflected light.

Yes. Single plane widefield deconvolution works because the data is extrapolated into a region above and below the plane spanning typically between 10-20 planes of 100-300 nm in Z. The software generates an appropriate PSF.

The QMLE iterations are approximately 5 times more efficient than the CMLE iterations, whereas they also take slightly less time per iteration. So 10 QMLE iterations are equivalent to 50 iterations in CMLE. CMLE is superior in handling low Signal to Noise (SNR) data, like low light level confocal images. In principle the CMLE algorithm with an SNR setting > 60 converges to the same result as the QMLE algorithm, but after many more iterations. Also, for good quality widefield images QMLE is the best choice.

See Restoration Methods.

The deconvolution process increases the Dynamic Range of the dataset, i.e., the intensity range increases. A computer screen cannot display arbitrarily high intensities: there is a maximum brightness for a pixel on the screen. Therefore the maximum intensity in an image is normally mapped to the maximum intensity a screen can display, and all the other pixels in the image are scaled accordingly. Read more here.
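The display mapping amounts to a simple rescaling. A numpy sketch with made-up intensities:

```python
import numpy as np

img = np.array([0.0, 120.0, 3000.0])  # deconvolution raised the peak intensity

# Map the maximum image intensity to the brightest displayable value (255)
# and scale every other pixel accordingly, as a display pipeline does.
display = (img / img.max() * 255.0).astype(np.uint8)
```

A pixel of 120 now occupies only 10 of the 255 display levels, which is why a restored image can look dim even though no signal was lost.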

People have done simulations with synthetic objects:
  • van der Voort, H.T.M and K.C. Strasters, "Restoration of confocal images for quantitative image analysis" JoMi 178, 1995, pp 165-181.
  • van Kempen, G.M.P. et al, "Comparing Maximum Likelihood Estimation and Constrained Tikhonov-Miller Restoration". IEEE Eng. in Med. And Biology 15 No 1. pp 76-83, 1996.
The extra microscopic parameters for STED images are needed to define the effect of the depletion laser and will improve the deconvolution. There are five microscopic parameters per channel which are only necessary for STED images:
  • Excitation Fill Factor
  • Saturation factor
  • Wavelength
  • Immunity fraction
  • Shape coefficients
For more information about these parameters, see STEDDeconvolution.


Miscellaneous

The voxel sizes are not changed by the restoration! The slicer takes the exact aspect ratio into account. No rounding off to integers.

Usually, when this happens you have run out of swap space. If you work with large images it is better to switch off the undo system. As a rule of thumb, bear in mind that 3 or 4 times the size of the float image can be reserved by Huygens out of the available RAM; the rest can be swapped out. When working with large images in Huygens, disk I/O speed is an additional bottleneck: the time needed to swap pages in and out during an iteration then determines the computing speed.

Consider a 300MB dataset (~25Mvoxel image): the system has to write AND read *at least* 300MB per iteration, so fast disks speed up swapping considerably. Additionally, the current ICTM algorithm cannot process large datasets brick by brick; the CMLE and QMLE methods both can. We recommend using CMLE in general, and QMLE in particular for low-noise widefield data.
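The brick idea can be sketched in a few lines of Python. This is a toy illustration; real brick-wise deconvolution adds overlap margins between bricks to avoid edge artifacts:

```python
def process_in_bricks(stack, brick_planes, func):
    """Apply func to a z-stack a few planes at a time to bound peak memory."""
    out = []
    for z in range(0, len(stack), brick_planes):
        out.extend(func(stack[z:z + brick_planes]))  # one brick in memory
    return out

planes = list(range(10))
doubled = process_in_bricks(planes, 4, lambda brick: [p * 2 for p in brick])
```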

We have a set of batch scripts available on our webserver: See Batch Script. Notes:
  1. The script is able to launch multiple parallel jobs to make full use of a multi-CPU system.
  2. These scripts use Huygens Scripting (license needed). Please ask for a 30 day license (info@svi.nl) if you'd like to test this. You can add the license to your system using the Help > License > Add License tool.
See also Batch Processor.

The multiphoton feature can be set up in the 'Excitation photon count' field in the Microscopic Parameter Editor. For a 2-photon system set this field to '2'. See Multi Photon Microscope.

These steps will help to lower the amount of memory needed for deconvolution:
  1. Switch off the undo system (Options menu).
  2. Reduce the size of the image (Crop tool). In particular, widefield images often contain many slices with blur.
  3. A theoretical PSF will be generated on-the-fly by the MLE and QMLE restoration methods. There's no need to create a theoretical PSF by hand.
When you convolve two images the resulting intensity is spread over the corners of the image. This phenomenon is caused by the following two properties of the FFT:
  • Images are interpreted as periodic, i.e. the image has infinite size but repeats itself in each dimension with periods that are equal to the size of the original image in the corresponding dimension.
  • The frequency origin (frequency space) is located at the zero voxel (bottom-left-plane) of the 3D image. The positive frequencies are located in the first octant of the image. Depending on the type of transform you selected, `complex' or `real', the negative frequencies are present in the other octants.
This means that you will find spatial frequency 0,0,0 at voxel (0,0,0) of the transform, and not at the center. For visualization purposes it is often desirable to have the zero frequency centered. Use To/from optic rep. from the Restoration menu in the Operations Window to move the zero frequency to the center.
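The origin convention is easy to verify with numpy, whose FFT uses the same layout; fftshift plays the role of To/from optic rep. here:

```python
import numpy as np

# FFT of a constant image: all energy ends up in the zero frequency.
spec = np.fft.fft2(np.ones((4, 4)))

# The DC component sits at voxel (0, 0), not at the image centre;
# it equals the sum of all pixels (16).
dc_at_origin = spec[0, 0].real

# fftshift moves the zero frequency to the centre for visualization:
centered = np.fft.fftshift(spec)
dc_at_center = centered[2, 2].real
```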

`Real' Fourier transforms contain only the positive frequencies in the u-direction (with u, v, w , spatial frequencies corresponding with the x, y, z axis). However, they do contain negative frequencies in the other dimensions. For visualization purposes real-FFTs are therefore less suited than complex-FFTs.

As a consequence of defining the frequency origin (frequency space) at the (0,0,0) voxel, convolutions with functions which are not centered around (0,0,0) will cause a shift in the resulting image. For example, when you convolve an image with a sphere located at the center ( xc , yc , zc ) of the second image, the result will be shifted over a vector ( xc , yc , zc ). Because images are interpreted as periodic, all octants in the resulting image will appear `swapped'. You can prevent this from happening by using the following methods:
  • Center the image with which you are going to convolve around (0,0,0). When this image, for instance a sphere generated with Generate sphere, is centered, use To/from optic rep. to shift it to the origin. In other cases use the following method:
  • (Alternative centering method) Determine the Center of Mass (CM) of the second image with Image statistics. Then use Shift image to move the CM to the origin. Since Shift image also interprets the image as periodic, this will produce the desired result. An advantage of this method is that you can shift the CM over a non-integer distance.
  • When you have convolved with a function located at the center of the image, you can undo the shifting effect by applying To/from optic rep. to the convolution result.
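A one-dimensional numpy example makes the shift visible; np.roll stands in for the centering operations described above:

```python
import numpy as np

img = np.zeros(8)
img[1] = 1.0                  # object at index 1
kern = np.zeros(8)
kern[3] = 1.0                 # convolution function located at index 3, not 0

# FFT-based convolution is circular: the result is shifted by the
# kernel's offset from the origin (1 + 3 = 4).
conv = np.fft.ifft(np.fft.fft(img) * np.fft.fft(kern)).real

# Rolling the kernel so its peak sits at voxel 0 removes the shift:
conv0 = np.fft.ifft(np.fft.fft(img) * np.fft.fft(np.roll(kern, -3))).real
```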
Yes, key publications are:
  • Csiszar, I., 1991; Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Ann. Stat., 19, No. 4, pp. 2033-2066 (PDF 8 MB).
  • van Kempen, G.M.P., van der Voort, H.T.M., 1996; Comparing Maximum Likelihood Estimation and Constrained Tikhonov-Miller Restoration. IEEE Engineering in Medicine and Biology, vol 15, No 1, pp. 76-83.

Contact Information

Scientific Volume Imaging B.V.

Laapersveld 63
1213 VB Hilversum
The Netherlands


Phone: +31 (0)35 64216 26
Fax: +31 (0)35 683 7971
E-mail: info@svi.nl
