Measuring PSF questions
This is a check in the Reconstruct PSF tool to prevent the cones (fans) of widefield (WF) PSFs from being cut off too asymmetrically. This effect can also be caused by large spherical aberration making the PSF (and therefore the bead images) highly asymmetrical. The check was built into the software to make people aware of sub-optimal properties of their data, which should be corrected during acquisition.
For more information you can have a look at DeconvolvingBeads.
See also Color Shift.
To assign a microscope type to the image:
- select the image
- choose "Edit Parameters" or press Alt-P
- change the field "Microscope Type" from "Generic" to the correct microscope type
- save the changes.
But it may also mean that the image indeed does not contain enough information about the PSF along the optical axis Z. The recorded volume around the beads, especially in widefield microscopes, must be large enough to contain information about the light cone. If this is the case, it is better to image the beads again, perhaps changing the sampling density and recording more planes. Read more at Recording Beads.
See also Psf Distiller.
It turns out that in practice recorded PSFs are often too shallow, so we have fitted the reconstruct tool with a powerful extrapolator and a manual Z-size setting. Provided the extrapolator has a big enough foothold, it can extrapolate to whatever size you need. A rule of thumb for the foothold is 5 micron. Ideally the extrapolator would be called as part of the preprocessing phase of the restoration tools, but because it is rather lengthy we put it in the reconstructor. We did put a light-weight extrapolator in the restoration tools to be able to extend a PSF a little bit.
See also Recording Beads.
To compare a bead image to a theoretical PSF proceed as follows:
- Open an operation window on the bead or averaged bead image
- Choose a free destination image
- Click the PSF button and set Dimensions to Parent
- Click Run to generate a theoretical PSF whose size matches that of the bead image
If you like, you can oversample the beads at, say, 2x the Nyquist rate in the lateral direction. In general it is best to really match the Nyquist criterion (or better) in Z, since the highest resolution gain is in Z. It also depends on the capabilities of the z-stepper. If you use finer 170 nm beads instead of 230 nm beads (and get sufficient signal), so much the better; it has no impact on the sampling density. Of course there is a relation between the Nyquist rate and the bead size, but it is a weak one. Literature: van der Voort HTM and Strasters KC (1995) Restoration of confocal images for quantitative image analysis. J. Micr. Vol 178, pp 43-54.
There are two problems, though:
- In some microscopes the magnification at high zoom is unreliable, with errors up to 30%.
- In the case of widefield data, the Nyquist-sampled PSF might be too small (in terms of microns) to be used in deconvolving physically large data. Make sure that, during the PSF distillation, you specify a large enough required size for the final PSF. However, in extreme cases memory limits might get in the way.
If you'd like to obtain confocal-like single slice images, the best procedure is to acquire a short stack of 10-20 slices around the plane of interest and deconvolve that. However, if you lack a z-drive or the time to acquire the stack, Huygens-Pro and Huygens Essential also allow you to deconvolve a single 2D widefield image.
One could argue that very small beads (<25 nm) have ideal spectral content, but up to now such small objects lack signal strength. Averaging small beads doesn't work either, since the limited signal strength limits the precision of the alignment procedure. This situation might change when quantum dots become available for PSF measurement.
Why is Nyquist Rate sampling so relevant for deconvolution? The (degrading) imaging process acts at the scale of the PSF, which must therefore be precisely acquired in order to restore the image properly. See Ideal Sampling for more details. In any case, the beads for PSF acquisition should be imaged with a Sampling Density at least according to the Nyquist Rate, or better. That way the PSF contains all the information about the imaging properties of the microscope, and can be adapted to other imaging conditions that are slightly undersampled. See also the FAQ What is the maximal voxel size at which Huygens can still do a good job?.
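As a rough guide, the Nyquist sampling distances can be estimated from the numerical aperture, the refractive index of the lens medium and the wavelength. The sketch below uses the standard textbook approximations (half the period of the lateral and axial cut-off frequencies, with the confocal bandwidth taken as twice the widefield one); the exact values reported by the Huygens Nyquist calculator may differ slightly.

```python
# Rough Nyquist sampling estimate (textbook approximation, not the
# exact formula used by the Huygens Nyquist calculator).
import math

def nyquist_nm(na, ri, wavelength_nm, confocal=False):
    """Return (lateral, axial) Nyquist sampling distances in nm."""
    alpha = math.asin(na / ri)               # half-aperture angle
    lateral = wavelength_nm / (4.0 * na)     # lateral cut-off is 2*NA/lambda
    axial = wavelength_nm / (2.0 * ri * (1.0 - math.cos(alpha)))
    if confocal:                             # confocal bandwidth ~2x widefield
        lateral /= 2.0
        axial /= 2.0
    return lateral, axial

# Example: 1.3 NA oil lens (ri 1.515)
print(nyquist_nm(1.3, 1.515, 500.0))                 # widefield, emission 500 nm
print(nyquist_nm(1.3, 1.515, 488.0, confocal=True))  # confocal, excitation 488 nm
```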
PSF measurement
The Huygens Software will reject beads that are severely undersampled if you try to distill a PSF from them, because in that case they do not contain the necessary information to do so! The Nyquist Rate is the minimum sampling required for a proper PSF measurement. Oversampling the bead image can be a good idea (it increases the Signal To Noise Ratio of this fundamental image), but in practice this is not possible in the widefield case, because the image would become too large. Because the PSFs of other microscope types are smaller, you can afford some oversampling there. If possible, limit the differences in sampling density to factors of 2 or 3, thus making the later scaling of the PSF easier and more precise.
In practice (and with good signal) it is not necessary to sample finer than 25 nm laterally and 100 nm axially for confocal systems, or 50 nm laterally and 100 nm axially for widefield systems. Fair numbers in a typical confocal case are 50 nm lateral and 150 nm axial.
Caveat: at high zoom factors the magnification as reported by the microscope is not always reliable.
In the widefield case it is best to record with no Pixel Binning. This usually results in a lateral sampling density in the 67-100 nm range. Axial sampling should match the sampling of the specimen if it is below 250 nm.
The Nyquist rate (following the Shannon sampling theorem) says that IF a signal is bandlimited (see our FAQ What's a bandlimited system?), it is sufficient to sample it at twice the highest frequency. Then it is possible to reconstruct the signal at ALL locations, perfectly. So in principle it is sufficient to sample at the Nyquist rate: taking more samples does not get you more information about the object. In short, the ideal sampling rate is not infinite. Still, taking more samples with the same number of photons per pixel will improve the quality of the deconvolution result. Vice versa, taking more samples allows you to achieve the same quality in the deconvolution result at lower photon counts per pixel. BTW: if you sample below the Nyquist rate you get Aliasing Artifacts (moiré patterns, staircasing).
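A one-dimensional toy illustration of aliasing: sampling a sine below its Nyquist rate makes it indistinguishable from a lower-frequency sine (the frequencies and sample rate below are arbitrary illustrative values).

```python
# Toy demonstration of aliasing: a 9 Hz sine sampled at 10 Hz
# (below its 18 Hz Nyquist rate) aliases to a 1 Hz sine.
import numpy as np

t = np.arange(0.0, 1.0, 0.1)          # 10 samples per second
fast = np.sin(2 * np.pi * 9 * t)      # 9 Hz signal, undersampled
slow = np.sin(2 * np.pi * 1 * t)      # 1 Hz alias

print(np.allclose(fast, -slow))       # True: the samples coincide (up to sign)
```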
One more reason to oversample is that with sparse objects and good SNR it is often possible to achieve a Half Intensity Width resolution on the objects corresponding to a Band Width in excess of the microscope's bandwidth. The objects are then said to be super resolved. The Shannon theorem says it doesn't matter whether you get the supersampled image during sampling or afterwards by interpolation, but it is more practical to get it during sampling, if only to improve the SNR situation.
A different matter is two-point Spatial Resolution: separating two objects. It is very hard to separate two objects reliably at distances smaller than the Nyquist distance.
In multi-channel images the different channels are often shifted with respect to each other. By NOT centering the PSF this shift can be automatically undone by the deconvolution, with sub-pixel accuracy. On the other hand, the shift can just as well be corrected manually with the shift tool, also with sub-pixel accuracy. This is perhaps the most practical method.
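Sub-pixel shifting itself is a standard interpolation operation; a minimal sketch using scipy (the shift values below are placeholders for whatever offset you measured between channels):

```python
# Correct a measured inter-channel shift with sub-pixel accuracy
# using spline interpolation (scipy); shift values are illustrative.
import numpy as np
from scipy.ndimage import shift

channel = np.random.rand(32, 256, 256)     # stand-in for one channel (z, y, x)
measured_shift = (0.0, 2.25, -1.5)         # z, y, x offset in pixels
aligned = shift(channel, [-s for s in measured_shift], order=3)
```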
- Ideal: The size (in microns) of the generated PSF is determined from its physical extent. Since the fringes of a PSF go on forever, it is in theory not spatially limited, but in practice a volume can be chosen beyond which the amount of energy from the PSF is negligible. The exception is the widefield PSF: there the amount of energy outside a volume around the focus is always infinite... Still, as the intensity goes down locally, it is possible to find a point at which the intensity of the PSF is well below the accuracy of any camera. Even so, ideal WF PSFs are much larger than confocal or 2-photon ones.
- Parent/Padded Parent/Full Padded Parent: The size is derived from the Parent image: either exactly the same as the parent (in fact no padding is done here), or as large as if the parent was 'padded'. The extra volume computed by the software is a trade-off between FFT (Fast Fourier Transform) compute efficiency and the size of the original image. For example, if you have 31 layers in your image, adding one layer would optimize the Fourier Transform process, but adding one layer is not enough to prevent wrap-around effects. The software will find out how many extra layers make a good compromise (see the sketch after this list). The Full Padded Parent mode is relevant for widefield images; for other microscope types it is equivalent to Padded Parent. If PSFs are to be compared it is best to use 'Parent', because that will fix the size.
- Automatic: A trade-off is made between the physical size of the PSF and the memory requirement. In practice confocal or 2-photon PSFs are ideally sized; WF PSFs are smaller than ideal, but at least as large as the padded parent.
- Manual: You can manually set the number of Z-slices in this mode using the input field "Min XY-slices (Manual)". Widefield images should never be padded manually.
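FFT libraries are fastest on sizes that factor into small primes, so the padded size is typically the next 'FFT-friendly' number at or above the minimum needed to suppress wrap-around. A minimal sketch of that idea (this is not Huygens' actual padding algorithm):

```python
# Find the next FFT-friendly size (factoring into 2, 3, 5 and 7 only)
# at or above a required minimum; a sketch, not Huygens' actual algorithm.
def next_fft_friendly(n):
    while True:
        m = n
        for p in (2, 3, 5, 7):
            while m % p == 0:
                m //= p
        if m == 1:
            return n
        n += 1

# 31 layers plus, say, 10 guard layers against wrap-around -> pad to 42
print(next_fft_friendly(31 + 10))   # 42 = 2 * 3 * 7
```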
Yes, this is the problem. Even worse: the thicker you make the stack, the wider the cones become, so the tops of the cones will tangle. That will really mess up the PSF. The best way to go is to set 'reduce PSF size', for instance to 2 (=high), and reduce the number of beads to a couple; even just one should be ok for widefield. Starting from Huygens version 2.16 it is less of a problem if the PSF is somewhat truncated, because of the built-in PSF extrapolation. This is also true for confocal PSFs, which always tend to be somewhat truncated.
When you use a small bead with a slightly higher refractive index (r.i.) than the surrounding medium to measure the PSF, it will not be quite homogeneously excited. This effect can be neglected for beads with a size approximately equal to the size of the diffraction spot in XY and much smaller than the diffraction spot in Z. The emitted light from within the sphere might seem to come from outside the sphere because the sphere acts as a lens, but this displacement is at most in the order of diameter*(RI_bead/RI_lens - 1) outside the sphere, so something like 10 nm, also negligible.
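A quick back-of-the-envelope check of that estimate, using typical values (a 170 nm polystyrene bead with r.i. around 1.59 in a mounting medium of r.i. around 1.515; these numbers are illustrative):

```python
# Back-of-the-envelope estimate of the apparent displacement caused
# by the bead acting as a tiny lens (values are illustrative).
diameter_nm = 170.0
ri_bead = 1.59      # polystyrene
ri_medium = 1.515   # typical mounting medium

displacement_nm = diameter_nm * (ri_bead / ri_medium - 1.0)
print(f"{displacement_nm:.1f} nm")   # ~8 nm, well below the resolution
```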
- Close images you are not immediately working on and turn off the Undo system.
- Use the crop tool to crop the data as much as possible, especially in the Z direction.
- Use either QMLE or CMLE to deconvolve the image. If possible, both will split the data into bricks and generate PSFs matching the bricks; in this way the computation of a single huge PSF is avoided. Currently, widefield images can only be processed brick-wise if the data is sufficiently shallow compared to the NA. This is the case when the base of the aperture cone, as truncated by the upper and lower planes, is far smaller than the lateral extent of the data (a rough geometric check is sketched after this list). This is often true, but overestimated NAs can spoil it. It helps to cut off as many z-planes as possible, since this not only reduces the data size, but also allows more efficient brick cutting.
- Lastly, make sure your system has sufficient swap space.
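For intuition, the lateral footprint of the aperture cone grows with stack depth as depth times tan(alpha), with alpha the half-aperture angle. A rough check of whether the cone base stays small compared to the lateral field (this is only a geometric estimate, not Huygens' actual brick-splitting criterion):

```python
# Rough geometric estimate of the aperture-cone footprint; not
# Huygens' actual brick-splitting criterion.
import math

na, ri = 1.3, 1.515                   # objective NA and lens-medium r.i.
alpha = math.asin(na / ri)            # half-aperture angle

depth_um = 5.0                        # stack depth (planes * z-step)
lateral_um = 100.0                    # lateral extent of the data

cone_base_um = 2.0 * depth_um * math.tan(alpha)
print(cone_base_um, cone_base_um < lateral_um / 4)  # "far smaller" check
```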
Invitrogen and Thermo Scientific also have good beads available. See also under "Practical Beads" on the wiki page RecordingBeads to find a list of available beads.
The gensphere command generates a so-called "bandlimited sphere", i.e. a geometrical sphere with no spatial frequencies above half of the sampling rate that you intend to use. A perfect sphere would involve an unlimited number of spatial frequencies, so aliasing artifacts would be generated due to violation of the Nyquist criterion. The ringing is a result of the sphere being bandlimited, i.e. perfectly antialiased. Removing the rings would mean corrupting the spatial frequency content, which in turn would lead to a sub-optimal measured PSF. Because the sphere is later convolved with a similarly bandlimited but smoothly rolling-off PSF, the ringing doesn't matter.
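The concept can be illustrated by low-pass filtering a binary sphere in Fourier space; a minimal sketch of the idea (not the actual gensphere implementation):

```python
# Illustration of a bandlimited sphere: remove all spatial frequencies
# above the Nyquist limit of the sampling grid; not the gensphere code.
import numpy as np

n, radius = 64, 10.0                          # grid size (voxels), sphere radius
z, y, x = np.mgrid[:n, :n, :n] - n // 2
sphere = (x**2 + y**2 + z**2 <= radius**2).astype(float)

f = np.fft.fftn(sphere)
freqs = np.fft.fftfreq(n)                     # cycles per voxel, axis max 0.5
fz, fy, fx = np.meshgrid(freqs, freqs, freqs, indexing="ij")
f[np.sqrt(fx**2 + fy**2 + fz**2) > 0.5] = 0   # cut radially at Nyquist
bandlimited = np.fft.ifftn(f).real            # shows ringing around the edge
```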