
Object Size

The definition of 'Object Size' can be ambiguous: it depends on the geometry of the objects and on which dimension you are looking at. In the Huygens Object Analyzer, it is possible to analyze various geometrical properties for each object defined by the segmentation.

For small particles close to the diffraction limit there are additional challenges, since the size of the PSF has a large effect on the apparent size of the objects. For such an analysis you could also use a different approach, such as an FWHM measurement or a fitting algorithm (see the sketch below). If the objects are large enough and are labeled only on the outside, you could also use a peak-to-peak intensity measurement in a cross section of the object, for example in the Twin-Slicer.
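As an illustration of the FWHM approach, the sketch below fits a 1D Gaussian to an intensity profile taken through a small particle and reports its full width at half maximum. This is a generic Python recipe, not Huygens functionality; the profile, pixel size, and starting guesses are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-(x - center) ** 2 / (2 * sigma ** 2)) + offset

# Hypothetical noisy line profile through a spot, sampled at 40 nm/pixel.
pixel_size_nm = 40.0
x = np.arange(32) * pixel_size_nm
rng = np.random.default_rng(0)
profile = gaussian(x, 1000.0, 640.0, 90.0, 100.0) + rng.normal(0.0, 20.0, x.size)

# Fit the Gaussian model; p0 provides rough starting guesses.
p0 = [profile.max() - profile.min(), x[np.argmax(profile)], 100.0, profile.min()]
(amplitude, center, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)

# FWHM of a Gaussian: 2 * sqrt(2 * ln 2) * sigma (about 2.355 * sigma).
fwhm_nm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
print(f"Fitted FWHM: {fwhm_nm:.1f} nm")
```

Keep in mind that such a fit reports the size of the blurred image of the particle; for objects well below the diffraction limit it essentially measures the PSF.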

In all cases, deconvolving your images before doing object analysis is very much recommended. Because the Huygens deconvolved result is a maximum-likelihood model of your object(s), measurements of object size, shape, and position will be much more reliable than measurements on the raw data alone.

There will always be a practical limit to how far you can go with small-size measurements and conventional light microscopy, and that limit is of course directly related to the diffraction limit. Using MLE deconvolution you will be able to restore and measure sizes below the conventional diffraction limit, but a limit nevertheless remains.

In addition, there are some other practical things to consider when analyzing small objects:

1.) For widefield: what is your camera pixel size, and what is the final image pixel size? Does your image pixel size match Nyquist? This is extremely important for good deconvolution and analysis results (a back-of-the-envelope check is sketched after this list). If you use a scanning confocal, you can make the pixel size as small as needed to reach the Nyquist rate, but you might then run into temporal problems (movement of the objects).

2.) How are the objects labeled? The size of the fluorophore-labeled (antibody) complex should also be taken into consideration (20 nm on a 200 nm object is already 10%).

3.) Which wavelengths do you use? Since the diffraction limit scales linearly with the wavelength, it is useful to image in the low-wavelength range (near-UV/blue) if practically possible; see the worked numbers after this list.

4.) What is the concentration of the small objects in your field of view? With a large concentration of small objects across the field of view, there is a high probability that multiple objects lie inside one diffraction-limited spot. When they are really close (< 140 nm), it becomes very challenging, if not impossible, to separate them with any fitting or deconvolution algorithm without using a super-resolution imaging technique. A rough way to estimate this crowding is sketched after this list.

5.) Movement of the objects and temporal resolution. When doing live-cell imaging you will inevitably suffer from movement issues. Even with widefield imaging, which is relatively fast, the objects might move slightly in the time it takes to image a single time frame. This causes motion blur, which is difficult to compensate for if the movement is random for each object. Note that stage drift, or cell movement slower than the frame rate, can be corrected with the Object Stabilizer; a generic drift-correction sketch is given below.
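To make point 1 concrete, here is a back-of-the-envelope Nyquist check. It uses the standard widefield lateral cutoff frequency of 2·NA/λ, so the Nyquist pixel size is λ/(4·NA); this simplified formula and the example numbers are assumptions for illustration, and a dedicated Nyquist calculator should be used for exact values (including axial sampling and other microscope types).

```python
def widefield_nyquist_pixel_nm(emission_wavelength_nm: float, na: float) -> float:
    """Lateral Nyquist pixel size for a widefield microscope (approximation)."""
    cutoff_per_nm = 2.0 * na / emission_wavelength_nm  # highest transmitted spatial frequency
    return 1.0 / (2.0 * cutoff_per_nm)                 # sample at twice the cutoff: lambda/(4*NA)

# Example: 520 nm emission with a 1.4 NA oil objective.
print(f"Nyquist pixel size: {widefield_nyquist_pixel_nm(520.0, 1.4):.0f} nm")  # ~93 nm

# Compare with the pixel size the camera actually delivers (hypothetical numbers):
camera_pixel_nm, magnification = 6500.0, 100.0  # 6.5 um camera pixel, 100x objective
print(f"Image pixel size:   {camera_pixel_nm / magnification:.0f} nm")  # 65 nm, within Nyquist
```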
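Point 3 in numbers: with the Abbe limit d = λ/(2·NA), halving the wavelength halves the smallest resolvable distance. The wavelengths and NA below are illustrative.

```python
def abbe_limit_nm(wavelength_nm: float, na: float) -> float:
    """Abbe lateral diffraction limit: d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * na)

# The limit scales linearly with wavelength, so blue imaging resolves finer detail.
for wavelength_nm in (440.0, 520.0, 650.0):
    print(f"{wavelength_nm:.0f} nm -> {abbe_limit_nm(wavelength_nm, 1.4):.0f} nm limit")
# 440 nm -> ~157 nm, 520 nm -> ~186 nm, 650 nm -> ~232 nm
```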
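For point 4, a rough estimate of how often objects crowd into one diffraction-limited spot can be made by assuming the objects are scattered as a 2D Poisson point process (an assumption made here purely for illustration): the probability that a given object has at least one neighbor closer than a distance r is 1 − exp(−density·π·r²).

```python
import math

def prob_neighbor_within(r_nm: float, objects_per_um2: float) -> float:
    """Chance that an object has >= 1 neighbor within r_nm, for a 2D Poisson process."""
    density_per_nm2 = objects_per_um2 / 1e6  # 1 um^2 = 1e6 nm^2
    return 1.0 - math.exp(-density_per_nm2 * math.pi * r_nm ** 2)

# Example: 10 objects per square micron; how often is a neighbor closer
# than 140 nm, the hard-to-separate regime mentioned above?
print(f"{prob_neighbor_within(140.0, 10.0):.0%}")  # roughly 46%
```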
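Finally, a generic sketch of the kind of drift correction point 5 alludes to. This is not the Huygens Object Stabilizer; it is a common do-it-yourself approach using scikit-image, estimating the shift between each frame and a reference by phase cross-correlation and shifting the frame back. It only handles rigid, frame-wide drift, not independent object movement.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def stabilize(frames: np.ndarray) -> np.ndarray:
    """Align a (time, y, x) stack to its first frame by rigid translation."""
    reference = frames[0]
    corrected = [reference]
    for frame in frames[1:]:
        # Estimated shift that registers `frame` onto `reference`.
        drift, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
        corrected.append(nd_shift(frame, drift))
    return np.stack(corrected)
```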