
Deep Sky Lucky Imaging



December 30, 2019

 


by

Robert E. Majewski Ph.D.

 

 

Atmospheric Seeing

 

Solar radiation is absorbed by the ground, which heats the air above. The warm air rises and mixes. Because the ground heating is uneven, variations in air temperature cause variations in air density. Variations in air density result in variations in the index of refraction, which generate optical wavefront errors. Atmospheric seeing is characterized by the Fried parameter r0, the atmospheric coherence diameter. This parameter represents the typical spatial extent of a region of roughly constant wavefront phase at the telescope’s entrance aperture. If r0 is smaller than the telescope’s diameter D, then the angular Full Width at Half Maximum (FWHM) of a star will be ~ λ/r0, where λ is the wavelength. The coherence diameter varies with atmospheric conditions and depends on the elevation angle θ. It also increases as a function of wavelength. The Fried parameter varies as

 

r0 ∝ λ^(6/5) sin(θ)^(3/5)

Thus, imaging near zenith will maximize the Fried parameter for any given atmospheric conditions. Also, longer wavelengths will increase r0. In general, having good seeing (a large value of r0) is the most important consideration for high resolution imaging. This is shown in figure 1.
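As a quick numerical illustration (a Python sketch, not from the article), the seeing-limited FWHM follows from λ/r0, and the scaling above adjusts a reference r0 for wavelength and elevation:

import math

RAD_TO_ARCSEC = 206265.0

def seeing_fwhm_arcsec(wavelength_m, r0_m):
    # Seeing-limited FWHM ~ lambda / r0, converted to arc-seconds.
    return RAD_TO_ARCSEC * wavelength_m / r0_m

def scale_r0(r0_ref_m, wl_ref_m, wl_m, elevation_deg):
    # r0 scales as lambda^(6/5) * sin(theta)^(3/5).
    theta = math.radians(elevation_deg)
    return r0_ref_m * (wl_m / wl_ref_m) ** 1.2 * math.sin(theta) ** 0.6

# r0 = 0.05 m at 500 nm (zenith) gives ~2.1 arc-second stars:
print(seeing_fwhm_arcsec(500e-9, 0.05))
# The same atmosphere at 800 nm and 60 degrees elevation: r0 grows to ~0.08 m.
print(scale_r0(0.05, 500e-9, 800e-9, 60.0))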

 

The seeing conditions shown in figure 1 range from ideal “space” to average seeing. These labels are somewhat subjective; a professional astronomer might classify what is labeled here as excellent as merely good. These curves were generated by convolving a telescope Point Spread Function (PSF) with a seeing blur. The calculations were carried out in the spatial frequency domain and then Fourier transformed into the spatial domain. The telescope’s PSF was characterized using a modulation transfer function for telescopes with central obscurations, such as Newtonians and SCTs. A wavelength of 500 nm was used for figure 1. In space, the Fried parameter would be infinite and the FWHM values vary as ~ λ/D, the diffraction limit.

 

Typically for amateur astronomers, the Fried parameter varies from 0.1 meters (excellent) to 0.025 meters (poor). For professional astronomers, who have access to the world’s best observing sites, r0 can reach 0.2 meters or better. Under these conditions, FWHM values of 0.5 arc-seconds can sometimes be seen. However, in space, a 20 inch telescope could deliver FWHM values of 0.2 arc-seconds!

 

What figure 1 shows is how important good seeing is for high resolution imaging. Even at a site with excellent seeing, the FWHM values don’t decrease very much once the telescope aperture exceeds 8 inches. This does not mean there is no value in using a larger telescope: a larger telescope collects more photons per second, so for a given SNR a bigger telescope will get you there in less time.

 

Figure 2 shows how the star FWHM varies as a function of wavelength for an 8 inch telescope. In space, an 8 inch telescope could deliver a FWHM of 0.4 arc-seconds at a wavelength of 400 nm. In the near-infrared at 1000 nm, the FWHM would be around 1 arc-second. The situation is rather different when atmospheric turbulence is present. Since r0 increases as a function of wavelength, the net effect is to counteract the increasing FWHM values seen in space, so to a first approximation the FWHM values are relatively constant as a function of wavelength. At higher levels of turbulence, a slight decrease in FWHM values appears at long wavelengths, so under conditions of poor seeing, imaging in the near-infrared will show some benefit.

 


Figure 1 FWHM as a function of telescope diameter for a number of seeing conditions. The r0 values are in meters.

 


Figure 2 FWHM as a function of wavelength for various seeing conditions for an 8 inch telescope.

 

Plate scale affects the overall FWHM. In addition to controlling the image sampling, the angular size of the pixels also represents a spatial integration of the light reaching the focal plane and thus affects resolution. The plate scale is the width of a detector pixel divided by the focal length of the optical system, expressed in arc-seconds. For example, if the detector pixels are 2.4 microns wide and the focal length of the OTA is one meter, the plate scale is ~0.5 arc-seconds per pixel.
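The arithmetic is a one-liner; a small sketch, assuming only the 206265 arc-seconds-per-radian conversion:

def plate_scale_arcsec(pixel_um, focal_length_mm):
    # 206265 arc-seconds per radian; convert both lengths to mm first.
    return 206265.0 * (pixel_um / 1000.0) / focal_length_mm

# The example above: 2.4 micron pixels, 1 m (1000 mm) focal length.
print(plate_scale_arcsec(2.4, 1000.0))   # ~0.49 arc-seconds per pixel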

 

 


Figure 3 FWHM as a function of plate scale for an 8 inch telescope.

 

Figure 3 shows the effect of plate scale on the overall FWHM. The larger r0 is, the greater the effect of having a smaller plate scale. However, don’t be misled into thinking that a very small plate scale is a great idea for high resolution imaging, because the SNR will take a big hit: very long exposures will be required to obtain a reasonable SNR. So there is a tradeoff between lowering the overall FWHM and obtaining a good SNR. If the seeing is poor, a larger plate scale is a good idea. If, however, the seeing is excellent, then using a camera with smaller pixels may help with getting high resolution imagery.

 


Figure 4 FWHM as a function of mount tracking jitter for various conditions of seeing for an 8 inch telescope with 0.55 arc-second pixels.

 

In order to obtain high resolution imagery, the mount tracking jitter occurring during the exposure time should be minimized. Figure 4 shows the effect of mount jitter occurring during an exposure. Note that since the FWHM values are a full width and the jitter is the standard deviation from the average position, the effect of jitter is larger than one might expect. Having low jitter is more important if seeing conditions are excellent. The use of high resolution encoders or short guider exposures will help here.
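One way to see why the jitter penalty is larger than the RMS number suggests: if the jitter is roughly Gaussian, its full width is about 2.355 times its standard deviation, and the blur contributions add approximately in quadrature. A sketch under those assumptions (the article does not state this model explicitly):

import math

SIGMA_TO_FWHM = 2.355   # FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian

def fwhm_with_jitter(seeing_fwhm, jitter_rms):
    # Convert the RMS jitter to an equivalent full width, then add the
    # two blur contributions in quadrature (Gaussian approximation).
    return math.sqrt(seeing_fwhm**2 + (SIGMA_TO_FWHM * jitter_rms)**2)

# 2.0 arc-second seeing plus 0.5 arc-second RMS jitter:
print(fwhm_with_jitter(2.0, 0.5))    # ~2.32 arc-seconds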

 


Figure 5 FWHM as a function of defocus for an 8 inch telescope with 0.55 arc-second pixels.

 

In order to obtain high resolution imagery, good focus is required. Figure 5 shows the effect of defocus for 3 seeing cases. The optical system should be focused to better than 1/4 wave of defocus.

 

1/4 wave defocus tolerance = 2 λ (f/#)^2

 

For example, if the telescope has an f number of 5, at a wavelength of 500 nm, the quarter wave defocus tolerance would be  25 microns or  0.025 mm.
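In code, the same tolerance calculation:

def quarter_wave_defocus_mm(wavelength_nm, f_number):
    # Defocus tolerance: delta = 2 * lambda * (f/#)^2, with nm converted to mm.
    return 2.0 * (wavelength_nm * 1e-6) * f_number**2

# The example above: f/5 at 500 nm.
print(quarter_wave_defocus_mm(500.0, 5.0))   # 0.025 mm, i.e. 25 microns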

 

 

 

 

Lucky Imaging for DSOs

 

Amateur astronomers have been using lucky imaging for planetary targets with great success. The basic strategy is to obtain a large number of short exposure images in order to “freeze” the turbulence and then select the sharpest images for alignment and integration. Short exposures are valuable because the biggest effect of turbulence for small telescopes is image jitter. For long exposures, the image jitter gets averaged out, simply generating a blurred image. The problem with doing this for deep sky objects is that the number of photons collected in 1/60 of a second is rather small in most cases. This makes it hard to determine an image shift accurately. However, lucky imaging can still help sharpen DSO imagery.

 

The Fried parameter is a random variable. It varies with the seasons, from day to day, and from minute to minute. If turbulence arises mostly at high altitudes, the FWHM values of stars will be largely decorrelated if they are separated by more than ~10 arc-seconds. This means that the average FWHM value of many stars in the field of view would not vary much from frame to frame, so this parameter would not be very useful for selecting “good” frames for image integration. However, for typical amateur astronomy observing sites, a large amount of the turbulence is generated in the surface layer near the ground, or even in the telescope itself. Houses, trees, fences, etc. can cause the prevailing winds to produce turbulence within ~10 meters of the ground. This turbulence is highly correlated over a much larger field of view, ~0.5 degrees, which means that the average FWHM of many stars is indicative of the seeing condition for the entire field of view. By selecting images with low average star FWHM values, one can obtain sharper imagery. Computing a median instead of the mean is a better idea, so that hot pixels do not skew this calculation. Measurements by Nicholas Short, Walt Fitelson and Charles H. Townes at Mount Wilson showed that the turbulence at their site was correlated for time periods of up to 15 seconds, after which the turbulence changed. This suggests that short exposures of up to 15 seconds should work well for DSO lucky imaging. Long exposures, like 10 minutes, will tend to average over seeing variations and result in less sharp imagery.
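A minimal sketch of the selection step, assuming a hypothetical measure_star_fwhms routine that returns the per-star FWHM values for one frame:

import numpy as np

def select_lucky_frames(frames, measure_star_fwhms, threshold):
    # Keep only frames whose median star FWHM is below the threshold.
    # The median (rather than the mean) rejects hot-pixel outliers.
    kept = []
    for frame in frames:
        fwhms = measure_star_fwhms(frame)   # per-star FWHM values
        if len(fwhms) > 0 and np.median(fwhms) < threshold:
            kept.append(frame)
    return kept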

 

Taking short exposures is also valuable here because it increases the chances of catching periods of low turbulence. CMOS cameras with USB 3 interfaces work well because of their low readout noise and fast download speeds. Low readout noise means that a lot of short exposures can be used without incurring a large SNR penalty.

 


Figure 6 The minimum exposure times for CMOS and CCD cameras as a function of sky brightness. The sky backgrounds are in magnitudes per square arc-second for the luminance band 400-700 nm.

 

Figure 6 shows what a game changer low readout noise CMOS cameras are for taking short exposures. The graph was calculated by setting the sky background noise equal to the camera readout noise. This graph is for an 8 inch telescope with a plate scale of 0.74 arc-seconds per pixel and a camera QE of 60%. In general, we want the readout noise to be small compared with the photon shot noise coming from the sky. Therefore, one would like to use longer exposures than the ones shown in figure 6 in order to maximize the SNR. At very dark sites, long exposures should be used in order to take advantage of the low sky background photon flux. Except in light polluted locations, exposures with CCD cameras should be greater than 10 seconds. At dark sites, an exposure of 2 minutes or more would be best for many CCD cameras. For CMOS cameras like the ASI1600MM Pro, exposures as short as ~5 seconds can be taken even at very dark sites without a significant SNR loss!
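The break-even condition behind the graph is easy to reproduce: the sky shot noise after t seconds is sqrt(rate × t) electrons, so setting it equal to the read noise gives t = RN^2/rate. A sketch that takes the per-pixel sky electron rate as an input (deriving that rate from a sky magnitude needs a photometric zero point the article does not give); the example numbers are assumed, not from the article:

def min_exposure_s(read_noise_e, sky_rate_e_per_s):
    # Sky shot noise after t seconds is sqrt(rate * t) electrons; setting
    # it equal to the read noise gives t = RN^2 / rate.
    return read_noise_e**2 / sky_rate_e_per_s

print(min_exposure_s(1.5, 2.0))   # low read noise CMOS: ~1.1 s
print(min_exposure_s(8.0, 2.0))   # typical CCD: 32 s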

 

In order to get the lowest readout noise, CMOS cameras should be run at high gain. This carries a penalty, because the dynamic range at maximum gain is rather limited. For example, with the ASI1600MM Pro at max gain (a setting of “300”), the full well-depth is only ~600 photo-electrons. This is where having a 32 bit virtual image buffer comes into play. The camera well-depth may be small, but the virtual image buffer has plenty of room for thousands of exposures to be integrated without any concern about saturation. So it is possible to have both low readout noise and a wide dynamic range!
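The buffer itself amounts to accumulating each aligned short exposure into a floating point array rather than relying on the camera's integer range; a minimal numpy sketch:

import numpy as np

def integrate_frames(aligned_frames, shape):
    # Accumulate short exposures into a 32-bit float buffer. Each raw
    # frame may clip at ~600 e- at max gain, but the running sum itself
    # never saturates, so the stack's dynamic range keeps growing.
    buffer = np.zeros(shape, dtype=np.float32)
    for frame in aligned_frames:
        buffer += frame.astype(np.float32)
    return buffer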

 

 

 

 

Equipment and Methodology

 

 

The imagery in the examples below was captured using an 8 inch Newtonian telescope on a Losmandy G11 mount. The mirror has a focal ratio of 4.5 and was fabricated by the Zambuto Optical Company. A Televue Paracorr 2 coma corrector was used to obtain a ~1/2 degree diffraction limited field of view; the coma corrector resulted in an overall system focal ratio of 5. The Telescope Drive Master was used to reduce the mount periodic error to ~0.4 arc-seconds RMS. The cameras used were two CMOS cameras from ZWO, an ASI1600MM Pro and an ASI183MM Pro. The filters used were Astrodon Series E LRGB filters. Focusing was accomplished using a Moonlite focuser driven by an Arduino electronics package.

 

The imagery was captured using software written by the author. The software made use of Image Autoguiding and had a real-time 32 bit floating point virtual image buffer. The floating point buffer gives the capability of a dynamic range much larger than that of the camera alone. One hundred twenty 5 second exposures were captured to create one “sub” that was saved in a FITS file. Lucky imaging was implemented by setting a FWHM threshold: images with a median FWHM less than the threshold were aligned and summed into the virtual image buffer, while images with a median FWHM above the threshold were discarded. Dithering was also used, by varying the position of the reference image for each 10 minute sub (but not for the 5 second short exposures).

 

 

 

 

Image Autoguiding

 

Image Autoguiding replaces the centroid calculation on a single guide star with a position shift calculation that uses the entire image, or large portions of it. Image Autoguiding is an algorithm developed by the author that makes use of Fourier analysis to measure shifts between images to sub-pixel accuracy. A reference image is obtained at the start of the guiding process, and aim point shifts are then determined from the imagery obtained afterward. In addition, the measured image shift was used for real-time alignment and integration. No guide stars were used!
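The author's exact algorithm is described in reference 9; the core idea of measuring a whole-image translation in the Fourier domain can be sketched with ordinary phase correlation (a generic stand-in, not necessarily the author's method; windowing and sub-pixel refinement are omitted):

import numpy as np

def measure_shift(reference, image):
    # Phase correlation: the normalized cross-power spectrum of two
    # translated images inverse-transforms to a sharp peak at the shift.
    cross = np.fft.fft2(image) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx   # shift of image relative to reference, in pixels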

 

For amateur-sized telescopes, the primary degrading effect of atmospheric seeing is position jitter. The optical point spread function moves around with a range of frequencies that depends on how rapidly turbulent cells move across the telescope's entrance aperture. This is a large source of measurement error for centroid tracking with short exposures. Since Image Autoguiding uses all or most of the stars across the FOV, the effect of atmospheric seeing as an autoguiding error source is significantly reduced. Most of the star-to-star image jitter is generated by high altitude turbulence, and this jitter is decorrelated for stars more than ~10 arc-seconds apart, so Image Autoguiding will greatly reduce the effect of high altitude jitter on the image shift measurement. The surface layer turbulence mostly jitters the entire field of view as a whole; this overall image jitter is measured and corrected by real-time alignment and integration.

 

 

 

Examples

 

 

M 13

 

The great globular star cluster in Hercules, M13. This image was obtained using an 8 inch Newtonian telescope with 5 second exposures.

 

 

 

 

NGC 7331

 

A spiral galaxy in Pegasus. This image was obtained using an 8 inch Newtonian telescope with 5 second exposures.

 

 

 

 

 

M51

 

The Whirlpool spiral galaxy in Canes Venatici, near the handle of the Big Dipper. This image was obtained using an 8 inch Newtonian telescope with 5 second exposures.

 

 

 

 

References

 

1.  J. D. Jackson, Classical Electrodynamics (Wiley, New York, 1975).

2.  M. Born and E. Wolf, Principles of Optics (Pergamon Press, 1986).

3.  J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, San Francisco, 1968).

4.  J. W. Goodman, Statistical Optics (Wiley, New York, 1985).

5.  P. Y. Bely, The Design and Construction of Large Optical Telescopes (Springer, New York, 2010).

6.  D. J. Schroeder, Astronomical Optics (Academic Press, 2000).

7.  R. K. Tyson, Introduction to Adaptive Optics (SPIE Press).

8.  D. L. Fried, “Optical Resolution Through a Randomly Inhomogeneous Medium for Very Long and Very Short Exposures,” J. Opt. Soc. Am. 56, 1372 (1966).

9.  R. E. Majewski, “Image Autoguiding: Autoguiding Without Guide Stars,” Astronomy Technology Today, Vol. 10, Issue 9, 2016, p. 71.

10. N. Short, W. Fitelson, and C. H. Townes, “Atmospheric Turbulence Measurements at Mount Wilson Observatory,” The Astrophysical Journal, 599: 1469-1477, 2003 December 20.

 

 

