
Deep Sky Lucky Imaging
December 30, 2019
by
Robert E. Majewski Ph.D.
Atmospheric Seeing
Solar radiation is absorbed by the ground, which heats the air above. The warm air rises and mixes. Because the ground heating is uneven, variations in air temperature cause variations in air density. Variations in air density result in variations in the index of refraction, which generates optical wavefront errors. Atmospheric seeing is characterized by the Fried parameter r0, the atmospheric coherence diameter. This parameter represents the typical spatial extent of a region of roughly constant wavefront phase at the telescope’s entrance aperture. If r0 is smaller than the telescope’s diameter D, then the angular Full Width at Half Maximum (FWHM) of a star will be equal to ~ λ/r0 where λ is the wavelength. This coherence diameter varies with atmospheric conditions and depends on the elevation angle θ. It also increases as a function of wavelength. The Fried parameter varies as
r0 ∝ λ^(6/5) sin(θ)^(3/5)
Thus, imaging near zenith will maximize the Fried parameter for any given atmospheric conditions. Also, longer wavelengths will increase r0. In general, having good seeing (a large value of r0) is the most important consideration for high resolution imaging. This is shown in figure 1.
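For readers who want to experiment with this scaling, here is a minimal sketch in Python. The function name and the 500 nm reference wavelength are arbitrary choices for illustration, not part of the author's software.

```python
import numpy as np

def fried_parameter(r0_ref, wavelength, elevation_deg, ref_wavelength=500e-9):
    """Scale a reference r0 (measured at ref_wavelength, at zenith) to another
    wavelength and elevation angle, using r0 ~ lambda^(6/5) * sin(theta)^(3/5)."""
    return (r0_ref
            * (wavelength / ref_wavelength)**(6/5)
            * np.sin(np.radians(elevation_deg))**(3/5))

# Example: r0 of 0.1 m at 500 nm and zenith shrinks to ~0.081 m at 45 degrees elevation.
print(fried_parameter(0.1, 500e-9, 45))
```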
The seeing conditions shown in figure 1 range from ideal “space” to average seeing. These labels are somewhat subjective; a professional astronomer might classify what is labeled here as excellent as merely good. These curves were generated by convolving a telescope Point Spread Function (PSF) with a seeing blur. The calculations were carried out in the spatial frequency domain and then Fourier transformed into the spatial domain. The telescope’s PSF was characterized using a modulation transfer function for telescopes with central obscurations, such as Newtonians and SCTs. A wavelength of 500 nm was used for figure 1. In space, the Fried parameter would be infinite and the FWHM would be set by diffraction alone, varying as ~ λ/D.
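As a rough illustration of this frequency-domain approach, the sketch below combines the diffraction MTF of an unobstructed circular aperture with the standard long-exposure Kolmogorov seeing MTF and transforms back to a PSF profile. It deliberately ignores the central obscuration included in the article's curves, so it will not reproduce figure 1 exactly.

```python
import numpy as np
from scipy.special import j0

def combined_fwhm_arcsec(D=0.2, r0=0.1, wavelength=500e-9):
    """Approximate seeing + diffraction FWHM (arc-seconds) for a clear circular aperture."""
    fc = D / wavelength                                   # diffraction cutoff, cycles/radian
    f = np.linspace(0.0, fc, 4096)
    x = np.clip(f / fc, 0.0, 1.0)
    mtf_tel = (2/np.pi) * (np.arccos(x) - x*np.sqrt(1.0 - x**2))   # perfect circular aperture
    mtf_atm = np.exp(-3.44 * (wavelength * f / r0)**(5/3))         # long-exposure seeing
    mtf = mtf_tel * mtf_atm
    # Radially symmetric inverse transform (zeroth-order Hankel) gives the PSF profile.
    theta = np.linspace(0.0, 40.0 * wavelength / D, 4000)          # radians
    psf = np.array([np.trapz(mtf * j0(2*np.pi*f*t) * f, f) for t in theta])
    psf /= psf[0]
    half_radius = theta[np.argmax(psf < 0.5)]                      # first radius below half max
    return 2.0 * half_radius * 206265.0

print(combined_fwhm_arcsec(D=0.2, r0=0.05))   # roughly 2 arc-seconds for average seeing
```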
Typically for amateur astronomers, the Fried parameter varies from 0.1 meters (excellent) to 0.025 meters (poor). For professional astronomers who have access to the world’s best observing sites, r0 can reach 0.2 meters or better. Under these conditions, FWHM values of 0.5 arc-seconds can sometimes be seen. However, in space a 20 inch telescope could deliver FWHM values of 0.2 arc-seconds!
What figure 1 shows is how important good seeing is for high resolution imaging. Even at a site with excellent seeing, the FWHM values don’t decrease very much once the telescope aperture exceeds 8 inches. This does not mean there is no value in using a larger telescope. A larger telescope collects more photons per second, so for a given SNR, a bigger telescope will get you there in less time.
Figure 2 shows how the star FWHM varies as a function of wavelength for an 8 inch telescope. In space, an 8 inch telescope could deliver a FWHM of 0.4 arc-seconds at a wavelength of 400 nm. In the near-infrared at 1000 nm, the FWHM would be around 1 arc-second. The situation is rather different when atmospheric turbulence is present. Since r0 increases as a function of wavelength, the net effect is to counteract the increase in FWHM seen in space. So the FWHM values are, to a first approximation, relatively constant as a function of wavelength. At higher levels of turbulence, a slight decrease in FWHM is seen at longer wavelengths. So under conditions of poor seeing, imaging in the near-infrared will show some benefit.
Figure 1 FWHM as a function of telescope diameter for a number of seeing conditions. The r0 values are in meters.
Figure 2 FWHM as a function of wavelength for various seeing conditions for an 8 inch telescope.
Plate scale affects the overall FWHM. In addition to controlling the image sampling, the angular size of the pixels also represents a spatial integration of the light reaching the focal plane and thus affects resolution. The plate scale is the width of a detector pixel divided by the focal length of the optical system, expressed in arc-seconds per pixel. For example, if the detector pixels are 2.4 microns wide and the focal length of the OTA is one meter, the plate scale is 0.5 arc-seconds per pixel.
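The plate scale is just a small-angle conversion, so it is a one-liner; here is a minimal sketch matching the example above.

```python
ARCSEC_PER_RADIAN = 206265.0

def plate_scale(pixel_size_m, focal_length_m):
    """Angular width of one pixel in arc-seconds."""
    return (pixel_size_m / focal_length_m) * ARCSEC_PER_RADIAN

print(plate_scale(2.4e-6, 1.0))   # ~0.495 arc-seconds per pixel
```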
Figure 3 FWHM as a function of plate scale for an 8 inch telescope.
Figure 3 shows the effect of plate scale on the overall FWHM. The larger r0 is, the greater the effect of using a smaller plate scale. However, don’t be misled into thinking that a very small plate scale is a great idea for high resolution imaging, because the SNR will take a big hit. It will require very long exposures in order to obtain a reasonable SNR. So there is a tradeoff here between lowering the overall FWHM and obtaining a good SNR. If the seeing is poor, a larger plate scale is a good idea. If, however, the seeing is excellent, then using a camera with smaller pixels may help with getting high resolution imagery.
Figure 4 FWHM as a function of mount tracking jitter for various conditions of seeing for an 8 inch telescope with 0.55 arc-second pixels.
In order to obtain high resolution imagery, the mount tracking jitter occurring during the exposure time should be minimized. Figure 4 shows the effect of mount jitter occurring during an exposure. Note that since the FWHM values are a full width and the jitter is the standard deviation from the average position, the effect of jitter is larger than one might expect. Having low jitter is more important if seeing conditions are excellent. The use of high resolution encoders or short guider exposures will help here.
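A back-of-the-envelope way to see why the effect is larger than expected: assuming roughly Gaussian profiles, an RMS jitter of σ contributes a blur of FWHM ≈ 2.355 σ, which adds approximately in quadrature to the seeing-limited FWHM. This is only a rule-of-thumb approximation of the full calculation behind figure 4.

```python
import numpy as np

def fwhm_with_jitter(seeing_fwhm_arcsec, jitter_rms_arcsec):
    """Quadrature estimate of the combined FWHM, assuming roughly Gaussian profiles."""
    jitter_fwhm = 2.355 * jitter_rms_arcsec     # convert RMS jitter to an equivalent full width
    return np.hypot(seeing_fwhm_arcsec, jitter_fwhm)

print(fwhm_with_jitter(2.0, 0.4))   # ~2.2": 0.4" RMS jitter noticeably widens a 2" star
```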
Figure 5 FWHM as a function of defocus for an 8 inch telescope with 0.55 arc-second pixels.
In order to obtain high resolution imagery, good focus is required. Figure 5 shows the effect of defocus for 3 seeing cases. The optical system should be focused to better than 1/4 wave of defocus.
1/4 wave defocus tolerance = 2 λ (f/#)²
For example, if the telescope has an f-number of 5, at a wavelength of 500 nm the quarter wave defocus tolerance would be 2 × 0.5 µm × 5² = 25 microns, or 0.025 mm.
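The same tolerance expressed as a trivial helper, for illustration:

```python
def quarter_wave_defocus_m(wavelength_m, f_number):
    """Quarter-wave defocus tolerance: 2 * lambda * (f/#)^2, in meters."""
    return 2.0 * wavelength_m * f_number**2

print(quarter_wave_defocus_m(500e-9, 5) * 1e6)   # 25 microns (0.025 mm)
```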
Lucky Imaging for DSOs
Amateur astronomers have been using lucky imaging for planetary targets with great success. The basic strategy is to obtain a large number of short exposure images in order to “freeze” the turbulence and then select the sharpest images for alignment and integration. Short exposures are valuable because the biggest effect of turbulence for small telescopes is image jitter. For long exposures, the image jitter gets averaged out, producing a blurred image. The problem with doing this for deep sky objects is that the number of photons collected in 1/60 of a second is rather small in most cases. This makes it hard to determine an image shift accurately. However, lucky imaging can still help sharpen DSO imagery.
The Fried parameter is a random variable. It varies with the seasons, from day to day, and from minute to minute. If turbulence arises mostly at high altitudes, the FWHM values of stars will be largely decorrelated if the stars are separated by more than ~10 arc-seconds. This means that the average FWHM computed over many stars in the field of view would not vary much from frame to frame. In this case, it would not be very useful to use this parameter to select “good” frames for image integration. However, for typical amateur astronomy observing sites, a large amount of the turbulence is generated in the surface layer near the ground or even in the telescope itself. Houses, trees, fences, etc. can cause the prevailing winds to produce turbulence within ~10 meters of the ground. This turbulence will be highly correlated over a much larger field of view, ~0.5 degrees. This means that an average FWHM of many stars will be indicative of the seeing condition for the entire field of view. By selecting images with low average star FWHM values, one can obtain sharper imagery. Computing a median instead of the mean is a better idea, so that hot pixels do not bias this calculation. Measurements by Nicholas Short, Walt Fitelson and Charles H. Townes at Mount Wilson showed that the turbulence at their site was correlated for time periods up to 15 seconds, after which the turbulence changed. This suggests that short exposures of up to 15 seconds should work well for DSO lucky imaging. Long exposures, like 10 minutes, will tend to average over seeing variations and result in less sharp imagery.
Taking short exposures is also valuable here because it increases the chances of catching periods of low turbulence. CMOS cameras with USB 3 interfaces work well because of their low readout noise and fast download speeds. Low readout noise means that a lot of short exposures can be used without taking a large SNR penalty.
Figure 6 The minimum exposure times for CMOS and CCD cameras as a function of sky brightness. The sky backgrounds are in magnitude / square arc-second for the luminance band 400-700 nm.
Figure 6 shows what a game changer low readout noise CMOS cameras are for taking short exposures. The graph was calculated by setting the sky background noise equal to the camera readout noise. This graph is for an 8 inch telescope with a plate scale of 0.74 arc-seconds per pixel and a camera QE of 60%. In general, we want the readout noise to be small compared with the photon shot noise coming from the sky. Therefore one would like to use longer exposures than the ones shown in figure 6 in order to maximize the SNR. At very dark sites, long exposures should be used in order to take advantage of the low sky background photon flux. Except in light polluted locations, exposures with CCD cameras should be greater than 10 seconds. At dark sites, an exposure of 2 minutes or more would be best for many CCD cameras. For CMOS cameras like the ASI1600MM Pro, exposures as short as ~5 seconds can be taken even at very dark sites without a significant SNR loss!
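A sketch of how curves like those in figure 6 can be generated: solve for the exposure at which the sky delivers read_noise² electrons per pixel, i.e. where the sky shot noise equals the read noise. The photon zero point and the central obstruction fraction below are assumed round numbers, not values taken from the article, and the read noise figures in the example calls are merely representative.

```python
import numpy as np

MAG0_PHOTONS = 1.0e6   # assumed: photons / s / cm^2 / arcsec^2 at magnitude 0 over 400-700 nm

def min_exposure_s(sky_mag_per_arcsec2, read_noise_e, aperture_m=0.2,
                   obstruction=0.3, plate_scale_arcsec=0.74, qe=0.6):
    """Exposure at which the sky shot noise equals the camera read noise."""
    area_cm2 = np.pi * (aperture_m * 100.0 / 2.0)**2 * (1.0 - obstruction**2)
    sky_rate = (MAG0_PHOTONS * 10.0**(-0.4 * sky_mag_per_arcsec2)
                * area_cm2 * plate_scale_arcsec**2 * qe)        # detected e-/pixel/s
    return read_noise_e**2 / sky_rate

print(min_exposure_s(21.0, 1.7))   # ~8 s: representative low-noise CMOS at a dark site
print(min_exposure_s(21.0, 8.0))   # ~170 s: representative CCD read noise, same site
```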
In order to get the low readout noise values, CMOS cameras should be run at high gain. This creates a penalty because the dynamic range at maximum gain is rather limited. For example, with the ASI1600MM Pro at max gain (a setting of “300”) the full well-depth is only ~ 600 photo-electrons. This is where having a 32 bit virtual image buffer comes into play. The camera well-depth may be small, but the virtual image buffer has plenty of room for thousands of exposures to be integrated without any concern about saturation. So it is possible to have both low readout noise and a wide dynamic range!
Equipment and Methodology
The imagery in the examples below was captured using an 8 inch Newtonian telescope on a Losmandy G11 mount. The mirror has a focal ratio of 4.5 and was fabricated by the Zambuto Optical Company. A Televue Paracorr 2 coma corrector was used to obtain a ~1/2 degree diffraction limited field of view. The coma corrector resulted in an overall system focal ratio of 5. The Telescope Drive Master was used to reduce mount periodic error down to ~0.4 arc-seconds RMS. The cameras used were two CMOS cameras from ZWO, an ASI1600MM Pro and an ASI183MM Pro. The filters used were Astrodon Series E LRGB filters. Focusing was accomplished using a Moonlite focuser driven by an Arduino electronics package.
The imagery was captured using software written by the author. The software made use of Image Autoguiding and had a real-time 32 bit floating point virtual image buffer. The floating point buffer gives the capability of a dynamic range much larger than that of the camera alone. One hundred twenty 5 second exposures were captured in order to create one “sub” that was saved in a FITS file. Lucky Imaging was implemented by setting a FWHM threshold. Images with a median FWHM value less than the threshold were aligned and summed into the virtual image buffer. Images with a FWHM higher than the threshold were discarded. Dithering was also used by varying the position of the reference image for each 10 minute sub (but not for the 5 second short exposures).
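A conceptual sketch of this selection and integration loop is shown below. The measure_median_fwhm() and measure_shift() arguments are placeholders standing in for the author's FWHM measurement and Image Autoguiding shift measurement; they are not real library calls, and whole-pixel alignment is used here only for brevity.

```python
import numpy as np

def integrate_lucky(frames, fwhm_threshold, measure_median_fwhm, measure_shift):
    """Sum frames whose median star FWHM is below the threshold into a float32 buffer."""
    buffer = np.zeros(frames[0].shape, dtype=np.float32)   # 32 bit virtual image buffer
    reference = frames[0].astype(np.float32)
    accepted = 0
    for frame in frames:
        frame = frame.astype(np.float32)
        if measure_median_fwhm(frame) > fwhm_threshold:
            continue                                        # discard frames with poor seeing
        dy, dx = measure_shift(reference, frame)            # shift relative to the reference
        aligned = np.roll(frame, (int(round(dy)), int(round(dx))), axis=(0, 1))
        buffer += aligned
        accepted += 1
    return buffer, accepted
```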
Image Autoguiding
Image Autoguiding replaces the centroid calculation on a single guide star with a position shift calculation that uses the entire image or large portions of it. Image Autoguiding is an algorithm developed by the author that makes use of Fourier analysis to measure shifts between imagery to sub-pixel accuracy. A reference image is obtained at the start of the guiding process and then aim point shifts are determined from the imagery obtained afterward. In addition, the image shift obtained was used for real-time alignment and integration. No guide stars were used!
For amateur-sized telescopes, the primary degrading effect of atmospheric seeing is position jitter. The optical point spread function moves around with a range of frequencies that depends on how rapidly turbulent cells move across the telescope's entrance aperture. This is a large source of measurement error for centroid tracking with short exposures. Since Image Autoguiding uses all or most of the stars across the FOV, the effect of atmospheric seeing as an autoguiding error source is significantly reduced. Most of the image jitter from star to star is generated by high altitude turbulence. This jitter is decorrelated for stars more than ~10 arc-seconds apart. Thus Image Autoguiding will greatly reduce the effect of the high altitude jitter on the image shift measurement. The surface layer turbulence mostly jitters the entire field of view around. This overall image jitter is measured and corrected by real-time alignment and integration.
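The author's Image Autoguiding algorithm itself is not reproduced here, so the sketch below uses a generic Fourier technique, phase correlation with a parabolic sub-pixel refinement, that captures the same idea: measure the shift from the whole frame rather than from a single guide star. It could serve as the measure_shift() placeholder in the earlier sketch.

```python
import numpy as np

def measure_shift(reference, frame):
    """Estimate the translation between two images by phase correlation, in pixels."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(frame)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12          # keep only the phase information
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = []
    for axis, p in enumerate(peak):
        n = corr.shape[axis]
        # Take the correlation profile through the peak along this axis.
        idx = [slice(None) if a == axis else peak[a] for a in range(corr.ndim)]
        line = corr[tuple(idx)]
        y0, y1, y2 = line[(p - 1) % n], line[p], line[(p + 1) % n]
        denom = y0 - 2 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0   # parabolic sub-pixel refinement
        s = p + frac
        shift.append(s - n if s > n / 2 else s)                 # wrap to a signed shift
    return tuple(shift)
```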
Examples
M 13
M13, the great globular star cluster in Hercules. This image was obtained using an 8 inch Newtonian telescope with 5 second exposures.
NGC 7331
A spiral galaxy in Pegasus. This image was obtained using an 8 inch Newtonian telescope with 5 second exposures.
M51
The Whirlpool spiral galaxy in Ursa Major. This image was obtained using an 8 inch Newtonian telescope with 5 second exposures.
References
1. J. D. Jackson, Classical Electrodynamics (Wiley, New York, 1975).
2. M. Born and E. Wolf, Principles of Optics (Pergamon Press, 1986).
3. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, San Francisco, 1968).
4. J. W. Goodman, Statistical Optics (Wiley, New York, 1985).
5. P. Y. Bely, The Design and Construction of Large Optical Telescopes (Springer, New York, 2010).
6. D. J. Schroeder, Astronomical Optics (Academic Press, 2000).
7. R. K. Tyson, Introduction to Adaptive Optics (SPIE Press).
8. D. L. Fried, "Optical Resolution Through a Randomly Inhomogeneous Medium for Very Long and Very Short Exposures," J. Opt. Soc. Am. 56, 1372 (1966).
9. R. E. Majewski, "Image Autoguiding: Autoguiding Without Guide Stars," Astronomy Technology Today, Vol. 10, Issue 9, 2016, p. 71.
10. N. Short, W. Fitelson, and C. H. Townes, "Atmospheric Turbulence Measurements at Mount Wilson Observatory," The Astrophysical Journal, 599, 1469-1477, 2003 December 20.
9 Comments
Thank you. Excellent article for those of us that want to dig into and understand the science a bit.
The results of DSO lucky imaging can be very stunning. In Oldenburg, Germany, Carsten Dosche uses a very sensitive Andor EMCCD camera together with his C9.25 and gets remarkable results at 1 second exposures under far from optimal seeing conditions. Occasionally, he also could use the GSO 16 RC of René Rogge at Leerhafe. Check out Carsten's astrobin site.
Juergen
Incredible article.
Thank you for that very interesting article!
I did some experiments early last year with DSO lucky imaging. I used my 8" f/15.5 MCT (sometimes with a reducer at f/10) and a CMOS ASI290MM. Lights were taken with 5 sec exposures. Sadly, the seeing conditions at my observing site were extremely poor. For those conditions, the results are OK. Quality increased notably with the object's altitude. The most limiting factor for short exposure times is the object's photon flux itself. So, with small telescopes like our amateur telescopes, the detected photon flux from DSO objects is very limited, even for the brightest. And even with the most sensitive cameras (I also made some experiments with Andor iXon EMCCD cameras years ago), which are able to count single photons, you need a serious number of photons in each sub, statistically sufficient to give the stacking software geometric reference information to align the frames to each other. Also, when selecting only the best-FWHM frames, the total exposure time needed for a better resolution image (compared to an image with the same total exposure time but long exposures in the subs) increases as the seeing gets worse.
So, in conclusion, I don't think DSO lucky imaging improves image quality for amateur telescopes so significantly that the additional effort is worth it. Algorithms like DDP, etc. are able to improve long exposure images by the same amount...
For professionals with big scopes it would be an opportunity for some bright objects. But in most cases, big telescopes today are equipped with much more effective adaptive optics, so DSO lucky imaging is not a solution even for the professionals.
Attached some of my experiment results:
Just my five cents,
cheers Markus
Very in depth article which entertains food for thought.
Very interesting approach. I like the concepts of the virtual buffer and image autoguiding--no reason to tie up a lot of file space and acquisition time, which will be aggravated by the stacking process anyway. Is this approach now used in any of the publicly available programs like SharpCap or FireCapture? Or is your software available, or soon to be available, to the community?
In response to forum member 'etalon' I would agree that the best (and proven) way to go for improving astronomical imaging is through adaptive optics (which happens to have been pioneered at one of my previous places of employment), but the average amateur is unable to afford the 'rubber mirrors,' wavefront sensors and the artificial guide stars necessary for that technology. Instead, taking lots of short exposures and sorting through them is more in the realm of what we amateurs can do.
The software I use is not in any programs available to the public. I have written the software on a Macintosh and it is experimental. I am currently working on a new version that makes use of an image intensifier tube to get down to the photon counting regime. The current set of CMOS cameras have much lower readout noise than past CCD cameras, but their readout noise is not low enough for photon counting. What I would like to do is use short enough exposures to freeze the turbulence. Whether this will work with low radiance DSOs is an open question. I have made a number of algorithmic improvements, but whether it's enough I don't really know. Yes, larger telescopes would boost the SNR and would make short exposures more feasible, however with larger telescopes getting lucky with nearly diffraction limited images becomes more difficult. So there is a trade-off here which will depend on how good the seeing is at the observing site.
Bob
Nice paper, very interesting work. It would be nice to see these techniques implemented in current popular software.
Do you have imaging results that compare your lucky imaging technique with long exposures under similar conditions? It would be nice to see a side by side comparison of results.
I use a similar technique for a different reason: I leveraged my 12” observational Alt/Az GoTo Dob, so I also use very short exposures of 6-8 seconds on dim objects, and typically 2-4 seconds on the brighter objects. For me it’s mainly due to tracking jitters, drift, and sometimes field rotation, but I also see the atmospheric seeing benefits at the shorter exposures, especially on globular clusters, bright galaxies, and planetary nebulae. I would love to have longer “virtual subs” generated to keep the data at manageable levels; that would be a huge enabler for me.
I agree with the comments that it’s really only for a subset of targets and situations, but if implemented in a way that makes it easy to do, then why not get higher resolution on some of our targets? I would love that.
As I stare at your fine image of M13 it slowly seems to begin to rotate on me. Did anyone else notice a similar effect or am I simply down to my last marble?