Lately I have been playing around a bit with speckle interferometry techniques. I'm certainly no expert in speckle interferometry - it is a tricky subject - but I'll try to explain it to the best of my knowledge.
----- an edited copy/paste from a post I made elsewhere ----
Speckle interferometry is a technique that automatically creates near diffraction-limited results from a sequence of images made through a telescope. It can be applied to images that were taken in so-called 'speckle bursts', where each image provides an independent representation of the seeing state at that time. This means the images must have been made with exposure times short enough to freeze the blurring, or rather speckling, effects caused by the seeing ( http://en.wikipedia....Speckle_imaging. We are talking about speckle interferometry, not shift-and-add. Shift-and-add techniques are more like lucky imaging, though there are many different ways to determine the 'shift').
Subsequent images should have a small 'time gap' between them to ensure the seeing makes the images look different from each other; usually this is on the order of 10 ms (more is no problem, less might give non-optimal results). Of course you must make sure that the object itself isn't changing too much, otherwise there is no static image to reconstruct in the first place. The images also need a fairly high signal-to-noise ratio to begin with, and an extra limitation is that you have to use a relatively narrow band-pass filter (±10% of the wavelength or less). Typical exposure times need to be around 10 ms or shorter, and the gain levels should be very low.
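To make the burst bookkeeping concrete, here is a minimal Python sketch of splitting a recording into speckle bursts, assuming the frames are already loaded into a NumPy array. The function name is my own invention for illustration, not something from any existing tool:

```python
import numpy as np

def split_into_bursts(frames, frames_per_burst):
    """Split a stack of short-exposure frames into independent speckle bursts.

    frames: array of shape (n_frames, height, width).
    Trailing frames that don't fill a complete burst are dropped.
    """
    n_bursts = len(frames) // frames_per_burst
    usable = frames[: n_bursts * frames_per_burst]
    return usable.reshape(n_bursts, frames_per_burst, *frames.shape[1:])

# e.g. a 16 s recording at 100 fps -> 1600 frames -> 32 bursts of 50 frames,
# each burst reconstructed independently into one output image.
frames = np.zeros((1600, 64, 64))
bursts = split_into_bursts(frames, 50)
print(bursts.shape)  # -> (32, 50, 64, 64)
```

Each burst along the first axis then goes through the reconstruction on its own, which is exactly the 0.5-second-per-frame setup of the animation further down.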
As you can imagine, not many imaging targets are suitable for speckle interferometry, but one obvious candidate comes to mind: the Sun.
-- another copy/paste---
With speckle imaging you estimate the actual distortions in the images (that is why it NEEDS high quality images), and you use those estimates to reverse the effect and end up with a near diffraction-limited result. Automatically. No sharpening is involved. About 50-200 frames is basically enough; adding more doesn't gain you much. The technique is not perfect: poor seeing will not give diffraction-limited results.
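For some intuition on why averaging many frames doesn't have to destroy the fine detail, here is a toy Python sketch of the classical speckle-interferometry observation (going back to Labeyrie): averaging the Fourier *power spectra* of the individual frames preserves high-spatial-frequency information that straight image averaging washes out. This only illustrates the principle; it is not the actual solar reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_power_spectrum(frames):
    """Average the Fourier power spectra of individual short-exposure frames.

    Unlike averaging the frames themselves, averaging |FFT|^2 per frame
    keeps the high-spatial-frequency content, because the power spectrum
    is insensitive to the frame-to-frame image motion.
    """
    return np.mean([np.abs(np.fft.fft2(f)) ** 2 for f in frames], axis=0)

# Toy demo: the same point source, randomly shifted in each frame to
# mimic seeing-induced image motion.
n, size = 50, 32
frames = np.zeros((n, size, size))
for i in range(n):
    dy, dx = rng.integers(0, size, 2)
    frames[i, dy, dx] = 1.0

# The power spectrum of the straight average is washed out away from DC,
# while the mean per-frame power spectrum stays flat, exactly like the
# undistorted point source.
ps_of_mean = np.abs(np.fft.fft2(frames.mean(axis=0))) ** 2
mean_of_ps = mean_power_spectrum(frames)
print(mean_of_ps.mean(), ps_of_mean.mean())
```

A full reconstruction additionally has to recover the Fourier phases (e.g. with Knox-Thompson or bispectrum methods), which is the genuinely hard part that this toy example skips.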
In contrast, lucky imaging just works with the best parts of the recordings, and combines those in a smart way. You will end up with a very soft image that NEEDS further processing to reduce the remaining seeing and aligning/stacking blur. If you do this correctly, then under favourable conditions you can also get near diffraction-limited results. The more frames you have, the better your results. The technique is not perfect: poor seeing will not give diffraction-limited results.
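The 'best parts' selection can be sketched like this in Python, using a simple gradient-energy sharpness metric that I picked purely for illustration. Real lucky-imaging software such as AutoStakkert!2 estimates quality per alignment point and also handles registration and stacking, all of which this toy version skips:

```python
import numpy as np

def sharpness(frame):
    """Simple quality metric: mean squared image gradient.
    Sharp (speckled) frames score higher than blurred ones."""
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

def select_best_frames(frames, keep_fraction=0.1):
    """Keep only the sharpest fraction of frames -- the selection step
    of lucky imaging (alignment and stacking not shown)."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]
    return frames[best]

# Demo: five featureless (fully blurred) frames plus five frames with
# structure; selecting the top half should pick the structured ones.
rng = np.random.default_rng(1)
frames = np.concatenate([np.zeros((5, 16, 16)), rng.random((5, 16, 16))])
selected = select_best_frames(frames, keep_fraction=0.5)
stacked = selected.mean(axis=0)  # the soft result that still needs sharpening
```

The final `stacked` average is the soft image mentioned above: the selection removes the worst seeing, but deconvolution/sharpening is still needed afterwards.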
What I'm figuring out now is what the limits of both techniques are (well, I have a fairly good understanding of AutoStakkert!2, but I'm new to speckle imaging). The end goal is to come up with a more or less automated processing system (Linux software). It would then make a lot of sense to get reliable 'finished' results without the hassle of lucky imaging, which requires a lot of effort on image sharpening. But this goal is still far away.
For now, just a small demonstration of what the technique can do when I feed it a recording of 16 seconds that I split up into speckle bursts of 50 frames each. So each frame in the animation is an independent reconstruction of only 0.5 seconds worth of data.
There is no sharpening applied to the images, this is 100% what comes out of the technique. I did not even change the brightness or gamma, or perform denoising. To the best of my knowledge of the speckle interferometry code, these images are what lie underneath the seeing-distorted images you can see attached to this post.
Notice there is still a lot of wobbling going on. This is because slower, bigger seeing waves are distorting the images, and a 0.5 second recording is just too short to deal with those. I have a couple of ideas on how to fix this, but that will take some time.
I will hopefully be allowed to release binaries of the speckle interferometry code (KISIP), so others can play around with it. I also plan to provide a small manual (or bunch of notes) on how to use it, but don't expect a full working end result anytime soon (and definitely don't expect it to be easy to use). This will take time, and I'm always short on that.
NOTE: the image below is a single frame - during what I call good seeing (95% of the time the seeing is worse where I image) - from the recording. This is NOT the reconstructed image. Those are seen in the animation above.