Speckle interferometry image reconstruction

23 replies to this topic

#1 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 14 February 2013 - 08:34 PM

edit: changed 'speckle reconstruction' to 'speckle interferometry' to perhaps clear things up a bit (or at least make it easier to google the techniques).

Hi All,

Lately I have been playing around a bit with speckle interferometry techniques. I'm certainly no expert in speckle interferometry - it is a tricky subject - but I'll try to explain it to the best of my knowledge.


----- an edited copy/paste from a post I made elsewhere ----

Speckle interferometry is a technique that automatically creates near diffraction-limited results from a sequence of images made through a telescope. It is applied to images that were taken in so-called 'speckle bursts', where each image provides an independent representation of the seeing state at that time. This means the images must have been made with exposure times short enough to freeze the blurring, or rather speckling, effects caused by the seeing ( http://en.wikipedia....Speckle_imaging. We are talking about speckle interferometry here, not shift-and-add. Shift-and-add techniques are more like lucky imaging, although there are many different ways to determine the 'shift').

It is best if subsequent images have a small 'time gap' between them, to ensure the seeing makes the images look different from each other; usually this gap is on the order of 10 ms (more is no problem, less might give non-optimal results). Of course you must make sure that the object itself isn't changing much, otherwise there is no static image to reconstruct in the first place. The images also need a pretty high signal-to-noise ratio to begin with, and an extra limitation is that you have to use a relatively narrow band-pass filter (about 10% of the wavelength or less). Typical exposure times are around 10 ms or shorter, and the gain levels should be very low.

As you can imagine, not many imaging targets are suitable for speckle interferometry, but one obvious one comes to mind. The Sun.

-- another copy/paste---

With speckle imaging you estimate the actual distortions in the images (that is why it NEEDS high quality images), and you use those estimates to reverse the effect and end up with a near diffraction-limited result. Automatically. No sharpening is involved. About 50-200 frames is basically enough; adding more doesn't gain much. The technique is not perfect: poor seeing will not give diffraction limited results.

In contrast, lucky imaging just works with the best parts of the recordings, and combines those in a smart way. You end up with a very soft image that NEEDS further processing to reduce the remaining blurring from seeing and aligning/stacking. If you do this correctly, then under favourable conditions you can also get near diffraction-limited results. The more frames you have, the better your results. The technique is not perfect: poor seeing will not give diffraction limited results.
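
To make the contrast concrete, here is a toy numpy sketch of the two ideas. This is emphatically NOT the real speckle interferometry algorithm (which works patch-wise and also has to reconstruct the Fourier phases); assume 'frames' is a burst of 2D short-exposure images:

    import numpy as np

    def mean_power_spectrum(frames):
        # Speckle interferometry (Labeyrie): average |FFT|^2 per frame.
        # This preserves high spatial frequencies that plain image
        # averaging would wash out, because the phase is discarded.
        return np.mean([np.abs(np.fft.fft2(f)) ** 2 for f in frames], axis=0)

    def shift_and_add(frames):
        # Lucky-imaging style: recentre each frame on its brightest pixel,
        # then average (one of many ways to determine the 'shift').
        h, w = frames[0].shape
        acc = np.zeros((h, w))
        for f in frames:
            y, x = np.unravel_index(np.argmax(f), f.shape)
            acc += np.roll(np.roll(f, h // 2 - y, axis=0), w // 2 - x, axis=1)
        return acc / len(frames)

    # Labeyrie's estimate of the object's Fourier amplitudes:
    #   |O|^2 ~= <|I|^2> / <|S|^2>,
    # with <|S|^2> measured on a point source. The phases are NOT recovered
    # this way; that needs e.g. the bispectrum (speckle masking).

For the Sun there is no point source to measure <|S|^2> on, which is why, as I understand it, codes like KISIP model the speckle transfer function instead.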

---

What I'm figuring out now is what the limits of both techniques are (well, I have a fairly good understanding of AutoStakkert!2, but I'm new to speckle imaging). The end goal is to come up with a more or less automated processing system (Linux software). It would then make a lot of sense to get reliable 'finished' results without the hassle of lucky imaging, which requires a lot of effort with image sharpening. But this goal is still far away.

For now, just a small demonstration of what the technique can do when I feed it a recording of 16 seconds that I split up into speckle bursts of 50 frames each. So each frame in the animation is an independent reconstruction of only 0.5 seconds' worth of data.
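
In code terms the bookkeeping is trivial; a sketch (the reconstruct_burst call is a hypothetical stand-in for the speckle software):

    def split_into_bursts(frames, burst_size=50):
        # 16 s at ~100 fps -> roughly 32 bursts of 50 frames each
        return [frames[i:i + burst_size]
                for i in range(0, len(frames) - burst_size + 1, burst_size)]

    # each burst becomes one independent frame of the animation:
    # animation = [reconstruct_burst(b) for b in split_into_bursts(frames)]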

http://www.astrokraa...peckle_demo.gif

There is no sharpening applied to the images; this is 100% what comes out of the technique. I did not even change the brightness or gamma, or perform denoising. To the best of my knowledge of the speckle interferometry code, these images are what lies underneath the seeing-distorted images you can see attached to this post.

Notice there is still a lot of wobbling going on. This is because there are slower, bigger seeing waves distorting the images, and a 0.5 second recording is just too short for this. I have a couple of ideas on how to fix this, but that will take some time.

I will hopefully be allowed to release binaries of the speckle interferometry code (KISIP), so others can play around with it. And I also plan to provide a small manual (or bunch of notes) on how to use it, but don't expect there to be a full working end-result anytime soon (and definitely don't expect it to be easy to use ;) ). This will take time, and I'm always short on that.

NOTE: the image below is a single frame - during what I call good seeing (95% of the time the seeing is worse where I image) - from the recording. This is NOT the reconstructed image. Those are seen in the animation above.

Attached Files



#2 Kokatha man

    Fly Me to the Moon

  • Posts: 6775
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 14 February 2013 - 09:32 PM

.....certainly plenty of food for thought Emil, totally different in all aspects but reminding me of the conversations I've had with Jason re tone-mapping of images to enhance detail/resolution of planetary images: have only "skimmed" your post and the links, and see that the low gain needs to be coupled to exposures around 10 ms for the visible spectrum but up to 100 ms for infrared wavelengths.

From what I'm reading "Lucky imaging" is actually a form of "speckle imaging" and somewhat different to what we normally apply through AS!2 etc processing....? :question:

Thus I am wondering whether we might simply be able to "extract" single images from a sequence at the start of AS!2's quality grading to create this "sequence of images" to apply the speckle imaging technique to our (normally processed) stacks to enhance detail somewhat akin to luminance layering....? :question:

The "small time gap" you speak of could possibly be provided by skipping frames in this "best of" sequence at the start of the AutoStakkert graded stack..?

And of course, if all this isn't simply hare-brained on my part :question: :lol: one would need to be able to extract aforementioned frames and apply whatever processing is required - but this is where my impression of "lucky imaging" seems to suggest this is merely a form of what is done already in standard processing.....as opposed to other forms of speckle image processing....? :confused:

Again - if I haven't lost the plot - isn't this what creating a specific number of the best frames in AS!2 would do anyway - but not "staggering" the frames in any way and forgetting about any mild sharpening AS!2 does with these small stacks that folks can use to determine capture qualities???

Would appreciate your response to this.....might using the red channel's (or IR!) staggered "best sequence" as a luminance overlay have some role herein.....or running a separate avi etc of the red channel at a lower gain/framerate to be used in conjunction as a luminance....?

Please forgive any prosaic inverting of your information and the thrust of it - but I'm just a "nuts & bolts" man at heart! :grin:

#3 TorstenEdelmann

    Messenger

  • Posts: 452
  • Joined: 29 Sep 2004
  • Loc: Landsberg, Germany

Posted 15 February 2013 - 05:13 AM

Emil,
without doubt absolutely outstanding stuff!

How many patches did you use for obtaining the speckle transfer function?
How long does it take to reconstruct a frame out of one of those 50-frame bursts?

Torsten

#4 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 15 February 2013 - 05:18 AM

Sorry Darryl, I have only skimmed your reply ;) (I couldn't help myself..)

Yes, lucky imaging is a form of speckle imaging. It is of the shift-and-add form: wait for good exposures, and align and stack those. Amateur astronomers always perform image sharpening on high resolution images, because that clearly helps a lot, but I don't see that mentioned on Wikipedia.

I should have called speckle reconstruction speckle interferometry, because that is really what it is called. Perhaps this clears things up a bit.

>Thus I am wondering whether we might simply be able to "extract" single images from a sequence at the start of AS!2's quality grading to create this "sequence of images" to apply the speckle imaging technique to our (normally processed) stacks to enhance detail somewhat akin to luminance layering....?
Eh.. you lost me here. I'll try to guess what you mean. AFAIK the order of the frames is important for speckle interferometry, so just selecting the best frames is not a good idea. You could select the best 'period' from your avi though, to get optimal results. That is perfectly fine.

>The "small time gap" you speak of could possibly be provided by skipping frames in this "best of" sequence at the start of the AutoStakkert graded stack..?
So first: forget the "best of" sequence; we extract a good section from the entire video (or use it all). But yes, skipping could just be performed in AS!2. Perhaps no skipping is needed: if your camera operates at 50 frames per second, and you use shutter times of 1/100 s, there are already 1/100 s gaps in between the exposures. I myself have worked with 1/200 s exposures taken 1/100 s apart, and that seems to work.
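
As a quick sanity check, the arithmetic:

    fps = 50.0            # camera frame rate
    exposure = 1.0 / 100  # shutter time in seconds
    gap = 1.0 / fps - exposure
    print(gap * 1000)     # -> 10.0 ms between exposures, so no skipping needed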

>And of course, if all this isn't simply hair-brained on my part one would need to be able to extract aforementioned frames and apply whatever processing is required

Apart from calibration (meaning flats + darks - or, if the camera is clean enough, you can even get away without either of these; in the example I didn't use calibration at all), NO processing is required. It is just a sequence of images that goes into the program, and a single diffraction-limited result comes out. That is as sharp as it can be. So it would be the finished luminance layer. Of course you could play around with the recording, but in principle there is no further retrieving of hidden details by wavelets/deconvolution/whatever, whilst this is an absolutely CRITICAL step for AutoStakkert!2 stacks.

> ... staggering ...
What is that? But no, AutoStakkert!2 is a sophisticated tool that performs lucky imaging, not speckle interferometry.

>Please forgive..
Done!

So I'm basically testing the usability and limits of both techniques.

#5 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 15 February 2013 - 05:39 AM

Hi Torsten,

I think it was about 50, but I'm not 100% sure to be honest; it is not a parameter I can directly set and I haven't really paid that much attention. I know I tried making the patches larger and smaller, but this size seemed to give the best results.

Processing one of these 50-frame bursts took about 45 seconds (which is not that bad actually), but it depends a lot on the parameters you use.

#6 DesertRat

    Fly Me to the Moon

  • Posts: 5173
  • Joined: 18 Jun 2006
  • Loc: Valley of the Sun

Posted 15 February 2013 - 12:51 PM

Will look forward to seeing you pursue the technology as it applies to solar system imaging.

I'm interested in this as well as other approaches to lucky imaging.

There are many registration techniques applied in fields like medical imaging as well as the movie industry. An interesting effort (which you in all probability know of, as it's from your home country) can be found here: http://elastix.isi.uu.nl/about.php

Another interesting technique for stellar images using blind deconvolution is described here:
http://pixel.kyb.tuebingen.mpg.de/obd/

Thanks for posting your investigations Emil!

Glenn

#7 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 15 February 2013 - 02:45 PM

Hi Glenn,

There are many things from my home country I'm not aware of, and this was one of them ;)

I have never searched anywhere but in the field of astronomy for these kinds of techniques, and much of what is implemented in AutoStakkert!2 for example is from my own experience. That did cause me to re-invent the wheel on some occasions, but at least I now know how to make some wheels.

But these are definitely some interesting techniques. I was brainstorming about what elastix does (I think). I basically have all the functionality in AutoStakkert!2 to produce a model of the motions in the frames. I could then stabilize the entire field of view in EACH frame using an interpolation technique. Perhaps on these images it is possible to apply speckle reconstruction a bit better.
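
As a sketch of what I mean (hypothetical: assume the motion model has already been interpolated into per-pixel displacement fields dx and dy for one frame):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def stabilize_frame(frame, dx, dy):
        # Resample the frame at the displaced coordinates to undo the
        # measured distortion field (order=3 means bicubic interpolation).
        h, w = frame.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        return map_coordinates(frame, [yy + dy, xx + dx],
                               order=3, mode='nearest')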

I think that in some way speckle interferometry can be seen as a form of blind deconvolution. I will definitely play around with the Matlab code as well.

Have you used any of these tools, by the way? And what were your results?

#8 tjensen

    Mercury-Atlas

  • Posts: 2609
  • Joined: 16 Feb 2005
  • Loc: Chapel Hill, NC

Posted 15 February 2013 - 03:13 PM

Amazing result Emil. So is your goal to implement this into AS? Or derive a new software processing package?

Cheers

#9 DesertRat

    Fly Me to the Moon

  • Posts: 5173
  • Joined: 18 Jun 2006
  • Loc: Valley of the Sun

Posted 15 February 2013 - 03:45 PM

I have not used elastix or the toolkit ITK. Have used blind deconvolution. Have experimented with speckle imaging of single and multiple stars; developed code several years ago, dropped it as the new lucky imaging stackers were equaling or surpassing my results. The math was fairly complex and the papers not as helpful as I would have liked.

My present 'developer' interest is in microscopic imaging, mostly coherent, and wavefront propagation and reconstruction. I use Octave (a MATLAB clone) quite a bit, as well as the programming language called 'R'.

It's great to see someone like yourself pursuing your own and others' ideas. One can only hope that we can enjoy the benefits, even if it takes a year or two to develop!

Glenn

#10 LauraMS

    Lift Off

  • Posts: 19
  • Joined: 29 Mar 2011
  • Loc: Germany

Posted 15 February 2013 - 05:42 PM

This is very interesting - you are using code from the Oskar von der Lühe group in Freiburg? I used to read some of their papers, because they are probably the world-leading group in solar adaptive optics, and speckle imaging is an additional step in that process.

Emil, may I ask at what wavelength and bandwidth you have acquired your solar data? And I guess you have used an ND 3.8 solar filter to obtain sufficiently short exposure times?

#11 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 15 February 2013 - 06:41 PM

Tim, no. I don't think so. It would take me ages to port the code. And it isn't my code to begin with, so I'm probably not even allowed to use it like that. To be honest, I haven't looked past the main function while I was debugging why it wasn't working for me. I will hopefully release some binaries along with instructions on how to use it in the form of a wiki page, but that will take a little bit of time.

Glenn, my finding so far has also been that the lucky imaging tools are actually really good. But this is definitely an interesting tool that approaches things entirely differently. It is a lot of fun to get good results like these without too much trouble. We are used to post-processing: if we see an image that is slightly blurry (a stack), without thinking we push it to the limits (and sometimes beyond) to extract as many details as possible. Aesthetics is also important there; some prefer softer, more natural looking images, others prefer enormous contrasts to make it easier to see details. In the end there often isn't really THAT much difference between the details in the image.

This technique just gives you near diffraction-limited results. Given a recording in this wavelength of light and at that aperture diameter, the finest details are reconstructed. It doesn't work perfectly - the best results are obtained if you use it in combination with AO systems - but even without AO it appears to work pretty well, at least when the seeing is reasonable.

Laura, yes, I'm using the KISIP software (I contacted Dr. Friedrich Wöger). The wavelength was about 585 nm. The data was recorded at about 100/120 fps using a Basler ace acA640-100gm camera 1.5 years ago. Shutter times were somewhere around 1/200-1/250 s. I indeed used the ND 3.8 solar filter (a piece of foil). These recordings were actually taken in good seeing conditions; I still have to see what it can do in poor conditions, but the results will probably be (much) worse. Otherwise there would be no need for AO or good telescope locations ;)

#12 LauraMS

    Lift Off

  • Posts: 19
  • Joined: 29 Mar 2011
  • Loc: Germany

Posted 15 February 2013 - 07:18 PM

Thanks for the data, I think I need to try as well... (so many interesting things to try with my new camera).

I've never completely understood the role of speckle interferometry in AO. I guess it becomes clearer with larger apertures, where AO can correct most of the wavefront error introduced by the blurry atmosphere, and SI restores the remaining errors in the AO-corrected image.

Emil, may I ask if you have an idea of how small the smallest structures in your reconstructed image are, and how this number compares with the theoretical resolution of your scope at the wavelength used? It would be interesting to compare these values.

Ignoring the numbers, these are really impressive images, congratulations! And what I also like about the methodology is the fact that there appears to be less subjective influence of the observer's taste than is obviously present in post-processing of data acquired by lucky imaging. Well, at least from a scientific point of view this may be important. It is of course of less importance if one does the imaging just for fun... (as probably everyone here is doing).

#13 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 15 February 2013 - 07:56 PM

You should be able to calculate it using:

image scale of about 0.32" per pixel
imaged in 585nm light
with 254mm of aperture
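
For example, with the Rayleigh criterion (a different resolution criterion shifts the numbers a bit):

    import math

    wavelength = 585e-9                        # metres
    aperture = 0.254                           # metres
    scale = 0.32                               # arcsec per pixel

    theta = 1.22 * wavelength / aperture       # Rayleigh limit in radians
    theta_arcsec = math.degrees(theta) * 3600  # ~0.58 arcsec
    print(theta_arcsec / scale)                # ~1.8 pixels per resolved element
    # Nyquist would want at least 2 pixels per element, hence 'a bit
    # undersampled' below.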

But it is difficult to measure, especially since this recording was a bit undersampled to begin with. If I had to guess, then occasionally areas in this animation are in fact diffraction limited. This is mainly a guess from my experience with these kinds of recordings. It's not easy, but planetary astrophotographers can get extremely close to diffraction-limited results. This is perhaps best seen if you compare an infrared light result to a green light result under good seeing conditions: you can then see that the smallest features in infrared light are in fact much fatter than those in green light, even though infrared light (>700 nm) is (much) less affected by the seeing. Actual measurements are a bit tricky I guess; I think you would need a lot of measurements to get a reliable result, and in this case you would probably be measuring a length of 1 or perhaps 2 pixels. From 1 to 2 is a huge difference.

>And what I also like about the methodology is the fact that there appears to be less subjective influence of the observer's taste..
Exactly. It is pretty cool that you can get reliable results of what the image should look like, without the user having to spend a lot of time tweaking and perfecting what he thinks an image stack should look like. But in this sense it would perhaps also be possible to apply a similar 'standardized' processing (blind deconvolution) to stacked images, as long as we know the essential parameters of the system. It just has never been done before (although some attempts have occasionally been made; I guess it is also not easy).
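
For the curious, the simplest textbook form of blind deconvolution alternates Richardson-Lucy updates between the image and the PSF (a toy single-frame sketch in the spirit of Fish et al. 1995; production multiframe OBD codes are far more careful). 'data' is assumed to be a 2D float image:

    import numpy as np
    from scipy.signal import fftconvolve

    def rl_update(est, kernel, data, n_iter=5):
        # Richardson-Lucy refinement of `est`, treating `kernel` as known.
        flipped = kernel[::-1, ::-1]
        for _ in range(n_iter):
            blur = fftconvolve(est, kernel, mode='same')
            est = est * fftconvolve(data / np.maximum(blur, 1e-12),
                                    flipped, mode='same')
        return est

    def blind_rl(data, n_outer=10):
        # Image and PSF play symmetric roles; both are stored full-size
        # here to keep the sketch simple (real codes confine the PSF).
        img = np.full_like(data, data.mean())
        psf = np.zeros_like(data)
        cy, cx = data.shape[0] // 2, data.shape[1] // 2
        psf[cy - 7:cy + 8, cx - 7:cx + 8] = 1.0 / 225  # broad initial blob
        for _ in range(n_outer):
            psf = rl_update(psf, img, data)
            psf /= psf.sum()
            img = rl_update(img, psf, data)
        return img, psf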

#14 HPaleske

    Viking 1

  • Posts: 794
  • Joined: 09 Apr 2007

Posted 16 February 2013 - 01:42 AM

Sorry, I deleted the idea.

cs Harald
www.unigraph.de

#15 GreatGigInTheSky

    Ranger 4

  • Posts: 303
  • Joined: 06 Feb 2011
  • Loc: Santa Clara, California

Posted 16 February 2013 - 03:00 AM

Hi Emil,
You might be interested in this link. This is a technique called "super resolution" that can yield sub-pixel details from a number of undersampled images -- the caveat being that it will not yield higher resolution results for critically sampled or oversampled data.

There are a number of other interesting references to this technique out there - just search for them if you're interested - but the link above has downloadable Matlab code "...provided for non-commercial research purposes only." I think that describes us. :grin:

#16 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 16 February 2013 - 08:00 AM

Hi Jeff.

I recently wrote a small post on super-resolution that might be an interesting read: http://www.autostakk...com/wp/enhance/

#17 GreatGigInTheSky

    Ranger 4

  • Posts: 303
  • Joined: 06 Feb 2011
  • Loc: Santa Clara, California

Posted 16 February 2013 - 12:32 PM

Hi Emil,
Looks like you're completely on top of this one already. I was wondering to what extent AS!2 might already take advantage of this, and now I know. What's interesting to me is that even when I'm oversampled (using a 3x barlow for ~f/30, as has been my habit lately. With 5.6 micron pixels, I'm critically sampled at ~f/17.4.) AS!2 gives me better, more detailed results when drizzled 1.5x. I've never been able to get anything out of 3x drizzle, however.

#18 GreatGigInTheSky

    Ranger 4

  • Posts: 303
  • Joined: 06 Feb 2011
  • Loc: Santa Clara, California

Posted 16 February 2013 - 12:51 PM

I should have added that it's "in red light at 630nm" that I'm critically sampled. Actually, the point in the spectrum makes a big difference. At 400nm, critical sampling isn't achieved until ~f/27.5. In either case, I'm past that with the 3x barlow, but it's a lot closer at the blue end of the spectrum.
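
For reference, the rule of thumb behind those numbers - Nyquist asks for a pixel pitch of at most lambda*F/2, so F >= 2*pitch/lambda; the slightly different figures above just reflect the exact cutoff assumed:

    def critical_f_ratio(pixel_um, wavelength_um):
        # Nyquist sampling of the diffraction cutoff: F >= 2 * pitch / lambda
        return 2 * pixel_um / wavelength_um

    print(critical_f_ratio(5.6, 0.63))   # ~f/17.8 in red light
    print(critical_f_ratio(5.6, 0.40))   # f/28 at the blue end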

Thanks again for such a great piece of software.

#19 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 16 February 2013 - 01:43 PM

Jeff, the 1.5X drizzling does not really exist in AS!2.

Drizzling is always done at 300%. The 1.5x result is simply the 3x stack resampled back 50% using a bicubic resizing method. You could do the same in Photoshop, but there are many variants of bicubic and the closest one in Photoshop is probably bicubic smoother.
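
In other words, something along these lines (OpenCV's bicubic as a stand-in; the exact kernel AS!2 uses may differ, and the file names are hypothetical):

    import cv2

    stack_3x = cv2.imread("stack_drizzle3x.tif", cv2.IMREAD_UNCHANGED)
    stack_15x = cv2.resize(stack_3x, None, fx=0.5, fy=0.5,
                           interpolation=cv2.INTER_CUBIC)
    cv2.imwrite("stack_drizzle15x.tif", stack_15x)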

The reason I added the 1.5x setting is because 1) drizzling often gives grid-like artifacts in sharpened stacks (for various reasons), and these disappear if you use a smooth(!) bicubic resampling method, and 2) because 3x drizzling is usually too much anyway. There are other methods to get rid of the grid-like artifacts, and sometimes you might just prefer the larger image or want to process this image in another way (!), so if you want to work with the real drizzle output you can, by selecting 3x drizzling.

It is nearly impossible to accurately compare different image scales, as your processing CANNOT be the same between the scales for a fair comparison. And simply resizing is also not an option, because the resizing method always interpolates. Bicubic sharper, bicubic smoother and bicubic neutral all affect the image in different ways.

#20 RedLionNJ

    Viking 1

  • Posts: 893
  • Joined: 29 Dec 2009
  • Loc: Red Lion, NJ, USA

Posted 16 February 2013 - 08:23 PM

Oh man, Emil - that is just wonderful. And while I'm not convinced (although I may be lost in the 'resolution math' somewhere) it's really going to bring out even more detail, you could take that GIF, run it through VirtualDub to turn it into a mono AVI, then use AS!2 to employ the "lucky imaging" techniques on that AVI, ending up with a single 1.5x drizzled image that is extremely pleasing to the eye. You are GOOD...

Totally envious,

Grant

#21 GreatGigInTheSky

    Ranger 4

  • Posts: 303
  • Joined: 06 Feb 2011
  • Loc: Santa Clara, California

Posted 17 February 2013 - 01:24 PM

Hi Emil,
Interesting about the drizzling. It's not anything about AS!2 that causes me difficulties at 3x drizzle -- it's just that either R6 and/or my own ineptitude, in spite of multiple attempts at it, have made it impossible for me to come up with a workable sharpening routine for 3x images. For 1.5x, I've got it dialed in -- I always use the same saved wavelets, then just make minor tweaks to those settings for the specific data. And in line with what you've written above, these wavelet settings are completely different from what I would use at native image scale, so I wouldn't expect either of these schemes to work at 3x; I instead worked from scratch to get something that produced a reasonable result at 3x, but to no avail -- I've always been far more happy with my 1.5x results than anything else I've been able to do.

I'm curious, though, about your choice of going all the way to 3x for drizzling. Why did you not pick 2x?

#22 MvZ

    Surveyor 1

  • Posts: 1577
  • Joined: 03 Apr 2007
  • Loc: The Netherlands

Posted 17 February 2013 - 02:25 PM

No particular reason I guess, other than that it is nicely symmetrical. I prefer the 3x and 5x options; they are easy to implement.

With drizzling you also have to define how large the original pixel gets when it 'rains' onto the pixels of the finer grid. With a grid only twice as fine, that process cannot be done as accurately. On the other hand, a drizzling size of 5x requires A LOT of memory, so 3x seemed to be a good solution that works in most cases.
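
A toy version of that deposition step (real drizzle computes the exact drop/cell overlap areas; 'pixfrac' is the usual name for the drop size):

    import numpy as np

    def drizzle_frame(frame, acc, weight, dy, dx, scale=3, pixfrac=0.6):
        # Shrink each input pixel to a 'drop' and rain it onto a grid that
        # is `scale` times finer, at this frame's sub-pixel offset (dy, dx).
        drop = pixfrac * scale                  # drop width in fine cells
        for y in range(frame.shape[0]):
            for x in range(frame.shape[1]):
                y0 = (y + dy + 0.5) * scale - drop / 2
                x0 = (x + dx + 0.5) * scale - drop / 2
                for cy in range(int(y0), int(np.ceil(y0 + drop))):
                    for cx in range(int(x0), int(np.ceil(x0 + drop))):
                        if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                            acc[cy, cx] += frame[y, x]
                            weight[cy, cx] += 1.0
        return acc, weight

    # final image: acc / np.maximum(weight, 1e-9). With scale=2 the drop
    # spans only a couple of fine cells, so the overlap geometry is coarse -
    # which is the accuracy point made above.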

#23 DesertRat

    Fly Me to the Moon

  • Posts: 5173
  • Joined: 18 Jun 2006
  • Loc: Valley of the Sun

Posted 17 February 2013 - 04:24 PM

Jeff,

When the image is resampled as much as the 3X drizzle provides, I recommend you look at setting the 'Initial Layer' on the R6 wavelet page to 2. Then use similar (though a little different) slider settings to sharpen.

I have not used 3X drizzle much at all but I often resample raw tiffs up anywhere from 2.5X to 4X using Bicubic B-Spline for example in PixInsight. The Initial Layer 2 setting in R6 works for me in that case. You may find for some images that you will have to lower the sharpen box from the default 0.100 down to as low as anywhere from 0.040 to 0.080.

Glenn

#24 DesertRat

    Fly Me to the Moon

  • Posts: 5173
  • Joined: 18 Jun 2006
  • Loc: Valley of the Sun

Posted 17 February 2013 - 04:43 PM

Emil,

After reviewing what you have demonstrated I have to say I'm amazed at your solar speckle results. That comes pretty close to magic. I know it isn't, but the first view of your results was startling.

Concerning super-resolution, it seems we have been doing this for some time without realizing it, in the case of somewhat undersampled setups. What constitutes an undersampled setup has been a subject of debate, however.

It also occurs to me that one could argue the following: in seeing conditions where small details are visible but moving a fair amount between frames, a case could be made that it might be better to lower the EFL and the exposure time. After all these years of lucky imaging and endless debates about Nyquist sampling and optimum exposure settings, I find myself questioning some of these points once again!

Thanks for waking up some of our collective brain cells!

Glenn





