
Results from my deconvolution experiments...


#1 sbharrat

sbharrat

    Vanguard

  • *****
  • topic starter
  • Posts: 2,084
  • Joined: 28 Nov 2020
  • Loc: NJ, USA

Posted 10 February 2024 - 01:18 PM

The imaging conditions here have been maddening, so in the downtime I started work on an experiment that I first wrote about, and asked for help with, on the experienced imaging forum (https://www.cloudyni...-an-experiment/). Jon Rista weighed in and pointed out a significant flaw in the experiment that negates a bit of the original intent. Nevertheless, it was useful, at least to me, and I thought I would share the outcome.

 

Here is the general premise. With all the discussions about deconvolution, real deconvolution, what BXT is doing, etc., I realized I had no way of really evaluating my deconvolution attempts. After all, I don't literally have the ground truth for the images I take from my location. The basic idea was this:

- start with a very good ground image, one with quite low FWHM. Call this A

- convolve this with a PSF to form a convolved image B

- attempt deconvolution on this image B to form C

- compare C against A
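For anyone who wants to script the same experiment, here is a minimal sketch of the A -> B step in Python (astropy + scipy). The Gaussian PSF and the file names here are placeholders for illustration, not the actual PSF and files in my shared folder.

```python
# Minimal sketch of the A -> B step: blur a "ground truth" frame with a
# known PSF. The Gaussian FWHM and filenames are illustrative only.
import numpy as np
from astropy.io import fits
from scipy.signal import fftconvolve

def gaussian_psf(fwhm_px, size=25):
    """Normalized circular Gaussian PSF with the given FWHM in pixels."""
    sigma = fwhm_px / 2.3548          # FWHM = 2*sqrt(2*ln 2) * sigma
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

A = fits.getdata("ngc5128_L.fit").astype(np.float64)     # ground-truth image A
psf = gaussian_psf(fwhm_px=4.0)                          # assumed blur kernel
B = fftconvolve(A, psf, mode="same")                     # convolved image B
fits.writeto("ngc5128_L_conv.fit", B.astype(np.float32), overwrite=True)
```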

 

Before I go on, here is the flaw that Jon pointed out. The convolution in the A -> B step messes up the noise profile. This will prevent a proper deconvolution, since deconvolution is supposed to be done at the very start of processing, in particular before any noise reduction. Effectively, this becomes an experiment to evaluate deconvolution on something like a noise-reduced image.
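To see the flaw concretely, here is a tiny illustration (my own, not part of the experiment files): blurring uncorrelated noise leaves neighbouring pixels strongly correlated, which is exactly what a deconvolution routine run at the start of processing does not expect.

```python
# Convolution correlates the noise: adjacent pixels of pure white noise go
# from ~uncorrelated to strongly correlated after any smoothing kernel.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (512, 512))   # stand-in for the per-pixel noise floor
psf = np.ones((5, 5)) / 25.0               # any smoothing kernel will do

def neighbor_corr(img):
    """Correlation between horizontally adjacent pixels."""
    return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

print(neighbor_corr(noise))                                   # ~0
print(neighbor_corr(fftconvolve(noise, psf, mode="same")))    # strongly positive
```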

 

Here are the details about the starting image (ngc5128_L):

- Purchased from the photographer Matt Dieterich
- Taken with PW CDK24 and FLI Proline 16803 in Chile (https://www.mattdiet...h.com/centaurus)
- This is luminance only, 80m aggregate exposure
- Image scale is 0.924"/px (according to Astrometry.net)
- Worst FWHM region according to PI script is 1.89"
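For reference, my arithmetic on what that worst-case FWHM means in pixels at the stated image scale:

```latex
\mathrm{FWHM_{px}} = \frac{\mathrm{FWHM_{arcsec}}}{\text{image scale}}
                   = \frac{1.89''}{0.924''/\mathrm{px}} \approx 2.0\ \mathrm{px}
```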

 

I will show my results in the next post, but you can try your hand at this as well. The folder https://drive.google...?usp=drive_link has all of the images I used, along with a README.txt file.

 

If you do spend some time on it, I would love to see your attempt. Post your deconvolution result in the same directory.

 

As a starting point, here is the blurring that was induced by the convolution with my PSF. (You might need to click on the image for the animation.)

ngc5128_conv_vs_ref.gif

 

Full resolution: https://drive.google...?usp=drive_link

 


  • Jim Waters, ntph, dx_ron and 1 other like this

#2 sbharrat

sbharrat

    Vanguard

  • *****
  • topic starter
  • Posts: 2,084
  • Joined: 28 Nov 2020
  • Loc: NJ, USA

Posted 10 February 2024 - 01:24 PM

This is my best attempt at deconvolution on the convolved image. You can see all the relevant files in the directory mentioned in the original post. 

 

ngc5128_L_conv_RL:
- I lowered global dark until I could just about see rings around stars with 10 iterations and no divergences during the iterations
- raised global dark a bit and then increased iterations until rings just showed again
- tried both with and without local support from a star mask; no appreciable difference
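For anyone who would rather script this step than use PI, here is a bare-bones Richardson-Lucy sketch (my own illustration, not the PI Deconvolution process above: the PSF is an assumed Gaussian rather than the measured one, filenames are placeholders, and `num_iter` is called `iterations` in older scikit-image releases):

```python
# Scripted analogue of the B -> C step: plain Richardson-Lucy deconvolution.
import numpy as np
from astropy.io import fits
from skimage.restoration import richardson_lucy

B = fits.getdata("ngc5128_L_conv.fit").astype(np.float64)   # convolved image B
B = (B - B.min()) / (B.max() - B.min())                     # RL expects roughly [0, 1] data

# Assumed Gaussian PSF (FWHM ~4 px); ideally use the same PSF that produced the blur.
size, sigma = 25, 4.0 / 2.3548
y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf /= psf.sum()

C = richardson_lucy(B, psf, num_iter=10, clip=False)        # ~10 iterations, as above
fits.writeto("ngc5128_L_conv_RL_script.fit", C.astype(np.float32), overwrite=True)
```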

 

Definitely some of the original detail is recovered. The stars are reduced but not back to their original size. And though there is improvement in detail, it is nowhere near the original. This was a relatively small number of iterations, but I couldn't use more without getting noticeable ringing. I don't know if that is because I am inexperienced at this or because the image is already smoothed.

 

Here is the comparison of my final result versus the original reference. Some detail left "on the table" frown.gif

 

ngc5128_rl_vs_ref.gif

 

full res: https://drive.google...?usp=drive_link

 


  • Jim Waters likes this

#3 sbharrat

sbharrat

    Vanguard

  • *****
  • topic starter
  • Posts: 2,084
  • Joined: 28 Nov 2020
  • Loc: NJ, USA

Posted 10 February 2024 - 01:32 PM

This is BXT with the v4 model, with both the stellar and non-stellar settings at 0.25. Using the same setting for both (putatively) gives the sharpening that is closest to "real deconvolution." (Hopefully this doesn't spawn yet another long thread.) The 0.25 factor was chosen to roughly match the star size - this should be the factor that gets closest to the original image (versus sharpening below that).

 

Visually at least, it is very good. I don't quite know how to evaluate this from a quantitative perspective (willing to follow up on ideas from others - one simple possibility is sketched at the end of this post), but it recovers almost all of the detail. The FITS files are in the directory so you can take a look for yourself. Here is a gif of the comparison.

 

ngc5128_bxt25_vs_ref.gif

 

full resolution: https://drive.google...?usp=drive_link
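On the quantitative-evaluation question above, one simple possibility (offered only as a sketch: the filenames are placeholders and both frames are assumed to be on the same pixel grid) is to compute global similarity metrics between each result C and the reference A:

```python
# Global SSIM and PSNR between a deconvolution result C and the reference A.
import numpy as np
from astropy.io import fits
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def to_unit(im):
    """Normalize to [0, 1] so data_range is well defined."""
    return (im - im.min()) / (im.max() - im.min())

A = to_unit(fits.getdata("ngc5128_L.fit").astype(np.float64))
C = to_unit(fits.getdata("ngc5128_L_conv_RL.fit").astype(np.float64))

print("SSIM:", structural_similarity(A, C, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(A, C, data_range=1.0), "dB")
```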

 

 



#4 dx_ron

dx_ron

    Mercury-Atlas

  • *****
  • Posts: 2,896
  • Joined: 10 Sep 2020
  • Loc: SW Ohio

Posted 10 February 2024 - 03:50 PM

Hi Shaun. Very interesting experiment.

 

Here is what I got out of StarTools' "Spatially Variable Deconvolution" with 15 iterations. I set 14 star samples across the field for PSF estimation. From start to finish, the deconvolution took just about 1 minute, including setting the samples (it took a couple of minutes to get the basic stretch).

 

ngc5128_L_conv_ST_15x-cropped.jpg


Edited by dx_ron, 10 February 2024 - 03:51 PM.

  • Mike in Rancho and sbharrat like this

#5 Mike in Rancho

Mike in Rancho

    Mercury-Atlas

  • *****
  • Posts: 2,855
  • Joined: 15 Oct 2020
  • Loc: Alta Loma, CA

Posted 10 February 2024 - 04:51 PM

Well, I like the idea and the effort!  waytogo.gif

 

But I'm not sure there aren't more issues here than just a noise profile.

 

What do we know about the ground truth that is being worked with here?  Was the base pro/semi-pro data a raw stack, or finished image?  If the former, is it known what was done in stacking?  The dimensions are way bigger than that of a 16803, if I'm not mistaken.  If the latter, do we know if some type of deconvolution was already run on it?

 

Beyond that, are you saying that you took this base image (or data), and intentionally blurred it?  You mention you chose a PSF for convolution, so I am guessing a global operation of some kind.  The result would be scrambled PSFs, no?  The "natural" and spatially-variant PSFs across the field due to optics and atmosphere would be mashed up with the extra, artificial, non-variant blur you imparted.

 

I suppose the experiment could be somewhat valid (caveat the noise issue you mentioned) for a modeled synthetic deconvolution, or a global sampled deconvolution.  Not sure how it would work with BXT sharpening, or with SVD, which sample for different star PSF's across the field.  Or so I guess...I don't really know what BXT is doing.

 

 

EDIT:  My bad, I thought this experiment was about M101, but obviously this is Cent A.  Maybe I should read the readme.  tongue2.gif


Edited by Mike in Rancho, 10 February 2024 - 04:54 PM.

  • sbharrat likes this

#6 sbharrat

sbharrat

    Vanguard

  • *****
  • topic starter
  • Posts: 2,084
  • Joined: 28 Nov 2020
  • Loc: NJ, USA

Posted 10 February 2024 - 06:19 PM

Replying to Mike in Rancho's questions in the post above:

Was the base pro/semi-pro data a raw stack, or finished image?  If the former, is it known what was done in stacking?

 

Details are in the initial post: PW CDK24 and FLI Proline 16803, taken on a mountain in Chile! I hope that if I am ever able to do this, someone will consider me at least a "semi-pro"!

 

This is just a calibrated luminance stack. https://www.mattdiet....com/centaurus 

Calibrated Luminance Master frame* 16 x 300s (FIT)

 

Beyond that, are you saying that you took this base image (or data), and intentionally blurred it?

Yes

 

The result would be scrambled PSF's, no?

If by "scrambled" you mean composition of convolutions, then yes. Whether a composition of convolutions is equivalent to some other single convolution, I do not know whether that holds in general. I have to imagine that the blurring in our images is due to multiple factors along the way to the sensor, each in effect convolving the input. We certainly don't try to tease apart those: we instead just use the final PSF to deconvolve. So why is this different? 

 

Not sure how it would work with BXT sharpening, or with SVD, which sample for different star PSF's across the field.

If it is a composition of convolutions (one spatially variant across the field and one invariant), then the resulting overall PSF will be spatially variant. BXT doesn't care how it got to be that way - it just tiles the image and computes the (final) PSF based on the stars in each tile. It really shouldn't matter how it ended up there.
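As a side note, at least in the purely shift-invariant case the composition does collapse into a single effective PSF, since convolution is associative (here I_true is the unblurred scene, k_sky the "natural" blur, and k_added the blur I applied):

```latex
(I_{\text{true}} * k_{\text{sky}}) * k_{\text{added}}
  \;=\; I_{\text{true}} * (k_{\text{sky}} * k_{\text{added}})
  \;=\; I_{\text{true}} * k_{\text{eff}},
\qquad k_{\text{eff}} = k_{\text{sky}} * k_{\text{added}}
```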


  • Mike in Rancho likes this

#7 sbharrat

sbharrat

    Vanguard

  • *****
  • topic starter
  • Posts: 2,084
  • Joined: 28 Nov 2020
  • Loc: NJ, USA

Posted 10 February 2024 - 06:21 PM

Replying to dx_ron's StarTools result above:

This seems to have more "hairs" than the original. Can you upload the resulting FIT to the folder? Of course, those could be present in the true scene (from space) and simply blurred away on the way to Chile - but there is no way of knowing without looking at a space telescope version of it.



#8 dx_ron

dx_ron

    Mercury-Atlas

  • *****
  • Posts: 2,896
  • Joined: 10 Sep 2020
  • Loc: SW Ohio

Posted 10 February 2024 - 06:52 PM

Sent a link via pm

 

I re-did it, though just the deconvolution part. Backed off to the default 10 iterations and added in a touch of de-ringing.

 

As for comparing vs the 'original' - the original is linear, so it's possible the ST version's initial stretch captured a bit more of the structure. Note that in ST you can stretch first while still doing deconvolution on the linear data - a benefit of Ivo's "signal tracking" approach.



#9 Mike in Rancho

Mike in Rancho

    Mercury-Atlas

  • *****
  • Posts: 2,855
  • Joined: 15 Oct 2020
  • Loc: Alta Loma, CA

Posted 11 February 2024 - 05:47 PM


If by "scrambled" you mean composition of convolutions, then yes. Whether a composition of convolutions is equivalent to some other single convolution, I do not know whether that holds in general. I have to imagine that the blurring in our images is due to multiple factors along the way to the sensor, each in effect convolving the input. We certainly don't try to tease apart those: we instead just use the final PSF to deconvolve. So why is this different? 

 

Well, that sounds logical in theory!  I don't know if it holds up in practice, though, since this adds an extra blur after the fact, not in conjunction with how the photons actually landed.  We'll get results; it's just a matter of determining whether we can draw good conclusions from them.

 

I ran the data through various procedures and made a few panels.  Only 500 kB per post though, so this will take a couple more posts.  I used ST to stretch and tried to match as well as I could, but since the data ends up a wee bit different, they aren't perfect.  For comparison, I also first ran the tools - including a plain stretch - on the original stack.  That same stretch was also applied to the files where you had already run PI deconvolution and BXT.  For a separate take on things, I used Siril's deconvolution.  It was my first time using it, and I hardly knew what I was doing; I just took things up to, and maybe past, the point of a little ringing.  The suggestion is to run a light denoise first to hold off ringing, but that's beyond my Siril skills.

 

For StarTools I used the 1.9 beta.  And being a beta, I think there are still a few kinks to work out, including the deringing support.  ST also only saves out as 3-channel 16-bit TIFF, unfortunately.  So even though I could revert the deconvolved file to linear if I wanted, I'm not sure the file would be useful in that state.

 

Here's the first set, cropped to the target detail.

 

gallery_345094_16138_435036.jpg


  • sbharrat likes this

#10 Mike in Rancho

Mike in Rancho

    Mercury-Atlas

  • *****
  • Posts: 2,855
  • Joined: 15 Oct 2020
  • Loc: Alta Loma, CA

Posted 11 February 2024 - 05:48 PM

The second panel shows your induced blur, stretched, then a matching stretch of the attempt to reverse it with Siril, and then your PI and BXT results.

 

gallery_345094_16138_244828.jpg


  • sbharrat likes this

#11 Mike in Rancho

Mike in Rancho

    Mercury-Atlas

  • *****
  • Posts: 2,855
  • Joined: 15 Oct 2020
  • Loc: Alta Loma, CA

Posted 11 February 2024 - 05:58 PM

The last panel is all ST, plus the original again for comparison.  As you can see I also tried a little sharpening by itself to see what that would do.

 

gallery_345094_16138_197131.jpg

 

 

 

When you ran BXT, was that just its deconvolution (or deconvolution-esque) tools, or did it also include the new rounded star "repair" thing?

 

I'd say of all the samples, the BXT has the cleanest appearing star field, but while plausible I'm not sure of the validity of that transform.  For the dust lane detail I'd lean towards one of the ST results as best recovery in comparison to the original.

 

A few interesting tidbits from the stars:  some doubles and faint small stars were heavily impacted by the artificial added blur, and nothing could recover some of those.  On one double in the upper right, BXT actually did the best job of restoring it, at least to a peanut if not a clean double.  However, some tiny stars, especially those close to a larger star, got turned into small fuzzy blobs by your blur.  PI, Siril, and StarTools all kept them as small fuzzy blobs and could not return them to point stars.  BXT, however, just completely vanished them out of existence.  Gone!  Well, I guess if you can't get it right, make it go away like it was never there.  Nobody will know!  lol.gif

 

Edit:  Oh, I do have all of these saved in Gimp projects as giant stacks of layers, so one can blink between them using the view button.  Including the full FOV versions.  Can send those to you if interested.


Edited by Mike in Rancho, 11 February 2024 - 06:01 PM.


#12 Ivo Jager

Ivo Jager

    Vendor ( Star Tools )

  • *****
  • Vendors
  • Posts: 572
  • Joined: 19 Mar 2011
  • Loc: Melbourne, Australia

Posted 12 February 2024 - 08:19 PM

Replying to the exchange above between Mike in Rancho and sbharrat:

This test may be a little problematic. Singularities (such as over-exposing stars) or non-linear responses of the sensor will also have been blurred. This will cause things around those areas to fall apart a little in terms of being a "valid" test, because invalid data from those singularities has now bled into the surrounding areas. The real pre-sensor object "value" is not what gets blurred into the neighbouring pixels; instead the max sensor well-depth value is blurred into the neighbouring areas. Ringing artifacts and odd star shapes around those areas should therefore be expected.

 

Likewise, stars that were over-exposing before may/will no longer be, yielding "plausible" (tapering off) stellar profiles that may - erroneously - serve as PSF samples (even though they should not be!). Indeed StarTools, for one, latches on to those "fake" stellar profiles as PSF sampling candidates for the user to click on.

 

Areas away from such - now blurred - singularities may yield decent results though, provided none of the "fake" stellar profiles made it into the PSF sample population.
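To make the saturation point concrete, here is a toy sketch (purely illustrative numbers, nothing from the actual data set): blurring the clipped, recorded profile is not the same as blurring the true profile, and the discrepancy sits exactly where deconvolution has to work hardest.

```python
# Toy model: a bright star whose true peak is 50x the full-well level.
# The experiment blurs the clipped recording, not the true scene.
import numpy as np
from scipy.signal import fftconvolve

size = 64
y, x = np.mgrid[:size, :size] - (size - 1) / 2.0

star = 50.0 * np.exp(-(x**2 + y**2) / (2 * 1.2**2))   # "true" stellar profile
full_well = 1.0                                        # toy saturation level
recorded = np.clip(star, 0, full_well)                 # what the sensor stored

psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()

blur_of_truth    = fftconvolve(star, psf, mode="same")      # blur acting on the real scene
blur_of_recorded = fftconvolve(recorded, psf, mode="same")  # blur applied after clipping

# Large discrepancy near the core: the clipped profile carries far less flux,
# so ringing and odd star shapes are expected there after deconvolution.
print(np.abs(blur_of_truth - blur_of_recorded).max())
```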


  • sbharrat likes this

#13 sbharrat

sbharrat

    Vanguard

  • *****
  • topic starter
  • Posts: 2,084
  • Joined: 28 Nov 2020
  • Loc: NJ, USA

Posted 14 February 2024 - 03:44 PM

Replying to Ivo's points above:

Thanks for the insight! Yes, I can see how this approach is problematic, now from multiple perspectives. I do have a question though. The real image from space, T, has some function F applied to it. I then applied C to it, giving C(F(T)). Due to the singularity issues you mention, I understand I can never reverse back past F(T). But I should be able to get back to F(T), even if I can never get back to T, right?



