
Combining washed out data with good data


#1 elNabo

elNabo

    Lift Off

  • -----
  • topic starter
  • Posts: 3
  • Joined: 04 Nov 2018

Posted 16 September 2019 - 04:54 AM

Hi,

 

So I took some photos of the Pacman Nebula with a 98% moon. The Ha came out quite OK, but the OIII is washed out. (I then read about it and found out that under a full moon the recommendation is to shoot either Ha or SII, and never OIII.)

 

I am now going to take more OIII under better skies; I'm just not sure what to do with the washed-out data. I have read that it can be bad to use it, while others say that software like PixInsight will only use extra data where possible and never damage the final picture.

 

I guess my question applies more to the general workflow. If you ever have washed-out data, would it be beneficial to combine it with better data? Is there any benefit/detail that might still be in the washed-out photos?

 

My camera is a ZWO ASI1600MM Pro, with the narrowband and LRGB filters from ZWO as well.


  • Swordfishy likes this

#2 pyrasanth

pyrasanth

    Surveyor 1

  • *****
  • Posts: 1851
  • Joined: 08 Jan 2016

Posted 16 September 2019 - 05:55 AM

My experience with washed-out data has not been very productive. However, you are presenting yourself with an opportunity to test various scenarios when you get good OIII data. I would take a few reference Ha frames, then combine them with some good OIII data and then with some washed-out data, and review the results. Let me know how you get on.


Edited by pyrasanth, 16 September 2019 - 06:04 AM.

  • Swordfishy likes this

#3 happylimpet

happylimpet

    Soyuz

  • *****
  • Posts: 3995
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 16 September 2019 - 07:00 AM

Technically the optimum way forward is to add all the data but with appropriate weightings so that the data with higher background, and correspondingly higher noise, is weighted less in the stack. I believe PI can do this.
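Roughly speaking, that is inverse-variance weighting. Just to illustrate the idea, here is a minimal Python sketch (not how PI actually implements it), assuming each sub's noise is dominated by its sky background so that its variance scales with the background level:

import numpy as np

def weighted_stack(subs, backgrounds):
    """Combine registered subs with inverse-variance weights.

    subs        -- list of calibrated, registered 2-D arrays
    backgrounds -- per-sub sky background level; for background-limited
                   subs the variance scales with this, so weight ~ 1/background
    """
    weights = 1.0 / np.asarray(backgrounds, dtype=float)
    weights /= weights.sum()            # normalise so the weights sum to 1
    stack = np.zeros_like(subs[0], dtype=float)
    for sub, w in zip(subs, weights):
        stack += w * sub
    return stack

Under that assumption, a moonlit sub with twice the sky background of a dark-sky sub would get half the weight.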


  • 2ghouls likes this

#4 elNabo

elNabo

    Lift Off

  • -----
  • topic starter
  • Posts: 3
  • Joined: 04 Nov 2018

Posted 17 September 2019 - 03:37 AM

My experience with washed-out data has not been very productive. However, you are presenting yourself with an opportunity to test various scenarios when you get good OIII data. I would take a few reference Ha frames, then combine them with some good OIII data and then with some washed-out data, and review the results. Let me know how you get on.

You are right. There is the opportunity to test how it comes out with and without the washed-out data. I will let you know when I get it, but be aware I live in the UK, so I need the moon to be down so there is no moonlight, and that has to happen at the same time that the sky is clear. We might be talking the year 2025 for that to happen.

 

The confusion only comes from people presenting technical arguments for both approaches on other forums. So when I was reading around, there didn't seem to be a consensus on whether or not to do it. :)


  • Scott Mitchell likes this

#5 schmeah

schmeah

    Fly Me to the Moon

  • *****
  • Posts: 5265
  • Joined: 26 Jul 2005
  • Loc: Morristown, NJ

Posted 17 September 2019 - 05:35 AM

Yes, but weight the washed-out subs less. Most stacking software will allow you to do this. If not, try stacking x number of bad frames, and if that unprocessed, uncalibrated stack looks better to your eye than a single good frame, use it as a single frame in the good stack. I look forward to seeing your attempt at this.

 

Derek



#6 happylimpet

happylimpet

    Soyuz

  • *****
  • Posts: 3995
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 17 September 2019 - 06:05 AM

The confusion only comes from people presenting technical arguments for both approaches on other forums. So when I was reading around, there didn't seem to be a consensus on whether or not to do it. :)

That's because there isn't an absolute "don't do it" or "do it".

 

As I say and others have said, the correct answer is to optimally weight each sub, or each night of subs. Any other approach is sub-optimal, and may or may not be better than simply chopping out swathes of data, which is certainly sub-optimal. There are no absolute rules because it depends on the relative quantity and quality of the subs you have to play with.


  • AhBok likes this

#7 PhotonHunter1

PhotonHunter1

    Viking 1

  • *****
  • Posts: 630
  • Joined: 19 Mar 2015
  • Loc: Northwest IL (for now)

Posted 18 September 2019 - 08:14 AM

The best approach (this is using PI) that I've found and incorporated into my processing is to use Blink and walk through the images. I remove the obvious bad frames, i.e. blurry, heavy clouds, etc. During my integration and calibration processing I use the SubFrameSelector process (not the script) and review the SNR, Eccentricity, and FWHM values, so I can further vet the images, rejecting the extreme outliers. The SFS process is incredibly efficient and fast at analyzing the images.

 

There's a formula option that will then apply weights to each of the images, assigning lower weights to the less-than-optimum images and higher weights to the best images. PI does an excellent job in this area; it has allowed me to use images I would otherwise have discarded earlier in the process.

 

Here's a link to my recent image that I'm working on, where I incorporated what I would initially have classified as sub-optimal images. Very happy with the results. The only non-linear work done to date is slight noise reduction using TGV and a slight stretch using HT.

 

https://drive.google...ew?usp=drivesdk

 

Glad to share my processing notes if you’re interested.


  • psandelle and TrustyChords like this

#8 BoskoSLO

BoskoSLO

    Explorer 1

  • -----
  • Posts: 99
  • Joined: 04 Jul 2018
  • Loc: North Slovenia

Posted 18 September 2019 - 10:14 AM

I use this formula for SubFrameSelector weighting:

(15*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin)) + 15*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin)) + 20*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin))+50

 

I found it in the PI tutorials and it seems to work fine. I think that, using the image weight feature, you can stack the lesser-quality frames too without compromising the end result, but I always discard the lesser-quality frames anyway for optimal results.
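For anyone who wants to see what that expression actually does to the numbers, here is a small Python mirror of it, for illustration only (PI's SubFrameSelector evaluates the expression itself; the helper and the stats dictionary below are made up for this example):

def sfs_weight(fwhm, ecc, snr, stats):
    """Mirror of the SubFrameSelector expression above, for illustration only.

    stats holds the min/max of each measurement over the whole frame set,
    e.g. stats = {"fwhm": (2.1, 4.8), "ecc": (0.35, 0.62), "snr": (3.0, 11.0)}.
    """
    fmin, fmax = stats["fwhm"]
    emin, emax = stats["ecc"]
    smin, smax = stats["snr"]
    return (15 * (1 - (fwhm - fmin) / (fmax - fmin))
            + 15 * (1 - (ecc - emin) / (emax - emin))
            + 20 * (snr - smin) / (smax - smin)
            + 50)

So the worst frame in the set scores 50, the best possible frame scores 100, and everything else lands in between; the +50 floor is what keeps even the weakest accepted frame contributing something.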


  • bmhjr likes this

#9 PhotonHunter1

PhotonHunter1

    Viking 1

  • *****
  • Posts: 630
  • Joined: 19 Mar 2015
  • Loc: Northwest IL (for now)

Posted 18 September 2019 - 10:59 AM

I use the same formula, although with differing weights favoring SNR: SNR - 25, Eccentricity - 15, and FWHM - 10. I recommend the OP experiment with the ratios to find your own recipe. I also follow cfosters' approach of adjusting the "+50" at the end of the equation so that the highest-weighted frame = 100.



#10 Noobulosity

Noobulosity

    Ranger 4

  • *****
  • Posts: 393
  • Joined: 10 Jan 2018
  • Loc: Loveland, CO

Posted 18 September 2019 - 11:08 AM

I'm using DSS for stacking, so I don't think I have the option of weighting certain data sets. However, I've tested combining light-polluted data with dark-site data on Andromeda (M31), and it didn't help much at all to have the extra 45 min of data. It may have made it worse just a touch.

 

But if PI can weight the separate sets of images, then it may contribute a bit. I can only suggest trying it and seeing how it turns out. If you get interesting results, I'd love to see how the two comparisons turned out. :)



#11 happylimpet

happylimpet

    Soyuz

  • *****
  • Posts: 3995
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 19 September 2019 - 10:58 AM

I'm using DSS for stacking, so I don't think I have the option of weighting certain data sets. However, I've tested combining light-polluted data with dark-site data on Andromeda (M31), and it didn't help much at all to have the extra 45 min of data. It may have made it worse just a touch.

 

But if PI can weight the separate sets of images, then it may contribute a bit. I can only suggest trying it and seeing how it turns out. If you get interesting results, I'd love to see how the two comparisons turned out. :)

You do, to some extent: stack the good and bad data separately, scale the inferior stack downwards in an external package (like FITSWORK), and then stack the resultant stacks. I've done that, as I also use DSS. Having said that, this issue might be what moves me over to PI.
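For what it's worth, the arithmetic of that stack-of-stacks step is just a weighted mean of the two intermediate stacks. A rough Python sketch (the file names are hypothetical, astropy is assumed for FITS I/O, and the 0.3 weight is an arbitrary example; ideally it would come from the relative noise of the two stacks):

from astropy.io import fits  # assumes astropy is available

# Hypothetical names for the two intermediate stacks exported from DSS.
good = fits.getdata("good_stack.fits").astype(float)
bad = fits.getdata("bad_stack.fits").astype(float)

w_bad = 0.3  # down-weight the inferior stack before combining
combined = (good + w_bad * bad) / (1.0 + w_bad)

fits.writeto("combined_stack.fits", combined, overwrite=True)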


  • Noobulosity likes this

#12 Noobulosity

Noobulosity

    Ranger 4

  • *****
  • Posts: 393
  • Joined: 10 Jan 2018
  • Loc: Loveland, CO

Posted 19 September 2019 - 05:10 PM

You do, to some extent: stack the good and bad data separately, scale the inferior stack downwards in an external package (like FITSWORK), and then stack the resultant stacks. I've done that, as I also use DSS. Having said that, this issue might be what moves me over to PI.


Didn't even know that was an option. Good to know, thanks!
  • happylimpet likes this

#13 elNabo

elNabo

    Lift Off

  • -----
  • topic starter
  • Posts: 3
  • Joined: 04 Nov 2018

Posted 20 September 2019 - 04:29 AM

The best approach (this is using PI) that I've found and incorporated into my processing is to use Blink and walk through the images. I remove the obvious bad frames, i.e. blurry, heavy clouds, etc. During my integration and calibration processing I use the SubFrameSelector process (not the script) and review the SNR, Eccentricity, and FWHM values, so I can further vet the images, rejecting the extreme outliers. The SFS process is incredibly efficient and fast at analyzing the images.

 

There's a formula option that will then apply weights to each of the images, assigning lower weights to the less-than-optimum images and higher weights to the best images. PI does an excellent job in this area; it has allowed me to use images I would otherwise have discarded earlier in the process.

 

Here's a link to my recent image that I'm working on, where I incorporated what I would initially have classified as sub-optimal images. Very happy with the results. The only non-linear work done to date is slight noise reduction using TGV and a slight stretch using HT.

 

https://drive.google...ew?usp=drivesdk

 

Glad to share my processing notes if you’re interested.

This is good news indeed; at least I don't have to bin my data.

 

I use PI and I do everything you said with Blink and the SFS, but what I don't know how to do is apply different weights to the good and bad frames. I see some people posting formulas, and you said you use a different formula.

 

Can you please explain what formula you use and, above all, why? (As I understand it, there was a "standard" formula and you changed it a bit, so I just wanted to understand the thought process.)

 

I will try to provide some feedback when I'm done with it. It might take a while, as I'm capturing data several nights in a row. Believe it or not, there have been clear skies in England for three days in a row (the last time this happened, Jesus was probably still walking on Earth), so I have to take advantage of it as much as I can. This is also the reason for the washed-out data: usually I wait for clear skies and less than a 25% moon even for narrowband, but in England you don't have that luxury; whenever it's clear you go out, or else you might be waiting until 2025. :)



#14 happylimpet

happylimpet

    Soyuz

  • *****
  • Posts: 3995
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 20 September 2019 - 05:44 AM

With regard to weighting, I would concentrate on the SNR terms and ignore all of the FWHM stuff, as that's what we're talking about here with higher backgrounds.

 

You could simply base the weighting on the absolute value of the background, assuming PI gives you that. The noise will go as the square root of the background level, so SNR ~ 1/sqrt(background level).

 

Not sure how to derive the weighting from the SNR but someone here will!
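One common convention, for what it's worth, is inverse-variance weighting: for a fixed target signal the variance goes as 1/SNR^2, so weight ~ SNR^2, which with SNR ~ 1/sqrt(background) is the same as weight ~ 1/background. A minimal Python sketch of turning per-sub SNR estimates into normalised weights (just the arithmetic under that assumption, nothing PI-specific):

def snr_weights(snrs):
    """Normalised inverse-variance weights from per-sub SNR estimates.

    For a fixed target signal, variance ~ 1/SNR**2, so weight ~ SNR**2.
    With SNR ~ 1/sqrt(background) this reduces to weight ~ 1/background.
    """
    raw = [s * s for s in snrs]
    total = sum(raw)
    return [w / total for w in raw]

# e.g. snr_weights([10.0, 10.0, 5.0]) -> [0.444..., 0.444..., 0.111...]
# the washed-out sub with half the SNR gets a quarter of the weight.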



#15 PhotonHunter1

PhotonHunter1

    Viking 1

  • *****
  • Posts: 630
  • Joined: 19 Mar 2015
  • Loc: Northwest IL (for now)

Posted 21 September 2019 - 09:37 PM

This is good news indeed; at least I don't have to bin my data.

 

I use PI and I do everything you said with Blink and the SFS, but what I don't know how to do is apply different weights to the good and bad frames. I see some people posting formulas, and you said you use a different formula.

 

Can you please explain what formula you use and, above all, why? (As I understand it, there was a "standard" formula and you changed it a bit, so I just wanted to understand the thought process.)

 

I will try to provide some feedback when I'm done with it. It might take a while, as I'm capturing data several nights in a row. Believe it or not, there have been clear skies in England for three days in a row (the last time this happened, Jesus was probably still walking on Earth), so I have to take advantage of it as much as I can. This is also the reason for the washed-out data: usually I wait for clear skies and less than a 25% moon even for narrowband, but in England you don't have that luxury; whenever it's clear you go out, or else you might be waiting until 2025. :)

elNabo,

Here's the expression and weights I like to use:

(10*(1-(FWHM-FWHMMin)/(FWHMMax-FWHMMin)) + 15*(1-(Eccentricity-EccentricityMin)/(EccentricityMax-EccentricityMin)) + 25*(SNRWeight-SNRWeightMin)/(SNRWeightMax-SNRWeightMin)) + 50

 

One of the many great features of the SFS process is that the program determines the values for the expression properties (FWHMMin, etc.) rather than you having to find the values and input them manually.

 

SNR is king, in my opinion, but Eccentricity is important to me as well, since this tells me how round my stars are. I recently started changing the +50 at the end of the expression, following a process suggested by cfosters: find the image with the highest "weight" score, subtract that from 100, then add that difference to the 50. What this does is bring the highest-weighted frame's score to 100. Example: the highest weight score is 96.703; 100 - 96.703 = 3.297; change 50 to 53.297.
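In code form, that rescaling is just the following (a tiny Python sketch of the arithmetic, not anything built into PI):

def rescaled_constant(max_weight, base=50.0):
    """Adjust the trailing constant so the best frame scores exactly 100.

    max_weight -- highest weight produced with the original +50 constant
    """
    return base + (100.0 - max_weight)

print(round(rescaled_constant(96.703), 3))  # 53.297, as in the example above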

 

Change the Routine to Output Subframes and then look at the FITS header for one of your images and you'll see the weighted score. I use the keyword SFSWEIGHT.

Attached thumbnail: fits hdr.png (FITS header showing the SFSWEIGHT keyword)

Edited by PhotonHunter1, 21 September 2019 - 09:39 PM.


