
A little test with the ASI1600 and sparse dithering...


#1 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 03 October 2016 - 07:01 PM

Many of us who are looking to take advantage of shorter sub exposures for resolution and detail with the ASI1600 cameras have also been trying to dither every N subs, rather than every single sub. Thankfully, this is now possible with SGP, and if you were using FireCapture or SharpCap before, it was possible with them as well since dithering was manual anyway. I've had some semi-crappy nights recently that allowed me to get some decent subs, but which mostly just turned into nights of fiddling and testing due to thin clouds. On one of the objects I imaged, Ghost of Cass, I decided to see how far I could push the "N subs" aspect, and how small I could make my dithers. I dithered every 15-20 subs (I got lazier as the night went on, and ended up dithering every 20 subs :p), and eventually reduced my dithering from aggressive (5) with a scale of 3 in PHD2 to moderately aggressive (4) with a scale of 2. 

 

Well, when you stack around 100-150 subs, dithering moderately aggressively every 15-20 subs seems to be insufficient. I've explored the available rejection options in PI, without much change in the final results. And the results have shown some small amounts of correlated noise as well as some "pitting" (black or very low signal pixels) which is probably due to too many dead pixels stacking in the same place:

 

InsufficientDithering.jpg

 

You can see the pitting pretty easily. You should also be able to see a bit of repeated medium-frequency pattern, and a very light bit of banding through the middle. The correlated noise is more apparent at native pixel size:

 

InsufficientDithering2.jpg

 

So, if you are only stacking a couple hundred subs or less, it's probably best to dither more frequently than every 15 frames. It is also probably wise to dither more aggressively...or at least make sure your dither scale setting is sufficient. I moved back to dithering aggressively (5) with a scale of 3 in PHD2, which seems to be better...however, I am beginning to wonder if even more aggressive dithering would be best. 



#2 Ken Sturrock

Ken Sturrock

    Soyuz

  • *****
  • Moderators
  • Posts: 3530
  • Joined: 26 Sep 2009
  • Loc: Denver, CO

Posted 03 October 2016 - 07:18 PM

Interesting, Jon.

 

Also, if you're shooting monochrome and depending on your software, you can spread those dithers out across the filters, so that the images in each color channel are more aggressively dithered relative to each other while there is still less dithering overall.



#3 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 03 October 2016 - 08:51 PM

Interesting, Jon.

 

Also, if you're shooting monochrome and depending on your software, you can spread those dithers out across the filters, so that the images in each color channel are more aggressively dithered relative to each other while there is still less dithering overall.

Hmm...so, you mean, get an Ha, OIII and SII, then dither, then get all three again? Or say get two L's, one each RGB, dither, and repeat? 



#4 ChrisWhite

ChrisWhite

    Surveyor 1

  • *****
  • Posts: 1636
  • Joined: 28 Feb 2015
  • Loc: Colchester, VT

Posted 03 October 2016 - 09:29 PM

Jon.  I recently dithered every 4 frames for RGB with 80 frames each and could not detect any correlated noise or artifacts as you describe.  I also dithered every 10 frames for L and had 450 frames.  Same good result.

 

Still lose a fair amount of imaging time, but it's not a deal breaker. 

 

Also I am dithering radially.  Were you doing that, or just random?



#5 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 03 October 2016 - 09:35 PM

I have been doing random so far. I saw the spiral option in PHD...I'll give that a try next time it's clear. I keep forgetting to do that. :p 

 

Every 10 frames for 450 subs sounds about right. You might be able to pull off 15 with the spiral dithering...not sure. I don't think I would go to 15 unless I was getting around 500 subs, and I wouldn't go to 20 unless I was getting thousands of subs. For most people, I don't foresee them using more than a few hundred frames per stack, so I think 10 is about as far as most people would want to push it for sparse dithering.

 

Now I'm really curious if spiral dithering will help. I've noticed some changes recently in my data...it looks just a little scratchier, like there are more hot pixels. But my imaging temps are the same, -20C. Not sure why...but in some of my recent test stacks, I'm starting to see a bit more correlated noise, which I hate. 



#6 Ken Sturrock

Ken Sturrock

    Soyuz

  • *****
  • Moderators
  • Posts: 3530
  • Joined: 26 Sep 2009
  • Loc: Denver, CO

Posted 03 October 2016 - 09:45 PM

Yeah. Exactly. I described what I do here.



#7 glend

glend

    Viking 1

  • *****
  • Posts: 912
  • Joined: 04 Feb 2014

Posted 04 October 2016 - 04:32 AM

Jon, I have in the past used dithering when imaging with my mono DSLR (which had a bad column due to the debayering process), but I cannot see the need with the ASI1600MM-C. Sorry for my ignorance, but what exactly are you hoping to achieve with this dithering test?



#8 SimmoW

SimmoW

    Explorer 1

  • -----
  • Posts: 54
  • Joined: 18 Apr 2014

Posted 04 October 2016 - 05:47 AM

Yet another useful post Jon.

When I used very short exposures, even without dithering, I saw similar dark pixels. I managed to remove them completely using CosmeticCorrection in PI, even after stacking. Try it, and tweak the hot/cold pixel rejection settings.

#9 Chris Ryan

Chris Ryan

    Sputnik

  • -----
  • Posts: 44
  • Joined: 08 Mar 2014
  • Loc: Brisbane

Posted 04 October 2016 - 06:23 AM

I've been doing it like Ken mentions - swap filters, then dither after each filter has gone through, so L R G B, dither, repeat.  The weather and time has generally not been good enough that I can "risk" doing 1 channel at a time, I need to have a chance at getting some data to process.

 

However, I may consider jumping to dithering every 2nd cycle to help speed things up a bit.



#10 bigjy989

bigjy989

    Sputnik

  • -----
  • Posts: 33
  • Joined: 23 Nov 2010
  • Loc: Saline, MI

Posted 04 October 2016 - 07:16 AM

Jon,

Are you noticing that the FPN drifts, or is it repeatable over several weeks? Using at least 128 darks, biases and flats has eliminated the FPN (and pitting) for me. However, if I ever change gain, my bias frames will be different when I return to the same setting, so I will have to recalibrate the master dark and flat.

One point: for these CMOS sensors I do not see uniform pixel QE (some pixels will be lighter or darker than their neighbors in calibrated flat frames, roughly 85%-115%). Therefore calibrated flat frames are essential to correct for pixel-to-pixel QE variation.

#11 bigjy989

bigjy989

    Sputnik

  • -----
  • Posts: 33
  • Joined: 23 Nov 2010
  • Loc: Saline, MI

Posted 04 October 2016 - 07:22 AM

Also, in my bias frames I think I have been using a much higher offset value than most, to get the full bias bell curve above the floor.
Unity - 35
200 - 70
250 - 110

Edited by bigjy989, 04 October 2016 - 07:23 AM.


#12 ChrisWhite

ChrisWhite

    Surveyor 1

  • *****
  • Posts: 1636
  • Joined: 28 Feb 2015
  • Loc: Colchester, VT

Posted 04 October 2016 - 07:31 AM

Jon, I have in the past used dithering when imaging with my mono DSLR (which had a bad column due to the debayering process), but I cannot see the need with the ASI1600MM-C. Sorry for my ignorance, but what exactly are you hoping to achieve with this dithering test?

 

If you don't dither with this camera, you will end up with a very strong correlated noise pattern. Several people have posted images illustrating this. It is VERY bad, and dithering is a must. The question is, how little dithering can we get away with? With such a low-noise/high-sensitivity camera, very short exposures are in order, and dithering every frame results in a large loss of dark-sky time. 



#13 spokeshave

spokeshave

    Viking 1

  • -----
  • Posts: 829
  • Joined: 08 Apr 2015

Posted 04 October 2016 - 09:33 AM

Jon:

 

What rejection algorithm were you using?

 

I've been giving some thought to how to develop a rule of thumb for sparse dithering (I like that term).  If we take the simple case of dithering every 20 frames and collecting 100 frames total, that means for each pixel stack, the outliers will appear 20 times, with the remaining 80 pixels comprising a roughly Poisson distribution about the mean. As a first approximation from the perspective of sigma clipping rejection, that is roughly the same as stacking 5 frames that were dithered every frame - i.e., one outlier for every four good pixels in the stack. PI recommends a minimum of 10 to 15 frames for sigma clipping to be most effective, so if we take 10 as the minimum acceptable ratio, then for 100 frames you would want to dither no more than every 10 frames. To get to the 15-frame ratio, you would need to dither every 7 frames or so. So, it seems that a starting rule of thumb would be to take the number of frames to be collected and divide by at least 10 to get the dither interval. So, if you want to collect 200 frames, you can dither every 20 frames to get the same level of sigma clipping rejection as if you had collected 10 frames.

 

But I don't think that tells the whole story. Windsorizing can make a big difference, I think, when using sparse dithering. Windsorizing rejects outliers in the pixel stack and replaces them with the nearest neighbor prior to calculating the mean and sigma. The effect is not particularly pronounced nor much different from regular sigma clipping under the typical situation when there is only one outlier pixel in the stack. In that case, the mean and Windsorized mean are not all that different because the contribution by the one outlier is not significant for relatively large stacks. But with sparse dithering, the outliers appear multiple times in the stack and can substantially affect the mean and therefore the calculation of sigma. The Windsorized mean will be much more representative of the actual (with no outliers) mean. So, I think Windsorized sigma clipping is a must for sparse dithering.

 

Also, even though as a first approximation 200 frames dithered every 20 frames will appear to sigma clipping to be the same as 10 frames dithered every frame, I think that is a very rough "first approximation". The mean will still be calculated using 200 data points, not 10, so the uncertainty in the mean drops considerably, and even more so with Windsorization. With a less uncertain mean, the rejection thresholds can be tightened considerably without the risk of throwing out good data. I haven't thought of how the math would work out yet, but my sense is that in a stack of 200 frames dithered every 20 frames, sigma clipping (with Windsorization) and tightened sigma low and sigma high values will give much better rejection than would be apparent in a stack of only 10 images.

 

So the bottom line of my thought experiment is that a very general rule of thumb would be to select the number of images to be skipped to be no more than 10% of the number of subs to be collected. I would also add that sparse dithering should not be used at all for small numbers of subs (say < 20 or so) and that for larger numbers of subs (say > 50 or so) Windsorized sigma clipping should be used and the sigma low and high numbers should be tightened, or at least experimented with.

 

Note that these meanderings don't consider the added SNR from the extra subs you can capture thanks to lower dithering overhead. Even if the rule of thumb is dropped to 5% of the number of subs, the savings in dithering overhead can be substantial. For 200 subs, that would mean dithering every 10 subs, resulting in a 90% reduction in dithering overhead.

 

Thoughts?

 

Tim


Edited by spokeshave, 04 October 2016 - 11:06 AM.


#14 exmedia

exmedia

    Viking 1

  • *****
  • Posts: 607
  • Joined: 26 May 2013
  • Loc: Orange County, CA

Posted 04 October 2016 - 10:06 AM

"And the results have shown some small amounts of correlated noise as well as some "pitting" (black or very low signal pixels) which is probably due to too many dead pixels stacking in the same place"

 

Jon, can you expand a bit on pitting, or point me in a direction where I can find out more?  I've seen a lot of it in some of my captures, but don't know how to eliminate them.

 

Thanks,

 

Richard



#15 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 04 October 2016 - 11:45 AM

Jon, I have in the past used dithering when imaging with my mono DSLR (which had a bad column due to the debayering process), but I cannot see the need with the ASI1600MM-C. Sorry for my ignorance, but what exactly are you hoping to achieve with this dithering test?

Glen, it may be subtle, but there is a medium-scale mottling/orange-skinning effect in the second image that is only there because of the little bits of slightly correlating pattern noise (I have done no NR, so the orange-skinning effect is intrinsic to the integration). I was using a master dark that was about two months old; however, I just replaced it, and I still seem to be having a little bit of correlated noise, which indicates that there is a small amount of remnant pattern after calibration. Note that these are 90-second 3nm narrow-band subs...so the data is read-noise limited, which makes it subject to pattern noise from the bias and dark signals. 

 

I am honestly not sure exactly what pattern is correlating, as the banding, glows and hot pixels all do seem to be getting removed by the darks. The dark pitting, I believe that is still there because they are dark pixels...if you are at or near zero, subtracting a dark doesn't actually fix that...at least, that has been my experience. The only way I know to correct the pitting is to dither enough and tighten up the low rejection factor when integrating. The main reason we dither in the first place is to offset pattern so rejection algorithms can identify and eliminate outlier pixels. 

 

What I was doing the other night and demonstrating here is that there are clear limits on how far you can push sparse dithering (I'm coining a new term here :p), or dithering every N subs, even when you stack a couple hundred of them. The examples in the first post were dithered every 15-20 subs, which is clearly dithering too infrequently for it to be fully effective in eliminating any remnant pattern in calibrated subs.

 

To bigjy's point, I may try increasing my offset from 50. I am getting a very small amount of pixels that hit zero when imaging at gain 200, and a larger offset would help avoid that. 



#16 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 04 October 2016 - 12:07 PM

Jon,

Are you noticing that the FPN drifts, or is it repeatable over several weeks? Using at least 128 darks, biases and flats has eliminated the FPN (and pitting) for me. However, if I ever change gain, my bias frames will be different when I return to the same setting, so I will have to recalibrate the master dark and flat.

One point: for these CMOS sensors I do not see uniform pixel QE (some pixels will be lighter or darker than their neighbors in calibrated flat frames, roughly 85%-115%). Therefore calibrated flat frames are essential to correct for pixel-to-pixel QE variation.

Well, I'm not really sure how much drift there really is. There seem to be some truly fixed components to the FPN, and some semi-fixed components. When I was imaging with just gain 200 offset 50, I used a single 25-frame master dark for about two months. It seemed to work fine, although that was paired with aggressive dithering at a scale of 3. Keep in mind, I was purposely trying to find the limits for sparse dithering, so I kept pushing the number of frames between each dither larger and larger, and I kept reducing the aggression of my dithers in PHD, to see how sparse I could get. My original post is just an example to demonstrate that even if you do sparse dithering, you gotta make sure that you aren't being too sparse. Even if you stack hundreds of frames, you still need to dither often enough to make sure you're offsetting any patterns enough that rejection algorithms can indeed identify outlier pixel values and reject them. 

 

I have noticed that there are some smaller patterns that do seem to change when I change gain. I was stuck at gain 200 offset 50 for a while, but recently I've been experimenting with finding the optimal setting for best SNR. I've been imaging at 300, 200, 139 and 75, with various different offsets. Whenever I change the gain and offset, there do seem to be small aspects of the bias signal that change, mostly that faint glow-like gradient. The vertical banding seems to largely remain the same, though.

 

The differences in pixel Q.E. that you are seeing are called PRNU, or Pixel Response Non-Uniformity. There is absolutely PRNU with all cameras. I don't know if it is more significant with the ASI1600, or if it is simply the fact that with higher gain settings, our object signals can actually become a good deal brighter than with the average CCD. PRNU becomes a more significant problem as the signal gets brighter, and I know that on many objects I've imaged, I could see the object signal in my unstretched subs. That indicates a very bright signal, which would be more subject to PRNU, and flats are one way to correct that. I don't know how much structured pattern may be in PRNU...vs. how much it may just seem like random noise. In either case, dithering should help with the PRNU as well, if you aren't doing flat calibration. (At the moment, my field is clean enough, and has little enough vignetting, that I've been calibrating only with the master dark.) 

 

 

Also in my bias frames I think I have been using a much higher offset value than most to get the bias full bell curve above the floor.
Unity - 35
200 - 70
250 - 110

I am planning to increase my offset. I've been using an offset of 50 at gain 200, 21 at unity, and 12 at 75. I haven't changed my offset yet, as I am still acquiring data on objects. (I have a very limited window of opportunity, so I can't get much more than 3 hours on an object each night, and some of the objects I am imaging are clearly requiring more than even 6 hours, so I acquire a couple of hours of data on each object each night, over many, many nights...long process...) I actually just upped my offset to 15 at gain 75 with my most recent object (I re-imaged CTB-1 with 600s subs for comparison to my short-exposure version.) That seems to be sufficient to get full separation of the signal from the left wall, and I am not sure that any additional offset will offer any further benefits. At gain 139, for brighter objects the default offset of 21 does seem insufficient, so I'll probably increase it once I start a new object at that gain and see how it goes. I am planning on using an offset of 60-70 for gain 200. Hopefully that will limit the number of dark/black pixels that can't be corrected with darks, but I don't know if that will really do much for the bias or dark patterns. I still think the only way to truly eliminate all of that is to use dithering (even when you do use proper dark calibration.)



#17 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 04 October 2016 - 12:31 PM

Jon:
 
What rejection algorithm were you using?


I've been using Winsorized Sigma and Linear Fit. I like what LFit clipping does for my stars, but I'm not sure I fully understand how to optimize it, and if I tweak the sigma settings, sometimes I get funky results. I'm still learning LFit clipping. With Winsorized, I've fiddled a lot with the sigmas. I've used the defaults, 4/2 low/high, as well as 4/3, 4/4, 3/4, and 3/3.5. I bring up the low clipping to try and correct more of the dark pixels, as dark subtraction seems incapable of fixing cold or clipped dark pixels.
 

I've been giving some thought to how to develop a rule of thumb for sparse dithering (I like that term).  If we take the simple case of dithering every 20 frames and collecting 100 frames total, that means for each pixel stack, the outliers will appear 20 times, with the remaining 80 pixels comprising a roughly Poisson distribution about the mean. As a first approximation from the perspective of sigma clipping rejection, that is roughly the same as stacking 5 frames that were dithered every frame - i.e., one outlier for every four good pixels in the stack. PI recommends a minimum of 10 to 15 frames for sigma clipping to be most effective, so if we take 10 as the minimum acceptable ratio, then for 100 frames you would want to dither no more than every 10 frames. To get to the 15-frame ratio, you would need to dither every 7 frames or so. So, it seems that a starting rule of thumb would be to take the number of frames to be collected and divide by at least 10 to get the dither interval. So, if you want to collect 200 frames, you can dither every 20 frames to get the same level of sigma clipping rejection as if you had collected 10 frames.
 
But I don't think that tells the whole story. Windsorizing can make a big difference, I think, when using sparse dithering. Windsorizing rejects outliers in the pixel stack and replaces them with the nearest neighbor prior to calculating the mean and sigma. The effect is not particularly pronounced nor much different from regular sigma clipping under the typical situation when there is only one outlier pixel in the stack. In that case, the mean and Windsorized mean are not all that different because the contribution by the one outlier is not significant for relatively large stacks. But with sparse dithering, the outliers appear multiple times in the stack and can substantially affect the mean and therefore the calculation of sigma. The Windsorized mean will be much more representative of the actual (with no outliers) mean. So, I think Windsorized sigma clipping is a must for sparse dithering.


I see where you are going with this. A couple of things, then. From the documentation on Winsorized Sigma Clipping:

http://pixinsight.co..._sigma_clipping

It states the following in the last paragraph of that section:
 

Winsorized sigma clipping is an excellent pixel rejection algorithm for relatively large sets of 15 or more images. For more than 20 images, this algorithm yields significantly better results than sigma clipping consistently in all of our tests.

 

So, perhaps, instead of using 10 as a baseline, we should use 20. In which case, if you stack 100 frames, you would want to dither every 5. If you stack 200 frames, you would want to dither every 10, etc. That would jibe with my actual experience...I think dithering every 20 frames for around 200 frames stacked is too sparse... 

 

Also, even though as a first approximation 200 frames dithered every 20 frames will appear to sigma clipping to be the same as 10 frames dithered every frame, I think that is a very rough "first approximation". The mean will still be calculated using 200 data points, not 10, so the uncertainty in the mean drops considerably, and even more so with Windsorization. With a less uncertain mean, the rejection thresholds can be tightened considerably without the risk of throwing out good data. I haven't thought of how the math would work out yet, but my sense is that in a stack of 200 frames dithered every 20 frames, sigma clipping (with Windsorization) and tightened sigma low and sigma high values will give much better rejection than would be apparent in a stack of only 10 images.
 
So the bottom line of my thought experiment is that a very general rule of thumb would be to select the number of images to be skipped to be no more than 10% of the number of subs to be collected. I would also add that sparse dithering should not be used at all for small numbers of subs (say < 20 or so) and that for larger numbers of subs (say > 50 or so) Windsorized sigma clipping should be used and the sigma low and high numbers should be tightened, or at least experimented with.
 
Note that these meanderings don't consider the added SNR from the extra subs you can capture thanks to lower dithering overhead. Even if the rule of thumb is dropped to 5% of the number of subs, the savings in dithering overhead can be substantial. For 200 subs, that would mean dithering every 10 subs, resulting in a 90% reduction in dithering overhead.
 
Thoughts?
 
Tim

At the moment, I am leaning towards ~5%. A 90% reduction in dithering overhead is still wonderful! :p The one thing I don't want to do is recommend that users of this camera (or similar cameras) dither too sparsely, such that they end up with the kind of results I did in my original post. To that end, it might be wise to keep it even simpler...dither every sub up to 50 subs, dither every 5 subs up to 100, dither every 10 subs up to 500, dither every 20 subs beyond that? At least then, no one has to calculate anything; they can just use a rule of thumb, and the rules would be "safe" in ensuring that they dither enough to get good results.

 

The pseudo-code for the Winsorized Sigma clipping algorithm is at the link I shared above, though. Maybe we can glean more from that and figure out a more optimal solution? Based on the code, it does appear that the median, as well as m0 (= m - 1.5*sigma) and m1 (= m + 1.5*sigma), are calculated before Winsorization (they would have to be, really). Winsorization is simply the clamping function: for a pixel value ti less than m0, ti becomes m0; for a pixel value ti greater than m1, ti becomes m1; otherwise ti remains ti. After Winsorization, the median, m, is calculated again from the Winsorized set of pixel values, t (this is a set of pixel values derived from x, the original set). Then sigma clipping is performed against this Winsorized median, rather than the original median. 
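
For anyone who finds the pseudo-code easier to read as actual code, here is a minimal Python sketch of just that clamping step (the names are mine; the 1.5 factor is the hardwired value from the pseudo-code, and the caller is assumed to recompute the median from the returned set, as described above):

    def winsorize_pass(t, m, sigma):
        """One Winsorization pass: clamp every pixel value in t to
        [m0, m1] = [m - 1.5*sigma, m + 1.5*sigma]."""
        m0, m1 = m - 1.5 * sigma, m + 1.5 * sigma
        return [m0 if ti < m0 else m1 if ti > m1 else ti for ti in t]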


Edited by Jon Rista, 04 October 2016 - 12:32 PM.


#18 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 04 October 2016 - 12:48 PM

"And the results have shown some small amounts of correlated noise as well as some "pitting" (black or very low signal pixels) which is probably due to too many dead pixels stacking in the same place"

 

Jon, can you expand a bit on pitting, or point me in a direction where I can find out more?  I've seen a lot of it in some of my captures, but don't know how to eliminate them.

 

Thanks,

 

Richard

Well, think about what dark calibration is: the subtraction of a master dark frame from a light frame. If you have a cold pixel...then it's cold in both the dark and the light. Subtract 0 from 0, you get 0. Subtract 1 from 1, you get 0. Subtract 3 from 3, you get 0. Even if there is a little bit of noise in that cold pixel, subtract 2 from 3 and you get 1; subtract 3 from 2 and you still get 0 (negative results clip to zero). So, dark calibration...isn't really sufficient to correct cold, dead, or very dark pixels.
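
To make that concrete, here is a toy example with made-up ADU values (not from any real frame):

    # A cold pixel is low in both the light and the dark, so subtracting the
    # master dark leaves it low; clipping negatives to zero keeps it a "pit".
    light = [1042, 996, 3, 1010]   # hypothetical values; the third pixel is cold
    dark  = [  22,  18, 2,   20]
    calibrated = [max(l - d, 0) for l, d in zip(light, dark)]
    print(calibrated)              # [1020, 978, 1, 990] -- the cold pixel remains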

 

The only thing that could really correct such pixels is sufficient dithering. In the case where you might have two cold pixels next to each other, if your dithering is sparse, and not large enough in terms of pixel offset, you could end up moving the cold pixels by only 1 pixel from one batch of frames to the next. It would then still be possible for two dark pixels to correlate across a set of say 40 frames. Furthermore, if you continue to dither insufficiently, you could end up correlating the same cold pixels across more sets of sparsely dithered subs. At which point the cold pixel could actually dominate the majority of the pixel stack, in which case it wouldn't be identified as an outlier...it would be close to the median instead. When I dropped down to aggression 4 scale 2 with my dithering, I was only getting about 1-2 pixel offsets each dither, and for a few dithers, it seemed to keep moving in the same direction. So, the dithering, considering it was very sparse and only done every 20 frames, was insufficient to ensure that those cold pixels were randomized enough over a large enough area of pixels to ensure proper rejection. 

 

The other factor here is that low sigma is usually a lot larger than high sigma for the rejection algorithms in PI. By default, Winsorized is 4 sigmas, Linear Fit is 5 sigmas. That is actually pretty significant: four sigmas, at least with a normal distribution, means that you are keeping over 99.9% of your pixels in the stack! So it doesn't actually take much to KEEP a cold pixel around when doing sigma rejection. Not with the default settings in PI, anyway. If you are having problems with cold pixels sticking around, then on top of more aggressive dithering, you might also want to tighten up your low sigmas. Even a low sigma of 3 standard deviations with a normal distribution is going to keep 99.7% of the pixels, and 2 standard deviations would keep over 95% of the pixels. I do not know exactly what the average pixel distribution in a stack looks like...it probably isn't quite a normal (Gaussian) distribution when you have fewer frames in the stack, but once you get up to hundreds or so, the pixel distributions in each stack might be more normal. Anyway, tightening up the low sigma will help you reject more cold pixels, assuming you have dithered effectively enough to randomize them. Proper dithering is still important, though: if your cold pixels correlate too much through the stack, they could easily govern the mean, in which case they would never be rejected (probably not even with a low sigma of 1). 
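
Those retention percentages are easy to sanity-check with a quick sketch (assuming a normal distribution, which as noted is only an approximation for real pixel stacks):

    import math

    # Fraction of a normal distribution falling within +/- k sigma of the mean.
    for k in (2, 3, 4):
        kept = math.erf(k / math.sqrt(2))
        print(f"+/-{k} sigma keeps {kept:.4%} of the distribution")
    # ~95.45%, ~99.73%, and ~99.99% -- in line with the figures quoted above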



#19 spokeshave

spokeshave

    Viking 1

  • -----
  • Posts: 829
  • Joined: 08 Apr 2015

Posted 04 October 2016 - 12:50 PM

Jon:

 

I'm inclined to agree that 5% is probably a good general rule of thumb. I offered 10% as the absolute maximum sparseness that could be used and still get somewhat effective sigma clipping. I don't think we need to complicate things further by adding tiers. Just pick "X" for dithering every "X" frames to be 5% of the number of subs you are going to collect. If you are collecting 20 subs, you should dither every frame. If you are collecting 50 subs, dither every 2 frames (OK the rule should say "round down"), etc.
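
For anyone who just wants the rule written down, here is a one-line sketch of it (the function name and the 5% default are simply my encoding of the suggestion above, with 10% as the stated absolute ceiling):

    def dither_interval(total_subs, fraction=0.05):
        """Dither every X frames, where X is ~5% of the planned sub count,
        rounded down and never less than every frame."""
        return max(1, int(total_subs * fraction))

    for subs in (20, 50, 100, 200, 500):
        print(f"{subs} subs -> dither every {dither_interval(subs)} frame(s)")
    # 20 -> 1, 50 -> 2, 100 -> 5, 200 -> 10, 500 -> 25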

 

I have some more thoughts on how to make Windsorized sigma clipping work better but don't have time to get into it right now. Hopefully, I can pull some thoughts together this evening.

 

Tim



#20 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 04 October 2016 - 01:03 PM

Jon:

 

I'm inclined to agree that 5% is probably a good general rule of thumb. I offered 10% as the absolute maximum sparseness that could be used and still get somewhat effective sigma clipping. I don't think we need to complicate things further by adding tiers. Just pick "X" for dithering every "X" frames to be 5% of the number of subs you are going to collect. If you are collecting 20 subs, you should dither every frame. If you are collecting 50 subs, dither every 2 frames (OK the rule should say "round down"), etc.

 

I have some more thoughts on how to make Windsorized sigma clipping work better but don't have time to get into it right now. Hopefully, I can pull some thoughts together this evening.

 

Tim

Sounds good. Simpler is better. ;)



#21 spokeshave

spokeshave

    Viking 1

  • -----
  • Posts: 829
  • Joined: 08 Apr 2015

Posted 06 October 2016 - 02:27 PM

I have some more thoughts on how to make Windsorized sigma clipping work better but don't have time to get into it right now. Hopefully, I can pull some thoughts together this evening.

 

Tim

OK. I finally have a chance to get back to this. To put things bluntly, I think we can do pixel rejection better. I'll get to why in a minute. First a little background. Part of what I do for a living is radiation detection and counting. There is a whole lot of crossover between that and astrophotography. Both deal with Poisson noise. Both are trying to separate a desired signal from that noise. Both deal with confounding signals from other sources like system electronics, etc. So, I may have some insight from a perspective not frequently seen.

 

Let's get back to sigma clipping rejection. Whether Winsorized (my apologies to Mr. Winsor for misspelling his name earlier) or not, the idea is to establish a central value that gets as close as possible to the "true" mean of the actual sky brightness for that pixel, establish some clipping criteria high and low and then reject pixels outside of those criteria when averaging the stack. The clipping criteria are defined as a multiple of the standard deviation (sigma) of the stack. Both sigma clipping and Winsorized sigma clipping are iterative processes that are intended to converge on the "true" signal distribution and identify and reject outliers.

 

With sigma clipping (note that this is for PI's algorithms only) we start out with the full pixel stack, complete with outliers high and low that are not representative of the Poisson distribution of the signal. The routine selects the median (not the mean) of the raw stack, and calculates the standard deviation of the raw stack. Right away, there is a problem that can best be illustrated by an example. Let's pick an arbitrary dataset:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000, 1500, 10000

 

It should be clear that the actual target distribution is centered around 500, with some clear outliers on each end. We'll run through a couple of iterations of sigma clipping on this dataset. To start, we need the median and standard deviation. The median is just the central value in the ordered set. Conveniently, in this dataset the median is 500. The standard deviation is a simple calculation and comes out to be 2156. Right away, we can see that there is a problem. The actual target distribution clearly starts at about 494 and ends at about 506. If we do a quick standard deviation for what is clearly the target distribution, we get about 4. That's a whole lot less than 2156. Clearly, the first shot at the standard deviation for sigma clipping is a terrible estimate. But let's keep going. The routine will take the median and standard deviation it just calculated (500 and 2156), use the sigma high and low rejection numbers the user enters, and go through each pixel in the stack and test it. We'll use 3 and 3 for the sigma high and low for this example and just test the pixels on each end, starting with the first pixel, value 19. The test is (median - pixelvalue)/sigma for low rejection and (pixelvalue - median)/sigma for high rejection. So for the first pixel, with a value of 19, the calculation is (500 - 19)/2156 = 0.22. If this number is greater than 3 (sigma low), it gets rejected. Pixel one does not get rejected. Let's move on to pixel #20, with a value of 10,000. The calculation is (10,000 - 500)/2156 = 4.4. This is greater than our sigma high value of 3, so it gets rejected, as it clearly should be. I've already done the math and no other pixels get rejected. So the new stack is:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000, 1500

 

The routine takes this stack and starts a new iteration. Now, the median stays the same at 500, but the new standard deviation is 346 - clearly an improvement, but still nowhere near close to 4. If we run the rejection tests again, the only pixel that gets rejected is pixel #19 with a value of 1500. The new stack is:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000

 

Here again, the median and sigma are calculated: median = 499, sigma = 249. Still far from the 4 or so it should be but still better than previous iterations. But, here things get interesting. When the sigma high and sigma low tests are applied, no pixels are rejected. So, if we're not paying very close attention, the routine will finish here. It will have rejected the two brightest pixels, so may very well have eliminated the satellite trail or hot pixel in the stack, but the 18-pixel stack still contains 5 pixels that clearly should not be included. As a result, the stack that ends up in the final image has a sigma of 249, when it should have a sigma of around 4. That's terrible. The result will be what looks like shot noise, but is in fact poor pixel rejection. Of course, we can tweak the sigma high and low until we get a picture that looks better, but that is very subjective.
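
Here is a small Python sketch of the iterative procedure just described (it follows the description above, not PI's actual source, and uses the sample standard deviation):

    import statistics

    def sigma_clip(stack, k_low=3.0, k_high=3.0):
        """Iteratively reject pixels more than k_low/k_high sigmas below/above
        the median of the surviving stack, until nothing more is rejected."""
        stack = sorted(stack)
        while True:
            m = statistics.median(stack)
            s = statistics.stdev(stack)
            kept = [p for p in stack
                    if (m - p) / s <= k_low and (p - m) / s <= k_high]
            if len(kept) == len(stack):
                return kept
            stack = kept

    data = [19, 20, 21, 22, 494, 495, 496, 497, 498, 500,
            500, 501, 501, 501, 504, 505, 506, 1000, 1500, 10000]
    print(sigma_clip(data))
    # With 3/3 sigmas the obvious 10,000 is rejected, but the low outliers (and
    # the 1000) survive -- which is the point being made here. The 1500 sits
    # right at the ~3-sigma borderline on the second pass, so whether it goes
    # depends on exactly how sigma is estimated.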

 

Let's move on to how Winsorized sigma clipping works. Let's start with the same dataset:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000, 1500, 10000

 

Like before, a median and standard deviation are calculated: median = 500, sigma = 2156. Here, however, two different threshold values are calculated. The low threshold is:

 

Tlow = median - 1.5* sigma = -2734

 

And the high threshold is:

 

Thigh = median + 1.5*sigma = 3734

 

The 1.5 is a somewhat arbitrary factor hardwired into the PI routine. The routine uses these threshold values to test each pixel. If it is lower than the low threshold or higher than the high threshold, it gets replaced by the nearest in-range value. So, after the first iteration, pixel #20 with a value of 10,000 gets replaced with pixel #19's value of 1500. So the new stack is:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000, 1500, 1500

 

Then the next iteration begins with a median of 500 and  a sigma of 407. Here, PI does another operation by multiplying the sigma by 1.134 - another hardwired number. The resulting sigma is 462, and Tlow and Thigh are -193 and 1193. The next stack will look like:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000, 1000, 1000

 

The next iteration results in no change. It is important to note here that we're not choosing the final stack with this process so far; we're just Winsorizing the set to calculate a better sigma. So, after Winsorizing the dataset, we get a median of 500 and a new sigma of 296. Note that the Winsorized sigma is actually worse than the sigma that the sigma clipping process settled on. This is an indication that 20 may be too few subs for Winsorizing to have an advantage; clearly it does not in this case. At any rate, we now apply the sigma high and sigma low thresholds defined by the user (we used 3 and 3 above) to the original dataset. Original:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000, 1500, 10000

 

And the reduced dataset after sigma clipping:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000

 

Note that the final stack is identical to the sigma clipping stack. So, for this dataset, there is no difference whatsoever between sigma clipping and Winsorized sigma clipping. Furthermore, both algorithms left a large number of outliers in the stack and ended up with a stack having a very high sigma - nearly two orders of magnitude higher than it should be.
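
A matching sketch of the Winsorizing variant as walked through above (again an illustration of the description, not PI's code; the 1.5 and 1.134 factors are the hardwired numbers mentioned earlier, the correction is applied inside the Winsorizing loop, and out-of-range values are replaced with the nearest surviving value as in the example):

    import statistics

    def winsorized_sigma_clip(stack, k_low=3.0, k_high=3.0):
        """Winsorize a working copy of the stack to estimate sigma, then use
        that sigma (and the median) to clip the original stack."""
        work = sorted(stack)
        while True:
            m = statistics.median(work)
            s = 1.134 * statistics.stdev(work)
            lo, hi = m - 1.5 * s, m + 1.5 * s
            in_range = [v for v in work if lo <= v <= hi] or work
            lo_sub, hi_sub = min(in_range), max(in_range)
            new = [lo_sub if v < lo else hi_sub if v > hi else v for v in work]
            if new == work:        # Winsorization has converged
                break
            work = new
        m, s = statistics.median(work), statistics.stdev(work)
        return [p for p in stack
                if (m - p) / s <= k_low and (p - m) / s <= k_high]

    data = [19, 20, 21, 22, 494, 495, 496, 497, 498, 500,
            500, 501, 501, 501, 504, 505, 506, 1000, 1500, 10000]
    print(winsorized_sigma_clip(data))
    # The Winsorized sigma comes out near 296, so the 10,000 and the 1500 are
    # rejected, but the 1000 and the low outliers remain -- the same final
    # stack as in the walkthrough.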

 

So, what can we do? Of course, we can tweak settings like I mentioned above. Tighter sigma values would certainly help, but that is a bit arbitrary. So, how does this relate to radiation detection? Well, we have a similar problem in that field. We need to separate background radiation from source radiation. The analogy to pixel rejection is that the source radiation is the outlier, and background radiation is the Poisson distribution we are trying to separate it from. Rather than applying sigma clipping rejection to identify the outliers, we use confidence intervals. The background distribution is a true Poisson distribution and we can establish confidence intervals that give us the probability of a data point being within the distribution or not. We also take advantage of the convenient fact that the standard deviation of a Poisson distribution is simply the square root of the mean. There is a relatively famous equation by Currie and Brodsky that allows us to establish whatever confidence intervals we want and all we need to know is the mean of the background and the count time - which can be represented by number of subs if the subs are the same length.

 

In terms of stacking subs, we can't count the background with full confidence of excluding the source like we can in radiation detection. But just as PI does, we can fairly accurately guess at the mean by selecting the median. The more subs we get, the closer the median will be to the mean, and the median is largely immune to outliers if the number of outliers is small compared to the number of data points in the target distribution. So, I'll use the median as a surrogate for the mean. The only other thing we need is the z-score for the confidence interval we desire. I think I would like to have 95% confidence that the pixels I keep are within the target distribution. The z-score for a two-tailed 95% confidence interval is 1.96. Adapting the Currie equation to our needs, I get:

 

Delta = 1.96*sqrt(2*N*median)/N

 

As you can see, delta depends only on the median and the number of subs (N). Pixel rejection would simply be those pixels that lie outside of median +/- delta. Let's see how that works for our dataset:

 

median = 500

N = 20

delta = 14

 

So our rejection thresholds are 486 low and 514 high. Let's see what that does to our dataset. Here is the original:

 

19, 20, 21, 22, 494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506, 1000, 1500, 10000

 

And here is the "confidence clipped" (I think I just coined a phrase too) dataset:

 

494, 495, 496, 497, 498, 500, 500, 501, 501, 501, 504, 505, 506

 

As you can see, this simple, non-iterative approach very effectively eliminated all outliers and retained the target distribution perfectly. We will only be stacking target-distribution pixels. And look at the sigma of our new stack: it's 3.85. Compare that to the sigma for sigma clipping (249) and Winsorized sigma clipping (296).
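
Here is the same idea as a short sketch (my own encoding of the equation above; it assumes the stack values are in units where the Poisson approximation is reasonable, and uses the median as the stand-in for the mean, as described):

    import math
    import statistics

    def confidence_clip(stack, z=1.96):
        """Keep only pixels within median +/- z*sqrt(2*N*median)/N.
        z = 1.96 corresponds to a two-tailed 95% confidence interval."""
        n = len(stack)
        m = statistics.median(stack)
        delta = z * math.sqrt(2 * n * m) / n
        return [p for p in stack if m - delta <= p <= m + delta]

    data = [19, 20, 21, 22, 494, 495, 496, 497, 498, 500,
            500, 501, 501, 501, 504, 505, 506, 1000, 1500, 10000]
    print(confidence_clip(data))
    # delta comes out around 14, so only the 494-506 values survive, as above.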

 

As stacks get larger, the statistics only improve as you can see by the relationship that N has to the equation.

 

I'm going to go ahead and be a bit more blunt. I think we're doing pixel rejection wrong. I think this is the way to do it.

 

Thoughts?

 

Tim


Edited by spokeshave, 06 October 2016 - 02:36 PM.


#22 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 06 October 2016 - 03:19 PM

Tim,

 

Only thought at the moment is...you operated on integer values for the pixels. I am wondering if PI translates all pixel values into their floating-point space first (something that seems to be a black box), and if it does...how does that affect the calculations? PI also employs normalization...and I wonder how that might affect results as well. 

 

I also think this should go onto the PI forums. If there is a simple, fast way to clip clear outliers like you have demonstrated, I would love to see that in a future version of PI. 


Edited by Jon Rista, 06 October 2016 - 03:29 PM.


#23 spokeshave

spokeshave

    Viking 1

  • -----
  • Posts: 829
  • Joined: 08 Apr 2015

Posted 06 October 2016 - 04:06 PM

Floating point should make no difference. I just picked integer values for simplicity of demonstration. Normalization not only won't hurt, it is an absolute necessity for this to work properly. I was planning on preparing something a bit more formal for presenting to the PI developers.

 

Tim



#24 Jon Rista

Jon Rista

    Hubble

  • *****
  • topic starter
  • Posts: 15752
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 06 October 2016 - 04:26 PM

Well, all I can say is, I can't wait for the "Confidence Clipping" option to appear in PI! :p

 

This may also explain my difficulty with pitting. When I have clear "low" outliers, I have really had a tough time rejecting them. I think I understand why now.


Edited by Jon Rista, 06 October 2016 - 04:48 PM.


#25 spokeshave

spokeshave

    Viking 1

  • -----
  • Posts: 829
  • Joined: 08 Apr 2015

Posted 06 October 2016 - 05:49 PM

Yeah, both sigma clipping routines do not handle low pixels very well since they tend to be a lot closer to the median than the hot ones. I was honestly surprised at how vague the sigma clipping routines can be and how well the "confidence clipping" seems to work. Of course, that's theory. I am curious to try it in practice. I wonder if anyone knows of any software that allows scripts like this to be developed. I don't have the coding skills to try to do it from scratch.

 

I submitted it to the PI forum. Maybe they'll put it in.

 

Fingers crossed.

 

Tim



