
CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Dark Subtraction and Spatial Filtering

17 replies to this topic

#1 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 15 January 2019 - 06:23 PM

I have been working through some of the implications of digital spatial filtering and have come to a rather extraordinary conclusion.  For some cameras, master dark subtraction is rendered totally ineffective by the spatial filtering.  For these cameras, don't bother taking darks!

 

For a long time there has been anecdotal evidence that for some cameras calibration with darks gives no improvement to images.  I have always found this quite puzzling but I now think I know the reason why.

 

I'll take the Nikon D5500 as an example of this.  You may be familiar with the charts where I plot pixel values against the maximum of neighbouring pixels.  Here is the chart for the Nikon D5500:

 

NikonD5500.png

 

To explain what this chart means, some understanding of the Bayer matrix is required.  A red pixel has 8 neighbours of the same colour in a 5x5 block of pixels centred on that pixel.  The same is true for a blue pixel - it has 8 neighbours of the same colour.  A green pixel has 12 neighbours of the same colour in a 5x5 block.  So in the plot above I have plotted every pixel value against the maximum of its neighbours of the same colour.
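For anyone wanting to reproduce this kind of chart, the same-colour neighbour maximum can be computed with NumPy roughly as follows (a minimal sketch: an RGGB mosaic is assumed, and edges wrap around via np.roll, which is a simplification):

```python
import numpy as np

# Same-colour neighbour offsets within a 5x5 block (RGGB Bayer mosaic):
# red/blue sites have 8 such neighbours, green sites have 12.
RB_OFFSETS = [(dr, dc) for dr in (-2, 0, 2) for dc in (-2, 0, 2)
              if (dr, dc) != (0, 0)]
G_OFFSETS = [(dr, dc) for dr in range(-2, 3) for dc in range(-2, 3)
             if (dr + dc) % 2 == 0 and (dr, dc) != (0, 0)]

def same_colour_neighbour_max(raw, offsets):
    """Per-pixel maximum over the given same-colour neighbour offsets.
    Edges wrap around (np.roll) - good enough for illustration."""
    out = np.full_like(raw, raw.min())
    for dr, dc in offsets:
        out = np.maximum(out, np.roll(np.roll(raw, dr, axis=0), dc, axis=1))
    return out
```

Plotting each pixel value against this neighbour maximum (using the red/blue or green offset set as appropriate for the pixel's colour) reproduces the kind of scatter chart shown above.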

 

The pink line on the chart has slope=2 and it is very obvious that there is no pixel above that line.  In other words, allowing for the bias level of 500, no pixel has a value more than twice the maximum of its neighbours.  This is because of the "hot pixel suppression" spatial filtering algorithm that is applied to all exposures of 0.25s and longer.  Any pixel exceeding this threshold has its value capped.
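The capping rule inferred from the chart can be written as a one-liner (a sketch of the inferred behaviour, not Nikon's actual firmware; the bias and slope defaults are simply the values read off the chart, and `neighbour_max` would come from a same-colour neighbour scan like the one described above):

```python
import numpy as np

def cap_hot_pixels(raw, neighbour_max, bias=500, slope=2.0):
    """Inferred D5500 'hot pixel suppression': a pixel may not exceed
    slope x (neighbour maximum above bias), measured above the bias."""
    ceiling = bias + slope * (neighbour_max - bias)
    return np.minimum(raw, ceiling)
```

For example, with a neighbour maximum of 1500 and a bias of 500 the ceiling is 500 + 2x1000 = 2500, so an isolated hot pixel recorded at 3500 would be written out as 2500.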

 

Now think about the darks we use for calibration.  We average darks together to create a master dark.  Some pixels are consistently brighter than others because they have a higher dark current.  This is known as the thermal fixed pattern noise (FPN).  The idea of the master dark is to allow this FPN to be subtracted from the light frames to remove the effect of those brighter pixels.

 

Now consider what happens when the camera performs spatial filtering on raw image data. Those brighter pixels in each dark frame tend to be isolated, so the effect of the spatial filtering is to cap their values down to match their same colour neighbours. When we average these dark frames together, the master dark will also have FPN that has been severely capped in value.

 

Intuitively you might think it's not a problem because those same bright pixels in the light frames will also have their values capped by the spatial filtering, so the subtraction of the master dark will work quite happily.  Unfortunately that's not the case.  I'll explain why by means of an example.

 

Typically you might expose your light frames so the peak of the back-of-camera histogram is a quarter of the way from the left hand side.  Ignoring the bias level, that will give a pixel value of around 1000 (for a 14-bit camera like the D5500).  So the sky fog in this example has a level of 1000. There will be some photon shot noise associated with this and in addition some pixels will be brighter because of the FPN.  But will the spatial filtering cap these brighter pixels?  The answer is no.  We have already seen from the chart above that the D5500 spatial filtering does not cap a pixel value unless its value is more than twice the level of its neighbours. Since the neighbours have values of around 1000, a pixel would need to reach a value of 2000 before it is capped.  Only the very brightest of the hot pixels will reach such a level.  In other words, the vast majority of the hot and warm pixels visible in the fixed pattern noise will not be capped in the light frames.  But they are capped in the dark frames.
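The arithmetic of that example, spelled out (all ADU values hypothetical; the bias of 500 is included so the cap formula matches the chart):

```python
bias, sky, warm = 500, 1000, 400   # hypothetical ADU: bias, sky fog, extra dark current

# Dark frame: same-colour neighbours sit near the bias,
# so the cap is bias + 2*(bias - bias) = bias and the warm pixel is clipped away.
dark_cap = bias + 2 * (bias - bias)              # 500

# Light frame: neighbours sit at bias + sky, so the cap rises to bias + 2*sky
# and the warm pixel survives untouched.
light_cap = bias + 2 * ((bias + sky) - bias)     # 2500
warm_pixel_in_light = bias + sky + warm          # 1900, comfortably below the cap
```

So the same warm pixel is clipped in the dark frames but left alone in the light frames, which is exactly why the subtraction fails.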

 

A one line summary is the following:

Subtracting a master dark of capped FPN values from a light frame of uncapped FPN values will achieve nothing useful.

 

Note that the D5500 is just one example camera.  Unfortunately the spatial filtering algorithms vary from Nikon camera to Nikon camera and they differ from the Sony spatial filtering.  Each algorithm needs to be analysed individually to determine whether or not dark subtraction will be a waste of effort.  Having said that, even if it is a waste of time for removing FPN, dark subtraction is still useful for removing any amp-glow that a camera might have.

 

Mark


Edited by sharkmelley, 15 January 2019 - 06:55 PM.

  • bobzeq25 and fmeschia like this

#2 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 16056
  • Joined: 27 Oct 2014

Posted 15 January 2019 - 07:19 PM

Good analysis.  A few more numbers for you.

 

The bias level (14 bit) on my D5500 is 150 ADU.  The read noise is about 3 electrons, so 10 x RN^2 is about 90 electrons.  I generally image at ISO 200, close to unity gain.  So I'd need about 90 ADU over bias, or about 240 ADU (14 bit).  That would be about 1000, expressed as a 16-bit number.  Is that what you meant?

 

Minor point.  Never have seen a trace of amp glow, but my longest exposures are maybe 240" (light pollution).



#3 whwang

whwang

    Mercury-Atlas

  • *****
  • Posts: 2735
  • Joined: 20 Mar 2013

Posted 15 January 2019 - 09:11 PM

Hi Mark,

 

Great analyses.

 

Suppose the rule is that a pixel cannot be brighter than 2x its brightest neighbor.  I wonder if the pixel brightness is evaluated over the raw values of the pixels?  Or it is evaluated after subtracting the mean or median of the frame.  If it is the latter, then it may be still possible for the dark frames and light frames to roughly (not exactly) contain the same fixed pattern formed by hot pixels.

 

The above is for hot pixels.  Low-level fixed pattern in the dark should be much less affected by such filtering no matter in dark frames or light frames.  Do you agree?

 

Cheers,

Wei-Hao



#4 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 16 January 2019 - 01:21 AM

Good analysis.  A few more numbers for you.

 

The bias level (14 bit) on my D5500 is 150 ADU.  The read noise is about 3 electrons, so 10 x RN^2 is about 90 electrons.  I generally image at ISO 200, close to unity gain.  So I'd need about 90 ADU over bias, or about 240 ADU (14 bit).  That would be about 1000, expressed as a 16-bit number.  Is that what you meant?

 

Sorry, I don't follow your numbers.  My chart shows the bias level to be 500 (at ISO 1600 though admittedly it might be different at ISO 200).  I don't know where 10xRN^2 comes into it.  The key point I was making is what is the average level of the sky glow above the bias. It is this sky glow that "protects" the thermal FPN from being spatially filtered.

 

Mark


Edited by sharkmelley, 16 January 2019 - 01:32 AM.


#5 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 16 January 2019 - 01:30 AM

Hi Mark,

 

Great analyses.

 

Suppose the rule is that a pixel cannot be brighter than 2x its brightest neighbor.  I wonder if the pixel brightness is evaluated over the raw values of the pixels?  Or it is evaluated after subtracting the mean or median of the frame.  If it is the latter, then it may be still possible for the dark frames and light frames to roughly (not exactly) contain the same fixed pattern formed by hot pixels.

 

The above is for hot pixels.  Low-level fixed pattern in the dark should be much less affected by such filtering no matter in dark frames or light frames.  Do you agree?

 

All the evidence I've seen points to the raw pixel values being used directly, without mean or median subtraction.

 

I agree that low level pattern in the darks will be less affected by the spatial filtering in the darks.

 

My explanation might possibly go some way to explaining the results of your excellent study here:

How good do your DLSR darks need to be?

But I haven't seen any D800 long exposure darks so I don't know which variety of spatial filtering is being used on that particular camera.

 

Mark



#6 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 16056
  • Joined: 27 Oct 2014

Posted 16 January 2019 - 01:49 AM

Sorry, I don't follow your numbers.  My chart shows the bias level to be 500 (at ISO 1600 though admittedly it might be different at ISO 200).  I don't know where 10xRN^2 comes into it.  The key point I was making is what is the average level of the sky glow above the bias. It is this sky glow that "protects" the thermal FPN from being spatially filtered.

 

Mark

The bias level is 600 in 16 bits, ISO independent.  Just like a Canon - they probably copied it when they moved the black point up from zero.

 

10 X read noise squared is a target for the skyfog peak I (and others) use to determine the subexposure.  It makes the read noise insignificant compared to the sky noise.

 

Bottom line is that I (and others) usually set the subexposure so that the sky glow above the bias is about 100 ADU in 14 bit, or 400 ADU in 16 bit.   Will that protect the thermal FPN from being spatially filtered?
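As a quick check of that arithmetic (the read noise, gain, and bias values are the poster's hypothetical D5500 numbers, not measurements):

```python
read_noise_e = 3.0        # electrons, quoted read noise
gain_e_per_adu = 1.0      # roughly unity gain at ISO 200
bias_adu_14 = 150         # bias as a 14-bit number

target_sky_e = 10 * read_noise_e ** 2            # 90 e-: swamps the read noise
target_sky_adu = target_sky_e / gain_e_per_adu   # 90 ADU above the bias
peak_adu_14 = bias_adu_14 + target_sky_adu       # 240 ADU in 14-bit terms
peak_adu_16 = peak_adu_14 * 4                    # roughly 1000 as a 16-bit number
```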


Edited by bobzeq25, 16 January 2019 - 01:51 AM.


#7 whwang

whwang

    Mercury-Atlas

  • *****
  • Posts: 2735
  • Joined: 20 Mar 2013

Posted 16 January 2019 - 02:14 AM

My explanation might possibly go some way to explaining the results of your excellent study here:

How good do your DLSR darks need to be?

But I haven't seen any D800 long exposure darks so I don't know which variety of spatial filtering is being used on that particular camera.

 

I believe my tests were done with the firmware hack enabled.  So there shouldn't be any filtering.

 

I can provide my D800 darks to you to verify this.



#8 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 16 January 2019 - 03:32 PM

The bias level is 600 in 16 bits, ISO independent.  Just like a Canon - they probably copied it when they moved the black point up from zero.

 

10 X read noise squared is a target for the skyfog peak I (and others) use to determine the subexposure.  It makes the read noise insignificant compared to the sky noise.

 

Bottom line is that I (and others) usually set the subexposure so that the sky glow above the bias is about 100 ADU in 14 bit, or 400 ADU in 16 bit.   Will that protect the thermal FPN from being spatially filtered?

Sorry - my mistake.  Yes the bias level of the D5500 is 600 not 500 - I checked it in RawDigger.  But it's a 14 bit camera so I'm not exactly sure what you mean by "the bias level is 600 in 16 bits".

 

In any case, if you expose so that your sky glow reaches a level of 100 above the bias then the FPN values can also reach values up to 100 without being affected by spatial filtering.  The higher the sky glow, the higher the FPN values that are "protected" by the sky glow.

 

 

I believe my tests were done with the firmware hack enabled.  So there shouldn't be any filtering.

 

I can provide my D800 darks to you to verify this.

Yes please!

 

Mark



#9 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 16056
  • Joined: 27 Oct 2014

Posted 16 January 2019 - 03:52 PM

Sorry - my mistake.  Yes the bias level of the D5500 is 600 not 500 - I checked it in RawDigger.  But it's a 14 bit camera so I'm not exactly sure what you mean by "the bias level is 600 in 16 bits".

 

In any case, if you expose so that your sky glow reaches a level of 100 above the bias then the FPN values can also reach values up to 100 without being affected by spatial filtering.  The higher the sky glow, the higher the FPN values that are "protected" by the sky glow.

 

 

Yes please!

 

Mark

What I mean is if you look at the bias as a 14 bit number (which is the native representation), it's 150 ADU.  The conversion to 16 bits in some processing programs simply adds two trailing zeros, which makes it 600 ADU.  So the editing programs can report either value, depending.  In the case of PixInsight, you tell PI which scale to use, so it can report either.

 

In 14 bits the skyglow is just about 100 ADU above the bias in my subexposures.


Edited by bobzeq25, 16 January 2019 - 03:54 PM.


#10 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 16 January 2019 - 03:59 PM

What I mean is if you look at the bias as a 14 bit number (which is the native representation), it's 150 ADU.  The conversion to 16 bits in some processing programs simply adds two trailing zeros, which makes it 600 ADU.

That's incorrect. 

 

RawDigger confirms that the bias level is 600.  If you open a file that contains both black shadows and extreme highlights you will find the values range from around 470 to 16350 (double-checked in RawDigger), i.e. they are 14-bit numbers.  If you then scale them up to 16 bits (i.e. multiply by 4) they will be in the range from around 1900 to 65000.
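The scaling in question, spelled out (endpoints are the values observed in RawDigger):

```python
lo14, hi14 = 470, 16350            # observed 14-bit raw range on the D5500
lo16, hi16 = lo14 * 4, hi14 * 4    # scaled to 16 bits: 1880 and 65400
```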

 

I think I know what's happening - you are misinterpreting values from PixInsight.  To see the true raw camera values in PixInsight you need to set the PixInsight data range to 16 bits.  You can easily confirm this by opening your data in RawDigger or in the freeware IRIS program.  Both use different raw converters and both give the same result.

 

Mark


Edited by sharkmelley, 16 January 2019 - 04:32 PM.


#11 whwang

whwang

    Mercury-Atlas

  • *****
  • Posts: 2735
  • Joined: 20 Mar 2013

Posted 16 January 2019 - 09:10 PM

Hi Mark,

 

Here are tons of D800 darks with the firmware hack:

   https://drive.google...PElaB4Koc2jNDtm

Just pick those with the right ISO, temperature, and exposure time combination for your tests.

 

I do not have many unhacked darks.  This is a 3-minute dark at ISO800 and 5 degC ambient.

https://drive.google...P8i63npk8ZZF81e

This is a 10-minute dark at ISO200 and 10 degC ambient.

https://drive.google...IQ1UC2ONQGLebcD

 

Cheers,

Wei-Hao



#12 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 17 January 2019 - 02:46 AM

There is one important point I missed in the original post. 

 

Here is a crop from a typical raw master dark from a Nikon D5300:

 

NikonD5300Dark.png

 

The most obvious feature of this master dark is bright pixels occurring in pairs (sometimes triples).  These pixels survive the spatial filtering because they are "neighbours" and give one another "mutual protection".

 

The same thing will happen in the light frames - pairs of bright pixels will survive in the same locations.  Dark calibration will remove these bright pixel pairs.  For this reason, my earlier conclusion about dark subtraction being ineffective is not the whole story.  There is still benefit to be obtained from dark calibration, so it's not a complete waste of effort.

 

Of course the other effective way to remove them from the final image is to use dithered acquisition followed by sigma rejection during stacking of the star-aligned frames.
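A minimal sketch of the dither-plus-sigma-rejection idea (kappa-sigma clipping around the per-pixel median; this is an illustration, not any particular stacking program's exact algorithm):

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0, iters=3):
    """Mean-stack aligned frames, iteratively rejecting per-pixel outliers.
    With dithered acquisition a hot pixel lands on a different sky position
    in each star-aligned frame, so it gets rejected here as an outlier."""
    data = np.stack(frames).astype(float)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        vals = np.where(keep, data, np.nan)
        centre = np.nanmedian(vals, axis=0)
        spread = np.nanstd(vals, axis=0)
        keep = np.abs(data - centre) <= kappa * spread
    return np.nanmean(np.where(keep, data, np.nan), axis=0)
```

Feed it a list of star-aligned frames and a hot pixel that appears at a different aligned position in each sub is clipped out of the final stack.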

 

Mark


Edited by sharkmelley, 17 January 2019 - 05:40 AM.

  • xiga likes this

#13 xiga

xiga

    Viking 1

  • *****
  • Posts: 509
  • Joined: 08 Oct 2012
  • Loc: Northern Ireland

Posted 17 January 2019 - 08:34 AM

There is one important point I missed in the original post. 

 

Here is a crop from a typical raw master dark from a Nikon D5300:

 

NikonD5300Dark.png

 

The most obvious feature of this master dark is bright pixels occurring in pairs (sometimes triples).  These pixels survive the spatial filtering because they are "neighbours" and give one another "mutual protection".

 

The same thing will happen in the light frames - pairs of bright pixels will survive in the same locations.  Dark calibration will remove these bright pixel pairs.  For this reason, my earlier conclusion about dark subtraction being ineffective is not the whole story.  There is still benefit to be obtained from dark calibration, so it's not a complete waste of effort.

 

Of course the other effective way to remove them from the final image is to use dithered acquisition followed by sigma rejection during stacking of the star-aligned frames.

 

Mark

Interesting discussion this. And great analysis Mark!

 

I use a D5300 myself. I always dither and stack using sigma rejection. Consequently, when I tested using Darks I saw no discernible difference, so I don't use Darks in my pre-processing routine.

 

I was thinking about Darks though, and had a (probably crazy) idea. Just wanted to throw it out there as you guys know a heck of a lot more about all this stuff than I do. Let's use the D5300 as the example case.

 

So we know that traditional Dark subtraction doesn't really work. But what does work? Surely the LENR function does, right?

 

So what I'm wondering is, what if when you finish your sequence, you take one final un-dithered Light, but with LENR turned on. Then would it not be possible (for some smart person) to compare the Penultimate Light to the Final (LENR) Light in software. Then surely the only difference between the two should be the true noise that we would ideally want to remove. Do you think the result of this could turn out to be of any use, and potentially be used to calibrate all of the lights?

 

Obviously this doesn't take into account the effects of temperature. But it would still be interesting to see if it added anything good.

 

Thoughts???



#14 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 16056
  • Joined: 27 Oct 2014

Posted 17 January 2019 - 02:12 PM

Interesting discussion this. And great analysis Mark!

 

I use a D5300 myself. I always dither and stack using sigma rejection. Consequently, when I tested using Darks I saw no discernible difference, so I don't use Darks in my pre-processing routine.

 

I was thinking about Darks though, and had a (probably crazy) idea. Just wanted to throw it out there as you guys know a heck of a lot more about all this stuff than I do. Let's use the D5300 as the example case.

 

So we know that traditional Dark subtraction doesn't really work. But what does work? Surely the LENR function does, right?

 

So what I'm wondering is, what if when you finish your sequence, you take one final un-dithered Light, but with LENR turned on. Then would it not be possible (for some smart person) to compare the Penultimate Light to the Final (LENR) Light in software. Then surely the only difference between the two should be the true noise that we would ideally want to remove. Do you think the result of this could turn out to be of any use, and potentially be used to calibrate all of the lights?

 

Obviously this doesn't take into account the effects of temperature. But it would still be interesting to see if it added anything good.

 

Thoughts???

This is pretty simple.  Everything sharkmelley said about conventional darks also applies to LENR.  That's just taking more darks more often.  Can be helpful in matching temperatures (although almost no one thinks it's worth the cost in night sky time), but doesn't do a darn thing for this issue.  <smile>.

 

Dithering is a great idea, so is taking enough subs for sophisticated pixel rejection techniques.


Edited by bobzeq25, 17 January 2019 - 02:14 PM.

  • xiga likes this

#15 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 17 January 2019 - 03:42 PM

This is pretty simple.  Everything sharkmelley said about conventional darks also applies to LENR.  That's just taking more darks more often.  Can be helpful in matching temperatures (although almost no one thinks it's worth the cost in night sky time), but doesn't do a darn thing for this issue.  <smile>.

 

Dithering is a great idea, so is taking enough subs for sophisticated pixel rejection techniques.

The problem with the dark subtraction is that a spatially filtered dark (or master dark) is being subtracted from a spatially filtered light.  I would expect that in-camera LENR would do it properly i.e. subtract a non-filtered dark from the non-filtered light.

 

Anyway I'm running a few tests that should either verify or refute my theory on spatially filtered dark subtraction.  At the same time, I'll check what LENR does.

 

Mark


  • xiga and bobzeq25 like this

#16 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 17 January 2019 - 07:19 PM

Here's a description of the experiment. 

 

I put a white diffuser over the lens of the Nikon D5300 camera and put the camera in a fairly dark room.  I then took the following exposures:

  • Light frames:           30 x 2min at ISO 400
  • Dark frames:           30 x 2min at ISO 400
  • LENR light frames: 15 x 2min at ISO 400  (the battery died before reaching 30 exposures)

I did the following processing:

  • Sum the light frames to create a master light
  • Sum the dark frames to create a master dark
  • Subtract the master dark from the master light
  • Sum the LENR light frames to create a LENR master light
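In code form the processing is just sums and a subtraction; here is a sketch with synthetic frames standing in for the actual raws (summing rather than averaging gives a like-for-like comparison here because the light and dark stacks contain the same number of frames):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 30 x 2min raw frames (ADU), not the real data
lights = [rng.normal(1000.0, 30.0, size=(8, 8)) for _ in range(30)]
darks = [rng.normal(600.0, 5.0, size=(8, 8)) for _ in range(30)]

master_light = np.sum(lights, axis=0)   # sum the light frames
master_dark = np.sum(darks, axis=0)     # sum the dark frames
calibrated = master_light - master_dark # dark-subtracted master light
```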

I extracted the red channel of each of the above (just for easy display purposes). Crops of the resulting (linear) raw frames, with the brightness scaled up by an identical linear multiplier, are shown here:

 

NikonD5300_DarkSubtraction.jpg

 

Some obvious features can be seen:

  • The master light has plenty of isolated bright pixels i.e. the thermal fixed pattern noise (FPN)
  • In the master dark almost all of the thermal FPN has been capped, except for pairs of pixels that protect each other from the spatial filtering
  • Subtracting the master dark from the master light removes those bright pixel pairs but leaves most of the FPN in the image
  • By contrast, the in-camera LENR does an excellent job of removing the thermal FPN

In conclusion, the D5300 dark subtraction behaves pretty much as I predicted i.e. it's pretty ineffective at removing FPN.  I expect the D5500 and the D5600 to behave the same way.  But we shouldn't jump to the conclusion that dark subtraction is ineffective on all cameras that spatially filter the raw files.  In any case, dithered acquisition combined with sigma stacking is a good workaround.

 

To really understand the effects of spatial filtering on astro-imaging, some kind of taxonomy of the various spatial filtering algorithms is required.  We could then answer questions such as:

  • Does the algorithm eat whole stars?
  • Does it turn stars green or pink or does it leave star colour almost unaffected?
  • Does it render dark subtraction ineffective?  If so, under what conditions?

Mark


Edited by sharkmelley, 17 January 2019 - 07:31 PM.

  • xiga and otoien like this

#17 otoien

otoien

    Sputnik

  • -----
  • Posts: 25
  • Joined: 15 Jan 2019
  • Loc: Fairbanks, Alaska

Posted 17 January 2019 - 08:04 PM

Very interesting results, Mark. I like your empirical approach to this. So, not surprisingly, the dark frame subtraction works on unmodified raw data. However, is it true that it will add more noise in the final stack, as it subtracts a single dark frame rather than an averaged one as when using master darks (assuming the dark is taken at the correct temperature)?

 

A second added thought is that it would have been very interesting to see the same experiment repeated on one of the bodies that has the 24-neighbour version of the spatial filter (D500/D850/D810a/D810/Z6/Z7) and possibly also the slightly gentler version of the 8/12-neighbour filter that the D7500 has.


Edited by otoien, 17 January 2019 - 08:28 PM.


#18 sharkmelley

sharkmelley

    Vanguard

  • *****
  • topic starter
  • Posts: 2183
  • Joined: 19 Feb 2013

Posted 18 January 2019 - 01:58 AM

Very interesting results, Mark. I like your empirical approach to this. So, not surprisingly, the dark frame subtraction works on unmodified raw data. However, is it true that it will add more noise in the final stack, as it subtracts a single dark frame rather than an averaged one as when using master darks (assuming the dark is taken at the correct temperature)?

 

A second added thought is that it would have been very interesting to see the same experiment repeated on one of the bodies that has the 24-neighbour version of the spatial filter (D500/D850/D810a/D810/Z6/Z7) and possibly also the slightly gentler version of the 8/12-neighbour filter that the D7500 has.

Yes the in-camera dark subtraction (LENR) works well for removing the thermal FPN.  But the D5300 LENR also applies some strong filtering to the dark subtracted image.  Therefore D5300 LENR is unlikely to be suitable for images containing small tightly focused stars.  As you say, it also increases overall noise by including two helpings of read noise in a raw image file.  But that will not be noticeable if the sky glow sufficiently swamps the read noise.
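The "two helpings of read noise" point can be quantified with the hypothetical numbers used earlier in this thread (3 e- read noise, 90 e- sky signal):

```python
import math

read_noise = 3.0                    # e-, hypothetical
sky_signal = 90.0                   # e-, hypothetical sky glow
sky_noise = math.sqrt(sky_signal)   # photon shot noise of the sky

single_frame = math.sqrt(read_noise**2 + sky_noise**2)    # one helping of read noise
lenr_frame = math.sqrt(2 * read_noise**2 + sky_noise**2)  # LENR adds a second helping
```

With these numbers the LENR frame is under 5% noisier, which is indeed hard to notice once the sky glow swamps the read noise.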

 

It would be definitely interesting to see the same experiment on other cameras, including Sony.  The D5300 has a particular type of spatial filtering whereby the capping threshold has a slope of 2 (see chart in post #1).  For the Sony and some other Nikons the slope of the capping threshold is 1.  Dark subtraction might still work well on those cameras.

 

As I indicated earlier, there is a whole taxonomy of spatial filtering algorithms and each one has different effects, both on star colour and on efficacy of dark subtraction.  For some of those algorithms it is easy to make definite predictions whilst for others it is not.

 

Mark


Edited by sharkmelley, 18 January 2019 - 02:00 AM.

  • xiga likes this

