DSLR 'long exposure noise reduction'
Posted 26 April 2013 - 07:55 PM
Posted 26 April 2013 - 08:33 PM
So I guess the idea is to pick a time interval, automatically shoot darks, and classify the darks by temperature as the night-time temps dip each hour.
Posted 26 April 2013 - 11:47 PM
Posted 27 April 2013 - 08:01 AM
Posted 27 April 2013 - 10:39 AM
well, you have to be careful with the DCRAW settings. DCRAW in DSS can be configured to take the white balance info from the CR2 file, or it can try to automatically determine the white balance... or it can just ignore the white balance.
Found the settings you were talking about in DSS. Neither the "Use Auto White Balance" nor the "Use Camera White Balance" checkbox was checked, so DSS shouldn't have been adjusting the color channels of the RAW frames. The "Set the black point to zero" box was also unchecked. So I'm still not sure why DSS treats the CR2 files differently from the FITS files. Both CR2 and FITS files used the same AHD debayering method. Still looks like something that the Canon software is doing? Unless there are other DCRAW settings that DSS doesn't expose.
Posted 27 April 2013 - 02:43 PM
Posted 05 May 2013 - 03:20 PM
Before anything else, I found something that DOES affect the analysis up till now. The ICNR images had some transient drifting high cloud/haze!! I think this means the stacked results from the ICNR runs so far have not been properly illustrative of the method's results. Thankfully it turns out that 8 of the frames are clear of the clouds, so we can still do the comparisons just by picking those 8 best ICNR frames and reducing the number of no-ICNR frames to match.
So here is what I did:
- Did all calibration and stacking in Nebulosity for consistency
- Did a uniform batch-crop on all frames after alignment to entirely remove alignment artefacts on the edges prior to stacking
- Used "pixel binning" demosaic (aka superpixel debayer) so that no colour data is mixed in from neighbouring pixels (or at least this is true for the Blue and Red pixels; for the Green channel each pixel combines data from two actual sensor sites)
- Used the pure Average stacking method, as we want to test the math/data here rather than test various stacking methods
- Also produced one stack with the Standard Deviation 1.5 method to compare against more likely real-world stacking methods (this is not really relevant to the question at hand though)
- Used PixInsight's NoiseEvaluation script on each stacked file to pull out stats *before* any modification whatsoever.
- Extracted the colour channels into separate files to produce forum-post images.
- Used PixInsight's HistogramTransformation process to align the histogram peak to exactly the same place for each image by adjusting the black point only (no pixels clipped). After that, applied an identical midtones transform to each image. I think this should have given us an apples-to-apples comparison when visually appraising noise levels.
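The "pixel binning" demosaic in the list above can be sketched as follows (this is a minimal illustration assuming an RGGB Bayer pattern; Nebulosity's actual implementation may differ in detail):

```python
import numpy as np

def superpixel_debayer(bayer):
    """Collapse each 2x2 RGGB cell into one RGB pixel: R and B are
    copied straight from their single sensor sites, while G averages
    its two sites. Output is half-resolution with no interpolation,
    so no colour data is mixed in from neighbouring cells."""
    r  = bayer[0::2, 0::2]
    g1 = bayer[0::2, 1::2]
    g2 = bayer[1::2, 0::2]
    b  = bayer[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

# Tiny 2x2 mosaic -> a single RGB "superpixel"
mosaic = np.array([[10.0, 20.0],
                   [30.0, 40.0]])
print(superpixel_debayer(mosaic))  # [[[10. 25. 40.]]]
```

Note how the Green value is the only one that combines two sensor sites, matching the caveat above.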
Stacks I created/evaluated:
Stack A) 8 exposures, Long Exposure Noise Reduction active, no darks, Average stacking
Stack B) 8 exposures, Long Exposure Noise Reduction off, 16 darks used, Average stacking
Stack C) 16 exposures, Long Exposure Noise Reduction off, 16 darks used, Average stacking
Stack D) 16 exposures, Long Exposure Noise Reduction off, 16 darks used, Standard Deviation 1.5 pixel rejection stacking
So before getting into my interpretation of the data, here is the raw data itself as well as the center 200 pixels of the Blue channel of each frame. The data is a direct copy from PixInsight's ProcessConsole. The first value is the noise evaluation; the second two values are information about how the MRS noise evaluation method was working and can likely be ignored for our needs.
Stack A:
R = 4.510e-04, N = 1755244 (59.45%), J = 4
G = 3.920e-04, N = 1308019 (44.30%), J = 4
B = 6.509e-04, N = 1577837 (53.44%), J = 4
Stack B:
R = 3.940e-04, N = 1483531 (52.26%), J = 4
G = 3.539e-04, N = 1111487 (39.15%), J = 4
B = 5.815e-04, N = 1357521 (47.82%), J = 4
Stack C:
R = 2.879e-04, N = 1388363 (47.34%), J = 4
G = 2.526e-04, N = 972472 (33.16%), J = 4
B = 4.157e-04, N = 1235743 (42.14%), J = 4
Stack D:
R = 3.338e-04, N = 1536284 (52.38%), J = 4
G = 3.486e-04, N = 1244584 (42.44%), J = 4
B = 5.537e-04, N = 1502592 (51.23%), J = 4
(edited to indicate number of Darks used for each stack)
Posted 05 May 2013 - 03:33 PM
So alas this is not a true method-only test.
However I feel it is still worth doing this comparison given that one of the differences between ICNR and out of camera dark calibration is that you can use a different number of darks vs lights.
Stack A vs Stack B
aRed = 4.510e-04
bRed = 3.940e-04
aGreen = 3.920e-04
bGreen = 3.539e-04
aBlue = 6.509e-04
bBlue = 5.815e-04
So the out-of-camera darks frame is giving consistently lower noise. Alas, it does not mean much other than "more darks are good!" I will try to re-do the A and B stacks with equal numbers of darks soon(ish) so we can do a valid noise-level comparison.
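The "more darks are good" point is just averaging statistics: the random-noise component of a master dark falls roughly as 1/sqrt(N). A quick simulation with synthetic Gaussian read noise (not real camera data) shows the trend:

```python
import numpy as np

rng = np.random.default_rng(42)

def master_dark_noise(n_darks, n_pixels=100_000, sigma=1.0):
    """Std dev of the average of n_darks synthetic dark frames,
    each containing pure Gaussian noise of std sigma. Averaging N
    frames should shrink the noise to about sigma / sqrt(N)."""
    darks = rng.normal(0.0, sigma, size=(n_darks, n_pixels))
    return darks.mean(axis=0).std()

for n in (1, 4, 16, 64):
    print(n, round(master_dark_noise(n), 3))  # roughly sigma / sqrt(n)
```

This is why a 16-dark master injects less residual noise into the calibrated lights than ICNR's single in-camera dark per exposure.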
One interesting thing to note is that the stack A seems to have done a poor job correcting for the dead pixel in the top-left quadrant of the image. Dithering and stddev/sigma/k-sigma stacking methods would help here but I do find it interesting to see that dead pixel is much less prominent in stack B.
Posted 05 May 2013 - 03:49 PM
This is the comparison that brings into focus one of the main arguments against in-camera darks: you lose half the time you could have been collecting photons.
aRed = 4.510e-04
cRed = 2.879e-04
aGreen = 3.920e-04
cGreen = 2.526e-04
aBlue = 6.509e-04
cBlue = 4.157e-04
Here there is no contest. Both statistically and visually the stack C wins by a large margin.
Once again we see that the cold pixel is less visually prominent in the dark-subtracted image.
Posted 05 May 2013 - 03:59 PM
As I indicated before, this is not *really* part of the topic, but I was curious and thought others might be as well.
For Stack D I used StdDev 1.5 for both creation of the MasterDark and for stacking of the lights themselves. Likely this was too strong of a factor and I am sure I could get some cleaner results paying with some variations (especially if I used PixInsight rather then nebulosity) but this will do for now.
cRed = 2.879e-04
dRed = 3.338e-04
cGreen = 2.526e-04
dGreen = 3.486e-04
cBlue = 4.157e-04
dBlue = 5.537e-04
Noise has gone back up - still better than both Stack A and Stack B - and sure enough the background looks slightly crunchier.
That said, the stars do seem slightly sharper and the cold pixel is MUCH reduced. Stack D *MAY* be a slightly more visually appealing image....
The conclusion I draw from this part of the exercise is that it is indeed important to be careful in selection of pixel rejection criteria. Better options may have helped to keep the extra sharpness and defect correction while not introducing quite as much noise.
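For reference, the kind of per-pixel rejection being discussed can be sketched like this (a single-pass clip at k standard deviations; Nebulosity's StdDev 1.5 option may differ in its exact details):

```python
import numpy as np

def sigma_clip_stack(frames, k=1.5):
    """Average a stack per pixel, excluding samples more than k sigma
    from that pixel's mean across the stack. A lower k rejects more
    aggressively: better outlier (hot/cold pixel, satellite trail)
    suppression, but more residual noise since fewer samples survive."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    keep = np.abs(frames - mean) <= k * std
    # Average only over the surviving samples at each pixel
    return (frames * keep).sum(axis=0) / keep.sum(axis=0)

# Five samples of one pixel; the 9.0 outlier gets rejected
stack = [np.array([[v]]) for v in (1.0, 1.1, 0.9, 1.0, 9.0)]
print(sigma_clip_stack(stack, k=1.5))  # approximately 1.0
```

This makes the trade-off concrete: the rejection threshold k is exactly the "pixel rejection criteria" knob, trading defect correction against noise.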
Posted 05 May 2013 - 04:05 PM
Unfortunately due to my mistake while processing the 8-frame external-darks stack we have not addressed one of the key points of the pro-ICNR argument - that the noise removal is equally effective with equal numbers of ICNR frames vs external darks.
I also got to wondering... the 16-frame stack OBVIOUSLY had a better S/N ratio - but it got me curious. What if you had ICNR off and then were unable to take darks at all? As such, in addition to the redo of Stack B, I am also going to do a stack of all 16 no-ICNR frames *without* applying the darks.
Thanks again for sharing the test data Keith!
Posted 05 May 2013 - 05:36 PM
Calibration/stacking/post-processing all done as before. Three stacks produced:
Stack X) 8 exposures, Long Exposure Noise Reduction on, no darks, Average stacking (same as previous Stack A)
Stack Y) 8 exposures, Long Exposure Noise Reduction off, 8 darks, Average stacking
Stack Z) 16 exposures, Long Exposure Noise Reduction off, no darks, Average stacking
Stats (per channel):
xR = 4.512e-04
yR = 4.230e-04
zR = 2.820e-04
xG = 3.924e-04
yG = 3.763e-04
zG = 2.467e-04
xB = 6.503e-04
yB = 6.191e-04
zB = 4.024e-04
Note that Stack X has essentially identical results to Stack A. This is exactly as it should be and a good sanity check of this run against my previous run.
Stack X vs Stack Y:
Here we have the theory test case at last. Interestingly, the out-of-camera darks *DO* produce a slightly cleaner image. It is a fairly subtle difference visually, but now we do have numbers. With this dataset and this camera, Long Exposure Noise Reduction (in-camera darks) does not produce an identical result to an equal number of darks taken separately. It is close to the same result though, so as my previous Stack A vs Stack C comparison shows, the shorter time under the sky is the dominant factor.
Stack X vs Stack Z vs Stack C:
This was mostly for my own curiosity. Predictably, the hot blue pixel here is totally uncorrected, and any large-scale pattern noise (banding, amp glow, etc.) will *not* be corrected for. The noise levels in Stack Z and C are surprisingly close - this would seem to indicate that the 16-frame dark subtraction did a good job of removing pattern noise (unwanted signal) without introducing much additional random noise. That leaves the X vs Z comparison. Again, quite clearly the 16-frame stack beats out the ICNR stack easily on an S/N basis. The question of whether a Bias or bad-pixel-map image would be sufficient instead of darks (be it in-camera or external) would likely be a per-camera-model question and not really have a generalized answer.
That "mean value" question from earlier:
It was noted earlier in the thread that the ICNR images seemed to have a higher mean value. I can confirm that Stack Z (where darks were not subtracted) had a similar mean to the ICNR stack, while Stack Y had the same lower mean noted previously. From this I conclude that Canon's Long Exposure Noise Reduction calibration process is likely two-step:
1) Subtract the dark from the image
2) Add the mean value of the entire dark frame to each pixel of the image to restore pixel brightness values
In other words, the differing means are entirely a cosmetic difference that does not affect the image data.
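The suspected two-step process, in numpy terms (purely my reconstruction of the observed behaviour, not Canon's documented algorithm):

```python
import numpy as np

def icnr_suspected(light, dark):
    """Suspected ICNR behaviour: subtract the in-camera dark, then
    add the dark's mean back so the overall brightness level is
    preserved - which would explain the higher mean of the ICNR
    stacks versus the externally dark-subtracted ones."""
    return light - dark + dark.mean()

light = np.array([100.0, 102.0, 130.0])  # last pixel reads hot
dark  = np.array([2.0, 2.0, 32.0])       # hot pixel present in the dark too
print(icnr_suspected(light, dark))       # [110. 112. 110.]
```

The hot pixel is corrected exactly as with an external dark; the whole frame is just offset upward by the dark's mean (12 here), which is cosmetic.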
I will continue to run without ICNR for my imaging sessions (but will keep ICNR in the back of my mind should I come across an odd edge case where darks are somehow otherwise impossible).
Posted 05 May 2013 - 05:59 PM
Confirms the basic theory nicely. A single dark leaves more random noise in the calibrated image than a master dark made from a much larger number of darks.
The other downside of ICNR is you can't capture the darks to build a dark library. Once captured, a dark frame is potentially reusable. With ICNR - poof! Gone.
I will continue to use my 50+ frame master darks. I now know it's totally worth it. Thanks
Posted 05 May 2013 - 10:13 PM
Posted 05 May 2013 - 11:03 PM
Thanks again for sharing the test data Keith!
Thank YOU for doing such a thorough analysis and for cleaning up my initial muddled attempt. Nice job! Goes to show what a little group collaboration can achieve. An excellent reference for anyone interested in this question.
Posted 06 May 2013 - 02:40 PM
The bias signal is subtracted 3 times during calibration (light, dark, flat). However, those who do ICNR are not able to subtract the bias signal from the ICNR dark, and are only subtracting the bias signal 2 times.
I suspect you'll find an even larger difference in SNR if you use bias and flats, but that's an even larger experiment.
Posted 06 May 2013 - 04:23 PM
There is another reason though. You talk about the Bias being subtracted 3 times but in reality the bias is only subtracted from the Lights ONCE and any other use of Bias frames is to make sure nothing re-adds the bias back in. Dark frames do contain the same Bias signal as separate Bias frames so doing dark subtraction does the job of removing the Bias signal from the Lights - you do not *need* to have separate bias frames.
There is of course an exception. Some programs (PixInsight for example) can do Dark Scaling - this attempts to automatically adjust the master dark to best match the dark signal in the light for the best possible correction. This is great for solving problems with temperature variations, BUT since the Bias signal does not scale with temperature or exposure time, you need to first remove the Bias from the Dark before doing the Dark Scaling. At that point you also need to separately subtract the bias from the light.
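The two calibration paths just described can be sketched like this (a minimal illustration; `k` stands for the scaling factor that dark-scaling software solves for, and the frame values are made up):

```python
import numpy as np

def calibrate_plain(light, master_dark):
    """Plain dark subtraction: the dark already contains the bias
    signal, so one subtraction removes dark current AND bias from
    the light in a single step - no separate bias frames needed."""
    return light - master_dark

def calibrate_scaled(light, master_dark, bias, k):
    """Dark scaling: bias does not scale with temperature or exposure
    time, so it must be removed from the dark before scaling the
    thermal part, and then separately removed from the light."""
    thermal = master_dark - bias        # pure thermal signal
    return (light - bias) - k * thermal

light = np.array([100.0, 105.0])
dark  = np.array([10.0, 12.0])
bias  = np.array([5.0, 5.0])
print(calibrate_plain(light, dark))              # [90. 93.]
print(calibrate_scaled(light, dark, bias, 1.0))  # identical when k == 1
```

With k = 1 the two paths give identical results, which is why separate bias frames are only *needed* once scaling (or flat calibration) enters the picture.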
Flats also need the bias removed from them to avoid reintroducing it into the Lights, so Flat-Darks *or* Bias frames do get used there. In this case we are not testing anything related to flats, so there is no need to complicate the task at hand by using them.
So as relates to the actual original intention of the thread the lack of Bias frames is not a weakness of the ICNR method as that Bias data is included in the in-camera darks.
I suspect you'll find an even larger difference in SNR if you use bias and flats, but that's an even larger experiment
Actually, use of flats would slightly decrease the S/N ratio, as unless you are using a stupendously large number of Flats and Flat-Darks, the flats will introduce a small amount of random noise. However, the correction of vignetting and dust shadows is VERY much worth the very small reduction in S/N ratio. Perhaps one could say you trade a bit of statistical S/N for a lot of *usable* S.
Posted 06 May 2013 - 09:25 PM
With this dataset and this camera Long Exposure Noise Reduction (in-camera darks) does not produce an identical result to an equal number of darks taken separately. It is close to the same result though so as my previous Stack A vs Stack C comparison shows the shorter time under the sky is the dominant factor.
unfortunately i think it's still impossible to prove or disprove the original assertion that sum of in-camera dark subtracted lights == sum of master-subtracted lights, only because you just can't use the same dataset for both stacks in the experiment. if the camera also handed you the dark it used to subtract the light frame we might be able to do something with that. the closest you can come is to make lights interleaved with darks with ICNR turned off and then manually do the calibration both ways. but this dataset was not constructed that way (right?)
i still think it's just intuitively false though.
Posted 06 May 2013 - 09:43 PM
So ya, truly conclusive results are not here, but I still take these results as a strong indicator.
Someone COULD set up a test rig, perhaps indoors in a very dark room so conditions can be controlled (ambient temp, no transient clouds, etc), to capture multiple data sets and REALLY prove it scientifically.... but I am not *THAT* motivated
Posted 06 May 2013 - 11:35 PM
the closest you can come is to make lights interleaved with darks with ICNR turned off and then manually do the calibration both ways. but this dataset was not constructed that way (right?)
Nope, but you also wouldn't get Canon's in-camera processing that way. This may not be conclusive but it's close enough for people to make their own decision on ICNR.
I think I'll continue to use bad pixel mapping as it should be at least as good as your stack without darks which turned out pretty good. I'll also continue to dither and use Std Dev based stacking (but based on this I'll probably set my rejection higher). As was mentioned very early on in the thread, there are many ways to skin the cat.
Posted 07 May 2013 - 12:24 AM