
# AA filter, spatial filter and star colours

164 replies to this topic

### #26 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 11 October 2018 - 12:34 AM

I have now worked out what the thresholds are for the D5300 and the D810A, following some excellent work by Bernard Delley on the D810:

https://www.dpreview...s/post/54364266

For each pixel, the maximum of the 24 surrounding pixels is taken.  Delley calls this number M24.

A pixel value is considered to be an outlier if its value exceeds a multiplier of M24.  These outliers have their value replaced by the maximum of the pixel's same colour neighbours. The thresholds can be easily determined by calculating (pixel_value-1)/M24 for every single pixel in a dark frame and taking the maximum.  The values very close to the edges of the sensor are ignored because we can't be sure of the edge effect behaviour of the Nikon algorithm.
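For anyone who wants to reproduce Delley's estimation method, here is a rough Python/NumPy sketch (my own illustrative code, not Nikon's or Delley's; `np.roll` wraps around at the edges, which is one more reason the edge rows are excluded):

```python
import numpy as np

def m24_map(img):
    """Max of the 24 surrounding pixels (5x5 window, centre excluded)."""
    img = img.astype(float)
    m = np.full(img.shape, -np.inf)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if (dy, dx) != (0, 0):
                m = np.maximum(m, np.roll(np.roll(img, dy, 0), dx, 1))
    return m

def threshold_multiplier(dark, edge=2):
    """Largest observed (pixel - 1) / M24 over the frame interior.

    Rows/columns within `edge` pixels of the border are excluded:
    np.roll wraps around there, and we can't be sure of the camera's
    edge behaviour anyway.
    """
    m24 = m24_map(dark)
    core = np.s_[edge:-edge, edge:-edge]
    ratio = (dark[core].astype(float) - 1.0) / m24[core]
    return ratio.max()
```

On a real raw dark the values sit well above zero (the bias is around 600 on the D5300), so the division is safe.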

[Later Edit:   ****PLEASE IGNORE THESE THRESHOLDS - THEY ARE NOT CORRECT****]

It shows the following for the D5300:

• Red threshold =    7.5*M24 + 1

• Green threshold = 7.5*M24 + 1

• Blue threshold =    7.5*M24 + 1

It also shows the following for the D810A:

• Red threshold =    4.0*M24 + 1

• Green threshold = 9.0*M24 + 1

• Blue threshold =    12.0*M24 + 1

[Later Edit:   ****PLEASE IGNORE THE THRESHOLDS ABOVE - THEY ARE NOT CORRECT****]

It is quite remarkable that very nice round numbers are used for those multipliers.  On the D810A, note the very big difference between 4.0 and 12.0 - the threshold multipliers for the red and blue channels respectively.  Why is this?  To answer that question, I refer back to one of Wei-Hao's posts from three years ago:

https://www.cloudyni...view/?p=6672325

In that post he notes that the gain on the D810A red channel is very much lower than the blue channel.  Therefore the threshold on the red channel needs to be very much lower otherwise too few red pixels will be considered outliers.

In summary there really is a quantitative difference between the D5300 and the D810A and therefore a difference in their effect on stars.

Mark

Edited by sharkmelley, 11 October 2018 - 02:04 PM.

• whwang, Jon Rista and tkottary like this

### #27 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 11 October 2018 - 02:11 PM

It seems I have goofed.

The above thresholds for the D5300 and D810A led to a testable prediction that proved false.  They don't actually explain why white stars turn green.  For instance, on the D5300 a single pixel in a star would need to be at least 7.5x brighter than its neighbours before it is "zapped" by the spatial filtering.  In my test images of dots on a computer screen that was never the case, yet I still saw pixels being zapped.  Nevertheless, the multiplier of 7.5 does come into the equation somewhere ...

Further analysis required

Mark

Edited by sharkmelley, 11 October 2018 - 02:15 PM.

### #28 AtmosFearIC

AtmosFearIC

Apollo

• Posts: 1,315
• Joined: 10 Dec 2015
• Loc: Melbourne

Posted 13 October 2018 - 01:53 AM

All of this just convinces me even further that getting a QHY367 is the right way forward for wide-field OSC!

### #29 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 13 October 2018 - 04:19 AM

A different method of diagnosis has allowed me to see what's going on with the Nikon D5300.

Here is the earlier image 0.2 sec exposure of the dots on a computer screen at 600% scale so the pixels can be seen:

It's the raw undebayered data but I've indicated the pixels whose value is identical to the maximum of its same colour neighbours.  There's nothing special to see, just a few pixels whose values match by random chance.

The spatial filtering kicks in at 0.25sec, so here is the 0.25sec exposure:

Look at the remarkable difference!  Every single "star" has pixels that have been affected by the algorithm i.e. pixels whose value has been reduced to the maximum of its same colour neighbours.

I've put the original files here if you want to download them:

It has also allowed me to clarify the algorithm.  The threshold of 7.5x was correct but it is applied to the maximum value of the neighbours of the same colour, not the maximum value of the 24 neighbours.

It shows that the spatial filtering algorithm for the D5300 is the following:

• Red pixels:      If (R-600) > 7.5*(max(R8)-600)   then set R=max(R8)
• Green pixels:  If (G-600) > 7.5*(max(G12)-600) then set G=max(G12)
• Blue pixels:     If (B-600) > 7.5*(max(B8)-600)    then set B=max(B8)

Where:

R8 means the 8 red neighbours

G12 means the 12 green neighbours

B8 means the 8 blue neighbours

The figure of 600 is the bias - the algorithm is applied to bias subtracted values which is something I didn't clarify earlier.
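For anyone who wants to experiment, here's a rough NumPy sketch of this hypothesised rule (illustrative only, assuming an RGGB layout with even sensor dimensions).  Conveniently, the same-colour neighbours are exactly the same-colour sites inside the 5x5 window: 8 for red/blue, 12 for green.

```python
import numpy as np

def d5300_spatial_filter(raw, bias=600.0, k=7.5):
    """Sketch of the hypothesised D5300 filter (RGGB layout assumed).

    For each pixel, take the maximum of its same-colour neighbours in
    the 5x5 window (R8/B8 = 8 sites, G12 = 12 sites).  If (pixel - bias)
    exceeds k * (max - bias), clamp the pixel to that maximum.
    Edges wrap around here purely for brevity.
    """
    raw = raw.astype(float)
    h, w = raw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    colour = (ys % 2) + (xs % 2)           # 0 = R, 1 = G, 2 = B for RGGB

    same_max = np.full((h, w), -np.inf)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if (dy, dx) == (0, 0) or (dy + dx) % 2:
                continue                   # odd-sum offsets always change CFA colour
            shifted = np.roll(np.roll(raw, dy, 0), dx, 1)
            shifted_colour = np.roll(np.roll(colour, dy, 0), dx, 1)
            same = shifted_colour == colour
            same_max = np.where(same, np.maximum(same_max, shifted), same_max)

    out = raw.copy()
    zap = (raw - bias) > k * (same_max - bias)
    out[zap] = same_max[zap]
    return out
```

A lone pixel 100 ADU above a background 10 ADU above bias trips the 7.5x test and is clamped to its brightest same-colour neighbour.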

I'm busy this weekend so I won't have much time for further work on this.  The next stage will be to apply the same diagnostic tool to a real astro-imaging exposure, to identify stars that have been potentially affected by the spatial filtering.

The other big unanswered question is whether or not the algorithm does actually turn stars predominantly green.  The affected stars certainly have their colour altered quite significantly but if we average this effect over hundreds or thousands of randomly placed stars, is there a generalised colour shift towards green?

Mark

• whwang and tkottary like this

### #30 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 14 October 2018 - 03:41 AM

Here is an example of the earlier diagnostic tool applied to a real image taken with a 50mm lens at f/8.  As before, I've indicated all the pixels whose value is identical to the maximum of its same colour neighbours.  The scale is 400%:

Some of these matches will be caused by random chance, others by genuine hot pixels being eliminated and others by bright pixels within a star.  Unfortunately there is no easy way to distinguish these cases.

In any case, here is the resulting star field, debayered, white balanced and background subtracted:

By examining the green stars in this image and comparing them to the previous image, it can be seen that the cause of almost every green star is the destruction of its red and/or blue pixels.  To be fair, I deliberately chose a crop where there were a large number of green stars.  It is not representative of the image as a whole.

I've put the original image here:

Now let's apply the same diagnostic procedure to Wei-Hao's image:

It's startling how few pixels have been affected. Is this because the D810A algorithm is less aggressive than the D5300 algorithm?  Maybe this can be answered by applying the reverse engineered D5300 algorithm to the D810A raw data.  I'll try to do this.

Mark

Edited by sharkmelley, 14 October 2018 - 03:43 AM.

### #31 fmeschia

fmeschia

Surveyor 1

• Posts: 1,675
• Joined: 20 May 2016
• Loc: Mountain View, CA

Posted 14 October 2018 - 12:19 PM

I noticed your D5300 has firmware 1.02. I have a copy with firmware 1.03, if you can share the dot pattern you use I'd be glad to try with my version if you're interested.

Francesco

### #32 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 14 October 2018 - 12:54 PM

I noticed your D5300 has firmware 1.02. I have a copy with firmware 1.03, if you can share the dot pattern you use I'd be glad to try with my version if you're interested.

Francesco

I have put it in the same folder:

Mark

### #33 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 14 October 2018 - 01:22 PM

I tried applying the reverse engineered D5300 algorithm to Wei-Hao's image but it made hardly any difference.  An additional 3600 pixel values were affected, almost entirely red and blue pixels.  The following crop is representative of the effects:

The pixels I've coloured red and blue are the ones that were affected. It had an effect on the colour of the stars impacted but across the image as a whole the effect was pretty minor.

I've also had a chance to examine more of the D810A darks that Wei-Hao supplied.  This has allowed me to refine the values of the threshold multipliers to 12, 10 and 12.  They might even be slightly higher.  The problem is that relatively few pixels are affected by the spatial filtering (compared with the D5300, for instance) and this makes it more difficult to determine the exact threshold values.

Assuming those thresholds are more or less correct, the spatial filtering algorithm for the D810A is the following:

• Red pixels:      If (R-600) > 12*(max(N24)-600) then set R=max(R8)
• Green pixels:  If (G-600) > 10*(max(N24)-600) then set G=max(G12)
• Blue pixels:     If (B-600) > 12*(max(N24)-600) then set B=max(B8)

Where:

N24 means the 24 neighbours of any colour (in a 5x5 rectangle)

R8 means the 8 red neighbours

G12 means the 12 green neighbours

B8 means the 8 blue neighbours

The figure of 600 is the bias.
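Here is the corresponding sketch for this hypothesised D810A rule (again illustrative only, RGGB and even dimensions assumed): the outlier test uses N24, but the replacement still uses the same-colour maximum.

```python
import numpy as np

def d810a_spatial_filter(raw, bias=600.0, k=(12.0, 10.0, 12.0)):
    """Sketch of the hypothesised D810A filter (RGGB layout assumed).

    Each pixel is tested against the maximum of ALL 24 neighbours in
    the 5x5 window (N24), with a per-channel multiplier k; only the
    replacement value uses the same-colour maximum (R8/G12/B8).
    Edges wrap around here purely for brevity.
    """
    raw = raw.astype(float)
    h, w = raw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    colour = (ys % 2) + (xs % 2)           # 0 = R, 1 = G, 2 = B for RGGB

    n24 = np.full((h, w), -np.inf)
    same_max = np.full((h, w), -np.inf)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if (dy, dx) == (0, 0):
                continue
            shifted = np.roll(np.roll(raw, dy, 0), dx, 1)
            n24 = np.maximum(n24, shifted)
            shifted_colour = np.roll(np.roll(colour, dy, 0), dx, 1)
            same = shifted_colour == colour
            same_max = np.where(same, np.maximum(same_max, shifted), same_max)

    k_map = np.asarray(k)[colour]          # per-channel multiplier
    out = raw.copy()
    zap = (raw - bias) > k_map * (n24 - bias)
    out[zap] = same_max[zap]
    return out
```

Note how a bright pixel of any colour inside the window protects the outlier test: an isolated hot pixel gets clamped, but the same pixel with a bright green neighbour (i.e. inside a star) survives.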

As an experiment I also tried applying this reverse engineered algorithm to the D5300 0.2sec image of dots i.e. where the D5300 applies no spatial filtering.  It had no effect at all on those pseudo stars but it did filter a large number of pixels in the dark background (the room in which the laptop sits).

It seems to me that the D810A algorithm is effective at removing noise from dark frames and shadow areas of real images but it leaves stars untouched.  This is exactly what an astro-imager wants!

Mark

### #34 fmeschia

fmeschia

Surveyor 1

• Posts: 1,675
• Joined: 20 May 2016
• Loc: Mountain View, CA

Posted 14 October 2018 - 01:56 PM

I have put it in the same folder:

Mark

Thanks a lot!

I can't replicate the green stars phenomenon, though. See attached a picture taken through a 55 mm lens, 0.25s at f/11, debayered via bilinear algorithm, white balanced as per your suggestion and then background subtracted. Do you think my lens/focusing is not sharp enough to trigger the issue?

Francesco

### #35 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 14 October 2018 - 02:44 PM

I can't replicate the green stars phenomenon, though. See attached a picture taken through a 55 mm lens, 0.25s at f/11, debayered via bilinear algorithm, white balanced as per your suggestion and then background subtracted. Do you think my lens/focusing is not sharp enough to trigger the issue?

Francesco

I'm glad someone is trying to replicate this!  Peer review is always important.

Your camera is far too close to the screen displaying the dots.  This means that the image of each dot covers multiple sensor pixels.  The dots need to appear 10 pixels apart (or slightly less) in your image.  Then each dot has a chance of triggering an individual pixel.

If your lens is sharp enough, open up to f/5.6 or wider so diffraction effects don't increase the size of the "stars" in the image.

Mark

Edited by sharkmelley, 14 October 2018 - 04:07 PM.

### #36 fmeschia

fmeschia

Surveyor 1

• Posts: 1,675
• Joined: 20 May 2016
• Loc: Mountain View, CA

Posted 14 October 2018 - 03:09 PM

Thank you so much for the advice. I repeated the test by moving the camera farther away: now the dot pitch is about 10 pixels, and I can successfully replicate the issue.

See below two pictures with the same aperture (f/8) and ISO (100), the first one with a shutter speed of 1/5s, the second with 1/4s:

Edited by fmeschia, 14 October 2018 - 03:09 PM.

### #37 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 14 October 2018 - 04:04 PM

Thank you so much for the advice. I repeated the test by moving the camera farther away: now the dot pitch is about 10 pixels, and I can successfully replicate the issue.

See below two pictures with the same aperture (f/8) and ISO (100), the first one with a shutter speed of 1/5s, the second with 1/4s:

That's an excellent example of the effect!

The change of exposure from 1/5s to 1/4s has created a multi-coloured set of green, yellow, cyan, pink and purple stars.

Mark

### #38 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 17 October 2018 - 12:24 AM

It seems that reverse engineering the spatial filtering algorithm is actually much more difficult than I had hoped.

My existing hypothesis for the D5300 is the following:

• Red pixels:      If (R-600) > 7.5 * (max(R8) - 600)   then set R=max(R8)
• Green pixels:  If (G-600) > 7.5 * (max(G12) - 600) then set G=max(G12)
• Blue pixels:     If (B-600) > 7.5 * (max(B8) - 600)    then set B=max(B8)

However, when I apply this to real data (e.g. my non-spatially-filtered image of dots on a computer screen), far too few pixels (i.e. "stars") are actually affected compared with the number seen in the real spatially filtered version.  In other words, the multiplier of 7.5 is too large to reflect what is happening in reality.

Maybe using the bias of 600 is wrong?  Maybe I should use 588 instead?  The significance of 588 is that it's the lower clipping level of the data and the histogram of a typical dark frame has a big peak of clipped values at 588.  Using an offset of 588 brings the multiplier down to exactly 2.0, when calibrated against dark frames i.e.

Red pixels:      If (R-588) > 2.0 * (max(R8) - 588)   then set R=max(R8)
Green pixels:  If (G-588) > 2.0 * (max(G12) - 588) then set G=max(G12)
Blue pixels:     If (B-588) > 2.0 * (max(B8) - 588)    then set B=max(B8)

The multiplier of 2.0 seems to agree much better with the number of affected pixels in the "dots on a computer screen" image.  But if the multiplier of 2.0 is correct then there are other formulae that also fit the dark frames quite well.  For instance, the original bias of 600 can still be used if an offset of, say, 16 is also included:

Red pixels:      If (R-600) > 2.0 * (max(R8) - 600) +16  then set R=max(R8)
Green pixels:  If (G-600) > 2.0 * (max(G12) - 600) + 16 then set G=max(G12)
Blue pixels:     If (B-600) > 2.0 * (max(B8) - 600) + 16   then set B=max(B8)

The question is how to distinguish the correct one amongst a large number of possible candidates?
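Part of the difficulty is that, over integer raw values, these candidates are almost the same predicate.  A quick check (plain Python, purely illustrative):

```python
def rule_a(v, m):
    """Candidate 1: offset 588, multiplier 2.0."""
    return (v - 588) > 2.0 * (m - 588)        # equivalent to v > 2*m - 588

def rule_b(v, m):
    """Candidate 2: bias 600, multiplier 2.0, additive offset 16."""
    return (v - 600) > 2.0 * (m - 600) + 16   # equivalent to v > 2*m - 584

# For integer pixel values the two predicates disagree only when the
# value lands in a 4-DN band just above the boundary line:
disagree = {m: [v for v in range(2 * m - 600, 2 * m - 560)
                if rule_a(v, m) != rule_b(v, m)]
            for m in (600, 800, 1000)}
```

So only pixels whose values fall inside that narrow 4-DN band can discriminate between the two candidates, and a dark frame contains very few such pixels.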

Mark

Edited by sharkmelley, 17 October 2018 - 12:30 AM.

### #39 bobzeq25

bobzeq25

ISS

• Posts: 27,065
• Joined: 27 Oct 2014

Posted 21 October 2018 - 10:54 AM

To be honest I don't think many people notice their star colours at all.  Probably because typical processing workflows bleach star colour.

To say that "typical workflows" bleach color is an oversimplification that could lead to misunderstanding.  It is more accurate to say that stretching an image inevitably bleaches color, and it pretty much always needs adjustment to restore it to a level most find "natural".

I think there are paths other than reverse engineering to fix this particular issue.

Much more relevant to that statement here, in the other thread mentioning this issue.  Referenced rather than copied, to avoid "crossposting".

https://www.cloudyni...w/#entry8903647

Edited by bobzeq25, 21 October 2018 - 10:57 AM.

### #40 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 21 October 2018 - 12:48 PM

To say that "typical workflows" bleach color is an oversimplification that could lead to misunderstanding.  It is more accurate to say that stretching an image inevitably bleaches color, and it pretty much always needs adjustment to restore it to a level most find "natural".

I think there are paths other than reverse engineering to fix this particular issue.

Much more relevant to that statement here, in the other thread mentioning this issue.  Referenced rather than copied, to avoid "crossposting".

https://www.cloudyni...w/#entry8903647

It's not accurate to say that stretching inevitably bleaches colour.  The whole raison d'etre of colour preserving stretches (e.g. PixInsight ArcsinhStretch) is to maintain RGB ratios in every pixel during the stretch. Even Photoshop does the same thing when it opens a raw file and transforms it into a colour space for display (e.g. sRGB or AdobeRGB).  Sure it adds a whole load of colour saturation, contrast etc. because that's what their consumer artists want (instead of colour accuracy) but at its basic level it's stretching the data in a colour preserving manner.

The point of reverse engineering Nikon's spatial filtering is not to fix the issue because that's impossible without hacking the firmware.  But the aim is to better understand and predict its effects.  With that knowledge it might be possible to design strategies we can use during acquisition and/or processing to ameliorate the issue.

Mark

• Jon Rista likes this

### #41 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 24 October 2018 - 05:04 PM

I've made a bit of progress on decoding the algorithm.  The case of the D5300 is now quite clear - the algorithm definitely uses a multiplier of 2.0 in the threshold as follows:

Red pixels:      If (R-588) > 2.0 * (max(R8) - 588)   then set R=max(R8)
Green pixels:  If (G-588) > 2.0 * (max(G12) - 588) then set G=max(G12)
Blue pixels:     If (B-588) > 2.0 * (max(B8) - 588)    then set B=max(B8)

If this algorithm is applied to a Nikon D810A star image then a large number of stars are attacked.  This demonstrates that the Nikon D810A algorithm is far less aggressive.  However it is now a lot less clear exactly what the D810A algorithm is doing.

Interestingly, I have also been given a dark from the Nikon D7500.  Again it is very different from the D5300 and appears to be less aggressive.  But it's far from clear what the D7500 is doing.  It's certainly not resetting the values of outlying pixels to the maximum of neighbours of the same colour - this is very easily detectable.

So Nikon seems to be changing their algorithm from camera to camera.

Mark

• tkottary likes this

### #42 whwang

whwang

Soyuz

• Posts: 3,886
• Joined: 20 Mar 2013

Posted 24 October 2018 - 08:33 PM

Hi Mark,

Is it possible that the D810A first measures the brightness of the image to see whether it received sufficient exposure (nightscape, or deep-sky light frames) or is severely under-exposed (astronomical dark frames), and then adopts different algorithms (or different coefficients in the same algorithm) according to the amount of exposure?

### #43 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 25 October 2018 - 12:20 AM

Hi Mark,

Is it possible that the D810A first measures the brightness of the image to see whether it received sufficient exposure (nightscape, or deep-sky light frames) or is severely under-exposed (astronomical dark frames), and then adopts different algorithms (or different coefficients in the same algorithm) according to the amount of exposure?

Anything is possible.

However, so far I haven't seen any evidence that leads me to conclude that a camera might apply a different algorithm in different cases.

I've posted a question over on DPReview because there are some experts on this kind of stuff over there.  There might be someone who has already investigated in detail.

Mark

### #44 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 27 October 2018 - 04:17 AM

Zero response so far over on DPReview so I'll present my new method of analysis here.

First a plea - I'm really interested to get hold of some long exposure darks from other Nikon cameras.  Ideally around ISO 800 for 3-5minutes at room temperature.  Noise reduction, long exposure noise reduction and high ISO noise reduction must be switched off.  Send me a PM if you can help.

Now back to the analysis ...

It has been long established that the Nikon algorithm alters the value of an outlier pixel by using the values of its neighbours - typically the maximum value of those neighbours is used.  My latest diagnostic method is to create a 2D plot of pixel value against the maximum of its neighbours for all the millions of pixels in the dark frame.  In the absence of spatial filtering, here is the kind of result we would expect to find (this example is from a Canon camera):

However, for the Nikon D5300 we find this instead:

What is going on?  There is a very distinct boundary that has chopped off a complete "arm" of the plot, compared with the Canon.  The boundary is easy to determine, so I've added a pink line to show it clearly.  This pink line has a slope of precisely 2.  In addition there is a very strong alignment of pixel values along a line with slope=1.  In fact 5% of all pixels in the dark frame sit on that line with slope=1 i.e. the pixel value matches the maximum value of its neighbours.

There are no remaining pixels whose value exceeds twice the maximum of its neighbours (using a bias level of 588).  It's clear that those outlier pixels have had their value truncated to match the maximum of its neighbours.  This is the algorithm I described earlier in this thread.
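The diagnostic plot itself takes only a few lines to generate.  Here is an illustrative sketch (comparing against the 24-neighbour maximum; the histogram range is arbitrary, and edges wrap for brevity):

```python
import numpy as np

def value_vs_neighbour_max(raw, lo=580, hi=1100, bins=260):
    """2D histogram of (max of 24 neighbours, pixel value).

    Spatial filtering shows up as an empty region above a straight
    cut-off line; a pile-up on the slope=1 diagonal marks pixels that
    were clamped to their neighbourhood maximum.
    """
    raw = raw.astype(float)
    n24 = np.full(raw.shape, -np.inf)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if (dy, dx) != (0, 0):
                n24 = np.maximum(n24, np.roll(np.roll(raw, dy, 0), dx, 1))
    hist, _, _ = np.histogram2d(n24.ravel(), raw.ravel(),
                                bins=bins, range=[[lo, hi], [lo, hi]])
    return hist
```

The returned array can be displayed directly as a heat map (e.g. on a log scale) to reproduce the charts shown here.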

Now let's look at the Nikon D7500:

In this chart I've used a heat map to count the pixels occupying the same position on the 2D plot:

Black represents                count < 10
Green represents     10 <= count < 100
Yellow represents   100 <= count < 1000
Yellow represents 1000 <= count < 10000
White represents 10000 <= count

This looks quite different from the D5300.  The pink line again has slope=2 and goes through the black level of 400.  But the alignment of pixel values along a line with slope=1 has disappeared.  Also the pink line is no longer a sharp cut off but it has still clearly removed a complete "arm" of the plot, compared with the Canon.

What I think is happening here is that the pink line with slope=2 does still represent a sharp cut-off and that all pixels above that line (i.e. those with outlier values) have their values truncated.  But instead of replacing the value with the maximum of the neighbours, the value is replaced with a new value just above the pink line.  With the D5300, 5% of pixels ended up on the line with slope=1, but with the D7500 statistical analysis shows that those 5% of pixels end up just above the pink line.

My hypothesis can be explained by an example.  Take a pixel whose original value exceeds the pink boundary line by 160.  I think its replacement value is 160/8 above the pink boundary.  A divisor of 16 or some other factor might be used instead - it's very difficult to determine.  In any case, all pixels whose values originally fell above that line are pushed towards it by the algorithm.

Now let's look at the Nikon D810A:

This plot looks different again and I haven't quite worked out what is going on.  One thing that's certain is that each pixel is being compared to the maximum of 24 neighbours.  One odd thing about the D810A is that in the raw file it is clear that very different digital scaling has been applied to each channel.  It's possible this might have a bearing.  I'll do some more channel by channel analyses.

Just out of interest, for comparison purposes, here is a 2D chart of pixels from a Sony A7RIII astro-image:

Note that the line with slope=1 forms a very strong cut off with very few exceptions violating this "rule".  Statistics show a full 10% of all the pixels in that astro-image fall precisely on that line.  Clear evidence of a quite destructive filter.

Back to Nikon - what about the effect of the spatial filtering on Nikon star colours?  It certainly looks like the D7500 spatial filtering will have much less effect on star colours than the D5300.  We have also determined that stars are almost completely unaffected by whatever spatial filtering the D810A performs.

So if you are worried about star colours, initial results are beginning to show that some Nikons are definitely better than others.

Mark

P.S. I'm really interested to get hold of some long exposure darks from other Nikon cameras

Edited by sharkmelley, 27 October 2018 - 06:12 AM.

• Jon Rista and VincentD like this

### #45 SandyHouTex

SandyHouTex

Fly Me to the Moon

• Posts: 6,387
• Joined: 02 Jun 2009
• Loc: Houston, Texas, USA

Posted 27 October 2018 - 10:16 AM

Great analysis.  I think you're on to something.

### #46 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 01 November 2018 - 06:52 AM

I'm still working on this in the background, as I find time to do so.

However the crucial distinguishing factor is now becoming quite obvious.  The spatial filtering algorithm needs a threshold to determine which pixels are outliers.  It is these pixels that have their values capped.

Broadly speaking there are 2 ways of setting this threshold:

• (A)  Threshold is based on the maximum value of neighbouring pixels of the same colour as the pixel being examined
• (B)  Threshold is based on the maximum value of neighbouring pixels, regardless of colour

The cameras that use a type-A criterion are the most destructive to tightly focused small stars.

The cameras that use a type-B criterion will cause little or no destruction to stars.

Examples of cameras that use a type-A criterion (the most destructive to stars) are:

• Nikon D5300, Nikon D7500 plus all Sony mirrorless cameras

Examples of cameras that use a type-B criterion (almost no star destruction) are:

• Nikon D7000, Nikon D810A

Why is this a valid distinction in criteria?

To answer this question, consider the red and blue pixels in the sensor's colour filter array.  These are more widely spaced than green pixels and often there is only a single bright red or blue pixel in a small star.  Using criterion A, that pixel will be compared with its same colour neighbours - all of which have much lower values - so it will end up having its value capped, often very severely.  Using criterion B, the pixel will be compared to all its neighbours, which include other bright pixels in the star.  It is therefore far more likely to survive being capped.
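A tiny numeric illustration of the difference (RGGB layout assumed; the values are invented purely for illustration, and the bias subtraction is ignored for simplicity):

```python
import numpy as np

# A tight synthetic star: one bright red pixel flanked by bright greens
# on a faint background.
star = np.full((9, 9), 10.0)
star[4, 4] = 500.0                   # the lone bright red pixel
star[3, 4] = star[5, 4] = 400.0      # bright greens above/below
star[4, 3] = star[4, 5] = 400.0      # bright greens left/right

win = star[2:7, 2:7].copy()          # 5x5 window around the red pixel
win[2, 2] = -np.inf                  # exclude the centre
max_any = win.max()                  # criterion B comparison value

reds = star[2:7:2, 2:7:2].copy()     # the 8 same-colour (red) sites
reds[1, 1] = -np.inf
max_same = reds.max()                # criterion A comparison value

# Criterion A compares 500 against 10: the pixel looks like a huge
# outlier and gets capped.  Criterion B compares 500 against 400: the
# bright greens in the star protect it, and it survives.
```

The single bright red pixel is exactly the situation at the core of a tightly focused star, which is why the choice of criterion matters so much.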

Mark

Edited by sharkmelley, 01 November 2018 - 07:41 AM.

### #47 whwang

whwang

Soyuz

• Posts: 3,886
• Joined: 20 Mar 2013

Posted 01 November 2018 - 11:49 AM

Hi Mark,

Thanks for the wonderful work. However, your last paragraph does not answer why Nikon does two different things on different cameras. This especially puzzles me given that D7000 is a very old model while D810A is very new, and yet they share the same less disruptive/aggressive algorithm.

### #48 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 01 November 2018 - 12:58 PM

Hi Mark,

Thanks for the wonderful work. However, your last paragraph does not answer why Nikon does two different things on different cameras. This especially puzzles me given that D7000 is a very old model while D810A is very new, and yet they share the same less disruptive/aggressive algorithm.

Agreed - it's a real puzzle.  I really don't know why Nikon would change the algorithm from model to model.

However, it's not accurate to say the D7000 and D810A "share the same less disruptive/aggressive algorithm" because there are actually big differences between the two algorithms.  But they do share one important characteristic - which is what I call the "type-B" criterion.

I've looked at darks from 4 Nikon models and the algorithm used is different in each one.  Maybe if we looked at darks from a lot more models then we would start to see a pattern in which models use which algorithms.

Mark

Edited by sharkmelley, 01 November 2018 - 12:59 PM.

### #49 SandyHouTex

SandyHouTex

Fly Me to the Moon

• Posts: 6,387
• Joined: 02 Jun 2009
• Loc: Houston, Texas, USA

Posted 03 November 2018 - 12:40 PM

Agreed - it's a real puzzle.  I really don't know why Nikon would change the algorithm from model to model.

However, it's not accurate to say the D7000 and D810A "share the same less disruptive/aggressive algorithm" because there are actually big differences between the two algorithms.  But they do share one important characteristic - which is what I call the "type-B" criterion.

I've looked at darks from 4 Nikon models and the algorithm used is different in each one.  Maybe if we looked at darks from a lot more models then we would start to see a pattern in which models use which algorithms.

Mark

It would be helpful to compile a list based on model nos. for those looking to purchase a Nikon for AP use.

I have a couple of Nikons for daytime use.  How could I take a dark and send it to you?

The models I have are the D750, D7500, D3000, D200, D3400, and D5300.

### #50 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 5,239
• Joined: 19 Feb 2013
• Loc: UK

Posted 03 November 2018 - 01:05 PM

It would be helpful to compile a list based on model nos. for those looking to purchase a Nikon for AP use.

I have a couple of Nikons for daytime use.  How could I take a dark and send it to you?

The models I have are the D750, D7500, D3000, D200, D3400, and D5300.

That would be very useful to me!  I already have the D5300 but all the others would be good.

Switch off all forms of noise reduction: i.e. the settings for noise reduction, long exposure noise reduction and high ISO noise reduction.  Then take a 3min or 5min raw exposure at ISO 1600 at room temperature.

The best way to share it is to upload to a file-sharing site and send me the link in a PM.

Thanks!

Mark
