Is bit depth relevant?

18 replies to this topic

#1 Hajfimannen

Hajfimannen

    Explorer 1

  • -----
  • topic starter
  • Posts: 74
  • Joined: 30 Apr 2018
  • Loc: Sweden

Posted 18 July 2019 - 04:38 PM

Looking into different options for my first astro camera, I've started to wonder if the bit depth is of any "real" value by itself?

The full well capacity must be one of the more important/deciding factors in whether e.g. 16 bit can be achieved, or?

If not, how can you achieve 16 bit with a FWC of 14 000? Surely it would have to be 65 000+ electrons or levels.

 

Or is the 16 bit good to have in another aspect, such as some form of headroom?

 

 

 



#2 cfosterstars

cfosterstars

    Vanguard

  • *****
  • Posts: 2452
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 18 July 2019 - 05:19 PM

To some extent, yes. But what really matters is the effective dynamic range of the camera. Consider that a CCD with higher read noise (16-bit) and a CMOS with low read noise (12-bit) can both have the same ~12 stops of effective dynamic range even with very different FWD. You can actually measure this with SharpCap Pro for your camera. It depends on the gain as well, since for CMOS the read noise and FWD depend on the gain. I have no direct experience with CCDs, but I believe that most have a fixed gain and read noise, so they have a fixed effective dynamic range - but I could be generalizing.


  • Hajfimannen likes this

#3 Hajfimannen

Hajfimannen

    Explorer 1

  • -----
  • topic starter
  • Posts: 74
  • Joined: 30 Apr 2018
  • Loc: Sweden

Posted 18 July 2019 - 05:37 PM

Thanks cfosterstars.

I totally agree, but I have not seen anyone use "effective dynamic range" in their marketing.

Is there an easy way to calculate the "true" bit depth if you know the full well capacity and the read noise?

If a camera has 4e read noise, does that mean it loses 2 bits on that end as well?

As pixels receiving only ≤ 4 electrons would lose them in the noise....


Edited by Hajfimannen, 18 July 2019 - 05:38 PM.


#4 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 8694
  • Joined: 12 Oct 2007

Posted 18 July 2019 - 05:55 PM

Bit depth has little importance by itself as long as the gain is set so that the read noise is much greater than one bit - which is usually the case.  As a result, the lower bits in any signal measurement will be noise and won't contribute info.

 

That's where dynamic range becomes more useful - because it factors in the noise that limits how faint you can go - and divides into the strongest signal you can measure.

 

But even that doesn't matter much if you are doing astro imaging where you stack many frames.  You can increase the dynamic range as much as you want by stacking many frames.

 

Where dynamic range is much more important is in single exposure DSLR imaging.  Anyone who has taken a raw image with a dslr and stretched it to bring out highlights and shadows can appreciate the role of dynamic range - but that is for a single exposure.  Even for terrestrial work, stacking many frames makes dynamic range less of an issue.

 

As long as you stack a decent number of frames, I don't think 12-bit would be a limitation for astro-imaging.  It's just something that people don't like to have as inherently limiting their exposure quality - even though it doesn't show in the final result.

 

Frank


Edited by freestar8n, 18 July 2019 - 05:59 PM.

  • Jim Waters and Hajfimannen like this

#5 cfosterstars

cfosterstars

    Vanguard

  • *****
  • Posts: 2452
  • Joined: 05 Sep 2014
  • Loc: Austin, Texas

Posted 18 July 2019 - 08:07 PM

Thanks cfosterstars.

I totally agree, but I have not seen anyone use "effective dynamic range" in their marketing.

Is there an easy way to calculate the "true" bit depth if you know the full well capacity and the read noise?

If a camera has 4e read noise, does that mean it loses 2 bits on that end as well?

As pixels receiving only ≤ 4 electrons would lose them in the noise....

ZWO posts it with their cameras on their web site.

 

For the ASI1600MM-PRO it shows DR for Dynamic Range, and this is the effective dynamic range. It's a combination of the FWD and the read noise.

 

https://astronomy-im.../asi1600mm-cool

 

So for my LRGB filters I use a gain of 76, which is ~2e/ADU, with a read noise of ~2.2e and a FWD of ~8K ADU, and that translates to just under 12 stops of DR. You are sort of right that the 2e of read noise takes away part of the FWD, but you still get ~12 stops of DR.

 

If you were to look at an 8300-based CCD with a 16-bit A/D and higher read noise, you get just about exactly the same effective DR. But you have to integrate longer to get over the 10e of read noise to reach the same S/N. You have to overcome 10e vs. 2e of read noise between the cameras. But if used correctly, you get the same or close to the same performance even with the differences in FWD, read noise and A/D resolution. Fewer, longer subs with the CCD vs. more, shorter subs with the CMOS. The difference in quantization error is also evened out by stacking the subs.

 

I know I will get a whole lot of answers more correct than the one I have given, but this is the gist of the story. 
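The stops-of-DR arithmetic in this post can be checked with a few lines of Python. This is just a sketch: it treats the quoted ~8K figure as electrons and uses typical published KAF-8300-style numbers (~25,500 e- well, ~10 e- read noise), both of which are assumptions rather than measured values:

```python
import math

def dr_stops(fwc_e, read_noise_e):
    """Effective dynamic range in stops: log2(full well / read noise)."""
    return math.log2(fwc_e / read_noise_e)

# CMOS at gain 76: ~2.2 e- read noise, ~8K e- well (assumed units)
print(round(dr_stops(8_000, 2.2), 1))    # ~11.8 stops, "just under 12"

# 8300-class CCD: ~10 e- read noise, ~25,500 e- well (typical quoted values)
print(round(dr_stops(25_500, 10.0), 1))  # ~11.3 stops, about the same
```

Despite a roughly 3x difference in well depth and a ~4.5x difference in read noise, the two cameras land within half a stop of each other, which is the point of the comparison.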


  • Hajfimannen likes this

#6 Jim Waters

Jim Waters

    Mercury-Atlas

  • *****
  • Posts: 2630
  • Joined: 21 Oct 2007
  • Loc: Phoenix, AZ USA

Posted 18 July 2019 - 11:03 PM

Some good posts.  I am learning stuff.



#7 ks__observer

ks__observer

    Viking 1

  • *****
  • Posts: 982
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 19 July 2019 - 06:29 AM

Bit depth has little importance by itself as long as the gain is set so that the read noise is much greater than one bit - which is usually the case.  As a result, the lower bits in any signal measurement will be noise and won't contribute info.

I don't think it has to be "much" greater, but rather just 2x greater per Nyquist.

If your read noise is 4e and you set the gain to 2e/ADU, you are sampling the read noise at 2x.



#8 jgraham

jgraham

    ISS

  • *****
  • Posts: 20127
  • Joined: 02 Dec 2004
  • Loc: Miami Valley Astronomical Society

Posted 19 July 2019 - 08:51 AM

Early on I encountered serious problems with the 12 bit images from my 350D and 400D. The 12 bit data depth severely limited how aggressive I could get with processing, as the images quickly posterized. This problem evaporated with the 450D and its 14 bit ADC. My initial problems with 12 bit ADCs made me very reluctant to try the ASI1600MM-c because it also had a 12 bit ADC. However, as the 1600 developed a solid track record I took the chance and bought one. Much to my pleasant surprise, the 12 bit ADC was not a problem. The difference was usable dynamic range. With my DSLRs much of the lower end of the dynamic range was consumed by noise, squeezing my useful data up into the upper end of the dynamic range and effectively reducing the depth of my useful data. With the cooled 1600 the noise occupied only the low end of the dynamic range, freeing up a lot more space for data. Hence, no problems with a 12 bit ADC on a modern, cooled camera.

 

Note that you can increase the effective bit depth by averaging images and saving the data into an expanded data range to capture the round-off. However, I have always used about the same number of subs, so this benefit has always been about the same for my images.

 

Food for thought.


  • Jon Rista likes this

#9 ks__observer

ks__observer

    Viking 1

  • *****
  • Posts: 982
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 19 July 2019 - 09:01 AM

Early on I encountered serious problems with the 12 bit images from my 350D and 400D. The 12 bit data depth severely limited how aggressive I could get with processing as the images quickly posterized. This problem evaporated with the 450D and its 14 bit ADC. 

There may have been something else going on.

16 subs gets you 2 additional bits.  Most people are getting about that many subs at a minimum. 

Plus, if you are shooting above, say, ISO 800, you are probably reducing the quantization error further by reducing the e/ADU.

I am not sure your issue was from the 12-bit ADC.
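The "16 subs gets you 2 additional bits" claim above can be sketched with a small simulation (all numbers hypothetical): quantize a noisy pixel value to whole ADU, then average 16 subs in a floating-point accumulator and watch the error shrink by sqrt(16) = 4x, i.e. about two bits:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_E = 100.3        # hypothetical "true" signal, electrons
READ_NOISE = 3.0      # e- RMS, enough to dither the quantizer
GAIN = 2.0            # e-/ADU, so one LSB = 2 e-

def subs(n):
    """n quantized readings of the same pixel: signal + noise, rounded to ADU."""
    analog = TRUE_E + rng.normal(0.0, READ_NOISE, n)
    return np.round(analog / GAIN) * GAIN   # quantize, convert back to e-

singles = subs(20_000)                                 # individual subs
stacks = subs(16 * 2_000).reshape(2_000, 16).mean(axis=1)  # 16-sub averages

print(singles.std())  # ~3.1 e- (read noise plus quantization error)
print(stacks.std())   # ~0.76 e-: 4x lower, i.e. ~2 extra useful bits
```

The averaged values also land between the 2 e- quantization steps, which is the "expanded data range" point from the previous post.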



#10 motab

motab

    Vostok 1

  • *****
  • Posts: 108
  • Joined: 01 Aug 2018
  • Loc: Copenhagen, Denmark

Posted 19 July 2019 - 09:50 AM

So dynamic range is the ratio or difference (however you want to measure it) between the maximum useful output level and the noise floor, both in RMS. Something like the maximum signal-to-noise ratio. It's only related to full well capacity and read and dark noise. It's an analogue concept, not a digital one, at least if measured before quantization.

 

At read time, after an analogue amplification and offset, the signal is quantized (analogue-to-digital conversion) and spat out as some effectively integer quantity. The quantizer has both a lower and an upper bound and will clip from below and above, and all voltages between lower and upper are partitioned into 2^(bit-depth) equidistant steps. That procedure introduces distortion distinct from the read and dark noise. The size of the step (or the number of levels) is directly related to how much distortion is introduced. All quantizers distort in a way that depends on the number of steps used and the distribution of the input signal. All things being equal, a 16-bit ADC or quantizer will generate less of that distortion per sub than a 12-bit or 10-bit ADC.

 

To the point regarding stacking, if you only took 1-bit subs at unity gain for 60s, each pixel would tell you whether you were above or below (FWC/2) electrons in that period. No matter how many you stacked, the best outcome you could hope for is that you get a perfect estimate of "probability that that number of incident photons in 60s > FWC*0.5/QE". If you stacked 65536 subs, you would get that estimate with a 16-bit resolution but you have lost a lot of information in the process. It's not as clear when looking at less extreme examples but in the limit of a high number of fixed length subs, a 16-bit ADC will generate less distortion than a 12-bit ADC. For noisier cameras or brighter objects, the difference between many subs of one or the other will be less. 
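The size-of-step point above can be made concrete with the standard Δ/√12 model for the RMS quantization error of a well-dithered signal; the 20 ke- well below is a hypothetical value chosen only for illustration:

```python
import math

FWC_E = 20_000   # hypothetical full well, electrons, spanned by the ADC

for bits in (12, 16):
    step = FWC_E / 2**bits            # e- per LSB
    q_rms = step / math.sqrt(12)      # RMS quantization error per sub
    print(f"{bits}-bit: LSB = {step:.2f} e-, quantization RMS = {q_rms:.2f} e-")
# 12-bit: LSB ~4.88 e-, RMS ~1.41 e-; 16-bit: LSB ~0.31 e-, RMS ~0.09 e-
```

At 16 bits the quantization term is negligible next to any realistic read noise; at 12 bits it is comparable to the read noise of a low-noise CMOS sensor, which is why gain choice and stacking matter there.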


  • Hajfimannen likes this

#11 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23369
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 July 2019 - 10:05 AM

To the point regarding stacking, if you only took 1-bit subs at unity gain for 60s, each pixel would tell you whether you were above or below (FWC/2) electrons in that period. No matter how many you stacked, the best outcome you could hope for is that you get a perfect estimate of "probability that that number of incident photons in 60s > FWC*0.5/QE". If you stacked 65536 subs, you would get that estimate with a 16-bit resolution but you have lost a lot of information in the process. It's not as clear when looking at less extreme examples but in the limit of a high number of fixed length subs, a 16-bit ADC will generate less distortion than a 12-bit ADC. For noisier cameras or brighter objects, the difference between many subs of one or the other will be less. 

As I understand it, as long as the noise is larger than half the LSB, you can average out the quantization error as long as you use high precision accumulators. The 1-bit example here would not be an example of such a case, since with 1 bit you could only measure the difference between signal in the lower half vs. upper half of the FWC range, and unless your noise was ~16k ADU (MASSIVE, in other words) you wouldn't really be able to average out the error and converge on a more accurate result.

 

With read noise of say 3.5e-, but an LSB of 4.88e-/ADU (i.e. Panasonic M, Gain 0), the read noise alone would also be insufficient to allow you to effectively average out the error. You either need additional noise (i.e. skyfog & dark current), or you need to use a higher gain to get the quantization error small enough to become smaller than the read noise.
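The dithering effect described above can be sketched numerically. The 4.88 e-/ADU LSB is the Panasonic-M gain-0 figure from the post; the signal level is hypothetical. With noise well below half an LSB the stacked average sticks on a single output code no matter how many subs you take, while with noise on the order of an LSB the quantization error averages out:

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_E = 101.7   # hypothetical signal, deliberately off-center within an LSB
LSB_E = 4.88     # e- per ADU (Panasonic M at gain 0, per the post)

def stacked_mean(noise_e, n=200_000):
    """Average n quantized subs of the same pixel value."""
    subs = TRUE_E + rng.normal(0.0, noise_e, n)
    return np.mean(np.round(subs / LSB_E) * LSB_E)

quiet = stacked_mean(0.5)    # noise << LSB/2: every sub rounds to the same code
noisy = stacked_mean(4.88)   # noise ~1 LSB: error dithers away with stacking

print(quiet)   # stuck near 102.5 e- regardless of sub count
print(noisy)   # converges on ~101.7 e-
```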



#12 Hajfimannen

Hajfimannen

    Explorer 1

  • -----
  • topic starter
  • Posts: 74
  • Joined: 30 Apr 2018
  • Loc: Sweden

Posted 19 July 2019 - 02:41 PM

So dynamic range is the ratio or difference (however you want to measure it) between the maximum useful output level and the noise floor, both in RMS. Something like the maximum signal-to-noise ratio. It's only related to full well capacity and read and dark noise.

How about quantum efficiency? Doesn't that also affect D.R.?

A camera with 80% QE and 5e read noise takes an exposure during which only 10 photons hit the photosite.

That would leave 8 photons, just enough to make it above the noise floor.

If we used a camera with 50% QE, the 5 photons that were registered would be lost in the sea of noise.

Then on another photosite, a bright star would fill the FWC to its brim, to whatever value the camera has.

Or is D.R. calculated for each pixel and not the entire sensor?


Edited by Hajfimannen, 19 July 2019 - 02:44 PM.


#13 ks__observer

ks__observer

    Viking 1

  • *****
  • Posts: 982
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 19 July 2019 - 03:17 PM

Dynamic range is not affected by QE or optical speed.

The photon hits a photosite (no different than a solar panel) and dislodges an electron.

The electrons get stored in a capacitor.

At the end of the exposure the camera reads the electrical voltage and converts that to a pixel value that Photoshop/PixInsight reads.

The capacitor can only hold so much electrical charge.

That is the full well capacity.

It does not matter if one photon hits per minute or 1,000; eventually, if the shutter is left open long enough, the system will saturate.


  • Hajfimannen likes this

#14 motab

motab

    Vostok 1

  • *****
  • Posts: 108
  • Joined: 01 Aug 2018
  • Loc: Copenhagen, Denmark

Posted 19 July 2019 - 03:25 PM

How about quantum efficiency? Doesn't that also affect D.R.?

A camera with 80% QE and 5e read noise takes an exposure during which only 10 photons hit the photosite.

That would leave 8 photons, just enough to make it above the noise floor.

If we used a camera with 50% QE, the 5 photons that were registered would be lost in the sea of noise. Or?

Lower QE at some wavelength doesn't change the FWC, so DR isn't affected. It just means it takes a longer sub to fill the well.

 

In terms of noise drowning and so on, the sources of electrical and thermal noise are mostly assumed to be zero mean while the arrival process of photons is not. In the limit of an average of an infinite number of subs, the noise terms drop out and only the intensity of the light source remains. When you stop at some fixed time, the quality of the estimate is constrained by the signal-to-noise ratio. 



#15 Hajfimannen

Hajfimannen

    Explorer 1

  • -----
  • topic starter
  • Posts: 74
  • Joined: 30 Apr 2018
  • Loc: Sweden

Posted 19 July 2019 - 05:13 PM

It does not matter if one photon hits per minute or 1,000; eventually, if the shutter is left open long enough, the system will saturate.

Thanks, but I must admit I am still confused. 

Let's take a 1-minute exposure with a 16-bit camera with a magical QE of 100%.

During this time it is hit with 65,000 photons, the same number as its FWC. That would mean a full 16-bit dynamic range (not taking read noise into consideration). 

If the camera had a QE of 50%, then only 32,500 photons would be registered and transferred. That would imply a 15-bit dynamic range. Or?

I am sure that I am missing something here and would be happy to be "enlightened". 


Edited by Hajfimannen, 19 July 2019 - 05:14 PM.


#16 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23369
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 July 2019 - 05:47 PM

Thanks, but I must admit I am still confused. 

Let's take a 1-minute exposure with a 16-bit camera with a magical QE of 100%.

During this time it is hit with 65,000 photons, the same number as its FWC. That would mean a full 16-bit dynamic range (not taking read noise into consideration). 

If the camera had a QE of 50%, then only 32,500 photons would be registered and transferred. That would imply a 15-bit dynamic range. Or?

I am sure that I am missing something here and would be happy to be "enlightened". 

You are probably conflating dynamic range with SNR. Dynamic range is agnostic of any amount of signal, it just has to do with the CAPACITY for signal, and the MINIMUM VIABLE signal. Dynamic range is really a hardware trait that tells you, for a given amount of read noise, how many discrete levels of useful information could the camera represent?

 

SNR, on the other hand, is inextricably related to signal. SNR has to do with the ACTUAL signal, and the TOTAL NOISE in that signal. While SNR and DR are similar in that they are both ratios, they are otherwise very different things. If you have a camera with a 50ke- FWC and 5e- read noise, your dynamic range is 50,000/5, or 10,000 discrete steps, 10000:1. It is also 80dB of dynamic range, or 13.33 stops of dynamic range. DR describes the capabilities of the camera. DR is:

 

DRsteps = FWC/RN

DRdb = 20 * log(FWC/RN)

DRstops = 20 * log(FWC/RN) / 6

 

Now, using this hypothetical camera, let's say we expose such that we build up a signal that meets the 10xRN^2 criterion. This means we want a background signal that is at least 10 * 5e-^2, or 250e-. So you expose for a while and build up a signal that meets this criterion, and your SNR comes out to:

 

SNR10x = 250/SQRT(250 + 5^2) = 250/SQRT(275) = 250/16.6 = 15.1:1

 

While this is a good SNR, it is also very different from the 10000:1 ratio we get for dynamic range, even though it is with the same sensor.

 

Also note how bit depth doesn't even come into play here. Bit depth ultimately becomes another noise term, an error that is folded into all the other noise terms. If this error is large, it could be a problem...so, if you have a sensor with 13-14 stops of dynamic range but you use only 10 or 12 bits, then that is probably not optimal. You still have the same hardware dynamic range, but then you kind of stuff that dynamic range into a more limited number of output steps. You lose some of the original fineness of the information. If your dynamic range is more limited, say 12, 11.5, 10.5 stops, then 12 bits should be plenty sufficient to represent the information you are actually capable of using.

 

As for quantum efficiency...that is just another ratio. It is plain and simply the rate at which photons convert into electrons. There isn't much more to it than that. You "see" 50,000 photons, with 50% Q.E. you "get" 25,000 electrons. But that is just the conversion ratio of photons to electrons...it doesn't have anything to do with dynamic range. The only relationship it has with SNR is that with lower Q.E. it takes longer to reach a given SNR.
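The DR and SNR formulas above check out numerically; here they are as a short Python sketch using the same hypothetical 50ke- / 5e- camera:

```python
import math

fwc, rn = 50_000, 5.0                    # e-, the example camera above

dr_steps = fwc / rn                      # 10,000 discrete steps, 10000:1
dr_db = 20 * math.log10(dr_steps)        # 80 dB
dr_stops = dr_db / 6                     # ~13.33 stops (6 dB-per-stop approx.)

sig = 10 * rn**2                         # 10xRN^2 criterion -> 250 e-
snr = sig / math.sqrt(sig + rn**2)       # shot noise + read noise -> ~15.1:1

print(dr_steps, round(dr_db, 1), round(dr_stops, 2), round(snr, 1))
```

One small note: the /6 divisor approximates the exact 20·log10(2) ≈ 6.02 dB per stop; the exact figure, log2(10000), is about 13.29 stops rather than 13.33.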


Edited by Jon Rista, 19 July 2019 - 06:02 PM.

  • Hajfimannen likes this

#17 ks__observer

ks__observer

    Viking 1

  • *****
  • Posts: 982
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 19 July 2019 - 05:56 PM

I think Hajf might be asking more about QE:

https://andor.oxinst...l-response-(qe)

 

Lower QE means it takes more time to gather signal.



#18 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23369
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 July 2019 - 06:01 PM

I think Hajf might be asking more about QE:

https://andor.oxinst...l-response-(qe)

 

Lower QE means it takes more time to gather signal.

I was addressing this:

 

During this time it is hit with 65 000 photons. The same number as its FWC. That would mean a full 16 bit dynamic range (not taking read noise into consideration).

  • ks__observer likes this

#19 Hajfimannen

Hajfimannen

    Explorer 1

  • -----
  • topic starter
  • Posts: 74
  • Joined: 30 Apr 2018
  • Loc: Sweden

Posted 20 July 2019 - 04:27 AM

You are probably conflating dynamic range with SNR. Dynamic range is agnostic of any amount of signal, it just has to do with the CAPACITY for signal, and the MINIMUM VIABLE signal. Dynamic range is really a hardware trait that tells you, for a given amount of read noise, how many discrete levels of useful information could the camera represent?

 

SNR, on the other hand, is inextricably related to signal. SNR has to do with the ACTUAL signal, and the TOTAL NOISE in that signal. While SNR and DR are similar in that they are both ratios, they are otherwise very different things. If you have a camera with a 50ke- FWC and 5e- read noise, your dynamic range is 50,000/5, or 10,000 discrete steps, 10000:1. It is also 80dB of dynamic range, or 13.33 stops of dynamic range. DR describes the capabilities of the camera. DR is:

 

DRsteps = FWC/RN

DRdb = 20 * log(FWC/RN)

DRstops = 20 * log(FWC/RN) / 6

 

Now, using this hypothetical camera, lets say we expose such that we build up a signal that meets the 10xRN^2 criteria. This means we want a background signal that is at least 10 * 5e-^2, or 250e-. So you expose for a while and build up a signal that meets this criteria, and your SNR comes out to:

 

SNR10x = 250/SQRT(250 + 5^2) = 250/SQRT(275) = 250/16.6 = 15.1:1

 

While this is a good SNR, it is also very different from the 10000:1 ratio we get for dynamic range, even though it is with the same sensor.

 

Also note how bit depth doesn't even come into play here. Bit depth ultimately becomes another noise term, an error that is folded into all the other noise terms. If this error is large, it could be a problem...so, if you have a sensor with 13-14 stops of dynamic range but you use only 10 or 12 bits, then that is probably not optimal. You still have the same hardware dynamic range, but then you kind of stuff that dynamic range into a more limited number of output steps. You lose some of the original fineness of the information. If your dynamic range is more limited, say 12, 11.5, 10.5 stops, then 12 bits should be plenty sufficient to represent the information you are actually capable of using.

 

As for quantum efficiency...that is just another ratio. It is plain and simply the rate at which photons convert into electrons. There isn't much more to it than that. You "see" 50,000 photons, with 50% Q.E. you "get" 25,000 electrons. But that is just the conversion ratio of photons to electrons...it doesn't have anything to do with dynamic range. The only relationship it has with SNR is that with lower Q.E. it takes longer to reach a given SNR.

Thanks for taking the time for the re-run :-). I now have a better understanding of the different ingredients - what the theoretical capabilities are and what is actually taking place.



