CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Bit Depth, Recovery, Cameras, and You

26 replies to this topic

#1 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 15 April 2018 - 08:16 PM

Hello all,

 

There has been some discussion on the forum recently about stacking data to recover bit depth. Jon posted a wonderful set of math to help folks navigate this topic and I really appreciate his efforts to do so. As always, Jon seems to really help this community (and myself) get our minds wrapped around this crazy hobby of ours.

 

For those not familiar with the post I am talking about, here it is:

 

https://www.cloudyni...-etc/?p=8510662

 

Never mind the bickering session we had during the latter portions of that thread; I think that was just a product of a few dudes who are very into this hobby, pushing for the best from our fellow comrades.

 

With that said I produced a graph to help people see how this all comes together.

 

Bit Recovery Graph.JPG

 

This graph covers three cameras; I walked through the math Jon posted and plotted it out. The ASI183 is represented twice, at two different gain levels. What is missing from this data are other cameras that people own. I would love for folks to send me data directly to add to this graph (regardless of pixel size, noise, etc.). The only thing I ask is that if you want to contribute your camera, please abide by the following:

 

- All data plotted was taken with the respective cameras at -20C.

- All data was measured via the PixInsight BasicCCDParameters script. I do not want a conversation about its shortcomings; if it is errant, then all of the data suffers from the same error, and thus we are good.

- Do not send me a screenshot and expect me to do all of the math for you. If you want your camera on this list, do it yourself, but do not round the values before three places past the decimal. For example, if the value is 1.2581, you can use 1.258.

- Be clear about your gain and offset values (in terms of electrons), and be clear about exactly which camera you are using.

- Do not submit data to me from uncooled cameras. I have zero interest in adding that to this display.

- Use the ranges I gave in terms of total exposures (the Y axis). 

- PM me, do not post in the thread with a bunch of numbers.

 

Thanks and I hope this, over time, will be very useful to people.

 

EDIT 4/16: Added the QHY16200A and FLI Proline 16803.


Edited by rockstarbill, 16 April 2018 - 06:11 PM.

  • Jon Rista, bmhjr, tkottary and 1 other like this

#2 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 16 April 2018 - 11:07 PM

The chart has been updated to include more cameras.

 

Bit Recovery Graph.JPG

 

The FLI 16803 data looks really good, but it was based on a study from the University of Hawaii. If someone with a legit FLI 16803 can send me output from BasicCCDParameters, I would gladly do the math and add it to the mix.

 

I am working on a secondary chart that maps out dynamic range, and will start to show these together. 

 

 



#3 sharkmelley

sharkmelley

    Vanguard

  • *****
  • Posts: 2324
  • Joined: 19 Feb 2013

Posted 17 April 2018 - 12:22 AM

Can you explain the meaning of "Effective Bit Depth" without me trawling through that humongous thread?

 

If I calculate it for a single sub, the "Effective Bit Depth" would appear to vary according to how far across the histogram peak sits because that in turn affects sky noise.

 

So Effective Bit Depth appears to be a quantity that is not fixed for a particular camera at a particular gain setting but varies according to shooting strategy.  If so, what is the point in making graphs of it?

 

Or am I misinterpreting this, which is always possible with me? ;)

 

Mark


Edited by sharkmelley, 17 April 2018 - 12:26 AM.

  • rockstarbill likes this

#4 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 April 2018 - 12:23 AM

Can you explain the meaning of "Effective Bit Depth" without me trawling through that humongous thread?

 

If I calculate it for a single sub, the "Effective Bit Depth" would appear to vary according to how far across the histogram peak sits because that in turn affects sky noise.

 

So Effective Bit Depth appears to be a quantity that is not fixed for a particular camera at a particular gain setting but varies according to shooting strategy.

 

Or am I misinterpreting this?

 

Mark

I included a direct link here to the post with the underlying logic. It's not mine; it is Jon's.

 

https://www.cloudyni...26#entry8510662

 

I just calculated and made a graph out of it.

 

It makes two big assumptions:

 

1. The exposure duration of each sub is 600 seconds.

2. The sky signal is modeled as 10×RN² for the purpose of determining the sky noise.

 

With those terms held constant, and the properties of the cameras themselves factored in, the graph tells you, as you stack more images, where the effective bit depth of the resulting stack will end up.
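For anyone who wants to check the numbers before sending data, the calculation as I read it can be sketched in a few lines of Python. The camera values below are placeholders for illustration, not measured figures from any of the plotted cameras:

```python
import math

def effective_bits(fwc_e, read_noise_e, dark_e_per_s, n_subs, exposure_s=600):
    """Effective bit depth of an average-combined stack under the stated model.

    Sky signal is modeled as 10 * RN^2 (the swamping criterion), so sky noise
    is its square root. Dark noise is sqrt(dark current * exposure time).
    """
    sky_noise = math.sqrt(10 * read_noise_e ** 2)
    dark_noise = math.sqrt(dark_e_per_s * exposure_s)
    per_sub_noise = math.sqrt(read_noise_e ** 2 + sky_noise ** 2 + dark_noise ** 2)
    stack_noise = per_sub_noise / math.sqrt(n_subs)  # averaging N subs
    return math.log2(fwc_e / stack_noise)

# Placeholder camera: 15000 e- full well, 1.6 e- read noise, 0.002 e-/s dark.
for n in (1, 4, 16, 64):
    print(n, round(effective_bits(15000, 1.6, 0.002, n), 2))
```

Because stack noise falls as the square root of the sub count, every 4× increase in subs recovers exactly one additional effective bit under this model.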


Edited by rockstarbill, 17 April 2018 - 12:26 AM.


#5 sharkmelley

sharkmelley

    Vanguard

  • *****
  • Posts: 2324
  • Joined: 19 Feb 2013

Posted 17 April 2018 - 12:35 AM

But a 600sec exposure could have a huge range of possible sky noise readings which would in turn affect the calculation of "Effective Bit Depth".

 

I'm not really understanding this, and it appears you don't either. ;)

 

Mark


Edited by sharkmelley, 17 April 2018 - 12:37 AM.


#6 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 April 2018 - 12:46 AM

But a 600sec exposure could have a huge range of possible sky noise readings which would in turn affect the calculation of "Effective Bit Depth".

 

I'm not really understanding this and it appears you don't either

Yes, it could, but you have to hold those terms constant to build a model that is agnostic of a particular region's sky, and to hold all of the cameras constant in terms of their thermal noise contributions. If the sky itself is constant and modeled, you can take the square root of the modeled sky, and that will give you the sky noise level. If that model is 10*ReadNoise^2, you have a workable model that can consider all of the cameras equally. Similarly, the 600 sec exposure constant allows the thermal noise (i.e. 0.002 e-/sec for a QSI6120 or 0.02 e-/sec for a QHY16200A) to be considered appropriately.
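As a quick worked example of that sky model (the read noise figure here is illustrative, not taken from any of the measured cameras):

```python
import math

read_noise = 1.6                    # e-, illustrative value only
sky_signal = 10 * read_noise ** 2   # modeled sky: 10 * RN^2 = 25.6 e-
sky_noise = math.sqrt(sky_signal)   # shot noise of the modeled sky, ~5.06 e-
print(round(sky_signal, 1), round(sky_noise, 2))
```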

 

I understand enough of this to see that there was some use in building a visual to show some comparisons. A factor in learning things is to take something, work with it, go back to the source data, and continue to refine. That is what I am doing here, and I just so happen to be sharing the journey with people. 


Edited by rockstarbill, 17 April 2018 - 01:21 AM.


#7 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23721
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 April 2018 - 02:20 AM

Can you explain the meaning of "Effective Bit Depth" without me trawling through that humongous thread?

 

If I calculate it for a single sub, the "Effective Bit Depth" would appear to vary according to how far across the histogram peak sits because that in turn affects sky noise.

 

So Effective Bit Depth appears to be a quantity that is not fixed for a particular camera at a particular gain setting but varies according to shooting strategy.  If so, what is the point in making graphs of it?

 

Or am I misinterpreting this, which is always possible with me wink.gif

 

Mark

I call it real world bit depth or perhaps more appropriately real world precision. It's just a gauge of how many discrete steps of information you actually resolved in an image. FWC over total noise in the background sky, differing from hardware dynamic range which is FWC over only read noise. I've never felt DR was an appropriate term to apply to an image, since there are additional noise terms on top of read noise that affect the smallest discrete step of information you could discern. It's a rough gauge of precision in a sub or a stack...but, it is not a particularly important measure. SNR is still king. Like SNR, though, if you measured different regions of an image, the result would be different. Hence the reason I only account for the median background sky signal. 
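The distinction Jon draws can be sketched side by side. The numbers are placeholders, not measured data; the only point is that the same FWC is divided by read noise alone for hardware DR, and by the total background noise for the real-world figure:

```python
import math

fwc = 15000.0        # full well capacity, e- (placeholder)
read_noise = 1.6     # e- (placeholder)
sky_noise = 5.1      # measured background sky noise in the sub, e- (placeholder)

total_noise = math.sqrt(read_noise ** 2 + sky_noise ** 2)

hardware_dr = math.log2(fwc / read_noise)   # FWC over read noise only
effective = math.log2(fwc / total_noise)    # FWC over total background noise

print(round(hardware_dr, 2), round(effective, 2))
```

The effective figure is always lower than hardware DR whenever any noise beyond read noise is present, which is exactly why the two should not be conflated.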



#8 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 8823
  • Joined: 12 Oct 2007

Posted 17 April 2018 - 03:18 AM

Hi-

 

I haven't looked at the details behind this stuff but it all seems ok to me from what I saw.

 

The one issue I have is the way the graphs are shown - with the axis starting at 8.  This exaggerates the difference between the cameras, and in a situation like this it would be better to start the axis with zero on the left.

 

At the same time - both of those representations are inherently logarithmic - since the actual count of possible pixel values goes as 2^BitDepth.  So you could instead plot in a linear way with 0 on the left - and a bit depth of 8 would plot at 256, while a bit depth of 10 would be at 1024.

 

Both the log version and the linear version would allow a fair comparison - but in both cases you would want 0 on the left.
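Frank's linear alternative is simply to plot the count of discrete levels rather than the bit depth itself:

```python
# Linear representation: the number of discrete levels is 2 ** bit_depth,
# plotted on an axis that starts at zero.
levels = {bits: 2 ** bits for bits in (8, 10, 12, 14)}
print(levels)  # 8 bits -> 256 levels, 10 -> 1024, 12 -> 4096, 14 -> 16384
```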

 

The first graph makes the 183 look really bad - but it's actually not a big jump to the next one.

 

Frank


  • rockstarbill and Jon Rista like this

#9 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 April 2018 - 03:23 AM

Hi-

 

I haven't looked at the details behind this stuff but it all seems ok to me from what I saw.

 

The one issue I have is the way the graphs are shown - with the axis starting at 8.  This exaggerates the difference between the cameras, and in a situation like this it would be better to start the axis with zero on the left.

 

At the same time - both of those representations are inherently logarithmic - since the actual count of possible pixel values goes as 2^BitDepth.  So you could instead plot in a linear way with 0 on the left - and a bit depth of 8 would plot at 256, while a bit depth of 10 would be at 1024.

 

Both the log version and the linear version would allow a fair comparison - but in both cases you would want 0 on the left.

 

The first graph makes the 183 look really bad - but it's actually not a big jump to the next one.

 

Frank

Great feedback, Frank. I updated the graph.

Attached Thumbnails

  • Bit Recovery Graph.JPG

  • bmhjr likes this

#10 sharkmelley

sharkmelley

    Vanguard

  • *****
  • Posts: 2324
  • Joined: 19 Feb 2013

Posted 17 April 2018 - 03:25 PM

I call it real world bit depth or perhaps more appropriately real world precision. It's just a gauge of how many discrete steps of information you actually resolved in an image. FWC over total noise in the background sky, differing from hardware dynamic range which is FWC over only read noise. I've never felt DR was an appropriate term to apply to an image, since there are additional noise terms on top of read noise that affect the smallest discrete step of information you could discern. It's a rough gauge of precision in a sub or a stack...but, it is not a particularly important measure. SNR is still king. Like SNR, though, if you measured different regions of an image, the result would be different. Hence the reason I only account for the median background sky signal. 

Thanks for the explanation.  I read through it again and I can see what you're doing now.  It's effectively a modified dynamic range calculation - dynamic range calculated in the case where the read noise is swamped by sky glow.

 

Mark



#11 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23721
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 April 2018 - 03:29 PM

Thanks for the explanation.  I read through it again and I can see what you're doing now.  It's effectively a modified dynamic range calculation - dynamic range calculated in the case where the read noise is swamped by sky glow.

 

Mark

Yup!



#12 ChrisWhite

ChrisWhite

    Aurora

  • *****
  • Posts: 4710
  • Joined: 28 Feb 2015
  • Loc: Colchester, VT

Posted 17 April 2018 - 05:10 PM

Bill, I think that the most recent graph could be made even better.  Stand them on end as if they are skyscrapers, and graph based on the number of discrete levels each bit depth represents.  The new graph fairly shows the bit depth comparison, but it also minimizes the impact that a single bit of recovery has on the number of discrete levels available.

 

Because of the logarithmic impact of each additional bit recovered, a linear comparison does not communicate the story here very well.



#13 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 April 2018 - 05:44 PM

I'll try to get to that this week; it's tax night and I have a few other lingering tasks to get done. :)



#14 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 8823
  • Joined: 12 Oct 2007

Posted 17 April 2018 - 05:55 PM

Bill, I think that the most recent graph could be made even better. Stand them on end as if they are skyscrapers, and graph based on the number of discrete levels each bit depth represents. The new graph fairly shows the bit depth comparison, but it also minimizes the impact that a single bit of recovery has on the number of discrete levels available.

Because of the logarithmic impact of each additional bit recovered, a linear comparison does not communicate the story here very well.


That’s why I suggested doing a linear version, but still with a 0 axis origin. But ultimately the images will be nonlinearly stretched and reduced to 8 bits per channel, so it’s hard to interpret the actual benefit of bit depth anyway.

Frank
  • ChrisWhite likes this

#15 ChrisWhite

ChrisWhite

    Aurora

  • *****
  • Posts: 4710
  • Joined: 28 Feb 2015
  • Loc: Colchester, VT

Posted 17 April 2018 - 06:30 PM

This is the kind of stuff that happens when all we have are cloudy nights...
  • Jon Rista likes this

#16 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 April 2018 - 06:54 PM

Totally

#17 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23721
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 April 2018 - 08:20 PM

This is the kind of stuff that happens when all we have are cloudy nights...

For months on end...


  • WesC likes this

#18 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23721
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 April 2018 - 08:20 PM

That’s why I suggested doing a linear version, but still with a 0 axis origin. But ultimately the images will be nonlinearly stretched and reduced to 8 bits per channel, so it’s hard to interpret the actual benefit of bit depth anyway.

Frank

Eventually, yes. But with PI at least, you can do quite a lot of processing, most of it even, on the linear data. 



#19 rockstarbill

rockstarbill

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6316
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 18 April 2018 - 08:13 PM

Anyhow, now that some of the hand waving and sizing up of one's own has been done...

 

I updated the format of the graph. This is the last time I am doing this; what is more important here is the data itself. At this point it's more likely that this graph will just stay stagnant.

 

 

Attached Thumbnails

  • Bit Recovery Graph.JPG


#20 ChrisWhite

ChrisWhite

    Aurora

  • *****
  • Posts: 4710
  • Joined: 28 Feb 2015
  • Loc: Colchester, VT

Posted 18 April 2018 - 09:07 PM

I was not thinking you would just rotate it, but rather graph the discrete steps... more telling than bit depth, IMO.

#21 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23721
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 18 April 2018 - 09:54 PM

I was not thinking you would just rotate it, but rather graph the discrete steps... more telling than bit depth, IMO.

Keep in mind, it is not literal bit depth. It is a representation of the discrete number of steps of information that can be represented in an image. We can call it stops, or bits, but it isn't exactly either, per se. You can store the information in any kind of container, and if the container has a lower precision than the information, you'll lose something. If you store the data in a 32-bit float file, then the container has little impact on the precision of the data itself, and can represent the noise at a potentially much higher level of precision than the number of discrete steps of useful information.



#22 sharkmelley

sharkmelley

    Vanguard

  • *****
  • Posts: 2324
  • Joined: 19 Feb 2013

Posted 19 April 2018 - 12:33 AM

Keep in mind, it is not literal bit depth.

Exactly.  It is something very different.  I much prefer the standard measure of dynamic range (saturation level divided by read noise) because it is something that everyone understands and it is directly comparable to the DSLR world where so-called ISO-less cameras can achieve very high dynamic range for short exposures at low ISO.

 

I can see where you are going with your "effective bit depth" measure, but it only really applies to long-exposure astrophotography, where read noise is being deliberately swamped and hence dynamic range is being deliberately forfeited.

 

Mark



#23 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23721
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 April 2018 - 01:05 AM

Exactly.  It is something very different.  I much prefer the standard measure of dynamic range (saturation level divided by read noise) because it is something that everyone understands and it is directly comparable to the DSLR world where so-called ISO-less cameras can achieve very high dynamic range for short exposures at low ISO.
 
I can see where you are going with your "effective bit depth" measure, but it only really applies to long-exposure astrophotography, where read noise is being deliberately swamped and hence dynamic range is being deliberately forfeited.
 
Mark


Well, it is not just about swamping read noise...the total noise is taken into account, whether you swamp or not.

The thing about normal DR, saturation/read noise, is it only works for a single sub. It doesn't tell you how much precision or "dynamic range"...or stops...or bit depth...you have in a stack, with additional offsets eating up some of the original DR, and additional sources of noise, which is what I was trying to address.

 

Anyway, it's just a gauge of how much effective precision you have in the data. Practically speaking, I don't think anyone can see or for that matter generally use (outside of some extreme faint object hunting) the difference beyond 14 effective bits, maybe even 12, and the original idea (just for myself) was just to see what it would take to get over that threshold. It's not a critically important factor...
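That threshold question inverts easily: since stack noise falls as the square root of the sub count, you can solve for the number of subs needed to reach a target effective bit depth. A small sketch, with placeholder camera numbers rather than measured ones:

```python
import math

def subs_for_target_bits(target_bits, fwc_e, per_sub_noise_e):
    """Subs needed (average combine) so that log2(FWC / stack noise) >= target.

    From stack_noise = per_sub / sqrt(N), solving gives
    N = (per_sub * 2**target / FWC) ** 2, rounded up.
    """
    n = (per_sub_noise_e * 2 ** target_bits / fwc_e) ** 2
    return math.ceil(n)

# Placeholder camera: 15000 e- full well, ~5.4 e- total per-sub background noise.
print(subs_for_target_bits(14, 15000, 5.4))
```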


Edited by Jon Rista, 19 April 2018 - 01:11 AM.

  • sharkmelley likes this

#24 bmhjr

bmhjr

    Apollo

  • ****-
  • Posts: 1254
  • Joined: 02 Oct 2015
  • Loc: Colorado

Posted 19 April 2018 - 08:34 AM

I think this is very useful information. Is there any way to include an integration time factor? As a mobile imager, an important factor for me to consider when choosing equipment is what type of quality can I get in a fixed amount of time.

If a fixed total integration time were chosen and the subexposure length chosen to achieve the necessary sky background level for each camera, I assume the exposure lengths would vary. So in a given total integration time you will have stacks with different numbers of exposures for different cameras. The resulting stack bit depth from a total integration standpoint may show that the gap between the cameras may narrow. Maybe not?

Edited by bmhjr, 19 April 2018 - 08:38 AM.


#25 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23721
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 April 2018 - 11:10 AM

I think this is very useful information. Is there any way to include an integration time factor? As a mobile imager, an important factor for me to consider when choosing equipment is what type of quality can I get in a fixed amount of time.

If a fixed total integration time were chosen and the subexposure length chosen to achieve the necessary sky background level for each camera, I assume the exposure lengths would vary. So in a given total integration time you will have stacks with different numbers of exposures for different cameras. The resulting stack bit depth from a total integration standpoint may show that the gap between the cameras may narrow. Maybe not?

Check the original post Bill linked. I have all the math in there. The ultimate goal is to evaluate the precision in a stack, not individual subs, so everything you need should be in the post.


  • bmhjr likes this



