Is DSLR Unity Gain Useful?

This topic has been archived. This means that you cannot reply to this topic.
67 replies to this topic

#1 sharkmelley

sharkmelley

    Aurora

  • *****
  • topic starter
  • Posts: 4,826
  • Joined: 19 Feb 2013

Posted 24 May 2015 - 03:04 AM

Is the usefulness of DSLR unity gain a myth?

 

The way I see it (but I'm happy to be corrected) is that for deep sky imaging, the ISO that gives unity gain is an optimal starting point.

 

Increasing ISO beyond the point of unity gain means that dynamic range is being sacrificed with no extra accuracy in "counting electrons".  However, increasing the ISO beyond the point of unity gain is worthwhile if it gives a meaningful reduction in read noise - this effect varies from camera to camera.

 

Decreasing ISO below the point of unity gain means that ever larger numbers of photons are required to record one digital unit, which introduces an effect known as quantisation error and is best avoided.

 

I'm not aware of any "myth" so I'm interested to hear others' thoughts.

 

By the way, there is an easy way to determine unity ISO for your camera, assuming it appears in the list at http://www.sensorgen.info/ and assuming the accuracy of those figures.

 

For a 12 bit camera the unity gain occurs at the ISO whose saturation is nearest 4096

For a 14 bit camera the unity gain occurs at the ISO whose saturation is nearest 16384

For a 16 bit camera (do they exist?)  the unity gain occurs at the ISO whose saturation is nearest 65536
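
As a rough illustration, a minimal Python sketch of that lookup, using made-up sensorgen-style saturation figures (substitute the real values listed for your camera):

# Unity gain is approximately where the full-well saturation (electrons) equals the
# full digital scale, i.e. where gain = saturation / 2**bits is nearest 1 e/ADU.
saturation_by_iso = {100: 42000, 200: 21000, 400: 10500, 800: 5300, 1600: 2650}  # placeholders
bits = 12
target = 2 ** bits
unity_iso = min(saturation_by_iso, key=lambda iso: abs(saturation_by_iso[iso] - target))
print(unity_iso, saturation_by_iso[unity_iso] / target)   # ISO and its gain in e/ADU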

 

Mark


Edited by sharkmelley, 24 May 2015 - 06:02 AM.


#2 whwang

whwang

    Soyuz

  • *****
  • Posts: 3,705
  • Joined: 20 Mar 2013

Posted 24 May 2015 - 07:42 AM

Hi Mark,

 

Conceptually I think it is good to know what unity gain is and what it intends to imply.  However, "intends to imply" doesn't mean it actually implies.  The biggest reason is readout noise.  The idea of using unity gain is to limit the quantization error to less than 1 electron.  However, when there is always 3 to 4 electrons of readout noise at ISO < 1000, there is little point caring too much about a quantization error of less than 1 electron.  Even a 2-electron quantization error is fine.  Now, with the presence of photon noise, the total uncertainty would be 4 to 6 electrons at least, so even a 3-electron quantization noise is acceptable.  This means that even using an ISO that's 3x or even 4x lower than the unity gain ISO is not likely to hurt us.
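
A back-of-the-envelope check of that argument (illustrative numbers only, with quantization noise taken as g/sqrt(12) and added in quadrature):

import math

read_noise = 3.5                      # electrons, typical readout noise at ISO < 1000
for gain in (1, 2, 3, 4):             # e/ADU: unity gain and progressively lower ISOs
    q_noise = gain / math.sqrt(12)    # quantization noise in electrons
    total = math.hypot(read_noise, q_noise)
    print(f"gain {gain} e/ADU: quantization {q_noise:.2f} e, total {total:.2f} e")

Even at 4 e/ADU the combined figure only rises from 3.5 e to about 3.7 e.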

 

Cheers,

Wei-Hao



#3 jhayes_tucson

jhayes_tucson

    Fly Me to the Moon

  • *****
  • Posts: 7,388
  • Joined: 26 Aug 2012

Posted 24 May 2015 - 02:05 PM

...

For a 12 bit camera the unity gain occurs at the ISO whose saturation is nearest 4096

For a 14 bit camera the unity gain occurs at the ISO whose saturation is nearest 16384

For a 16 bit camera (do they exist?)  the unity gain occurs at the ISO whose saturation is nearest 65536

 

Mark

 

Mark,

This is not my understanding of unity gain.  These values show the gain required to simply fill the ADU range for an object at maximum exposure.  This guarantees that the full well depth spans the range of the ADU.  Unity gain is the gain required to trigger one LSB with one photoelectron.  The basic idea is that you don't want to toss out individual photoelectron events.  Wei-Hao is totally correct that at that level you will also see stray charge from read noise and other sources that may erode the theoretical benefits of using unity gain.

John



#4 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 10,792
  • Joined: 12 Oct 2007

Posted 24 May 2015 - 04:33 PM

 

I'm not aware of any "myth" so I'm interested to hear others' thoughts.

 

By the way, there is an easy way to determine unity ISO for your camera, assuming it appears in the list at http://www.sensorgen.info/ and assuming the accuracy of those figures.

 

 

 

The web page you point to - itself points to this web site: http://www.clarkvisi....html#full_well  where he describes "unity gain" as a flawed concept - and he describes some of the other factors in dealing with DSLRs and ISOs that affect performance.

 

An important point I see overlooked in all this is that even at 1 e/adu, the actual noise represented by digitization is not 1e - it is 1/sqrt(12) e or about 0.29e - when matched to a Gaussian.  You would need about 3.5 e/adu to have digitization noise of 1e - and even then it would be dominated by read noise.
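
For reference, that 0.29 e figure is just 1/sqrt(12); a two-line check of both numbers under the usual uniform-error model:

import math
print(1 / math.sqrt(12))   # ~0.29 e of quantization noise at 1 e/ADU
print(math.sqrt(12))       # ~3.46 e/ADU before quantization noise reaches 1 e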

 

I view dslr's as black boxes that may not behave as expected - and a good example is the point made by Clark that the pattern noise behaves somewhat unpredictably with ISO - and may be cleaner in steps of 2x.  I don't know if that's true - but it is a reminder that these things are devices made by humans, with firmware and circuitry for specific tasks - and they aren't guaranteed to behave according to simple theory.

 

CCD's often don't have adjustable gain and the gain is set to a value that roughly saturates in the full bit range of the device - which is usually 16-bits.  There is no attempt to be 1 e/adu and it could be less or more depending on the full well depth.

 

For a dslr everything is a compromise - but there is a clear win if the read noise goes down with higher ISO.  That is a big motivation to use high ISO - independent of the reduction of digitization noise, which is quickly negligible.  But it may not be much of a win if you have a bright sky background (or fast optics - etc).

 

So the basic trade offs apply: If you have a faint scene and you don't care about saturating stars and you have a dark sky - use high ISO to reduce read noise.  If you have a bright scene and/or sky fog - digitization noise and read noise are also negligible so you mainly want maximum dynamic range without saturation - so use lower ISO.

 

And for the particular camera - avoid ISO values that for some reason have more pattern noise than others.

 

Frank



#5 sharkmelley

sharkmelley

    Aurora

  • *****
  • topic starter
  • Posts: 4,826
  • Joined: 19 Feb 2013

Posted 25 May 2015 - 06:45 AM

 

...

For a 12 bit camera the unity gain occurs at the ISO whose saturation is nearest 4096

For a 14 bit camera the unity gain occurs at the ISO whose saturation is nearest 16384

For a 16 bit camera (do they exist?)  the unity gain occurs at the ISO whose saturation is nearest 65536

 

Mark

 

Mark,

This is not my understanding of unity gain.  These values show the gain required to simply fill the ADU range for an object at maximum exposure.  This guarantees that the full well depth spans the range of the ADU.  Unity gain is the gain required to trigger one LSB with one photoelectron.  The basic idea is that you don't want to toss out individual photoelectron events.  Wei-Hao is totally correct that at that level you will also see stray charge from read noise and other sources that may erode the theoretical benefits of using unity gain.

John

 

 

I think I haven't explained myself well :(

 

Take a camera on the sensorgen site, e.g. the Canon 350D, with which I'm very familiar:

http://www.sensorgen...onEOS-350D.html

 

This is a 12 bit camera, so look at the table of ISO against saturation and you'll see that ISO 800 is the one that gives saturation nearest 4096 and so is the ISO closest to unity gain.

 

Or to put it another way, at ISO 800 a digital value of 4096 is equivalent to 5283 electrons.  So you can see it is getting close to unity gain, i.e. it triggers an LSB with one photoelectron (more or less).
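
Spelling out that arithmetic with the sensorgen figure quoted above:

full_scale_adu = 2 ** 12              # 4096 for a 12-bit camera
electrons_at_full_scale = 5283        # sensorgen saturation figure for the 350D at ISO 800
print(electrons_at_full_scale / full_scale_adu)   # ~1.29 e/ADU, i.e. close to unity gain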

 

Mark



#6 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 10,792
  • Joined: 12 Oct 2007

Posted 25 May 2015 - 08:03 AM

You could have a 12-bit camera with "unity gain" - which you appear to agree means a gain value of 1 e/adu - but with a full well capacity of 40,000 e - which means the well would never saturate within those 12 bits and the response would be linear throughout.  "Unity gain" normally means one thing - that the gain is 1 e/adu.  It says nothing about the full well capacity or how nicely it fits in the given bit count.  It could saturate under or over the given bit count.

 

You seem to be asking if it is desirable to have the full well fit into the given bit count of the camera - and that does make sense for maximizing dynamic range - but it is independent of "unity gain."  And in that situation, you may have increased the read noise by making the gain value larger in e/adu - so it has a disadvantage.

 

If you read my prior note, my main points are that there is nothing special about 1 e/adu at all - and there are advantages and disadvantages to raising or lowering the gain value to avoid clipping bright features.  Many factors are at play with a dslr and a given imaging goal - and one thing that has little intrinsic value is a gain value near 1 e/adu.

 

Frank



#7 sharkmelley

sharkmelley

    Aurora

  • *****
  • topic starter
  • Posts: 4,826
  • Joined: 19 Feb 2013

Posted 11 August 2015 - 04:37 PM

Hi Mark,

 

Conceptually I think it is good to know what unity gain is and what it intends to imply.  However, "intends to imply" doesn't mean it actually implies.  The biggest reason is readout noise.  The idea of using unity gain is to limit the quantization error to less than 1 electron.  However, when there is always 3 to 4 electrons of readout noise at ISO < 1000, there is little point caring too much about a quantization error of less than 1 electron.  Even a 2-electron quantization error is fine.  Now, with the presence of photon noise, the total uncertainty would be 4 to 6 electrons at least, so even a 3-electron quantization noise is acceptable.  This means that even using an ISO that's 3x or even 4x lower than the unity gain ISO is not likely to hurt us.

 

Cheers,

Wei-Hao

 

Coming back to this discussion, I realise that you are making a very important point with interesting implications.  I've done some statistical experiments which show that where read noise is the dominant noise source (e.g. short exposures), the quantization can be allowed to approach the read noise with no noticeable effect, e.g. if read noise is say 4e then an ISO 2-3x lower than unity gain is absolutely fine.  If the background light pollution glow is the main source of noise then even lower ISOs are quite safe, i.e. the extra quantization will not affect the signal-to-noise ratio in the final stacked image.
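
A toy model of that kind of statistical experiment can be sketched in a few lines of Python (a simplified simulation with made-up numbers: Gaussian read noise plus quantization, nothing else):

import numpy as np

rng = np.random.default_rng(0)
read_noise = 4.0                      # electrons per sub
signal = 20.0                         # electrons from a faint target
n_subs, n_pix = 100, 100_000

for gain in (1.0, 2.0, 4.0):          # e/ADU: unity gain, then 2x and 4x lower ISO
    electrons = signal + rng.normal(0, read_noise, (n_subs, n_pix))
    adu = np.round(electrons / gain)  # quantize each sub to whole ADU
    stacked = (adu * gain).mean(axis=0)
    print(f"gain {gain}: stacked noise {stacked.std():.3f} e "
          f"(read noise alone predicts {read_noise / np.sqrt(n_subs):.3f} e)")

With these numbers the stacked noise barely moves until the quantization step approaches the read noise.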

 

The implication is that for long exposure imaging, lower ISOs can be used than are generally recommended, and this usually gives the happy side effect of increasing the dynamic range, which means less saturation of bright stars.

 

Mark



#8 Herra Kuulapaa

Herra Kuulapaa

    Viking 1

  • -----
  • Posts: 514
  • Joined: 10 Dec 2014

Posted 12 August 2015 - 02:22 AM

Hi,

 

I've been a little bit sceptical about using the unity gain ISO with my D600, mainly because I'm just used to thinking that astrophotography should be done at ISO 800 and above ;)

 

Mark's tool reports the following when the camera is used in a cool environment:

ISO 500 / 10min darks

Gain per channel(e/ADU):   [ 0.978, 0.979, 0.975, 0.975 ]
ISO for unit gain:         [ 489, 489, 487, 488 ]

 

Noise estimates for 600sec exposure
Read Noise(e):             [ 2.91, 2.9, 2.92, 2.89 ]
Thermal Noise(e):          [ 2.6, 3.63, 3.37, 3.51 ]

 

So I did a proper astrophotography test with an Ha filter, ISO 500 and true dark current mode, which resulted in visible pattern noise in certain areas; the master bias cleaned that up fine though, and the unstacked source data looked very good.  All stars were well within the ADU range.  But I still have doubts.  What do you think?

 

EDIT: Moved link from Luis's thread to here in order to avoid spamming links everywhere..

http://www.cloudynig...me-test-images/


Edited by Herra Kuulapaa, 12 August 2015 - 02:39 AM.


#9 whwang

whwang

    Soyuz

  • *****
  • Posts: 3,705
  • Joined: 20 Mar 2013

Posted 12 August 2015 - 01:55 PM

Hi,

 

I've been a little bit sceptical about using the unity gain ISO with my D600, mainly because I'm just used to thinking that astrophotography should be done at ISO 800 and above ;)

 

 

 

 

Taking astrophotos at high ISO is a habit from the film era.  In the digital world, it has to be justified with hard numbers, such as noise and quantization error (which can also be considered as a kind of noise).

 

Take a look at the following two pictures of mine:

http://www.astrobin.com/79055/

http://www.astrobin.com/77963/

They were taken at ISO 200 (not 2000!) with f/7 optics, and the targets are faint.

 

Cheers,

Wei-Hao



#10 Synon

Synon

    Mariner 2

  • *****
  • Posts: 265
  • Joined: 02 Jun 2012

Posted 12 August 2015 - 05:07 PM

I'm trying to understand all this... is the correct takeaway that ISO really doesn't matter very much for our purposes? Given equal integration time, does a lower ISO (and longer exposures) really give you more dynamic range than more frames at a higher ISO? Or do you gain back that dynamic range when stacking?

 

I'd love to just stick with a single ISO setting, would make dark frame library creation so much easier. 



#11 whwang

whwang

    Soyuz

  • *****
  • Posts: 3,705
  • Joined: 20 Mar 2013

Posted 12 August 2015 - 06:56 PM

I'm trying to understand all this... is the correct takeaway that ISO really doesn't matter very much for our purposes? Given equal integration time, does a lower ISO (and longer exposures) really give you more dynamic range than more frames at a higher ISO? Or do you gain back that dynamic range when stacking?

 

I'd love to just stick with a single ISO setting, would make dark frame library creation so much easier. 

 

There is never a uniform answer to questions like this.  It depends a lot on your shooting conditions and habit of imaging.  There are lots of factors in the equation.

 

A short answer to part of your questions is: you only get marginally higher dynamic range by lowering the ISO and increasing the exposure time.  If you do this, saturated pixels are still saturated.  The way to dramatically improve dynamic range is to lower the ISO and FIX your exposure time.  Only by doing so do you get fewer saturated pixels.

 

For the purpose of building a dark/flat/bias library, it is indeed a good thing to use a single ISO setting.  For deep-sky astrophotography, one way (to me, the best way) to pick the right ISO is to look at how the read noise changes with ISO for your camera (check www.sensorgen.info).  On most cameras, the read noise decreases as ISO increases, and then it hits a floor.  The floor happens at about ISO 800 to 3200, depending on the camera.  Pick the lowest ISO (for the sake of dynamic range) where the read noise is at (or close to) the floor.  This is where you get the best balance between the S/N of faint objects and dynamic range for bright objects.  For objects fainter than what a single exposure can show, you stack many exposures.  For objects brighter than the saturation level of a single exposure, you take shorter exposures and do some HDR processing afterward.
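
One way to mechanise that choice (the read-noise values below are invented placeholders standing in for the sensorgen table):

# Read noise in electrons vs ISO -- replace with the values for your own camera.
read_noise = {100: 8.0, 200: 6.0, 400: 4.0, 800: 2.8, 1600: 2.6, 3200: 2.5}
floor = min(read_noise.values())
# Lowest ISO whose read noise is within ~10% of the floor, to keep dynamic range.
best_iso = min(iso for iso, rn in read_noise.items() if rn <= 1.10 * floor)
print(best_iso)   # 1600 with these placeholder numbers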

 

Cheers,

Wei-Hao



#12 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,984
  • Joined: 10 Jan 2014

Posted 12 August 2015 - 07:06 PM

I used to stick with a single ISO setting myself, but in practice, there is more to be concerned about than simply how much electronic noise you have in total. It also does not really take all that much to swamp all of the electronic noise in a camera with background sky, even at a decent dark site (i.e. ~21mag/sq") when doing RGB imaging.

In practice, I think dynamic range, particularly how limited it may be at a given ISO, becomes a bigger source of concern in practical situations for RGB imaging. I like ISO 1600 on my 5D III as it is nearly devoid of pattern noise and has really low read noise. However it is also pretty short on dynamic range, and it is extremely easy to clip stars. I find myself using lower ISO settings lately in order to get more dynamic range and preserve star quality as much as I can.

I sometimes find myself imaging at ISO 400, which on the 5D III DOES have some banding noise, and aiming for 1/5 to 1/4 histogram rather than 1/3rd histogram, in order to keep my stars from clipping. In my case, I am only using a 150mm aperture 600mm f/4 lens. It's a big-ish aperture, but far from the largest, and far from the fastest scope (say compared to an 8" or 9.25" hyperstar.)

In general, I do not find that any amount of electronic noise poses a problem for my DSLR RGB imaging, even when I am at a 21.5mag/sq" dark site (about 13-16x darker than my back yard, which allows me to expose 3-4 stops longer per sub.)

I think the area where read noise, quantization noise, etc. become a more important factor is when you are either at an exceptionally dark site (22mag/sq" or so, which can be ~40x darker or more than the average red zone), or imaging with narrow band filters. In both cases, you can expose for very long periods of time, and contrast can increase significantly. With RGB you can still swamp electronic noise with airglow, but you might need 12-15 minute subs to do it. With narrow band, especially with 5nm or 3nm filters, testing done by a number of guys in the BII forum has shown that you can expose for 75-90 minutes before your background sky noise swamps read noise under heavy LP, and none have ever fully exposed the background sky at any degree of dark site.

Most of us don't have access to pristine 22mag/sq" dark skies, and since the topic is on the effective Unity Gain ISO for DSLRs, narrow band does not really apply (and if you try, I think there are significantly greater concerns about the consequences of doing so with an RGB DSLR than read noise). In practice, I doubt that small differences in quantization noise, even if they are up to or more than the 2-4e- read noise at higher ISOs, are going to matter most of the time. In practice, I think that clipped stars due to the much more limited dynamic range available in most DSLRs is more of an issue...at least, it has been the largest issue I have run into with my own imaging with a 5D III and 7D.

#13 Herra Kuulapaa

Herra Kuulapaa

    Viking 1

  • -----
  • Posts: 514
  • Joined: 10 Dec 2014

Posted 13 August 2015 - 03:40 AM

I’m a narrow band imager with the D600. In this case, using the unity gain ISO (ISO 500) may actually be quite beneficial compared to ISO 800-1600. The full-well saturation capacity is clearly larger, the read noise practically doesn’t change, and one can pull out the signal without any issues.

 

I used 10 min subs with the 12 nm Ha filter to take that Elephant’s Trunk image, and it’s absolutely great to have every single star within the saturation range. Last spring, with the cooled D5100 at a higher ISO, that wasn’t the case, so meeting its unity gain as well should make the difference.

 

What I’m thinking is that the 1 e-/ADU ISO is the setting to use, at least in my case. I shouldn’t need to increase the exposure time, because the same stretching that a higher ISO performs in the camera can be done in post-processing with properly calibrated sub files (dark current mod). I could of course increase the exposure to further improve SNR and still keep stars within range, but in theory, with the same exposure, I should only gain from using unity gain :)

 

Details:

 http://www.sensorgen.info/NikonD600.html


Edited by Herra Kuulapaa, 13 August 2015 - 07:37 AM.


#14 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 10,792
  • Joined: 12 Oct 2007

Posted 13 August 2015 - 07:49 AM

I'm not clear how this thread came back - but the points I made earlier are still relevant. There is nothing special at all about "unity gain" - in terms of 1e/adu. Quantization noise is always present and it has an effective sigma of 0.29 * g, where g is the gain in e/adu.

It seems like the abrupt steps going from one adu to another would need special treatment - but when combined with other noise terms it can be thought of as Gaussian - by the central limit theorem.

The real issues are the other noise terms - read noise and sky background - plus residual pattern noise - along with saturation and loss of color at high ISO.

But those trade-offs will always apply even without quantization noise being a factor - and they have nothing to do with something special happening at 1 e/adu.

So you can estimate the effective size of all the noise terms - and at unity gain the quantization noise is a tiny 0.29e - compared to read noise and other terms.

It's good to minimize all the noise terms - but it doesn't help to focus on a small term when others are present and much larger.

Of course - if dslr's don't actually behave according to the standard noise models then you just have to experiment. But in that case you have to throw the noise models out the window and do everything empirically. And in that case unity gain applies even less - because the whole thing is a black box that is hard to model.

Frank

#15 sharkmelley

sharkmelley

    Aurora

  • *****
  • topic starter
  • Posts: 4,826
  • Joined: 19 Feb 2013

Posted 14 August 2015 - 01:22 PM

I'm not clear how this thread came back - but the points I made earlier are still relevant. There is nothing special at all about "unity gain" - in terms of 1e/adu. Quantization noise is always present and it has an effective sigma of 0.29 * g, where g is the gain in e/adu.

Frank

 

The thread came back because I'm still trying to digest the implications of the points raised in the discussion, and also of some statistical analysis I performed.

 

Your formula for quantization noise is quite correct and I've realised it has some interesting practical consequences.  For instance, looking at the read noise estimates for the  Sony A7S (see Thierry Legault's site http://www.astrophot...s_measures.html ) it is noticeable that the large increase in measured read noise at low ISOs is mainly the result of increasing quantization error. The sensor itself is probably ISO-less.  Similar effects are probably seen in other cameras.

 

I was always a firm believer in the myth of using the unity gain ISO, even if it sacrificed dynamic range, because I had wrongly assumed that any quantization of data in the raw files would automatically result in a high level of quantization in the stacked data (which would destroy faint details) or a noticeable increase in its noise.

 

Now my thoughts are the following:

 

Since the quantization noise is included within the measured read noise, for a given exposure length it is safe to look at the graph of read noise vs ISO and choose the lowest ISO that doesn't result in a noticeable increase in read noise (though other variables such as background pattern noise should also be taken into account).

 

For exposures where the noise is dominated by background skyglow, quantization can be allowed to increase still further, i.e. an even lower ISO can be used with no effect on the quality of the final image, as long as the skyglow noise still dominates the higher read noise.  For my Sony A7S it might actually mean I could be using ISO 400 instead of ISO 2000 or 4000 for skyglow-limited imaging.
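
A rough illustration with round numbers (not measured A7S values) of why the skyglow-limited case is so forgiving:

import math

sky_noise = 15.0                      # electrons of skyglow noise per sub
for label, read, gain in (("high ISO", 2.0, 0.5), ("low ISO", 5.0, 4.0)):
    q = gain / math.sqrt(12)          # quantization noise in electrons
    total = math.sqrt(sky_noise**2 + read**2 + q**2)
    print(f"{label}: total {total:.2f} e vs sky alone {sky_noise:.1f} e")

The per-sub total only changes by a few percent even though both read noise and quantization noise went up.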

 

I'm very interested to perform some actual imaging experiments to verify the theory.

 

Mark



#16 mpgxsvcd

mpgxsvcd

    Vanguard

  • -----
  • Posts: 2,056
  • Joined: 21 Dec 2011

Posted 14 August 2015 - 01:30 PM

Hi Mark,

 

Conceptually I think it is good to know what unity gain is and what it intends to imply.  However, "intends to imply" doesn't mean it actually implies.  The biggest reason is readout noise.  The idea of using unity gain is to limit the quantization error to less than 1 electron.  However, when there is always 3 to 4 electrons of readout noise at ISO < 1000, there is little point caring too much about a quantization error of less than 1 electron.  Even a 2-electron quantization error is fine.  Now, with the presence of photon noise, the total uncertainty would be 4 to 6 electrons at least, so even a 3-electron quantization noise is acceptable.  This means that even using an ISO that's 3x or even 4x lower than the unity gain ISO is not likely to hurt us.

 

Cheers,

Wei-Hao

 

I still find it hard to believe that we can be accurate in our measurements down to a single electron. I don't think the theory behind Unity Gain is that far off base. However, I think the practical measurement of it is not nearly as accurate as what everyone on this board seems to think it is.



#17 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,984
  • Joined: 10 Jan 2014

Posted 14 August 2015 - 04:10 PM

Does anyone actually have any visual examples of when quantization noise becomes a problem, and what it actually does to the data? For individual subs, and for integrations? I understand the mathematical concept of quantization noise...but I don't believe I've ever actually observed any meaningful effect from it. That may simply be due to the fact that I use the very noisy Canon 5D III, and the read noise in that camera, along with any photon noise from airglow (when I image at my dark site) are just at a greater magnitude than any quantization noise. Maybe with better equipment, such as the D810, quantization noise has the potential to become a bigger issue?

 

Anyway, just a curiosity. Would be interesting to actually SEE what quantization noise does, and how it might be revealed through integration.



#18 whwang

whwang

    Soyuz

  • *****
  • Posts: 3,705
  • Joined: 20 Mar 2013

Posted 14 August 2015 - 04:24 PM

It will indeed be interesting if we can design an experiment to quantitatively (not necessarily visually) show the quantization noise.  I don't know how we can achieve this.  As pointed out by Mark and Frank above, it is part of read noise.  It is not trivial to isolate quantization noise from the rest of the read noise.  If there is a way to do it, I will be happy to try and show the results to everybody.

 

Any suggestion?

 

Cheers,

Wei-Hao



#19 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,984
  • Joined: 10 Jan 2014

Posted 14 August 2015 - 04:38 PM

Right. It's just part of the noise the camera generates. Someone mentioned higher up in the thread that the effect of quantization noise at higher ISOs seemed to be revealed more by integration, which made me wonder. Does it show up as posterization? I would think that quantization error would be pretty uniform at a given gain/ISO setting, so it wouldn't deviate from frame to frame. If that is indeed the case, I could see how averaging subs together might "reveal" the effects of quantization noise...but, I've never seen anything like that. Probably because I just haven't had the opportunity to integrate that much data, and also probably because when I do integrate a LOT of data, the first thing that shows up, albeit faintly, is banding.



#20 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 10,792
  • Joined: 12 Oct 2007

Posted 14 August 2015 - 05:26 PM

I wouldn't say quantization noise is part of the read noise or included in it. I'd just say it's another noise term that's always present - and it adds in quadrature with the others.

If you measure read noise using two bias frames - then yes the number you get would include quantization noise - but it's usually a small contribution so people ignore it. But if you know the gain and you know the measured read noise from two bias frames - then you could deduce what the true read noise is. I don't know if anyone does this - but I guess it would be a good thing to do if the read noise is small and comparable to quantization noise.

So it would be

TrueReadSigma = sqrt(MeasuredReadSigma^2 - g^2/12)

where g is gain in e/adu.
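
A quick numeric check of that relation, with hypothetical numbers just to show the scale of the correction:

import math

g = 3.0                               # gain, e/ADU
measured = 3.2                        # read noise measured from two bias frames, electrons
true_read = math.sqrt(measured**2 - g**2 / 12)
print(f"{true_read:.2f} e")           # ~3.08 e once the quantization term is removed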

As for banding - it might be visible in a single sub if the gain value is very high in e/adu - but calibration would usually convert it to a floating point image and each calibrated sub would be slightly different - so it would be hard for the banding to show in the final result. But a basic point is - if you don't see banding in the final result - you know it's not a factor.

And regarding the measurement of 1e - you may not be able to measure the impact of 1e in a single sub - but if you have enough subs - the accumulated effect of 1e can rise above the other noise terms and become visible. That's the whole point of using a lot of subs

Frank

#21 whwang

whwang

    Soyuz

  • *****
  • Posts: 3,705
  • Joined: 20 Mar 2013

Posted 14 August 2015 - 05:33 PM

I wouldn't say quantization noise is part of the read noise or included in it. I'd just say it's another noise term that's always present - and it adds in quadrature with the others.

If you measure read noise using two bias frames - then yes the number you get would include quantization noise - but it's usually a small contribution so people ignore it. But if you know the gain and you know the measured read noise from two bias frames - then you could deduce what the true read noise is. I don't know if anyone does this - but I guess it would be a good thing to do if the read noise is small and comparable to quantization noise.

So it would be

TrueReadSigma = sqrt(MeasuredReadSigma^2 - g^2/12)

where g is gain in e/adu.
 

 

I guess what Jon wants is to prove the existence of quantization noise.  What you suggested above is to calculate read noise by assuming the existence of quantization noise.  Although I do not disagree with the assumption, that's fundamentally different from actually measuring the quantization noise.



#22 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 10,792
  • Joined: 12 Oct 2007

Posted 14 August 2015 - 05:48 PM

Well - if you have 10 electrons per adu, you know you can only measure e counts in steps of 10. The exact way the count gets converted to the nearest step could be by rounding up or down or to the nearest value - but it doesn't really matter as long as it is consistent. The only difference would be a slightly different overall offset or bias in the image.

So if the whole system does a read and that injects read noise - then the electron count is digitized to the nearest step - you know it will contribute a noise term that is a rectangle function - so that all e counts of 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 will get mapped to the value 20. When that happens it corresponds to a noise term with a sigma of 0.29*10 = 2.9e.
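
That rectangle-function behaviour is easy to verify numerically (a throwaway simulation, nothing camera-specific):

import numpy as np

g = 10                                        # e/ADU
e_counts = np.random.default_rng(1).uniform(0, 1e5, 1_000_000)
quantized = np.round(e_counts / g) * g        # digitize in steps of 10 electrons
print((quantized - e_counts).std())           # ~2.89 e, i.e. 0.29 * g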

If the system doesn't behave this way - then again it's hard to know how the noise terms work - because the model is wrong. If that's the case then it's hard to talk about or know what is going on.

Frank

#23 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 10,792
  • Joined: 12 Oct 2007

Posted 14 August 2015 - 06:45 PM

You can certainly generate and see banding in a sub by dropping the lower bits in the image. If you shift the bits right one bit, the gain value in e/adu will double while the read noise in electrons will be exactly the same - because it's a fundamental term that comes before the analog-digital conversion. The net effect is to make the quantization noise larger compared to read noise - and it should be visible. It won't look like "noise" because it will have a banded appearance - but in terms of adding error to each pixel - it will have a sigma of 0.29g. Every time you shift right one bit - the gain value will double.
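
A sketch of that bit-shift experiment on a raw array (plain NumPy, not tied to any particular raw converter or camera):

import numpy as np

def drop_low_bits(raw, n_bits):
    # Dropping the lowest n_bits makes the quantization step 2**n_bits times coarser,
    # so the effective gain in e/ADU doubles per bit, while the sensor's read noise
    # in electrons is unchanged (it is injected before the analog-to-digital step).
    return (raw.astype(np.uint16) >> n_bits) << n_bits

# Example: a fake 14-bit frame, then the same frame with two bits dropped.
raw = np.random.default_rng(2).integers(0, 2**14, size=(100, 100), dtype=np.uint16)
coarse = drop_low_bits(raw, 2)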

And if you shift the raw subs to the right so banding starts to show - and then you calibrate and stack them - the banding should be reduced or even invisible in the final result.

Frank

#24 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,984
  • Joined: 10 Jan 2014

Posted 14 August 2015 - 07:47 PM

I wouldn't say quantization noise is part of the read noise or included in it. I'd just say it's another noise term that's always present - and it adds in quadrature with the others.

If you measure read noise using two bias frames - then yes the number you get would include quantization noise - but it's usually a small contribution so people ignore it. But if you know the gain and you know the measured read noise from two bias frames - then you could deduce what the true read noise is. I don't know if anyone does this - but I guess it would be a good thing to do if the read noise is small and comparable to quantization noise.

So it would be

TrueReadSigma = sqrt(MeasuredReadSigma^2 - g^2/12)

where g is gain in e/adu.

As for banding - it might be visible in a single sub if the gain value is very high in e/adu - but calibration would usually convert it to a floating point image and each calibrated sub would be slightly different - so it would be hard for the banding to show in the final result. But a basic point is - if you don't see banding in the final result - you know it's not a factor.

And regarding the measurement of 1e - you may not be able to measure the impact of 1e in a single sub - but if you have enough subs - the accumulated effect of 1e can rise above the other noise terms and become visible. That's the whole point of using a lot of subs

Frank

 

This assumes you know the actual gain. I guess that might be the case with a CCD. Most of the data available for DSLRs is derived, which would make it pretty much impossible to know the actual gain and thus be able to separate quantization error.



#25 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,984
  • Joined: 10 Jan 2014

Posted 14 August 2015 - 07:49 PM

 

I wouldn't say quantization noise is part of the read noise or included in it. I'd just say it's another noise term that's always present - and it adds in quadrature with the others.

If you measure read noise using two bias frames - then yes the number you get would include quantization noise - but it's usually a small contribution so people ignore it. But if you know the gain and you know the measured read noise from two bias frames - then you could deduce what the true read noise is. I don't know if anyone does this - but I guess it would be a good thing to do if the read noise is small and comparable to quantization noise.

So it would be

TrueReadSigma = sqrt(MeasuredReadSigma^2 - g^2/12)

where g is gain in e/adu.
 

 

I guess what Jon wants is to prove the existence of quantization noise.  What you suggested above is to calculate read noise by assuming the existence of quantization noise.  Although I do not disagree with the assumption, that's fundamentally different from actually measuring the quantization noise.

 

 

It's more that I would like to see if quantization error could ever introduce enough noise to really be a problem under normal circumstances. Is it really a noise, or if you average out real noise terms, does quantization error present differently? Frank mentioned banding...does he mean horizontal and vertical banding, or is he actually referring to posterization? I could see posterization occurring with quantization noise in undithered frames if you were able to average out other noise terms enough. 



