Meaning of "unit" gain?


#1 Dom543

Dom543

    Apollo

  • *****
  • topic starter
  • Posts: 1,099
  • Joined: 24 Oct 2011

Posted 27 August 2016 - 02:29 PM

Can someone please explain to me what the notion of "unit gain" means?

 

Unit in what sense? I read somewhere that "unit gain" for the ASI 1600 is some odd number. Like 139 on a scale of 0 to 500, or something like that?

And why is it that many people like to use exactly this setting for astrophotography?

 

Thank you,

--Dom



#2 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,079
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 August 2016 - 02:50 PM

There are two things that "gain" can refer to when discussing cameras like the ASI1600: the gain setting, and the literal charge gain.

 

In the case of the ASI1600, simple numeric gains are usually gain settings: gain 0, gain 75, gain 139 (unity), gain 200, gain 300. These are the gain settings in the driver (or in a tool like SharpCap or FireCapture). It's just a setting, it's rather arbitrary, and these particular settings only have meaning for the ASI1600. For an ASI178 or the QHY183C, the settings will have different ranges. Unity gain on a QHY183C is 11, for example, out of a range of 0-55.

 

Then there is the actual gain. Gain is a concept in electronics, where an input voltage is amplified by supplying additional voltage through an operational amplifier circuit to produce a higher output voltage. As astrophotographers, we are less interested in literal voltage and more interested in discrete, countable units. Fundamentally, we are most interested in photons; however, photons are converted to electrons in the sensor, so we count electron charge, or e-. A literal charge gain would be something like 1e-/ADU. This is unity (sometimes called unit) gain. It means that each individual electron in the pixel is converted into one digital level in the image by the ADC (analog-to-digital converter). We use the term ADU, for Analog to Digital Units, in reference to what the ADC is doing. In the case of 12-bit data, you have 4096 discrete levels, or 0-4095.

With unity gain on the ASI1600, at gain setting 139, you would convert 5e- to 5ADU, 1024e- to 1024ADU, and 4095e- to 4095ADU. At a gain setting around 75-77, the literal gain is around 2e-/ADU, converting a range of 0-8191e- to 0-4095ADU. So 5e- becomes 2-3ADU, 1024e- becomes 512ADU, 4096e- becomes 2048ADU, and 8191e- becomes 4095ADU. (Approximately; depending on the exact offset used and the particular camera, you might get slightly different results, hence the 75-77 gain setting. The same actually goes for unity gain: at gain setting 139, I actually get around 0.9e-/ADU rather than 1e-/ADU, so there is some slight variation among camera samples.)
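To make the setting-vs-charge-gain distinction concrete, here is a minimal sketch of the conversion described above. This is my simplification: plain truncation, no offset, and clipping at the top of the 12-bit range.

```python
def electrons_to_adu(electrons, gain_e_per_adu, bit_depth=12):
    """Convert an electron count to a digital number, clipped to the ADC range."""
    max_adu = 2 ** bit_depth - 1           # 4095 for a 12-bit ADC
    adu = int(electrons / gain_e_per_adu)  # truncation models quantization
    return min(adu, max_adu)

# Unity gain (setting 139 on the ASI1600): 1e-/ADU
print(electrons_to_adu(1024, 1.0))  # -> 1024
# Gain setting ~75-77: about 2e-/ADU
print(electrons_to_adu(1024, 2.0))  # -> 512
print(electrons_to_adu(8191, 2.0))  # -> 4095, the top of the 12-bit range
```

Anything above the full range (e.g. 9000e- at unity) also lands on 4095, which is the clipping discussed later in the thread.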

 

Anyway. Gain settings vs. literal gain. The unitless numbers are usually gain settings, and unity gain (1e-/ADU) on the ASI1600 is achieved at a gain setting of 139. 



#3 futuneral

futuneral

    Apollo

  • *****
  • Posts: 1,030
  • Joined: 27 Dec 2014
  • Loc: Phoenix, AZ

Posted 27 August 2016 - 03:16 PM

Jon

 

Does this mean that with a gain setting above unity we would get some kind of posterization? Like at 500, you would only have output values of 0, 3, 6, 9... so transitions of brightness between pixels would be less smooth. Or is there some kind of "antialiasing" algorithm built into the driver?



#4 Dom543

Dom543

    Apollo

  • *****
  • topic starter
  • Posts: 1,099
  • Joined: 24 Oct 2011

Posted 27 August 2016 - 04:19 PM

Thank you for the prompt and very clear explanation Jon!

 

I fully understand what you are saying and it settles the meaning of the "unit gain".

I am somewhat surprised that the well capacity of the pixels does not play any role. From the point of view of the sensor, and of the photons collected, the ADU is somewhat of an artificial quantity: it is determined by the bit width of the downstream processing circuitry. I doubt that the maximum well capacity of the sensor is 4095.

 

Wouldn't it be advantageous to use a gain setting that maps the maximum well capacity to the maximum possible ADU? Does this particular gain setting have a name? What numerical value would it correspond to for the ASI1600?

 

Thank you,

--Dom



#5 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,079
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 August 2016 - 04:43 PM

Jon

 

Does this mean that with a gain setting above unity we would get some kind of posterization? Like at 500, you would only have output values of 0, 3, 6, 9... so transitions of brightness between pixels would be less smooth. Or is there some kind of "antialiasing" algorithm built into the driver?

 

You will have some quantization noise regardless of the gain, and if you stacked enough subs into an integer space, you could eventually get posterization at any gain.

 

However, you are more likely to get posterization below unity gain, where more than one electron is required for each ADU. At minimum gain on the ASI1600, it takes 4.88e-/ADU. That, as Charles stated, attenuates the analog signal when it is converted into a digital signal. It "bucketizes" your analog signal. For the first 4-5 electrons, you will get the same value in your image file: 1 ADU. You won't be able to differentiate between 1e-, 2e-, 3e-, or 4e-, and some of the time even 5e-. Similarly, the next 4-5 electrons will be the same way. You will be able to differentiate between 4-6 electrons, as you will have 1-2 ADU in your image; however, electrons 5, 6, 7, 8, 9 and maybe 10 will all convert to 2 ADU. To a degree, read noise will offset some of this posterization, mostly in the lower signal areas, but with an ultra-low-noise camera, where your deviation is only 3e-, it can only help so much. This is called undersampling, and it can result in posterization if you do not stack enough subs and integrate them properly.
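The "bucketizing" can be seen with a couple of lines of code. I'm assuming plain truncation here; a real ADC's offset and rounding shift the bucket edges around, which is why the electron boundaries above are approximate.

```python
# Map electron counts 1..10 through a ~4.88 e-/ADU conversion (ASI1600
# at gain setting 0, per the post) and watch distinct inputs collapse.
gain = 4.88  # e-/ADU
adus = {e: int(e / gain) for e in range(1, 11)}
print(adus)  # 1-4 e- all land on one ADU level, 5-9 e- on the next
```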

 

It is possible to avoid posterization by stacking into a 32-bit floating point space with lots and lots of subs. However, the ironic thing about using the lowest gain setting is that you need much longer exposures... so it's much harder to get a lot of exposures. You can expose for as long as possible to make sure you get faint signals above the read noise floor, which will help with SNR, but you could still suffer from some posterization when stacking too few subs (and you really want to stack 49-64 subs at least to start seeing the benefits with high quantization error).

 

At unity gain, each electron in the pixel will produce one ADU in the image, so you should have minimal quantization error. If you had a gain of 0.5e-/ADU, then each individual electron would produce two ADU. This is called oversampling, and while you can experience some posterization, it is much easier to use stacking to average out the error and increase the precision of your data when you are at least at unity, and a higher gain helps even more. I think you get around 0.5e-/ADU at a gain setting of 200 on the ASI1600.

 

 

Thank you for the prompt and very clear explanation Jon!

 

I fully understand what you are saying and it settles the meaning of the "unit gain".

I am somewhat surprised that the well capacity of the pixels does not play any role. From the point of view of the sensor, and of the photons collected, the ADU is somewhat of an artificial quantity: it is determined by the bit width of the downstream processing circuitry. I doubt that the maximum well capacity of the sensor is 4095.

 

Wouldn't it be advantageous to use a gain setting that maps the maximum well capacity to the maximum possible ADU? Does this particular gain setting have a name? What numerical value would it correspond to for the ASI1600?

 

Thank you,

--Dom

 

The ASI1600 is kind of an interesting beast. The sensor it uses is fairly integrated. It does not do image processing (well, not much), but it does have all of the ADC units on-die. There is one ADC unit per column, which is part of the reason it has such low read noise. Each ADC can operate at a relatively low frequency because of the high parallelism: each ADC unit only has to process one column's worth of pixels per read (a few thousand with the ASI1600), whereas a CMOS sensor that used a single off-die ADC would have to process all 16 million pixels through one ADC. In that case you either keep the ADC frequency low and just have a slow readout speed (which is usually the case), or you jack the ADC operating frequency up and introduce more read noise. The low ADC frequency on this sensor reduces the amount of analog noise the converters introduce into the image signal, but still allows fast readout. Sony Exmor sensors do the same thing, as do the current-generation Toshiba sensors, etc.

 

Now, the only real downside to the ASI1600 is that the ADC units integrated into the sensor are 12-bit. The native pixel full well capacity is 20,000e-, which is actually pretty good; however, it is only usable at gain setting 0, which has a gain of 4.88e-/ADU!! See the reply to futuneral above. :p ZWO doesn't have any option to use a 16-bit ADC with this sensor... they are stuck with the 12-bit ADC. Hence the reason unity gain is only capable of 0-4095ADU. It's also the reason I looked for and found gain settings 75-77, which give a more reasonable FWC, but at a gain of 2e-/ADU. Not ideal... but in practice it is not so bad. You just need to stack as many subs as you can in 32-bit float to keep the precision high enough to avoid posterization issues. In my testing, bit depth (at least, I assume, based on how gaps in my 16-bit histograms converge and eventually disappear) scales as the square root of the square root of the number of subs stacked. So SQRT(SQRT(4)) means that with 4 subs you gain back over a bit. SQRT(SQRT(16)) means you gain back 2 full bits. SQRT(SQRT(81)) means you gain back 3 full bits of precision. To gain back the full 4 bits of precision needed to have gapless 16-bit data... you need to stack 256 subs.
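The SQRT(SQRT(N)) rule of thumb above, tabulated. This just evaluates the heuristic as stated; it is not a claim about any particular camera.

```python
# Bits-equivalent of precision recovered by stacking N subs, per the
# N**0.25 rule of thumb quoted in the post.
for n in (4, 16, 81, 256):
    print(n, round(n ** 0.25, 2))  # 4 -> 1.41, 16 -> 2.0, 81 -> 3.0, 256 -> 4.0
```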

 

Unity gain on the ASI1600, scaled to 16-bit ADU (0-4095e- to 0-65535 ADU), is actually a very high gain. The driver actually writes out the data as 16-bit into a FITS file, so technically speaking, this is what's happening with the ASI1600 (and, from what I can tell, many other ASI cameras). It's 1/16, or 0.0625e-/ADU, in 16-bit terms. That is a ridiculously high gain, so it shouldn't be a surprise that the FWC at unity is only 4095e-! :p It gets even more insane at higher gain. At gain setting 300, the maximum normal gain (beyond that, read noise does not decrease, but FWC does, so the sensor must be using some additional means of post-pixel amplification to achieve those gain settings), the 12-bit gain is ~0.15e-/ADU... however, scaled to 16-bit, it is an insane 0.009375e-/ADU! That basically means that every sensed photon (Q.E. is about 47% Ha, 52% OIII) is amplified to 106-107 16-bit ADU! :p I was able to see faint Sharpless OIII nebulosity with 100 second 3nm narrow band subs last night while framing and focusing (and I would have been able to sense the core of Sh2-132 with even shorter subs than that... I used 100 second subs just to make sure I knew what I was looking at. :p). The individual subs looked pretty noisy; however, I am kind of curious to see how a high gain narrow band image might look. Dynamic range certainly suffers... it is a little over 9 stops IIRC. However, read noise drops to around 1.12e- RMS... you would only need a background sky level of 22e- to swamp read noise. O_o
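A sketch of the 12-bit-to-16-bit scaling described above. I'm assuming a simple multiply by 16 (a 4-bit left shift); the driver's exact method may differ.

```python
# Scale a 12-bit ADU value into the 16-bit range the FITS file stores.
def to_16bit(adu12):
    return adu12 * 16  # left shift by 4 bits: 0-4095 -> 0-65520

gain12 = 1.0          # e-/ADU at unity, in 12-bit terms
gain16 = gain12 / 16  # the same charge gain expressed per 16-bit ADU
print(to_16bit(4095), gain16)  # -> 65520 0.0625
```

Note the shift tops out at 65520 rather than 65535; the 12-bit data simply cannot fill every 16-bit level, which is where the histogram gaps come from.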

 

Anyway... those are the curiosities of the ASI1600 gain settings for you. :p It's an interesting camera. It doesn't deliver an ideal mapping of the FWC to a 16-bit output, mostly because it simply cannot. However, its variable gain offers some intriguing DSLR-like capabilities not usually seen on mono CCD cameras. It offers some ridiculously high gain settings relative to the scaled 16-bit output, which could be a game changer for some things. At high gain (gain setting 300), your data is almost entirely photon shot noise limited. Technically speaking, you should be able to get nearly ideal stacking efficiency by using such a high gain setting. The caveat is, you would need to acquire and stack a TON of subs in order to actually get a reasonable final SNR in your integration.


Edited by Jon Rista, 27 August 2016 - 04:43 PM.


#6 Dom543

Dom543

    Apollo

  • *****
  • topic starter
  • Posts: 1,099
  • Joined: 24 Oct 2011

Posted 28 August 2016 - 01:51 PM

Thank you for the in-depth explanations Jon! It is interesting to understand why this sensor has such low read noise and how the various gain settings work.

 

I have two more questions. After that, I promise not to distract from B&II.

 

1. If I understand it correctly, at unit gain the maximum number of electrons the system can register is 4095. Does that mean that if there are more than 4095 electrons in a well, then they will be translated to 4095 ADU irrespective of their actual number? If so, that means that only about the lower 20% of the well capacity is used at unit gain.

 

2. Do you happen to know how the various gain levels are actually implemented? Are there analog voltage amplifiers in front of every ADC unit? Or are different gain settings implemented by simply reinterpreting the output of the ADC? To make clearer what I mean, here is an example. Assume that a pixel has an electron count of 12e-. At unit gain, that will be translated by the ADC into 12ADU. Now what happens when we switch to double the unit gain? Will the voltage coming from the pixel be amplified by analog circuitry before it enters the ADC? Or will the original 12e- enter the ADC, with its output simply numerically doubled to 24ADU for all subsequent purposes?

 

The second question has implications for the type of noise introduced by higher gain settings. If higher gain is implemented by analog amplifiers, then they generate heat. In addition to amplifying the noise already present in the voltage reading of the pixel, they will also add their own thermal-type "amplifier noise". In the second case, the sensor noise will be doubled numerically and, in addition, a certain kind of quantization noise will be created. In the example of 2x unit gain, no odd numbers will appear as outputs of the ADC. These gaps will be filled in only when stacking multiple frames.
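The second scenario can be sketched numerically: with a purely digital 2x gain applied after a unity-gain ADC, every output level is even and odd ADU values never occur, leaving the gaps described above. The electron counts here are hypothetical.

```python
# Hypothetical pixel readings in electrons, read out at unity gain
# (1 e- = 1 ADU), then digitally doubled after the ADC.
readings_e = [3, 7, 12, 12, 20]
digital_2x = [2 * e for e in readings_e]
print(digital_2x)  # [6, 14, 24, 24, 40] -- all even, odd levels never appear
```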

 

Thank you and Clear Skies!

--Dom


Edited by Dom543, 28 August 2016 - 01:54 PM.


#7 pfile

pfile

    Fly Me to the Moon

  • -----
  • Posts: 5,404
  • Joined: 14 Jun 2009

Posted 28 August 2016 - 02:05 PM

i believe that in almost all sensors these gains are analog gains; the amplifiers sit between the photodiodes and the ADCs. the whole idea of this amplification is to get the signal up above the noise floor of the ADC circuitry.

 

to your question about pre/post ADC amplification - on DSLR cameras the ISO setting is analogous to gain, and on some cameras the super-high ISOs are actually accomplished by setting the analog gains as high as possible and then digitally multiplying the output of the ADC as well. so there's really no point in using those ISOs, as they don't "do anything" that can't be done in postprocessing (and they may be harmful, since they pre-clip data that you can then never recover).

 

rob



#8 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,079
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 28 August 2016 - 02:29 PM

2. I don't know the exact design of the sensor used in the ASI1600. Usually with CMOS sensors, each pixel, or perhaps each column of pixels, will have its own amplifier. The pixel signal will then be amplified, and possibly held in a buffer as an analog signal, before the ADC actually handles it. Additional things are also usually done to the analog charge before the ADC gets it. For example, a CDS unit will usually remove reset charge and possibly other measurable dark signal. Integrated dark current suppression technology may either prevent leakage current before it slips through the photodiode, or subtract some of it out, much like CDS. All of that usually happens in analog space before the ADC converts anything to digital.

 

With many sensors, beyond a certain point, the pixel amplifiers cannot amplify enough. I know that Canon cameras and some Sony cameras sometimes employ secondary "downstream" amplification. In the case of Canon cameras, there are pixel amplifiers and secondary amplifiers bound to sets of columns. Beyond a certain ISO setting, to actually attain the necessary ISO, the secondary analog amplifier is used to further amplify the signal that was already amplified by the pixel or column amps. That means any noise added by the pixel or column amps is also amplified by this secondary amplifier. If I had to guess, I would assume the ASI1600 sensor is doing something similar, which is why, beyond a gain setting of about 300, read noise levels off while FWC continues to drop.

 

So my guess is that the pixel charges are not directly converted by the ADC. My guess is that the pixel charges are first amplified within the same 20ke- range of the native full well pixel capacity, and that amplified signal is then converted by the ADC. As pfile said, there is little point in using a gain setting beyond 300 with the ASI1600, except perhaps for more science-oriented things. There might be a use case for highly oversampling star signals for exoplanet search work or something like that. I don't know all that much about photometry; however, from what research I have done, it seems to have fairly specific needs, and I don't think the ASI1600 is going to be well suited to the task.

 

1. Correct. Any excess electrons beyond 4095 would result in "clipped" data. This will mostly happen with stars. However, because the gain is so high and the read noise so low, you won't be using, nor will you need to use, very long subs. The dynamic range is still nearly 11.4 stops at unity gain, which is still a good deal more than most DSLRs get at commonly used higher ISO settings. It is close to what a KAF-8300 gets (~11.5 stops) with its ideally matched gain of ~0.39e-/ADU and 16-bit readout, because the KAF-8300 has much higher 9e- read noise. What might take 1800 seconds with a KAF-8300 might only take 300 seconds with an ASI1600. So long as you still get enough total integration time, the limited FWC is not actually that much of an issue. If you just plan on getting the same amount of integration time, say 15 hours for a single Ha channel, the results will be at least as good. Assuming a photon flux of 0.1e-/s, 300s subs vs. 1800s subs, same integration time:

 

SNRasi = (180 * 30)/SQRT(180 * (30 + 10 + 0.006*300 + 1.53^2)) = 5400/SQRT(180 * (44.14)) = 5400/SQRT(7945.2) = 60.6:1

SNRkaf = (30 * 180)/SQRT(30 * (180 + 60 + 0.02*1800 + 9^2)) = 5400/SQRT(30 * (357)) = 5400/SQRT(10710) = 52.2:1
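The same arithmetic in code, for anyone who wants to plug in their own numbers. Note that read noise enters the noise term squared (the 1.53 and 9 figures), which is how the 44.14 and 357 per-sub variances above come about.

```python
import math

# Stacked SNR: signal / sqrt(per-sub variance * number of subs), with
# per-sub variance = object + sky + dark + read_noise^2 (all in electrons).
def stack_snr(n_subs, obj_e, sky_e, dark_e, read_noise_e):
    signal = n_subs * obj_e
    noise = math.sqrt(n_subs * (obj_e + sky_e + dark_e + read_noise_e ** 2))
    return signal / noise

snr_asi = stack_snr(180, 30, 10, 0.006 * 300, 1.53)  # 180 x 300s subs
snr_kaf = stack_snr(30, 180, 60, 0.02 * 1800, 9.0)   # 30 x 1800s subs
print(round(snr_asi, 1), round(snr_kaf, 1))  # -> 60.6 52.2
```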

 

The fainter the faintest signals you're after, the more the low read noise options the ASI1600 offers will shine... and with shorter subs, you shouldn't have to worry about clipping much. (It does happen, but only on the brightest few stars most of the time... and if you are imaging a region with lots of bright stars, you can drop to a gain of 75-77, double your FWC, and handle it that way.)
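The dynamic range figures quoted above follow from stops = log2(FWC / read noise). A quick check; the ~25.5ke- KAF-8300 full well here is my inference from the ~0.39e-/ADU gain over a 16-bit range, not a number stated in the post.

```python
import math

# Dynamic range in stops for a given full well capacity and read noise.
def stops(fwc_e, read_noise_e):
    return math.log2(fwc_e / read_noise_e)

print(round(stops(4095, 1.53), 1))  # ASI1600 at unity: ~11.4 stops
print(round(stops(25500, 9.0), 1))  # KAF-8300-ish: ~11.5 stops
```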


Edited by Jon Rista, 28 August 2016 - 11:58 PM.


#9 Dom543

Dom543

    Apollo

  • *****
  • topic starter
  • Posts: 1,099
  • Joined: 24 Oct 2011

Posted 28 August 2016 - 10:33 PM

Thank you Jon! I've learned a lot from this thread.

Clear Skies!

--Dom



#10 syyntax

syyntax

    Mariner 2

  • -----
  • Posts: 260
  • Joined: 27 Jul 2016
  • Loc: Acworth, GA

Posted 29 August 2016 - 11:38 AM

Not meaning to hijack this thread; I'll start another if desired.

 

How does one go about determining the unity gain (in my case the ASI178mc)?

 

I've been doing my imaging with the gain setting (in SharpCap) upwards of 250-400. The images are very noisy, and my darks don't seem to remove all of this noise effectively. Should I be shooting at unity gain and just increasing my exposure times to compensate?



#11 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,079
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 29 August 2016 - 12:55 PM

Which setting corresponds to unity gain is usually specified by the manufacturer. In the case of the ASI178MC, it appears that the camera does not actually have such a setting:

https://astronomy-im...n1-742x1024.jpg

All of the gain settings according to the gain chart above are higher than unity.

Regarding the notion of "noisy": when using high gain cameras, it is important to understand the difference between types of noise, the noise in an individual sub, and the total noise in an integration of subs of a given total exposure time.

There are "desirable" types of noise, "undesirable" types of noise, and there are the "avoid at all costs" types of noise. ;) In order:

  • Desirable noise is random noise. Random noise is easy noise. It's "clean" noise, and it tends to have a fairly pleasing aesthetic. Random noise is just a fine grain in your data; it averages down with stacking very easily and very predictably. It is easy to reduce with basic noise reduction tools. This is what you want. The most desirable form of random noise is the Poisson noise from the photon signal itself... the "photon shot noise". The holy grail of imaging would be to have a noiseless camera, leaving you with 100% pure Poisson noise.
  • Undesirable types of noise are read noise and impulse noise (hot and cold pixels from dark current). Read noise is actually random in most cases, and that is desirable; however, it has one nasty little side effect: it compounds as you stack more and more subs. The only reason sub exposure length actually matters, rather than total integration time alone, is read noise. Every sub you acquire gets one "unit" of read noise. The more subs you have, the more units of read noise you have. When imaging with short subs, you have more total read noise for a given total amount of exposure than if you use longer subs. With modern cameras, this is becoming less and less of a problem, as read noise drops into the sub-2e- range, and even the sub-1e- range in some cases. Below 1e-, read noise rapidly loses its impact. Hot pixels from dark current are also undesirable, as they can act as loci around which artifacts form during processing, or could cause other problems such as correlated noise (which is an "avoid at all costs" kind of noise!)
  • Finally, the avoid at all costs types of noise. These are your pattern noises. While randomness in noise is pleasing, patterns in noise can be radically annoying. We are very good at pattern recognition, so horizontal or vertical banding, correlated noise, glows, etc. are things that we want to avoid at all costs. Many patterns are fixed, some are semi-random. The problem with fixed patterns is they are reinforced and enhanced by stacking. The good thing about fixed patterns is they tend to be fairly easy to remove with proper calibration. It's the semi-random patterns that we really need to take care of...and dithering along with stacking a lot of subs can help there.

At particularly high gain settings, you could be running into secondary amplifier issues as well. It may be that a gain of 250-400 is too much, and that a secondary amplifier is kicking out more undesirable noise (likely, as read noise flattens out after gain 250). You may want to back off the gain a bit and try a lower setting.

 

In addition to the different classes of desirable vs. undesirable noise, you need to account for total integration time.

 

A short but high gain sub from an ASI178 might look very noisy. However, the amount of noise in an individual sub does not really matter all that much if it is mostly clean, random noise. What matters more is how a stack of subs of a given total exposure time looks. What happens if you stack 121x30s calibrated subs? All that random noise will average down in a very statistical manner, and the noise of your ~1 hour integration will be 11 times less! Because the read noise is only 1.9e-, stacking a lot of short subs is not a big problem... not like it would be if you were stacking 121 subs that each had 9e- read noise in them.
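The sqrt(N) averaging claim is easy to verify numerically. This toy simulation averages 121 pure-noise "subs" and checks that the noise drops by about sqrt(121) = 11x.

```python
import math
import random

random.seed(1)
n_subs, n_px = 121, 5000  # 121 subs, 5000 independent "pixels" of pure noise
# Each pixel of the stack is the average of 121 unit-sigma noise samples.
stack = [sum(random.gauss(0, 1.0) for _ in range(n_subs)) / n_subs
         for _ in range(n_px)]
# RMS of the averaged stack; the improvement factor is 1/sigma.
sigma = math.sqrt(sum(v * v for v in stack) / n_px)
print(round(1.0 / sigma, 1))  # close to sqrt(121) = 11
```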

 

At high gain, read noise tends to be lower. In the case of ASI cameras, read noise is around 2e- or lower in almost all cases. The ASI178 does not have a unity gain, as its lowest gain setting appears to be around 0.9e-/ADU @ 2.2e- read noise. A more ideal gain setting seems to be gain setting 50, which gives you 0.5e-/ADU @ 1.9e- read noise, with 12 stops of dynamic range. This should give you noise that is mostly "desirable" once the frames are calibrated: clean, random noise. It's sampling each electron by a factor of 2x, which is also good. It has a lot of dynamic range at 12 stops, which is good... at gain setting 250, dynamic range drops to about 9.2 stops, and at 400 it is less than 7 stops. Those gain settings might be good for imaging a faint planet; however, I think they are probably too high for DSO imaging. My guess is that if there is a secondary amplifier, it's definitely being used at those gain settings (note how read noise flattens out once you hit a gain setting of about 275), and that could be why you're having trouble calibrating. You do have lower read noise at around 1.4e-; however, that is only about 0.5e- less than at gain setting 50, which has 1.9e-. You're not going to lose much by using a lower gain setting... and because of the considerable increase in dynamic range, you will be able to use longer subs, which will likely mean stacking fewer, more deeply exposed subs in the end. Since you're oversampling each electron, and the camera is 14-bit, you should have good quality, high precision data. Stacking only 16 subs should gain you back an additional 2 bits of precision for a full 16 bits.

 

I think you may just be using the camera incorrectly if you're running it at gain settings 250-400 for DSO.



#12 syyntax

syyntax

    Mariner 2

  • -----
  • Posts: 260
  • Joined: 27 Jul 2016
  • Loc: Acworth, GA

Posted 29 August 2016 - 02:01 PM

Many thanks!

 

a very detailed answer, much appreciated!

 

I think I was more focused on being able to "see something" on the screen pre-processing vs. the results after processing. I'm assuming that if I can get some long sub exposures at the 50 gain setting, I should expect to see 'dark' images on the screen; however, the data should still be there (given that the object I'm shooting registers enough photons).

 

Subsequent processing should then be able to reveal the data in a cleaner fashion? Also assuming I use calibration images (darks, bias, flats, etc.).



#13 DesertRat

DesertRat

    Fly Me to the Moon

  • *****
  • Posts: 6,266
  • Joined: 18 Jun 2006
  • Loc: Valley of the Sun

Posted 29 August 2016 - 02:20 PM

Thanks Jon for your contribution!

 

In your comparison of the KAF-8300 and ASI1600 above, you show an SNR advantage for the ASI1600 which may be on the optimistic side. I think it is problematic to compare chips without consideration of pixel size (and full well capacity). Further, for the CMOS-based camera, I think the noise equation should include a fixed pattern noise (FPN) contribution. Maybe the ASI1600 has much less FPN than earlier CMOS cameras and it's not an issue; I don't have an ASI1600, so I cannot say for sure. However, tests I performed on earlier ZWO CMOS cameras showed significant fixed pattern noise. And 'fixed' was not really fixed, but a randomized pattern that changed whenever the gain setting was changed and returned to the previous value, or after any operation that caused a chip reset. Many of these CMOS sensors have randomizers built into the chip. It's true that the signal part of the FPN can be mostly subtracted out with a good master bias, as long as nothing has changed in the interim.

 

See my study of the ASI120 here:
http://www.cloudynig...-pattern-noise/

 

What the FPN figure for the ASI1600MM is, I cannot say. It is not a Sony sensor, so I would like to see what others may have found. The amateur literature seems a little behind the technology. To get good numbers may require researching professional journals, probably not an easy task given the enormity of it. Of course, if you know of any, we would like to hear of it. It's true that CCDs have pattern noise as well, but generally at a much lower level, and the newer Sony CMOS sensors are a clear improvement.

 

Clearly, in any calibration one should not change anything between lights and darks (or biases). It would be good to keep a library of master bias frames, so that one can take differences using pixel math to see whether the pattern has changed. Since your example above spans multiple nights, I think calibration should be carefully performed.

 

If I am reading your math correctly above, you had a sky flux of 0.033 e-/s and an object flux of 0.1 e-/s. No doubt a low read noise sensor has a big advantage there. Do you by chance have a database or reference for parameters such as sky flux for dark sky, city sky, narrow band, etc., and noise figures for the various sensors?

 

Glenn



#14 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,079
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 29 August 2016 - 04:39 PM

Thanks Jon for your contribution!

 

In your comparison of the KAF-8300 and ASI1600 above, you show an SNR advantage for the ASI1600 which may be on the optimistic side. I think it is problematic to compare chips without consideration of pixel size (and full well capacity). Further, for the CMOS-based camera, I think the noise equation should include a fixed pattern noise (FPN) contribution. Maybe the ASI1600 has much less FPN than earlier CMOS cameras and it's not an issue; I don't have an ASI1600, so I cannot say for sure. However, tests I performed on earlier ZWO CMOS cameras showed significant fixed pattern noise. And 'fixed' was not really fixed, but a randomized pattern that changed whenever the gain setting was changed and returned to the previous value, or after any operation that caused a chip reset. Many of these CMOS sensors have randomizers built into the chip. It's true that the signal part of the FPN can be mostly subtracted out with a good master bias, as long as nothing has changed in the interim.

 

See my study of the ASI120 here:
http://www.cloudynig...-pattern-noise/

 

What the FPN figure for the ASI1600MM is, I cannot say. It is not a Sony sensor, so I would like to see what others may have found. The amateur literature seems a little behind the technology. To get good numbers may require researching professional journals, probably not an easy task given the enormity of it. Of course, if you know of any, we would like to hear of it. It's true that CCDs have pattern noise as well, but generally at a much lower level, and the newer Sony CMOS sensors are a clear improvement.

 

Clearly, in any calibration one should not change anything between lights and darks (or biases). It would be good to keep a library of master bias frames, so that one can take differences using pixel math to see whether the pattern has changed. Since your example above spans multiple nights, I think calibration should be carefully performed.

 

If I am reading your math correctly above, you had a sky flux of 0.033 e-/s and an object flux of 0.1 e-/s. No doubt a low read noise sensor has a big advantage there. Do you by chance have a database or reference for parameters such as sky flux for dark sky, city sky, narrow band, etc., and noise figures for the various sensors?

 

Glenn

 

I assumed the same image scale in my calculations, in which case the same number of photons would land on each pixel regardless of camera. However, that is not necessarily always the case. The pixel area scale factor between these two cameras is 2x.

 

Regarding FPN...I agree it CAN be a problem with some CMOS sensors; however, I think a lot of the issues so far may be improper calibration more so than randomization. Additionally, any drift in the bias pattern can be taken care of through dithering, as has been demonstrated a few times now by ASI1600 imagers. FPN is not nearly as bad with the ASI1600 as with other ASI cameras. (I've also been working through PM with several people who seem to have been using dark optimization without knowing it when calibrating in DSS, as it appears to be enabled by default; I think that dark scaling screwing up the calibration may be a primary reason why so many people are having problems correcting pattern noise with ASI cameras.) With the ASI1600, the pattern is pretty easily removed with calibration, so I don't think it needs to be accounted for. There is vertical pattern in the bias, but it is fixed. There is some glow and there are hot pixels in the dark signal. Calibrating with proper temp-matched darks eliminates all the pattern:

 

[attached images: calibration examples]

 

I wouldn't be too concerned about that. There is a slight increase in StdDev in the glow areas, but it has not been a problem in practice. Stacking and noise reduction work fine to reduce it. I've also used a cosmetically corrected and convolved master dark to create a mask to reduce noise in the glow areas even further and normalize it out.

 

I've been reusing a single master dark I created a couple of weeks ago, and so far it still seems to be working. I'm not seeing any remnant patterns in my calibrated narrow band subs, which are just barely skyfog limited. 

 

One of the things about having a very low read noise camera is that the read noise that is there really doesn't hide anything. If you had 9, 11, or 15 e- read noise, like many of the KAF/KAI sensors, a small amount of glow would be riddled with noise, and might not appear to be as severe as with the ASI1600. Not sure. With only 1.5 e- read noise, there is nothing to hide the glow. STF stretches of dark frames in PI are usually pretty extreme, because the noise is so low that a massive stretch is required to shift the signal to the 1/4 histogram point (which is what STF tries to do). With LRGB frames, where most of the noise is photon shot noise, the glows don't even show up until you get to about 600 second subs (which is ridiculously long for the ASI1600; 60 second L subs are usually overexposed), and no banding is apparent at all in such subs (the banding itself is pretty minimal).

 

---

 

Regarding flux: I'm not sure where 0.033 e-/s came from. I did rescale the data I originally started out with, as I decided to approximate a dark site rather than my red zone backyard; it seems the scaling shifted the skyfog flux levels too low (0.033 e-/s would be like a black zone site with pretty small pixels).

 

Based on measurements from several subs from different images from my dark site, with 150s ISO 1600 subs on a 5D III, I've got background levels (post-calibration, so bias/dark signal offsets and patterns have been removed) of about 45e- for skyfog, and 92e- for moderate object signal (e.g. the California Nebula). In that case, we've got about a 0.3e-/s flux for background sky, and a 0.32e-/s flux for object (0.62e-/s with skyfog, subtracting the skyfog flux for true object flux; I guess those nights were barely green zone, with just over a 1:1 Object:LP signal ratio). Scaling for the ASI1600 pixel size, you would have a flux of about 0.11e-/s sky and 0.12e-/s object, and for the KAF it's 0.23e-/s sky and 0.24e-/s object. If we did that comparison then, scaling everything relative to the 5D III, and integrating one hour's worth of exposure:

SNRkaf = (32 * 112s * 0.24e-/s)/SQRT(32 * (112s*0.24e-/s + 112s*0.23e-/s + 0.02e-/s*112s + 9e-^2)) 
       = 860.16e-/SQRT(32 * (26.88e- + 25.76e- + 2.24e- + 81e-)) 
       = 860.16e-/SQRT(32 * 135.88e-) 
       = 860.16e-/SQRT(4348.16e-) 
       = 13:1

SNRasi = (2 * (65 * 55s * 0.12e-/s))/SQRT(2 * (65 * (55s*0.12e-/s + 55s*0.11e-/s + 0.006e-/s*55s + 1.53e-^2))) 
       = 858e-/SQRT(130 * (6.6e- + 6.05e- + 0.33e- + 2.34e-)) 
       = 858e-/SQRT(130 * 15.32e-) 
       = 858e-/SQRT(1991.6e-) 
       = 19.2:1

Hopefully that is a more realistic example, based on literal measurements from my own green zone site subs. Even if we ignored the spatial differences:

SNRasi = (65 * 55s * 0.12e-/s)/SQRT(65 * (55s*0.12e-/s + 55s*0.11e-/s + 0.006e-/s*55s + 1.53e-^2))
       = 429e-/SQRT(65 * (6.6e- + 6.05e- + 0.33e- + 2.34e-))
       = 429e-/SQRT(65 * 15.32e-)
       = 429e-/SQRT(995.8e-)
       = 13.6:1

However, this would be higher resolution data, so the total object signal would be divvied up more finely...and despite that, the ASI1600 still matches the KAF-8300.
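The calculations above can be reproduced with a small helper; `stack_snr` and the `area_scale` factor are my own names for the terms in the equations (with `area_scale=2` standing in for the 2x pixel area difference):

```python
import math

def stack_snr(n_subs, t, obj_flux, sky_flux, dark_current, read_noise,
              area_scale=1):
    """Stack SNR: signal grows with total integration; per-sub variance is
    object + sky + dark shot noise plus read noise squared."""
    signal = area_scale * n_subs * t * obj_flux
    var_per_sub = t * (obj_flux + sky_flux + dark_current) + read_noise ** 2
    return signal / math.sqrt(area_scale * n_subs * var_per_sub)

snr_kaf = stack_snr(32, 112, 0.24, 0.23, 0.02, 9.0)                 # ~13.0
snr_asi = stack_snr(65, 55, 0.12, 0.11, 0.006, 1.53, area_scale=2)  # ~19.2
```

Dropping `area_scale` back to 1 for the ASI1600 gives the ~13.6:1 figure from the second calculation.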

 

EDIT: The actual signal levels above may not be exact, as I applied my gain to the 16-bit values rather than the 14-bit values. However, scaled to 14-bit, the signals are even weaker, and it doesn't really change the results. Pixel for pixel, the ASI1600 seems to hold its own relative to the KAF-8300, at the very least.


Edited by Jon Rista, 29 August 2016 - 05:30 PM.


#15 NorbertG

NorbertG

    Explorer 1

  • -----
  • Posts: 86
  • Joined: 02 Mar 2015

Posted 30 August 2016 - 01:58 AM

Jon,  

 

So my guess is that the pixel charges are not directly converted by the ADC. My guess is the pixel charges are first amplified into the same 20ke- range of the native full well pixel capacity, and that amplified signal is then converted by the ADC.

 

Isn't that what every preamp on a CCD or CMOS does? Each ADC has a certain voltage range and a certain accuracy (when measured in ADUs). The higher the gain, the less significant the A/D noise itself becomes (now measured in electrons, which is the relevant unit for SNR).
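To put rough numbers on that (my own back-of-envelope, assuming an ideal ADC whose quantization noise is 1/√12 ADU): referred back to the sensor input, that noise scales with the conversion gain in e-/ADU, so higher gain settings (fewer electrons per ADU) shrink it.

```python
import math

def quantization_noise_e(e_per_adu):
    """Ideal ADC quantization noise (~1/sqrt(12) ADU) referred back to
    the input, expressed in electrons."""
    return e_per_adu / math.sqrt(12)

low_gain = quantization_noise_e(2.0)   # ~0.58 e- at 2 e-/ADU
unity = quantization_noise_e(1.0)      # ~0.29 e- at unity gain
```

Either way, the quantization contribution is small next to even 1.5 e- read noise, which is presumably why unity gain is a comfortable operating point.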

 

 

btw:

In the ASI user forum I found the following Q&A:

 

 

This is where I am confused. Isn't it a 12 bit ADC and so shouldn't it be 4096?

 


 

Post subject: Re: Max ADU for ASI 1600 MC-C

 

 

yes, but we put the 12bit in the high 12bit of 16bit Format

 

That sounds like a simple bit shift in the firmware or driver. With that method every camera can be driven at unity gain, but it of course makes no change at all in SNR. But it is interesting to see these numbers when comparing 16-bit data from different cameras.
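That shift is trivial to sketch (a hypothetical helper, assuming the driver really does a plain left shift by 4 bits as the quoted reply suggests):

```python
def to_16bit(adu12):
    """Place a 12-bit ADU value in the high 12 bits of a 16-bit word:
    a left shift by 4, i.e. multiply by 16. No new information is added."""
    return adu12 << 4

print(to_16bit(4095))  # 65520: the 12-bit maximum lands near the 16-bit ceiling
print(to_16bit(1))     # 16: one 12-bit step spans 16 levels in 16-bit units
```

So 16-bit histograms from a 12-bit camera only ever populate every 16th level, which is worth remembering when comparing raw ADU statistics across cameras.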


Edited by NorbertG, 30 August 2016 - 02:42 AM.


#16 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,079
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 30 August 2016 - 10:04 AM

Jon,  
 

So my guess is that the pixel charges are not directly converted by the ADC. My guess is the pixel charges are first amplified into the same 20ke- range of the native full well pixel capacity, and that amplified signal is then converted by the ADC.


 
Isn't that what every preamp on a CCD or CMOS does? Each ADC has a certain voltage range and a certain accuracy (when measured in ADUs). The higher the gain, the less significant the A/D noise itself becomes (now measured in electrons, which is the relevant unit for SNR).


Usually. However, there are some curious designs these days, as more and more of the technology is packed onto the sensor die. I have no idea what the actual architecture of the ASI1600 sensor is, so I don't know for sure how it works. I've even seen patents on sensors that embed an ADC directly into every shared pixel group. With CMOS, who knows for sure unless you have a datasheet.

Edited by Jon Rista, 30 August 2016 - 10:04 AM.


