
ASI1600mm-c Cheat Sheet - No math

This topic has been archived. This means that you cannot reply to this topic.
95 replies to this topic

#1 TimN

TimN

    Skylab

  • *****
  • Moderators
  • topic starter
  • Posts: 4,115
  • Joined: 20 Apr 2008

Posted 14 December 2016 - 09:15 AM

The original cheat sheet is more than a little wrong. The numbers do not have the bias offset included. Unfortunately, the 512 for unity gain has been referenced in other threads without indicating that the bias offset was missing from the calculation. So, this is the new cheat sheet with the bias offsets included. I used some of the more common gains and offsets. I just wish I could still edit the original post for those who don't read to the end of threads. Again, if anyone sees any issues, please post.

Optimal Exposure:

Median ADU shown in SGP -
Gain 0 Offset 10:         400 ADU
Gain 75 Offset 12:       550 ADU
Gain 139 Offset 21:     850 ADU
Gain 200 Offset 50:    1690 ADU
Gain 300  Offset 50 :  2650 ADU
Better to be a little high than low
For other gains or offsets, use the formula Jon indicated: MinDN16 = (((ReadNoise * 20) / Gain) + BiasOffset) * 2^16 / 2^Bits
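
For reference, here is a small Python sketch of that formula and of the table above. The read-noise and e-/ADU figures per gain setting are approximations (read off ZWO's published charts), so treat them as assumptions rather than official numbers; the cheat sheet rounds its targets up a little, per the "better a little high than low" advice.

```python
def min_dn16(read_noise_e, gain_e_per_adu, bias_offset, bits=12):
    """Minimum median ADU (on the 16-bit scale SGP displays) to swamp read noise by 20x."""
    return (((read_noise_e * 20) / gain_e_per_adu) + bias_offset) * 2**16 / 2**bits

# Approximate ASI1600 characteristics per gain setting (assumed, not measured):
settings = [
    # (gain setting, read noise e-, e-/ADU, offset)
    (0,   3.5,  5.0, 10),
    (139, 1.55, 1.0, 21),
]

for gain, rn, e_per_adu, offset in settings:
    print(f"Gain {gain} Offset {offset}: target ~{min_dn16(rn, e_per_adu, offset):.0f} ADU")
```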

Dithering:

Pick "X" (dither every X frames) to be 5% of the number of subs you are going to collect, rounding down. If you are collecting 20 subs, dither every frame. If you are collecting 50 subs, dither every 2 frames, etc.
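
A one-liner, if you prefer code to mental arithmetic; it is the same rule as above with the round-down behaviour made explicit.

```python
def dither_interval(total_subs):
    """Dither every X frames, where X is 5% of the planned sub count, rounded down (minimum 1)."""
    return max(1, int(total_subs * 0.05))

print(dither_interval(20))  # 1 -> dither every frame
print(dither_interval(50))  # 2 -> dither every 2 frames
```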

Calibration Frames:
             Flats:
                      In SGP you want your flats to measure 12k-18k DN.
             Bias:
                      Use dark flats instead of bias frames if your flats are long.

Get new calibration frames when changing driver.



#2 Midnight Dan

Midnight Dan

    James Webb Space Telescope

  • *****
  • Posts: 15,766
  • Joined: 23 Jan 2008

Posted 14 December 2016 - 09:59 AM

Great cheat sheet!  

 

The only thing I'd question is the optimal exposure level.  I'm certainly no expert on this stuff, but it seems to me that the optimal exposure will depend to some degree on your level of light pollution.

 

If you have significant light pollution, the background level will be higher.  If you shorten your exposure to maintain the same DN in SGP as someone in a darker site, you're also pushing your target data downscale into fewer ADU buckets, which will increase quantization errors.  

 

Maybe Jon Rista can chime in here and tell me if I'm misunderstanding this? 

 

-Dan



#3 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,597
  • Joined: 10 Jan 2014

Posted 14 December 2016 - 10:08 AM

You only need to expose to a certain background level, regardless of where you are. What will change when under bright skies is it will simply take you less time to get to that level, meaning you will need to stack more subs for the same integration time. It might take you 3 minutes to get there at a dark site, but only 30 seconds to get there under bright skies. In the end, though, you DO want to aim for the same level (and, as Tim said, better to overshoot it a bit than undershoot.) 

 

Also keep in mind...with bright skies, you ultimately need more integration time to average out the noise from all that excess light. You might only need two or three hours from a dark site...but you could need at least 10, 15, more from a light polluted site.



#4 hfjacinto

hfjacinto

    I think he's got it!

  • *****
  • Posts: 18,949
  • Joined: 12 Jan 2009

Posted 14 December 2016 - 10:49 AM

You only need to expose to a certain background level, regardless of where you are. What will change when under bright skies is it will simply take you less time to get to that level, meaning you will need to stack more subs for the same integration time. It might take you 3 minutes to get there at a dark site, but only 30 seconds to get there under bright skies. In the end, though, you DO want to aim for the same level (and, as Tim said, better to overshoot it a bit than undershoot.) 

 

Also keep in mind...with bright skies, you ultimately need more integration time to average out the noise from all that excess light. You might only need two or three hours from a dark site...but you could need at least 10, 15, more from a light polluted site.

Jon,

 

Part of this confuses me because it's not my experience with the SBIG 8300. I image from a white zone, and I found that taking longer subs just moves the histogram but also lets me go deeper. Why would it be different with the ASI 1600? If you can do longer subs, why not turn the gain off and do longer subs?



#5 FiremanDan

FiremanDan

    Aurora

  • *****
  • Posts: 4,980
  • Joined: 11 Apr 2014

Posted 14 December 2016 - 11:04 AM

I have noticed that if I way overexpose, I can get a lot of that faint stuff to show up with minimal stretching, but I think at that point, besides getting clipped stars, you also tend to lose dynamic range. From my understanding that is less of an issue on a lot of CCDs, but with these CMOS sensors it's easier to lose that dynamic range.

I now yield the floor to the smart people who actually know what they are talking about. 



#6 TimN

TimN

    Skylab

  • *****
  • Moderators
  • topic starter
  • Posts: 4,115
  • Joined: 20 Apr 2008

Posted 14 December 2016 - 11:11 AM

To add to what Dan said, I believe that you can get more faint detail by shooting more images or longer images - it's the total exposure time that counts. I think this is the optimum exposure for this camera without losing bright detail, such as clipping stars.



#7 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,597
  • Joined: 10 Jan 2014

Posted 14 December 2016 - 12:24 PM

 

You only need to expose to a certain background level, regardless of where you are. What will change when under bright skies is it will simply take you less time to get to that level, meaning you will need to stack more subs for the same integration time. It might take you 3 minutes to get there at a dark site, but only 30 seconds to get there under bright skies. In the end, though, you DO want to aim for the same level (and, as Tim said, better to overshoot it a bit than undershoot.) 

 

Also keep in mind...with bright skies, you ultimately need more integration time to average out the noise from all that excess light. You might only need two or three hours from a dark site...but you could need at least 10, 15, more from a light polluted site.

Jon,

 

Part of this confuses me because it's not my experience with the SBIG 8300. I image from a white zone, and I found that taking longer subs just moves the histogram but also lets me go deeper. Why would it be different with the ASI 1600? If you can do longer subs, why not turn the gain off and do longer subs?

 

Beyond a certain point, it wouldn't let you go deeper. Once you sufficiently swamp the read noise, then you are effectively shot noise limited. Longer exposures mean nothing, because stacking fewer long subs vs. stacking short subs that are long enough to swamp the read noise results in a minuscule and usually meaningless difference in the final SNR. 

 

With the ASI1600, more subs is better than few subs, because it's 12-bit. We need to stack a good number of subs in order to recover bits, so using the longest subs possible can actually be a detriment. This is especially true at Gain 0, which has ~1.4e- quantization noise alone, which is pretty significant (more than any other camera I have ever used or processed data from.) 

 

The difference between the 8300 and 1600 is the 8300 has ~9e- read noise...the 1600 has at most 3.5e- read noise. With the 8300, you might well require longer subs to fully swamp that read noise. We simply do not need long exposures with the ASI1600. In my back yard, a 60-second Gain 0 Lum exposure swamps the read noise by about a factor of 70x or so. That is WAY beyond what is actually necessary, where a background signal of 20x the read noise is usually plenty sufficient and will get you well into the 90% stacking efficiency range. 
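
As a sketch of what "swamping by 70x" means in practice: take the median background of a sub, convert it back to electrons using the e-/ADU for your gain setting, and compare it to the read noise. The conversion below follows the same convention as the MinDN16 formula in the first post (offset counted in native 12-bit ADU); the 5 e-/ADU, 3.5e- read noise, and the 944 ADU median are illustrative assumptions, not measurements from Jon's setup.

```python
def swamp_factor(median_adu_16bit, offset_adu_12bit, e_per_adu, read_noise_e, bits=12):
    """How many times the background sky signal exceeds the read noise, from a sub's median."""
    native_adu = median_adu_16bit * 2**bits / 2**16 - offset_adu_12bit  # back to 12-bit ADU, minus bias
    background_e = native_adu * e_per_adu                               # sky signal in electrons
    return background_e / read_noise_e

# Hypothetical 60 s Gain 0 luminance sub with a median of ~944 ADU on SGP's 16-bit scale:
print(round(swamp_factor(944, 10, 5.0, 3.5)))   # ~70x
```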


Edited by Jon Rista, 14 December 2016 - 12:28 PM.


#8 mikefulb

mikefulb

    Surveyor 1

  • *****
  • Posts: 1,906
  • Joined: 17 Apr 2006

Posted 14 December 2016 - 04:55 PM

I concur with everything Jon said - if you want a cheat sheet, you need to work into it a way to choose a gain level that gives you low enough read noise that you can use short enough exposures (while still swamping the read noise) to get 50+ subs (really too few, but a starting point) in your total allotted exposure time, so you can recover bit depth.

 

Sounds confusing!

 

We need something like those fancy slide rules in WWII to compute the optimal settings...

 

https://en.wikipedia.org/wiki/E6B

 

:lol:

 



#9 okiedrifter

okiedrifter

    Mariner 2

  • -----
  • Posts: 261
  • Joined: 26 Apr 2007

Posted 14 December 2016 - 07:21 PM

Whoa... looks like I have been way overexposing for some time now. This is an LRGB image I shot a few weeks ago.

Shot at unity gain, SGP is showing a mean of 736 with a blue filter and a whopping 2368 with the Lum filter.

Attached Thumbnails

  • M31.jpg


#10 FiremanDan

FiremanDan

    Aurora

  • *****
  • Posts: 4,980
  • Joined: 11 Apr 2014

Posted 14 December 2016 - 08:44 PM

Whoa... looks like I have been way overexposing for some time now. This is an LRGB image I shot a few weeks ago.

Shot at unity gain, SGP is showing a mean of 736 with a blue filter and a whopping 2368 with the Lum filter.

Looks pretty good though! I think this Jon guy is just making stuff up!  :lol:

 

I had been doing the same thing. I went to look at my numbers and had some with more digits than yours! Oops!



#11 Midnight Dan

Midnight Dan

    James Webb Space Telescope

  • *****
  • Posts: 15,766
  • Joined: 23 Jan 2008

Posted 14 December 2016 - 08:44 PM



You only need to expose to a certain background level, regardless of where you are. What will change when under bright skies is it will simply take you less time to get to that level, meaning you will need to stack more subs for the same integration time. It might take you 3 minutes to get there at a dark site, but only 30 seconds to get there under bright skies. In the end, though, you DO want to aim for the same level (and, as Tim said, better to overshoot it a bit than undershoot.) 

 

Also keep in mind...with bright skies, you ultimately need more integration time to average out the noise from all that excess light. You might only need two or three hours from a dark site...but you could need at least 10, 15, more from a light polluted site.

OK, here's what I'm not understanding. I drew up the graphs below to illustrate my thinking. They are somewhat exaggerated to better show the effect.

 

[Attached graph: three histograms comparing a dark-site exposure, the same exposure under more light pollution, and a shortened light-polluted exposure]

 

In the first one, you're at a low light pollution site and the vertical lines indicate the range of brightness values for your target. It is buried in the background peak, but since the skies are relatively dark, it is on the right side of it.

 

The second graph shows the same target and the same exposure settings, but with more LP.  The target has the same photon flux so its brightness range has not changed, but the LP has moved the exposure peak to the right so the target is more buried in it.

 

The 3rd graph shows what happens when you reduce the exposure of the second graph to get the same level as you would have gotten at the darker site.  Now the target's brightness range is narrowed substantially.  There will be a smaller range of ADU values to represent it, there will be more quantization errors, and there will be more posterization in the data when it is stretched enough to bring out the image.

 

I understand that stacking a lot of subs will restore some of the lost bits and provide more bit-depth resolution, but there's a point of diminishing returns on that.  And when you only have 4096 levels to start with, I would think you'd want to avoid shoving your target data into a rather small number of ADU buckets.
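
To put rough numbers on the "fewer ADU buckets" concern for a single sub (Jon's replies argue that dithering and stacking recover this), here is a small sketch with a made-up target brightness range and the Gain 0 conversion factor; all figures are hypothetical.

```python
def levels_spanned(target_min_e, target_max_e, e_per_adu):
    """How many distinct 12-bit ADU values the target's brightness range occupies in one sub."""
    return int(target_max_e / e_per_adu) - int(target_min_e / e_per_adu)

# Hypothetical nebula spanning 100-300 e- per pixel in the "full" exposure, at 5 e-/ADU (Gain 0):
print(levels_spanned(100, 300, 5.0))   # ~40 levels
# Cut the exposure to a third (33-100 e-) and the same target spans far fewer levels:
print(levels_spanned(33, 100, 5.0))    # ~14 levels
```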

 

-Dan



#12 hfjacinto

hfjacinto

    I think he's got it!

  • *****
  • Posts: 18,949
  • Joined: 12 Jan 2009

Posted 14 December 2016 - 09:21 PM

Still makes no sense to me, what does it matter where the histogram is as long as I'm not clipped? Even if I clip bright stars does it really matter?

#13 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,597
  • Joined: 10 Jan 2014

Posted 14 December 2016 - 09:27 PM

Dan,

 

I'm honestly not sure I understand what you are doing with the third graph. Why would you do that?

 

You reduce the deviation of the signal (represented by the width of the histogram around the mean, which is roughly where it peaks for most astro signals) by stacking. Stacking results in a reduction in noise. Noise is the thing that CAUSES the signal to deviate (vary) in the first place. So stacking allows you to "rein in" the deviations, reducing them in both count and extent. With the high LP data, you simply need to stack more to narrow the deviation enough to compare to the deviation from a dark site.

 

Via stacking, the High LP histogram will not actually change position...it will simply narrow "in place". You then correct the brightness offset with a simple subtraction. 
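
A quick Monte Carlo sketch of that "narrow in place" behaviour: the mean (sky level) stays put while the spread shrinks roughly as 1/sqrt(N). The 250e- sky level and pixel count are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sky_level_e = 250.0      # hypothetical bright-sky background per sub, in electrons
read_noise_e = 3.5

def stacked_background(n_subs, n_pixels=100_000):
    """Mean and standard deviation of a flat sky patch after averaging n_subs frames."""
    subs = rng.poisson(sky_level_e, size=(n_subs, n_pixels)) \
         + rng.normal(0.0, read_noise_e, size=(n_subs, n_pixels))
    stack = subs.mean(axis=0)
    return stack.mean(), stack.std()

for n in (1, 4, 16, 64):
    mean, std = stacked_background(n)
    print(f"{n:3d} subs: mean ~{mean:6.1f} e-, std dev ~{std:5.2f} e-")
```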


Edited by Jon Rista, 14 December 2016 - 09:27 PM.


#14 jpbutler

jpbutler

    Apollo

  • *****
  • Posts: 1,150
  • Joined: 05 Nov 2015

Posted 15 December 2016 - 12:43 AM

Still makes no sense to me, what does it matter where the histogram is as long as I'm not clipped? Even if I clip bright stars does it really matter?

 

I see this as a best-practices pattern primarily for the ASI1600. The 8300 has a somewhat different set of criteria.

 

Because we are dealing with only 12 bits of ADU resolution, and benefiting from the low read noise, it behooves us to maximize the dynamic range of the camera by staying toward the left side of the histogram as much as practicable.

Then use the exposure time gained to increase the number of subs to the largest amount we are comfortable with, to a maximum of around 80-120 subs, in order to increase the S/N and regain some of the lost precision.

 

John


Edited by jpbutler, 15 December 2016 - 12:44 AM.


#15 NorbertG

NorbertG

    Explorer 1

  • -----
  • Posts: 86
  • Joined: 02 Mar 2015

Posted 15 December 2016 - 04:35 AM


 

With the ASI1600, more subs is better than few subs, because it's 12-bit. We need to stack a good number of subs in order to recover bits, so using the longest subs possible can actually be a detriment. This is especially true at Gain 0, which has ~1.4e- quantization noise alone, which is pretty significant (more than any other camera I have ever used or processed data from.)

Well, you never had an 8-bit camera? Here's an example with a TIS/DMK at low gain. It shows a similar gain/read-noise dependence to the ASI, but at a level 20-30 times higher. I compared a single exposure of 200 ms with a stack of 2000 subs taken at 1/10000 s (0.1 ms) each. A single frame of the stack looks totally dominated by A/D noise alone and shows dramatic posterization. The result is still dominated by read noise, but it is of course an extreme example.

[Attached image: stacking extreme comments.jpg]

Why do I show this? Actually, I should add some more stacks with slightly more exposure time. What becomes obvious is that there is no discrete threshold for a minimum exposure time below which everything is bad. If one fails to bring the histogram into the "recommended" range, it might take longer for the total stack, but it could still lead to a decent result with slightly more total exposure time.

On the other hand, dividing a non-saturated exposure into more subs will not give any benefit in terms of "more bits = more precision". The upper bits are not "filled". Of course it makes sense as long as the read noise drops significantly at the higher gain that can be applied to the shorter subs.

(Again: the discussion is a little bit academic (or useless), since for some people shorter subs are simply essential for other reasons like guiding errors, seeing, OTA seeing, and so on.)


#16 Midnight Dan

Midnight Dan

    James Webb Space Telescope

  • *****
  • Posts: 15,766
  • Joined: 23 Jan 2008

Posted 15 December 2016 - 09:28 AM

Dan,

 

I'm honestly not sure I understand what you are doing with the third graph. Why would you do that?

 

You reduce the deviation of the signal (represented by the width of the histogram around the mean, which is roughly where it peaks for most astro signals) by stacking. Stacking results in a reduction in noise. Noise is the thing that CAUSES the signal to deviate (vary) in the first place. So stacking allows you to "rein in" the deviations, reducing them in both count and extent. With the high LP data, you simply need to stack more to narrow the deviation enough to compare to the deviation from a dark site.

 

Via stacking, the High LP histogram will not actually change position...it will simply narrow "in place". You then correct the brightness offset with a simple subtraction. 

Sorry, guess I'm not being clear. :grin:

 

Your premise is that you adjust the exposure to achieve a particular target mean value in SGP regardless of the LP.  Is that correct?

 

The mean value in SGP is roughly the top of the peak seen in the histograms.  The first histogram above represents the peak being at the correct position for the target mean value in SGP as calculated by you, based on the read noise level and being far enough above it to swamp the read noise.  So the first histogram is a "correct" exposure in a relatively dark site.

 

In the second histogram, the LP level has gone up so if you use the same exposure settings, the peak moves to the right and the SGP mean value goes up.  As a result, to keep the mean SGP reading at the same level regardless of LP, you'd have to reduce the exposure as shown in the third histogram.  So when you ask "Why would you do that?" it's because you said I should, unless I'm misunderstanding what you're saying, which is entirely possible! :grin:

 

The target, let's say a nebula in this case, has a range of brightness levels, or photon flux.  There are darker areas and brighter areas of the target.  The vertical lines are meant to represent that range of brightness levels.  Much of the sky will be darker than it (ideally), hence the histogram peak to the left of it, and the stars will be brighter than it, hence the small tail of the histogram going all the way to the right.

 

But the point is, if you reduce your exposure too far due to LP, you'll be reducing the brightness range of the target nebula so that it covers fewer ADU buckets.  When you then stretch it to get that back, you'll have quantization error noise, plus a greater risk of posterization.

 

-Dan



#17 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,597
  • Joined: 10 Jan 2014

Posted 15 December 2016 - 10:24 AM

For an individual sub, you are partially correct. You are incorrect for dithered and stacked subs. Dithering IS critical here, as you must dither in order to randomize pattern, however with dithering and stacking, you will average out the quantization error along with all the other noise and FPN, allowing you to recover sub-LSB details. You can expand your bit depth considerably if you stack enough frames...however beyond the first two to three bits, it starts to take a LOT of subs. 
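
A toy illustration of that sub-LSB recovery: quantize a faint gradient to whole ADU, with per-sub noise playing the role of dither, then average many subs. This is purely illustrative; it is not a model of the camera or of spatial dithering, which also serves to decorrelate FPN.

```python
import numpy as np

rng = np.random.default_rng(1)

true_signal_adu = np.linspace(10.0, 11.0, 11)   # faint gradient spanning only one ADU
noise_adu = 0.8                                  # per-sub noise (read + shot), acting as dither

def stack_of_quantized_subs(n_subs):
    """Average n_subs subs that were each rounded to whole ADU."""
    noisy = true_signal_adu + rng.normal(0.0, noise_adu, size=(n_subs, true_signal_adu.size))
    return np.round(noisy).mean(axis=0)

print(np.round(stack_of_quantized_subs(1), 2))     # posterized: whole-ADU values only
print(np.round(stack_of_quantized_subs(500), 2))   # the 0.1 ADU steps are recovered
```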

 

All I said was it takes a shorter polluted exposure to get the signal peak to the same position as a dark site sub. I also said all you need is to swamp the read noise with the background sky by 20x to sufficiently swamp it. I never said anything about NOT sufficiently swamping the read noise, which seems to be where you may be misunderstanding. If you are swamping the read noise with a dark site signal, then you would certainly be swamping it with a bright site signal as well, if both signals peaked at the same place. The only thing that would differ is the object signal strength...however, with so much noise from LP, quantization error is not the biggest concern. 

 

Now, regarding LP. At least where I live, I have trouble getting exposures that are "short" enough from a 20x standpoint. I have to find a balance between short enough and not too long. I image with NB filters at a high gain, and I use two to three minute exposures. With LRGB, though, I image at minimum gain and use 30-60 second exposures. A 30 second exposure, despite being "short", is still PLENTY of exposure. I go by the 20x/5% guideline. The idea is to expose your background sky level to the point where it is 20x the read noise. Well, depending on the camera, what that exposure is will differ, because it is read noise dependent. A KAF-8300 has ~9e- read noise, which means you should aim for a 180e- background sky level. An ASI1600 has at most 3.5e- read noise at minimum gain, so you only need a 70e- background sky level. At unity gain, you only have 1.55e- read noise...so you only need  a 31e- background sky level. 

 

In my back yard, I hit a 70e- background sky level in 30 seconds easy. I can top 250e- in 60 seconds. These exposures are more than sufficient. There is no issue with quantization error here, no issue with read noise. Both have been totally and completely swamped by shot noise from sky signal. I could expose for longer...but I'm already well into the 90% stacking efficiency range...so why expose for longer? Longer exposures are detrimental in a few ways. For one, the longer the exposure, the more you risk in integration time loss if a sub must be tossed. Longer exposures allow information to be blurred more due to seeing, tracking error, wind, etc. Longer exposures run the risk of clipping more information. Longer exposures do not help you recover bit depth, as the only way to do that is to stack. 

 

Remember the ASI1600 is an ultra low read noise camera. A KAF-8300, on the other hand, is not. The ASI1600 is a 12-bit camera, while the KAF-8300 is a 16-bit camera. Also remember that it is not the difference between 3.5e- and 9e-...it is the difference between 3.5^2 and 9^2, or 12.25 vs. 81. The rules that fundamentally governed exposure with classic high bit depth CCD cameras are not the same rules that fundamentally govern exposure with a modern low noise, lower bit depth CMOS camera. Low noise, lower bit depth CMOS cameras don't need long exposures to swamp the read noise and quantization error. However, they do need more stacking to recover bits, and that is benefitted by using shorter exposures. 


Edited by Jon Rista, 15 December 2016 - 10:43 AM.


#18 rjbokleman

rjbokleman

    Ranger 4

  • *****
  • Posts: 314
  • Joined: 04 Aug 2014

Posted 15 December 2016 - 10:40 AM

Maybe I'm in the same camp as FiremanDan here trying to follow this thread.  :lol:

 

I border between an Orange and a Red zone for LP. If I'm going after a rather dim target (M33), I will use G139/O21. If I'm going after a relatively bright target (NGC 7023, M13), I'll use G75/O12-15, or sometimes even O21.

 

In either case, my test shots for exposure length are not concerned at all with the median ADU in SGP. Instead, I'm looking to expose so that no single star center is over 65504 (clipped) and the brightest stars come in around 62-64K at most.

 

That said, my Flats have been coming in around 24-26K...and now I'm wondering why the recommendation is well below that when the exact middle is 30K?


Edited by rjbokleman, 15 December 2016 - 10:44 AM.


#19 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,597
  • Joined: 10 Jan 2014

Posted 15 December 2016 - 10:51 AM

 


 

With the ASI1600, more subs is better than few subs, because it's 12-bit. We need to stack a good number of subs in order to recover bits, so using the longest subs possible can actually be a detriment. This is especially true at Gain 0, which has ~1.4e- quantization noise alone, which is pretty significant (more than any other camera I have ever used or processed data from.)

Well, you never had an 8-bit camera? Here's an example with a TIS/DMK at low gain. It shows a similar gain/read-noise dependence to the ASI, but at a level 20-30 times higher. I compared a single exposure of 200 ms with a stack of 2000 subs taken at 1/10000 s (0.1 ms) each. A single frame of the stack looks totally dominated by A/D noise alone and shows dramatic posterization. The result is still dominated by read noise, but it is of course an extreme example.

[Attached image: stacking extreme comments.jpg]

Why do I show this? Actually, I should add some more stacks with slightly more exposure time. What becomes obvious is that there is no discrete threshold for a minimum exposure time below which everything is bad. If one fails to bring the histogram into the "recommended" range, it might take longer for the total stack, but it could still lead to a decent result with slightly more total exposure time.

On the other hand, dividing a non-saturated exposure into more subs will not give any benefit in terms of "more bits = more precision". The upper bits are not "filled". Of course it makes sense as long as the read noise drops significantly at the higher gain that can be applied to the shorter subs.

(Again: the discussion is a little bit academic (or useless), since for some people shorter subs are simply essential for other reasons like guiding errors, seeing, OTA seeing, and so on.)

 

The primary issue with the stacked image is not posterization. It is FPN. By far, FPN is the single largest issue with that stacked image. Quantization error will average out with stacking...however with undithered subs, FPN will NOT average out, and it will become more visible the more subs you stack. Your 2000 sub stack is likely fully limited by FPN, which probably kicked in well before the 2000 sub count and is limiting SNR. I would offer that if you dithered those 2000 subs, even sparsely, that once stacked you would no longer see FPN, or at the very least, it would not be nearly as clear and would not limit your SNR as much. 

 

FTR, FPN is not just bands and glows. FPN is fundamentally due to non-uniform response of the pixels to photonic and dark signals. The posterized look in this image is most likely due to the per-pixel FPN that arises due to that non-uniform response. Since no dithering was used, that spatially random pattern, combined with the vertical banding, was effectively "frozen" into the image. You might be surprised what even dithering every 100 frames does for a stack like that, let alone every 10 frames. 

 

Oh, and the notion that "upper bits are not filled": that is very false. Signal combines linearly in a stack, while noise combines in quadrature (it grows only as the square root of the sub count). When you stack, you are ADDING signal. Let's say that the signal in each of your subs is half a bit, 0.5e-. You stack 2000 subs, and you end up with 2000 * 0.5e-, which is 1000e-! That is a very NON-trivial signal. That is a HUGE signal. Let's say you had 1e- of quantization error:

 

SNR = (2000*0.5e-)/SQRT(2000 * (0.5e- + 1e-)) = 1000e-/SQRT(2000 * 1.5e-) = 1000e-/SQRT(3000e-) = 1000e-/54.8e- = 18.3:1

 

That is a good SNR. Assuming you did not have FPN, that would be a perfectly usable SNR. Let's say you had 5% PRNU. Photon transfer theory states that once your signal tops 1/PRNU^2, you are FPN limited. Well, 1/0.05^2 = 400e-, which would mean the signal in your 2000-frame stack was certainly fully limited by FPN. At 4% PRNU, you would be able to get to 625e- before FPN limited you. At 3% PRNU, you would be good until 1111e-; however, at 1000e- you are still going to be largely limited by FPN, certainly more than by any other noise. Given how strong the pattern is in the image above, I would say the PRNU is probably higher than 3%.
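
Jon's arithmetic from the last two paragraphs, written out as a sketch with the same illustrative numbers (0.5e- of signal per sub, 1e- of quantization noise, and the PRNU values he quotes):

```python
import math

def stack_snr(signal_e_per_sub, noise_var_e2_per_sub, n_subs):
    """SNR of a stack: signal adds linearly with sub count, noise adds in quadrature."""
    return (n_subs * signal_e_per_sub) / math.sqrt(n_subs * noise_var_e2_per_sub)

# 2000 subs with 0.5e- of signal each; per-sub variance = shot noise (0.5) + (1e-)^2 quantization:
print(round(stack_snr(0.5, 0.5 + 1.0**2, 2000), 1))   # ~18.3

def fpn_limit_e(prnu):
    """Signal level (e-) above which fixed pattern noise dominates, per photon transfer theory."""
    return 1.0 / prnu**2

for prnu in (0.05, 0.04, 0.03):
    print(f"PRNU {prnu:.0%}: FPN-limited above ~{fpn_limit_e(prnu):.0f} e-")
```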

 

Oh, and one final thing. Comparing a single 200 MILLISECOND sub to a stack of 2000 0.1 MILLISECOND subs is FAR from what I've been talking about. I have been talking about exposures that are deep enough to properly swamp read noise. A single 200ms sub is about enough to capture a slightly blurred speckle pattern of a star, and nowhere remotely close to enough to capture any faint DSO details, let alone a 0.1ms sub. You were working with a photographic test chart, which is clearly significantly brighter than a DSO. Now, your 200ms sub appears to properly swamp the read noise...however, your 0.1ms subs do not. My guess is you did NOT change the amount of light illuminating your chart between the two exposures. That does not in any way model what I've been talking about.

 

I have been talking about the difference between dark skies and bright skies. Let's say a longish L exposure of 300 seconds under dark skies of 21.3mag/sq" gives you a signal of 70e-. Now let's say you move back home, where your skies are 18.5mag/sq". The difference in sky brightness between those two sites is ~15x. There are 15 TIMES more photons reaching the sensor in your back yard than at the dark site. That means instead of a 300s exposure, you now only need a 20s exposure to get THE SAME 70e- signal. That 70e- signal is going to swamp the read noise, which is still only 3.5e- (or broken down into read noise and quantization noise, 3.2e- & 1.4e-). At the dark site, you have a background SNR of 70e-/SQRT(70e- + 3.5^2), and in your back yard you have a background SNR of 70e-/SQRT(70e- + 3.5^2) - exactly the same.
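
The same comparison as a sketch. Note the exact flux ratio for a 2.8 mag sky-brightness difference works out to roughly 13x (the thread rounds it to ~15x and 20 s), which doesn't change the conclusion: once both subs reach the same background electron level, their per-sub background SNR is identical.

```python
import math

dark_sky_mag, bright_sky_mag = 21.3, 18.5            # mag/sq-arcsec, from the example above
flux_ratio = 10 ** ((dark_sky_mag - bright_sky_mag) / 2.5)
print(f"Sky photon flux ratio: ~{flux_ratio:.1f}x")  # ~13x

target_background_e = 70.0
read_noise_e = 3.5
dark_site_exposure_s = 300.0
bright_site_exposure_s = dark_site_exposure_s / flux_ratio
print(f"Bright-site exposure for the same {target_background_e:.0f} e-: ~{bright_site_exposure_s:.0f} s")

# Per-sub background SNR is the same at either site once the same 70e- is collected:
snr = target_background_e / math.sqrt(target_background_e + read_noise_e**2)
print(f"Per-sub background SNR: ~{snr:.2f}")
```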

 

The above example with the photographic test chart is not modeling this scenario, because when you switch to 0.1ms exposures, you are not compensating with a concurrent INCREASE in the photon flux. Your photon flux remains the same, and therefore the 0.1ms exposures are most assuredly not swamping the read noise.

 

Well...I just dropped some math in a no-math thread. So I'll stop here. :p


Edited by Jon Rista, 15 December 2016 - 11:57 AM.


#20 telfish

telfish

    Mercury-Atlas

  • *****
  • Posts: 2,649
  • Joined: 17 Nov 2010

Posted 15 December 2016 - 12:39 PM

Jon, can you post your gain/offset recommendations here. I searched for them but can't find them.



#21 Midnight Dan

Midnight Dan

    James Webb Space Telescope

  • *****
  • Posts: 15,766
  • Joined: 23 Jan 2008

Posted 15 December 2016 - 02:20 PM

 For an individual sub, you are partially correct. You are incorrect for dithered and stacked subs. Dithering IS critical here, as you must dither in order to randomize pattern, however with dithering and stacking, you will average out the quantization error along with all the other noise and FPN, allowing you to recover sub-LSB details. You can expand your bit depth considerably if you stack enough frames...however beyond the first two to three bits, it starts to take a LOT of subs.

 

Yep, I understand what dithering and stacking buys me.  But if you dither and stack the same number of subs with all 3 of my graphs above, you still end up with the target's brightness values spanning fewer ADU counts (or SGP counts, doesn't matter) in the 3rd graph than you would in the 2nd.  

 

Whether you end up with enough of a span to avoid posterization will depend on the initial range and how much you stretch it.  But you will certainly reach that point sooner with graph 3 than with graph 2.

 

 All I said was it takes a shorter polluted exposure to get the signal peak to the same position as a dark site sub. I also said all you need is to swamp the read noise with the background sky by 20x to sufficiently swamp it. I never said anything about NOT sufficiently swamping the read noise, which seems to be where you may be misunderstanding.

 

I'm looking at your statement in your first post on this thread:

 

 You only need to expose to a certain background level, regardless of where you are. What will change when under bright skies is it will simply take you less time to get to that level ...

 

So this says to me that if my 1st graph was a perfect exposure (peak=20x read noise, no star clipping), then when I move to the site with more LP, I should take shorter exposures to get to that same background peak level - which is what I'm showing in graph 3.  Is that a misinterpretation of your comment?

 

-Dan



#22 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,597
  • Joined: 10 Jan 2014

Posted 15 December 2016 - 03:06 PM

Dan, I would change your graphs so that all three looked like the second one. :p You clearly denoted the minimum necessary target brightness range. However your dark site exposure is BELOW that range...I don't understand why. If you properly expose two subs one from a dark site and one from a bright site to the same 20x level, then the histograms for both of those subs should fall smack into the middle of the target brightness range, and both should have similar standard deviations. A short light polluted sub is not going to have a significantly larger deviation than a long dark site sub. Both are effectively the same EXPOSURE...it's simply that the two achieved the same exposure in different exposure TIMES. What you are effectively depicting now is two exposures of the same TIME, thus leading to the dark site exposure being UNDEREXPOSED. That is not what I am espousing. It is effectively the opposite of what I am espousing. 

 

There should be no real difference in two exposures that both reach the 20x criteria. If your background sky signal (total photon signal) is 20x the read noise, it's 20x the read noise. Doesn't matter where those photons come from, you've met the criteria for a sufficiently exposed sub.

 

As for object signal. Object signal in a short sub will be smaller than Object signal in a long sub. However once your TOTAL signal in each individual sub reaches the 20x criteria, there is no real value to increasing exposure further. You are shot noise limited. The only way you can improve your OBJECT signal at that point is to increase your total integrated exposure. You could use longer subs, however doing so means you are throwing away dynamic range. That is why we stack. Longer exposures or stacking, once you are shot noise limited, there is no real difference between the two, other than longer exposures suffer increasingly from other issues...increased blurring, increased risk of loss, etc.

 

Now, light pollution adds noise. Two hours of integrated exposure from a dark site, assuming you used optimal sub exposure lengths for the dark site (longer), is going to be significantly better than two hours of integrated exposure from a bright site, assuming you used optimal sub exposure lengths for the bright site (shorter). Increasing the sub exposure length from the bright site is not going to change the fact that two hours is simply not enough. You are still getting extra photons from light pollution, and those photons are adding noise to your image. The only way to combat that noise is to get more total integrated exposure. If your bright site is 15x brighter than your dark site, then you would need no less than THIRTY HOURS of bright site data to overcome that LP. Sadly, that is still not enough, because the total dark current in 2 hours is significantly less than the total dark current in 30 hours...so you'll need even more total integration time to actually have a truly equivalent result from the bright site. Depending on the amount of dark current you have...that might be 35 hours...it might be 60 hours, it might be 100 hours. And the longer your total integration gets, the more shot noise from LP and dark current you will have to overcome. Furthermore, FPN even with effective dithering WILL eventually put an upper limit on how much SNR you can get via stacking...so, if you end up stacking 3600 60-second bright site lights, my guess is you'll be running into FPN limitations and...well, it's kind of game over at that point. It's an insidious spiral that you can never really overcome... :p
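
The "two hours becomes thirty" scaling in that paragraph, as a minimal sketch; it assumes the only penalty is the extra sky shot noise, which, as Jon notes, understates the real cost once dark current and FPN are counted.

```python
def bright_site_hours_needed(dark_site_hours, lp_flux_ratio):
    """Integration needed under LP skies to roughly match a dark-site result,
    counting only the extra sky shot noise (dark current and FPN push it higher still)."""
    return dark_site_hours * lp_flux_ratio

print(bright_site_hours_needed(2, 15))   # 30 hours, before dark current is accounted for
```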



#23 jpbutler

jpbutler

    Apollo

  • *****
  • Posts: 1,150
  • Joined: 05 Nov 2015

Posted 15 December 2016 - 03:55 PM

Kind of trying to test my understanding here.

 

I assume that this 20x read noise suggestion is only relevant for the ASI1600.

It can be used as a valid minimum for any camera, but from what I have experienced, even at gain 75 I am already clipping the brighter stars in this camera. So, let's try and save as much dynamic range as possible.

 

Now let's say that I somehow masked off the bright stars with some kind of physical mask on the front of my telescope and wanted to increase the amount of signal that I get from a nebula, say the ghost nebula.

I could extend my exposure time and go beyond this self imposed limit of 20x read noise. Knowing that I was going to saturate any remaining stars in order to better capture the dynamic range of the nebula.

 

Or let's say that Sam came out with a 16-bit camera with a well depth of 50k e- and the same gain and read noise characteristics as the ASI1600. We could come up with a recommended value for exposure, but it would not be 20x read noise.

 

Is this sound logic or am I missing something?

 

John



#24 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 25,597
  • Joined: 10 Jan 2014

Posted 15 December 2016 - 05:35 PM

Why would this hypothetical 50ke- camera have a recommended exposure value different from 20x the read noise? This guideline doesn't care about the FWC. It is FWC-agnostic. It only cares about the read noise. It works for any camera. This is because once you have sufficiently swamped the read noise, further exposure has greatly diminished, and further diminishing, value in the face of mounting detriments. It doesn't matter if the FWC is 10k, 20k or 50k...if you have 1e- read noise, then you only need a background sky of 20e- to sufficiently swamp the read noise. Consider that a pure signal of 20e- affected only by its own intrinsic shot noise has an SNR of 4.47:1, while a 20e- signal that also has 1e- read noise has an SNR of 4.36:1. This is a difference of about 0.2dB. That is meaningless.
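
The 0.2 dB figure, written out (the read noise nudges the SNR down from about 4.47:1 to about 4.36:1):

```python
import math

signal_e = 20.0        # background sky signal at the 20x point for 1e- read noise
read_noise_e = 1.0

snr_shot_only = signal_e / math.sqrt(signal_e)                      # ~4.47:1
snr_with_read = signal_e / math.sqrt(signal_e + read_noise_e**2)    # ~4.36:1
loss_db = 20 * math.log10(snr_shot_only / snr_with_read)

print(round(snr_shot_only, 2), round(snr_with_read, 2), round(loss_db, 2))   # 4.47 4.36 0.21
```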

 

If you are not clipping anything once you reach the 20x point, then you could certainly continue to expose. However, the longer your exposures, the more blur they will experience due to seeing, tracking and environmental effects (i.e. wind). The longer your exposures, the more you risk when you have to toss a sub. Such risks can be mitigated by spending more money...and if you have a highly reliable $10k (or more expensive) mount, then you might have little issue with using longer exposures because you may never have to toss any. However at some point, you are going to start clipping information.

 

Additionally, bit depth has very, very little to do with when you clip data...and using FWC as a gauge can be quite misleading. FWC is mostly related to photodiode area. Larger pixels generally have larger FWCs, smaller pixels generally have smaller FWCs. Smaller pixels can sometimes gain ground in terms of FWC when smaller transistors are used, and that is a large part of the reason why an ASI1600 with 3.8 micron pixels has most of the FWC capacity of a KAF-8300 with 5.4 micron pixels. It is highly unlikely that a camera with a 50ke- FWC would still have 3.8 micron pixels. It is more likely that such a camera, if we assume it uses the same fabrication process as the ASI1600, would have 6-7 micron pixels. Here's the rub...those larger photodiodes are going to gather more light per unit time than smaller photodiodes. If we assume that this hypothetical 50ke- FWC camera had 6 micron pixels, then those pixels will be gathering 2.5x as many photons per second as an ASI1600 pixel. Both cameras would saturate in exactly the same amount of time. (Larger pixels also tend to have more read noise, so you are not guaranteed to have more dynamic range with those bigger pixels either.)
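
The 2.5x figure is just the ratio of pixel areas; a sketch follows, with the ASI1600 full well treated as an assumed round number of ~20ke- and the 50ke- camera being the hypothetical one from the question.

```python
def photon_rate_ratio(pixel_um_a, pixel_um_b):
    """Relative photons collected per second, assuming flux scales with photodiode area."""
    return (pixel_um_a / pixel_um_b) ** 2

ratio = photon_rate_ratio(6.0, 3.8)
print(f"A 6.0 um pixel collects ~{ratio:.1f}x the photons of a 3.8 um pixel")   # ~2.5x

# If the full well also scales by roughly the same factor (~20ke- -> 50ke-),
# the time to saturation is essentially unchanged under the same sky:
fwc_small_e, fwc_large_e = 20_000, 50_000
print(round((fwc_large_e / ratio) / fwc_small_e, 2))   # ~1.0 -> saturates in about the same time
```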

 

To me, this is the beauty of the 20x guideline. You don't have to think about all these nuances. It bypasses all of them, and gives you a SIMPLE means of determining the minimum necessary signal level to sufficiently swamp the read noise in each sub. If you choose to use longer exposures and potentially suffer the consequences to resolution or sub loss, that is another SUBJECTIVE decision, and I can't help you there. :p But the 20x guideline is a very simple, OBJECTIVE means of determining what you need at a minimum to acquire sufficient exposure in each sub to have good stacking efficiency. 


Edited by Jon Rista, 15 December 2016 - 06:39 PM.


#25 GeneralT001

GeneralT001

    Mercury-Atlas

  • *****
  • Posts: 2,751
  • Joined: 06 Feb 2012

Posted 15 December 2016 - 06:45 PM

A KAF-8300 has ~9e- read noise, which means you should aim for a 180e- background sky level. An ASI1600 has at most 3.5e- read noise at minimum gain, so you only need a 70e- background sky level. At unity gain, you only have 1.55e- read noise...so you only need  a 31e- background sky level. 

 

 

Hi Jon,

 

For clarification on what you wrote above: the ASI1600 has ~3.5e- read noise at 0 gain, but in order to calculate its "ideal" DN, would you not have to first divide the read noise by the gain (3.5e- / 5e- per ADU) and then multiply by 20? So 20 x (3.5e- / 5e-) = 20 x 0.7 = 14 DN, rather than the 70e- background sky level you mention above, for 12-bit?



