
# Can not sufficiently swamp read noise w/3nm Astrodons

149 replies to this topic

### #51 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 28 July 2018 - 12:14 AM

I think the additional noise from additional subs is more than what has been suggested.

Say 4 hours produces 800e of sky background.

Say RN = 2e

Say 30sec x 480 subs = 4hours

Noise = sqrt(800 + 480×2^2) = 52.15

If we go to 60sec x 240 subs = 4hours

Noise = 41.95

2min x 120 = 4hrs

Noise = 35.77

4min x 60 = 4hrs

Noise = 32.25

Using RN = 1.3e

480subs, 30 sec, noise = 40.1

240subs, 60 sec, noise = 34.7

120subs, 2 min, noise = 31.7

60subs, 4 min, noise = 30.0

If you use 10 sec subs for 4 hrs (still with RN = 1.3e), noise = 56.9

IMO what this shows is that to maximize SNR you need to find your system's maximum allowable exposure time, given the constraints noted in an earlier post.
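All of these figures come from one formula, which can be checked with a short script (a sketch using the same assumed numbers as above: 800e- of sky background over 4 hours, and one read-noise hit per sub):

```python
import math

def stack_noise(sky_e, n_subs, read_noise):
    """Total noise (e-) in a stack: shot noise on the total sky signal
    plus one read-noise contribution per sub."""
    return math.sqrt(sky_e + n_subs * read_noise ** 2)

# 800e- of sky background over 4 hours, split into different sub counts
for rn in (2.0, 1.3):
    for n_subs in (480, 240, 120, 60):
        print(f"RN={rn}, {n_subs} subs: noise = {stack_noise(800, n_subs, rn):.2f}")
```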

Edited by ks__observer, 28 July 2018 - 07:44 AM.

• StevenBellavia likes this

### #52 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 28 July 2018 - 09:12 AM

Interestingly, if I did this correctly:

Using 800e as sky background over bias per JR's reported value, and RN=2, in the above noise formula, assuming a 4 hour capture, the knee for optimum exposure time is about 2 minutes.

Interestingly, even if you change the RN value and background signal, the knee always ends up about 2 min.

https://www.dropbox....Sub N.xlsx?dl=0

• StevenBellavia likes this

### #53 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 01:11 PM

Interestingly, if I did this correctly:

Using 800e as sky background over bias per JR's reported value, and RN=2, in the above noise formula, assuming a 4 hour capture, the knee for optimum exposure time is about 2 minutes.

Interestingly, even if you change the RN value and background signal, the knee always ends up about 2 min.

https://www.dropbox....Sub N.xlsx?dl=0

I think you misunderstood. The 800e- was the total integration, not the per-sub value. The per-sub value was around 20e- or so. That is with swamping the read noise by ~10xRN^2. If you use 20e- rather than 800e-, I think you will find that read noise matters more. Same would go for 10e-, etc.

### #54 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 28 July 2018 - 01:28 PM

This is a total integration time graph.

Noise = sqrt(800 + N×RN^2)

Even doubling sky background, even setting sky background to zero (say a 3nm Ha filter from a Bortle 1 zone), increasing read noise, decreasing read noise, you always wind up roughly with a knee around 120 read noise hits -- visually looking at the graph.

This would yield a 120 Rule for sub exposure time = Total Exposure Time / 120.

Most interestingly, it does not matter if it is Ha or Lum.

For both Lum and for Ha -- maybe Ha you want to tweak up a bit based on the curve -- the ballpark number I think is a 120 rule as a good guide.

So imaging for:

1hr, 60min, Exp = 60/120= 0.5 min
4hr, 240min, Exp = 240/120= 2.0 min

At these exposure times, for most people, you do not need to worry about being mount limited, and if shooting around unity you probably don't have to worry about being DR/saturation limited.
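A quick way to eyeball the "120 rule" claim: tabulate the total-noise curve for a couple of read-noise values and look at where it starts climbing (a sketch; the 800e- total background is the assumed value from earlier in the thread):

```python
import math

def stack_noise(sky_e, n_subs, rn):
    # Total noise in the stack: sqrt(sky + N * RN^2)
    return math.sqrt(sky_e + n_subs * rn ** 2)

# Noise vs. number of subs (read-noise hits) for two read-noise values.
# The claim is that the visually judged knee sits near N ~ 120 either way.
for rn in (1.3, 2.0):
    curve = {n: round(stack_noise(800, n, rn), 1) for n in (30, 60, 120, 240, 480)}
    print(f"RN = {rn}e-: {curve}")
```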

Edited by ks__observer, 28 July 2018 - 01:31 PM.

### #55 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 01:51 PM

This is a total integration time graph.

Noise = sqrt(800 + N×RN^2)

Even doubling sky background, even setting sky background to zero (say a 3nm Ha filter from a Bortle 1 zone), increasing read noise, decreasing read noise, you always wind up roughly with a knee around 120 read noise hits -- visually looking at the graph.

This would yield a 120 Rule for sub exposure time = Total Exposure Time / 120.

Most interestingly, it does not matter if it is Ha or Lum.

For both Lum and for Ha -- maybe Ha you want to tweak up a bit based on the curve -- the ballpark number I think is a 120 rule as a good guide.

So imaging for:

1hr, 60min, Exp = 60/120= 0.5 min
4hr, 240min, Exp = 240/120= 2.0 min

At these exposure times, for most people, you do not need to worry about being mount limited, and if shooting around unity you probably don't have to worry about being DR/saturation limited.

If you are going by this statement:

`Using 800e as sky background over bias`

Then your calculations will be wrong. The per-sub signal over bias is 20e-, not 800e-. With 800e-, you would be swamping read noise squared by 200x. That is why read noise doesn't matter in your calculations.

You need to use 20e- as the signal over bias level, not 800e- over bias.

### #56 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 28 July 2018 - 01:57 PM

Sky background is 800e over four hours.

N number of read noise hits over four hours.

Noise after 4 hours = sqrt(800 + N*RN^2).

### #57 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 02:01 PM

I am also not clear exactly what assumptions are being made in the model.  It is ASI1600 with certain settings, f6, 3nm, and given sky background - but what is the "signal" used to calculate SNR?  People often talk about "The SNR" - but it assumes a particular signal value.  Or are you just looking at total noise in a stack - and looking at equivalent noise?

The signal is always the background sky signal itself. Let's just take Veil as an example. You have distinct areas with on-band signal (i.e. Ha, OIII, SII), and distinct areas that are effectively empty sky (there may be very very faint signals there, but they are small enough that the background continuum signal will totally dominate). So with the 10xRN^2 guideline, you want an area of empty "background sky" to have a signal level that is 10x the read noise squared.

For an object like Elephant Trunk, or the region around it, I would measure my background sky in one of the areas of dark dust. But, that being said...you don't really need to "figure this stuff out" on every single image. I think it is actually best to find an area of relatively empty sky, and figure out how much exposure you need in that region of sky, where there are no narrow band signals, or few signals at all other than stars. Once you figure out what it takes to get sufficient background sky (and, again, it may be 10xRN^2 for you, it may be 3xRN^2, or something in between, once you figure out how long it takes to clip your stars to a degree you are unhappy with), then you have figured out what your exposures need to be. From that point on...just keep using the same settings.

In my case, for narrow band, 3 minute subs @ Gain 200 on an ASI1600 work pretty darn well. Depending on the object and the night (i.e. how good or bad the skies are), I swamp my read noise squared by 8-11x. It is not always exactly 10x, but that doesn't matter all that much. I am more concerned about consistency here in the long run. 8x is sufficient, 11x is excellent. By using the same settings every time, I can reuse calibration frames over and over, and I know that while I'll clip a few stars, I don't clip too many nor by too much.

My main point for all this stuff is that practical considerations dominate - and the challenge of getting a good set of 10m exposures is very different from 5m exposures - and there are additional benefits with the added dithering and sigma rejection in the larger number of exposures.  If it "hurts" at all to go longer - you probably shouldn't do it.  And if it doesn't "hurt" to go longer, and you can still get many frames - go ahead and do it.  And that is true regardless of any 5, 10, or 20 factor threshold considered "swamped" or not.

I totally agree here. There is a balance between long and short. I also agree that not everything is always about per-sub SNR. I myself am a fan of shorter or short-ish subs specifically because of the added benefits of dithering and sigma rejection with a larger number of subs.

I agree that what the actual swamp factor is, when push comes to shove, is not as important as the characteristic of the whole signal, in individual subs and the final integration. There are more concerns than just background SNR. Clipping brighter signals/stars is a concern. Rejecting outliers (trails and hot pixels and such) is another concern. I often see remnants of airplane or meteor or satellite trails in stacks of only 20-30 subs. Even though they are long exposures, sometimes 20 subs or so is just not enough to fully reject unwanted trails.

I don't call 10xRN^2 a "rule" though, if I can help it. It is a guideline. Same as 3xRN^2 or 5xRN^2 are. It is a place to START, then you can go from there.

Edited by Jon Rista, 28 July 2018 - 02:05 PM.

### #58 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 02:15 PM

Sky background is 800e over four hours.

N number of read noise hits over four hours.

Noise after 4 hours = sqrt(800 + N*RN^2).

Read noise definitely matters, though. The knee cannot be the same for every read-noise value, which is what your calculation implied.

If you have 8e- read noise, and stacked 40 subs, your read noise term alone would be 40*8^2, which is 2560e-. That HAS to change where the knee is, which means you have to use a longer exposure, and thus fewer subs, in order to achieve the same SNR with 8e- as with 2e-. If you had 5e- read noise, you would have 40*5^2 which is 1000e-.

N8e = SQRT(800 + 40*8^2) = SQRT(800 + 2560) = SQRT(3360) = 58e-

N5e = SQRT(800 + 40*5^2) = SQRT(800 + 1000) = SQRT(1800) = 42.4e-

You are missing something here, because the knee cannot be the same for all read noise. Your exposure has to change in order to swamp the read noise enough depending on what the read noise is.

Edited by Jon Rista, 28 July 2018 - 02:17 PM.

### #59 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 28 July 2018 - 02:34 PM

https://www.dropbox....Sub N.xlsx?dl=0

Formula:

Noise = sqrt(800 + N×RN^2)

See for yourself.

Play with the numbers.

### #60 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 02:44 PM

https://www.dropbox....Sub N.xlsx?dl=0

Formula:

Noise = sqrt(800 + N×RN^2)

See for yourself.

Play with the numbers.

Ok, I see what you are doing.

I think you are misinterpreting your own graph. Or perhaps to put it another way, I think your graph is misleading. If I put in 8e-, with 12 minute exposures, I have 54e- read noise. While the knee did not move, the SNR is most definitely NOT the same at 8e- as at 2e-, for any exposure.

Try to normalize the SNR in the final stack. To do that you will need to add more exposure lengths above 12. When you do that, the knee will definitely move. By the time you get the same SNR at the knee with 8e- as with 2e-, you will find that the required exposure length is a good deal longer.

Edited by Jon Rista, 28 July 2018 - 02:46 PM.

### #61 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 02:54 PM

It should also be noted that the required exposure time would only scale directly with the read noise if the image scale and f-ratio were the same. That is a fairly specific use case. It is useful theoretically, but in reality, exposure time is going to be dependent on image scale and aperture/f-ratio as well.

So the knee in the graph is going to change a lot depending on your specific setup.

If we go with the assumption that different cameras are used on the same scope, then image scale would have to be taken into account. To be more specific, the flux per pixel would change as the camera was changed. So while read noise is definitely a factor, so is flux. Flux is also going to be dependent on the skies the imager has.

This is why the guideline is simply SwampFactor * Nread^2. There is no easy way to plot a simple graph that imagers can simply look at and determine their optimal exposure from. The only way to determine optimal exposure is to EXPERIMENT.

### #62 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 28 July 2018 - 03:22 PM

Noise = sqrt(Shot Noise + N×RN^2)

Shot Noise = Target + Sky + Dark-temp

F-ratio, pix size, and location will control Shot Noise factor.

You drag all the crap that you shoved into your car out at your dark site (or home if NB), you set up, and now you have X hours to image.

How do you know how many subs is too many, or how many subs are too few?

Answer: Simply look at the Noise chart.

Too many subs and the number of read-noise-hits increases the Noise too much and you start climbing the curve.

Too few subs and you are on the flat part of the curve, and with limited loss in SNR you can decrease exposure time to optimize other constraints.

Graphing the Noise curve gives a good sense of where the knee is to optimize number of subs to use so that the total read-noise-hits is optimized.

While every system will be different, they tend around 120+/- for the knee -- and this is true for Ha and Lum and various other settings (looking at the Ha curve maybe 90 is better -- longer subs).

I suggest playing with the numbers and deciding for yourself.

And yes, experiment with all the various other constraints.

Edited by ks__observer, 28 July 2018 - 05:19 PM.

### #63 cfosterstars

cfosterstars

Mercury-Atlas

• Posts: 2,774
• Joined: 05 Sep 2014
• Loc: Austin, Texas

Posted 28 July 2018 - 06:24 PM

I have been trying to follow this thread since the start and others just like it. For me, I just keep it simple: don't saturate too much and then take as long an exposure as I can given my mount. Then take as many as I possibly can. I know this sounds simplistic and it is, but how much of this is just splitting hairs? I think I understand that this is about how to get the absolute most out of the time that you spend. I get that, especially if you are investing time in traveling to a dark site. But how much of a difference will it really make if you take 4 minute subs vs 20 minute subs with a low noise camera? If I have 30 hours of exposure, could you tell the difference? I am not being critical here. Don't get me wrong. But so far, I have basically been ignoring this level of detail. But should I start?

### #64 freestar8n

freestar8n

Vendor - MetaGuide

• Posts: 9,493
• Joined: 12 Oct 2007

Posted 28 July 2018 - 07:13 PM

I have been trying to follow this thread since the start and others just like it. For me, I just keep it simple: don't saturate too much and then take as long an exposure as I can given my mount. Then take as many as I possibly can. I know this sounds simplistic and it is, but how much of this is just splitting hairs? I think I understand that this is about how to get the absolute most out of the time that you spend. I get that, especially if you are investing time in traveling to a dark site. But how much of a difference will it really make if you take 4 minute subs vs 20 minute subs with a low noise camera? If I have 30 hours of exposure, could you tell the difference? I am not being critical here. Don't get me wrong. But so far, I have basically been ignoring this level of detail. But should I start?

I think that's a good summary.  Longer is better except for saturation, possibly lower yield, and fewer frames to stack.  There is no well defined point where things are suddenly about as good as they can get - and other practical matters may prevent going that long in the first place.

And as soon as you say "get the absolute most out of the time that you spend" - that is very different from saying "how much time you need to achieve a given level of quality."  And I agree that the former is how most people operate - hence what matters is the difference in quality in a given total amount of exposure time.

Frank

• cfosterstars likes this

### #65 freestar8n

freestar8n

Vendor - MetaGuide

• Posts: 9,493
• Joined: 12 Oct 2007

Posted 28 July 2018 - 07:20 PM

The signal is always the background sky signal itself. Let's just take Veil as an example. You have distinct areas with on-band signal (i.e. Ha, OIII, SII), and distinct areas that are effectively empty sky (there may be very very faint signals there, but they are small enough that the background continuum signal will totally dominate). So with the 10xRN^2 guideline, you want an area of empty "background sky" to have a signal level that is 10x the read noise squared.

For an object like Elephant Trunk, or the region around it, I would measure my background sky in one of the areas of dark dust. But, that being said...you don't really need to "figure this stuff out" on every single image. I think it is actually best to find an area of relatively empty sky, and figure out how much exposure you need in that region of sky, where there are no narrow band signals, or few signals at all other than stars. Once you figure out what it takes to get sufficient background sky (and, again, it may be 10xRN^2 for you, it may be 3xRN^2, or something in between, once you figure out how long it takes to clip your stars to a degree you are unhappy with), then you have figured out what your exposures need to be. From that point on...just keep using the same settings.

To me this is a weird use of "SNR" because you are viewing the sky background as a signal and read noise as a noise term.  But the sky background signal behaves the same as dark current - and the signal itself is subtracted off leaving only the Poisson noise term as relevant.  I guess you are ignoring dark current.

So - you are really just saying the exposure should be long enough that all noise terms dominate read noise - and I wouldn't use the term SNR here.

SNR has primary meaning in terms of the signal being nebulosity and the noise being all the noise terms that obfuscate it.

I will try to make a tool that shows how I view this stuff - which is different from how it is normally presented.

Frank

### #66 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 08:08 PM

To me this is a weird use of "SNR" because you are viewing the sky background as a signal and read noise as a noise term.  But the sky background signal behaves the same as dark current - and the signal itself is subtracted off leaving only the Poisson noise term as relevant.  I guess you are ignoring dark current.

So - you are really just saying the exposure should be long enough that all noise terms dominate read noise - and I wouldn't use the term SNR here.

SNR has primary meaning in terms of the signal being nebulosity and the noise being all the noise terms that obfuscate it.

I will try to make a tool that shows how I view this stuff - which is different from how it is normally presented.

Frank

I am not really ignoring dark current, but with most of the cameras I use, dark current is so trivial by the time I've reached 10xRN^2 that I don't really care about it (i.e. it might be an electron or two).

To be more clear. The total shot noise in the LOWEST signal area of the image (counting ALL signal sources) should be 10xRN^2 for optimal results. It doesn't matter if that shot noise comes from dark current, background sky, or object signal, or any combination thereof. The key is that the total shot noise "swamps" the read noise. This isn't arbitrary, it is just mathematical.

If we assume we have 2e- read noise, then we would need 40e- signal to reach 10xRN^2 criteria. Now, we are measuring the WEAKEST signal in the image, regardless of where it may be. So your object signal, the signal you desire, should be even higher than this in other areas of the frame. But by measuring the weakest total signal (object+background sky+dark current), and basing your exposure criteria off of this, then you should have at least this good of performance throughout the entire frame, or better.

So, if we have 40e- background sky signal and 2e- read noise:

SNR = 40/SQRT(40 + 2^2) = 6.03:1

If we for the moment assume there is no read noise, then our background sky SNR would be:

SNR = 40/SQRT(40) = 6.32:1

If we calculate how good our SNR with read noise is, vs. how good a signal of pure shot noise would be, by following the 10xRN^2 criteria you will always achieve over 95% of the SNR that you would if the camera had no read noise at all. Doesn't matter how many subs you stack, your "effective efficiency" or "stacking efficiency" would be 95% for the worst performing signal in the frame. Your total shot noise has a much greater effect on SNR than read noise.

Any area of the frame that does have object signal in it would perform BETTER than this, since you would have not only background sky and dark current, but also the added object signal. Exactly how much better depends, object signal could be zero or it could be many times that of the background sky. Regardless, by using 10xRN^2 you know that you have rendered the impact of read noise almost moot, so you don't really need to worry about it. Your performance is almost "ideal" (pure shot noise), and generally "optimal" from an SNR standpoint. (As I said before, I agree, there are other factors besides SNR to consider.)

Now, if you have 0 object signal in a given area of the frame, then your object SNR would be:

SNRobj = 0/SQRT(40 + 2^2) = 0:1

Now, lets say we have an area of the field with 80e- total signal. The object SNR for the pixels in that area would be:

SNRobj = 40/SQRT(80 + 2^2) = 4.36:1

If you had an area with 60e- total signal:

SNRobj = 20/SQRT(60 + 2^2) = 2.5:1

Yes, object SNR is ONLY the object signal divided by the square root of all the other signals and read noise combined. However, because we have swamped read noise, the object signal will hardly be affected by read noise:

SNRobjideal = 40/SQRT(80) = 4.47:1

SNRobjideal = 20/SQRT(60) = 2.58:1

This is why we talk about the 10xRN^2 criteria. The 40e- object signal is 98% as efficient as ideal results. The 20e- object signal is 97% as efficient as ideal results. No point in wasting time trying to do better than that, honestly. There may be a point to doing worse, but at least you'll know where the upper limit is. (And, personally, I would consider 3xRN^2 to be the absolute lower limit, the bar-none, get this at the very least.)

If you can, or want, to achieve 10xRN^2, then it renders read noise effectively moot. Read noise has a minor impact to your SNR overall. No matter how many subs you stack. That is a useful thing to know, at the very least. Whether you always follow it, for every object, or not depends. Swamping read noise by less than 10xRN^2 will impact your SNR for a given integration. Again, SNR is not always the most important factor. There are other things to consider. And to normalize your SNR, you may only need to expose for slightly longer. But, at least you will know, IF you expose for Y with system X and achieve 10xRN^2, you'll achieve pretty darn optimal results.
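The efficiency percentages above follow from a single ratio: with the weakest signal at S = k×RN^2, the background SNR relative to a read-noise-free camera is sqrt(k/(k+1)), independent of the actual read-noise value. A minimal sketch of that relation:

```python
import math

def stacking_efficiency(swamp):
    """SNR with read noise relative to a read-noise-free camera,
    for a background signal of swamp * RN^2 electrons."""
    return math.sqrt(swamp / (swamp + 1.0))

for k in (3, 5, 10, 20):
    print(f"{k}xRN^2 -> {stacking_efficiency(k):.1%} of ideal SNR")
```

This is why the 10x figure lands at "over 95%": sqrt(10/11) is about 0.953, no matter what the read noise itself is.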

### #67 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 28 July 2018 - 08:15 PM

Noise = sqrt(Shot Noise + N×RN^2)

Shot Noise = Target + Sky + Dark-temp

F-ratio, pix size, and location will control Shot Noise factor.

You drag all the crap that you shoved into your car out at your dark site (or home if NB), you set up, and now you have X hours to image.

How do you know how many subs is too many, or how many subs are too few?

Answer: Simply look at the Noise chart.

Too many subs and the number of read-noise-hits increases the Noise too much and you start climbing the curve.

Too few subs and you are on the flat part of the curve, and with limited loss in SNR you can decrease exposure time to optimize other constraints.

Graphing the Noise curve gives a good sense of where the knee is to optimize number of subs to use so that the total read-noise-hits is optimized.

While every system will be different, they tend around 120+/- for the knee -- and this is true for Ha and Lum and various other settings (looking at the Ha curve maybe 90 is better -- longer subs).

I suggest playing with the numbers and deciding for yourself.

And yes, experiment with all the various other constraints.

Noise is only part of the equation. SNR considers signal and noise. The rate at which you achieve a given signal depends on a variety of factors. Just looking at a noise-only chart doesn't really tell you squat.

SNR8e = 800/SQRT(800 + 40*8^2) = 800/SQRT(800 + 2560) = 13.8:1

SNR2e = 800/SQRT(800 + 40*2^2) = 800/SQRT(800 + 4) = 28.21:1

These are very different SNRs. If you exposed for the same length of time with both cameras, one would perform significantly better than the other for a given total integration time.

Your chart isn't very useful, because it is just about noise, and only noise. You should be thinking about SNR. But even an SNR chart alone isn't going to tell you how long to expose at a dark site. You cannot determine exposure just with noise, or just with SNR. You need to know how fast each pixel builds up the signal. The 10xRN^2 criteria doesn't tell you how long to expose. It only tells you how much signal you need to get optimal results. It is up to you to EXPERIMENT and figure out how long it will take to get that much signal. Once you KNOW how long, then it is also up to you to figure out whether exposing that long is worth it or not.

### #68 cfosterstars

cfosterstars

Mercury-Atlas

• Posts: 2,774
• Joined: 05 Sep 2014
• Loc: Austin, Texas

Posted 28 July 2018 - 08:21 PM

OK.

Jon,

Can you explain then how you would go about doing this sort of optimization with a real world situation. Say I have an ASI1600MM-PRO at 200 gain and -15C for Ha and I am imaging the fireworks galaxy. What process would I go through to figure out what I would need to do to get to 10XRN^2?

I think you have one of these cameras. Can you do an example?

Thanks,

• psandelle likes this

### #69 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 28 July 2018 - 09:57 PM

Noise is only part of the equation. SNR considers signal and noise. The rate at which you achieve a given signal depends on a variety of factors. Just looking at a noise-only chart doesn't really tell you squat.

SNR8e = 800/SQRT(800 + 40*8^2) = 800/SQRT(800 + 2560) = 13.8:1

SNR2e = 800/SQRT(800 + 40*2^2) = 800/SQRT(800 + 4) = 28.21:1

These are very different SNRs. If you exposed for the same length of time with both cameras, one would perform significantly better than the other for a given total integration time.

Your chart isn't very useful, because it is just about noise, and only noise. You should be thinking about SNR. But even an SNR chart alone isn't going to tell you how long to expose at a dark site. You cannot determine exposure just with noise, or just with SNR. You need to know how fast each pixel builds up the signal. The 10xRN^2 criteria doesn't tell you how long to expose. It only tells you how much signal you need to get optimal results. It is up to you to EXPERIMENT and figure out how long it will take to get that much signal. Once you KNOW how long, then it is also up to you to figure out whether exposing that long is worth it or not.

The problem is for LP areas, fast scopes, and high gains, the 10*RN^2 formula leads to squat exposure times that result in a ridiculously large number of read-noise-hits that completely de-optimizes the SNR in your total integration and kills your precious SNR at the dimmest areas.

Noise is what matters.

SNR is all about the Noise in the denominator.

Follow the formula: Noise = sqrt(Shot Noise + N×RN^2)

For NB, it leads to ridiculously long exposures when you can suffer many more read noise hits with more exposures, less exposure time, with minimal impact on total SNR -- and better results with more subs and less sub exposure time.

The SNR formula rules.

And 800 in the numerator above does not seem to make sense as it is not "target signal," so I think you have to just deal with the denominator: noise.

### #70 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 29 July 2018 - 01:24 AM

The problem is for LP areas, fast scopes, and high gains, the 10*RN^2 formula leads to squat exposure times that result in a ridiculously large number of read-noise-hits that completely de-optimizes the SNR in your total integration and kills your precious SNR at the dimmest areas.

Noise is what matters.

SNR is all about the Noise in the denominator.

Follow the formula: Noise = sqrt(Shot Noise + N×RN^2)

For NB, it leads to ridiculously long exposures when you can suffer many more read noise hits with more exposures, less exposure time, with minimal impact on total SNR -- and better results with more subs and less sub exposure time.

The SNR formula rules.

And 800 in the numerator above does not seem to make sense as it is not "target signal," so I think you have to just deal with the denominator: noise.

You don't seem to understand this. If you use 10xRN^2, it doesn't matter what the exposure time is once you figure it out. You will have 95% stacking efficiency REGARDLESS. If you swamp by 10x, with 1 minute subs, 3 minute subs, 10 minute subs or 30 minute subs (whatever is required for your system), it doesn't matter. You've swamped the read noise. It isn't possible for 10xRN^2 to "deoptimize" the SNR. Completely false. Using 10xRN^2 GIVES you optimal SNR for a given amount of read noise.

Noise in isolation is not what matters. SNR is what matters. Noise on its own doesn't tell you anything. Noise and signal are two sides of the same coin, inseparable traits of the same thing. It wouldn't matter if you had 500e- noise; if you had 50,000e- object signal, your SNR would be 100:1...phenomenal. Noise on its own doesn't tell you squat.

Further, optimizing your image is about maximizing SNR as much as possible without detriment to parts of the image. If you are not clipping stars, then there is ZERO reason not to use longer exposures. To not use longer exposures would be wasteful when you aren't clipping anything.

With CMOS cameras, exposures are usually in the minutes. Complaining about using 5 minute subs instead of 3 minute subs is ridiculous. On the flip side, to use 5 minute subs with a higher noise camera when 30 minute subs are NECESSARY is just plain dumb. With CMOS cameras, the opposite problem is usually the bigger one: you often end up needing exposures that are so short, you need to stack far too many of them to be reasonable (unless stacking huge amounts of subs is explicitly one of your goals, and in the case of lucky imaging, it is...but then SNR is not your primary goal either.)

Short exposures with CMOS cameras are a byproduct of their low read noise. You CAN use shorter exposures. That doesn't change the fundamental fact that you still need to expose long enough. Your noise-only approach here is extremely misleading. You cannot use 2 minute subs with a camera that has 8e- read noise like you can with a camera that has 2e- read noise. Well, you could, but the SNR would be much lower with the 8e- read noise camera, unless some other factor was involved that balanced the playing field (i.e. much larger pixels, significantly larger aperture, something like that...but again, your noise-only approach accounts for none of that.)

Your "knee" is only stuck at 2e- because you are only accounting for noise, and not trying to normalize SNR. If you normalize the SNR for the two cameras, one with 8e- and one with 2e-, you will find that the knee most definitely does NOT stay at 2 minutes for the camera with 8e- read noise.

Plot yourself some SNR graphs. You will see what I am talking about.
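One way to see the knee move without plotting: fix the total integration time and sky flux, then find the shortest sub length that keeps the stacked background SNR within 95% of what a read-noise-free camera would give. This is only a sketch; the background flux of 7.5e-/pixel/minute is a hypothetical value chosen for illustration, and 4 hours is the total from earlier in the thread:

```python
import math

def snr_for_subs(flux, total_min, sub_min, rn):
    """Stacked background SNR for total_min of integration split into subs."""
    signal = flux * total_min            # total background signal (e-)
    n_subs = total_min / sub_min
    return signal / math.sqrt(signal + n_subs * rn ** 2)

flux, total = 7.5, 240                   # assumed: 7.5 e-/min flux, 4 hours total
ideal = math.sqrt(flux * total)          # SNR with zero read noise
for rn in (2.0, 8.0):
    t = next(t for t in range(1, total + 1)
             if snr_for_subs(flux, total, t, rn) >= 0.95 * ideal)
    print(f"RN = {rn}e-: subs of at least ~{t} min to stay within 95% of ideal")
```

With these assumptions the 2e- camera reaches the 95% point with subs of a few minutes, while the 8e- camera needs subs more than an order of magnitude longer, which is exactly the "knee moves with read noise" point.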

Edited by Jon Rista, 29 July 2018 - 02:49 PM.

• psandelle and Shiraz like this

### #71 Jon Rista

Jon Rista

ISS

• Posts: 24,254
• Joined: 10 Jan 2014

Posted 29 July 2018 - 01:31 AM

OK.

Jon,

Can you explain then how you would go about doing this sort of optimization with a real world situation. Say I have an ASI1600MM-PRO at 200 gain and -15C for Ha and I am imaging the fireworks galaxy. What process would I go through to figure out what I would need to do to get to 10XRN^2?

I think you have one of these cameras. Can you do an example?

Thanks,

I think I mentioned it before. Find an empty area of the sky. No nebula, only stars if you can, during galaxy season would be ideal. Take a medium length exposure, with this camera under say orange zone skies, that might be 2 minutes. Measure the background sky level (this will probably be done in 16-bit, if say you were doing it in SGP). This will give you the background sky level in 16-bit DN.

From here, you want to convert back to electrons. The ASI1600 is a 12-bit camera, so first divide your measurement by 16 to convert from 16-bit to 12-bit. If you measured say 1300 DN, then after conversion you would have 81.25 ADU. You now need to remove the bias offset. Let's say it's 50 ADU. That leaves you with 31.25 ADU.

Now you need to convert back to electrons. Let's say we were using Gain 200. That is 0.48e-/ADU, so we multiply our 31.25 ADU by that, which gives us 15e-. This is our background sky signal in electrons.

At Gain 200, read noise is ~1.3e-. Square that and you get ~1.7. Divide your background sky signal, 15e-, by 1.7 and you get 8.82x. You swamped your read noise, at Gain 200, with 2 minute subs, by 8.82x.
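The arithmetic above, as a short sketch (the 1300 DN measurement, 50 ADU offset, and Gain 200 figures are the example values assumed in the post):

```python
measured_dn16 = 1300               # background sky measured in 16-bit DN (e.g. SGP)
adu12 = measured_dn16 / 16         # ASI1600 is 12-bit: 81.25 ADU
bias_offset = 50                   # ADU, assumed bias offset
sky_adu = adu12 - bias_offset      # 31.25 ADU above bias
e_per_adu = 0.48                   # e-/ADU at Gain 200
sky_e = sky_adu * e_per_adu        # 15.0 e- background sky signal per sub
read_noise = 1.3                   # e- read noise at Gain 200
swamp = sky_e / read_noise**2      # ~8.88x (8.82x if RN^2 is rounded to 1.7)
print(f"sky = {sky_e}e-, swamp factor = {swamp:.2f}x")
```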

• Shiraz, cfosterstars and *Axel* like this

### #72 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 29 July 2018 - 02:24 AM

You don't seem to understand this. If you use 10xRN^2, it doesn't matter what the exposure time is once you figure it out. You will have 95% stacking efficiency REGARDLESS. If you swamp by 10x, with 1 minute subs, 3 minute subs, 10 minute subs or 30 minute subs (whatever is required for your system), it doesn't matter. You've swamped the read noise. It isn't possible for 10xRN^2 to "deoptimize" the SNR. Completely false. Using 10xRN^2 GIVES you optimal SNR for a given amount of read noise.

Noise in isolation is not what matters. SNR is what matters. Noise on its own doesn't tell you anything. Noise and signal are two sides of the same coin, inseparable traits of the same thing. It wouldn't matter if you had 500e- of noise; if you had 50,000e- of object signal, your SNR would be 100:1...phenomenal. Noise on its own doesn't tell you squat.

Further, optimizing your image is about maximizing SNR as much as possible without detriment to parts of the image. If you are not clipping stars, then there is ZERO reason not to use longer exposures. To not use longer exposures would be wasteful when you aren't clipping anything.

With CMOS cameras, exposures are usually in the minutes. Complaining about using 5 minute subs instead of 3 minute subs is ridiculous. On the flip side, using 5 minute subs with a higher noise camera when 30 minute subs are NECESSARY is just plain dumb. With CMOS cameras, the opposite problem is usually the bigger one: you often end up needing exposures so short that you have to stack far too many of them to be reasonable (unless stacking huge numbers of subs is explicitly one of your goals, as it is in lucky imaging...but there SNR is not your primary goal either).

Short exposures with CMOS cameras are a byproduct of their low read noise. You CAN use shorter exposures. That doesn't change the fundamental fact that you still need to expose long enough. Your noise-only approach here is extremely misleading. You cannot use 2 minute subs with a camera that has 8e- read noise like you can with a camera that has 2e- read noise. Well, you could, but the SNR would be much lower with the 8e- read noise camera, unless some other factor balanced the playing field (i.e. much larger pixels, significantly larger aperture, something like that...but again, your noise-only approach accounts for none of that).

Your "knee" is only stuck at 2 minutes because you are only accounting for noise, and not trying to normalize SNR. If you normalize the SNR for the two cameras, one with 8e- read noise and one with 2e-, you will find that the knee most definitely does NOT stay at 2 minutes for the camera with 8e- read noise.

Plot yourself some SNR graphs. You will see what I am talking about.

Basically everything you say is what I understood to be generally true up until the point in this thread when I decided to graph the stinkin noise formula.

That graph completely changed how I understand exposure time.

An astro image, as you probably know better than anyone on this planet, is controlled by maximizing SNR in the dimmest areas.

Reducing the total integration noise at the zero target signal points, the denominator, is the name of the game.

The total integration noise at the dimmest areas = sqrt(sky + N*RN^2).  All you need to do is optimize this equation.
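That formula can be tabulated directly; these are the same numbers as in post #51 (800e- of sky over a 4 hour integration, RN = 2e-):

```python
import math

SKY_TOTAL = 800          # e- sky signal over the full 4h integration (post #51)
TOTAL_S = 4 * 3600       # total integration time, seconds
READ_NOISE = 2.0         # e- read noise per sub

def stack_noise(sub_s):
    """Total background noise of the stack: sqrt(sky + N*RN^2)."""
    n = TOTAL_S / sub_s  # number of subs
    return math.sqrt(SKY_TOTAL + n * READ_NOISE**2)

for sub in (30, 60, 120, 240):
    print(f"{sub:4d}s subs: noise = {stack_noise(sub):.2f}e-")
```

This reproduces the 52.1 / 41.95 / 35.77 / ~32 sequence from post #51.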

Edited by ks__observer, 29 July 2018 - 02:55 AM.

### #73 freestar8n

freestar8n

Vendor - MetaGuide

• Posts: 9,493
• Joined: 12 Oct 2007

Posted 29 July 2018 - 05:29 AM

I am not crazy about the use of SNR in this context - and I have a different take on what's important when you do deep sky imaging - so I have started a separate thread here:

https://www.cloudyni...nteractive-app/

It is very different from the way this thread is going - but the two can coexist separately.  But I mainly view things in terms of the other thread.

Frank

### #74 Shiraz

Shiraz

Viking 1

• Posts: 617
• Joined: 15 Oct 2010
• Loc: South Australia

Posted 29 July 2018 - 08:23 AM

how much different would it be with 10m exposures vs. 5m exposures.

I am also not clear exactly what assumptions are being made in the model.  It is ASI1600 with certain settings, f6, 3nm, and given sky background - but what is the "signal" used to calculate SNR?  People often talk about "The SNR" - but it assumes a particular signal value.  Or are you just looking at total noise in a stack - and looking at equivalent noise?

Frank

The example below shows the difference between some real results with 5 minute and 10 minute subs, taken over the same total imaging time. There can be a lot of difference and if the subs are too short the image quality suffers.

The earlier SNR analysis is based on a nominal target signal in a dim region of the scene. This is assumed to be at a level below the sky noise, so that signal shot noise does not dominate - I used 5 photons/m^2/arcsec^2/s in the analysis, but any other value well below the sky signal would work just as well to illustrate the effect of sub length. The analysis includes dark current, but assumes that FPN is calibrated out.

cheers Ray

Edited by Shiraz, 29 July 2018 - 11:45 AM.

• Jon Rista likes this

### #75 ks__observer

ks__observer

Apollo

• Posts: 1,278
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 29 July 2018 - 09:48 AM

There seems to be a disconnect between the theory and results.

I can visually see an improvement in the 10 min sub image.

But if I have Frank's app in his thread right, the blue line for 2e- read noise, which should be similar to the case above, shows the SNR improvement vs. sub exposure time flatlining at about 3 minutes -- similar to what I believe I was getting with my graph.
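For what it's worth, that flattening shows up in the usual stacking-efficiency figure (stack SNR relative to an ideal zero-read-noise camera). The 800e-/4h sky rate is carried over from earlier posts and is an assumption for any particular sky:

```python
import math

SKY_RATE = 800 / (4 * 3600)   # e-/s sky rate, carried over from earlier posts
READ_NOISE = 2.0              # e- read noise per sub

def efficiency(sub_s):
    """Stack SNR as a fraction of an ideal zero-read-noise camera's."""
    sky = SKY_RATE * sub_s    # sky electrons collected per sub
    return math.sqrt(sky / (sky + READ_NOISE**2))

for sub in (30, 60, 120, 180, 300, 600):
    print(f"{sub:3d}s: {100 * efficiency(sub):.1f}%")
```

The curve climbs steeply at first and then levels off as sub length grows, which is the flatline behavior described above.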
