ASI1600mm-c Cheat Sheet - No math


#26 jpbutler

jpbutler

    Apollo

  • *****
  • Posts: 1,150
  • Joined: 05 Nov 2015

Posted 15 December 2016 - 08:11 PM

jon,

 

Why wouldn't this hypothetical 50ke- camera have a recommended exposure value different than 20x the read noise? This guideline doesn't care about the FWC; it is FWC agnostic. It only cares about the read noise, so it works for any camera. This is because once you have sufficiently swamped the read noise, further exposure has greatly diminished, and further diminishing, value in the face of mounting detriments. It doesn't matter if the FWC is 10k, 20k or 50k...if you have 1e- read noise, then you only need a background sky of 20e- to sufficiently swamp the read noise. Consider that a pure signal of 20e- affected only by its own intrinsic shot noise has an SNR of 4.47:1, while a 20e- signal that also has 1e- read noise has an SNR of 4.36:1. This is a difference of about 0.2dB. That is meaningless.

If you are not clipping anything once you reach the 20x point, then you could certainly continue to expose. However, the longer your exposures, the more blur they will experience due to seeing, tracking and environmental effects (e.g. wind). The longer your exposures, the more you risk when you have to toss a sub. Such risks can be mitigated by spending more money...and if you have a highly reliable $10k (or more expensive) mount, then you might have little issue with using longer exposures because you may never have to toss any. However, at some point, you are going to start clipping information.
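The SNR comparison in the quote above can be checked in a couple of lines (shot noise is the square root of the signal; independent noise sources add in quadrature). Note that the with-read-noise SNR comes out slightly lower, at about 4.36:1:

```python
import math

signal = 20.0      # background sky signal, e-
read_noise = 1.0   # read noise, e-

snr_shot = signal / math.sqrt(signal)                   # shot noise only
snr_total = signal / math.sqrt(signal + read_noise**2)  # shot + read noise
loss_db = 20 * math.log10(snr_shot / snr_total)

print(f"shot noise only: {snr_shot:.2f}:1")   # 4.47:1
print(f"with 1e- read:   {snr_total:.2f}:1")  # 4.36:1
print(f"difference:      {loss_db:.2f} dB")   # 0.21 dB
```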

 

I think the bottom line is: are stacking and increasing exposure time equivalent? Both are ways to increase SNR. But does increasing exposure time beyond the minimum required to exceed 20x read noise give you something else that stacking does not?

 

The reason I ask this is because it seems that the fainter an object is, the longer you need to expose in order to capture an adequate amount of signal. I understand why you wouldn't want to go below 20x read noise. But are there instances where you need to exceed it in order to get enough signal? I belong to a club where the prevailing wisdom is to increase exposure time to "go deeper". Stacking more than, say, 25 subs is considered a waste.



#27 GeneralT001

GeneralT001

    Mercury-Atlas

  • *****
  • Posts: 2,751
  • Joined: 06 Feb 2012

Posted 15 December 2016 - 08:46 PM

jon,

 

The reason I ask this is because it seems that the fainter an object is, the longer you need to expose in order to capture an adequate amount of signal. I understand why you wouldn't want to go below 20x read noise. But are there instances where you need to exceed it in order to get enough signal? I belong to a club where the prevailing wisdom is to increase exposure time to "go deeper". Stacking more than, say, 25 subs is considered a waste.

Is my math right here? If I have a reasonable amount of LP then I may need to get like say 28hrs of integration time, so for LRGB I could do like L=4hrs + R=8hrs + G=8hrs + B=8hrs. Now if I am doing 30sec subs then I end up with:

 

L - 480 subs

R - 960 subs

G - 960 subs

B - 960 subs

 

So a grand total of 3,360 subs (less a bit for dithering)? Is this correct?



#28 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 15 December 2016 - 10:33 PM

jon,

 

Why wouldn't this hypothetical 50ke- camera have a recommended exposure value different than 20x the read noise? This guideline doesn't care about the FWC; it is FWC agnostic. It only cares about the read noise, so it works for any camera. This is because once you have sufficiently swamped the read noise, further exposure has greatly diminished, and further diminishing, value in the face of mounting detriments. It doesn't matter if the FWC is 10k, 20k or 50k...if you have 1e- read noise, then you only need a background sky of 20e- to sufficiently swamp the read noise. Consider that a pure signal of 20e- affected only by its own intrinsic shot noise has an SNR of 4.47:1, while a 20e- signal that also has 1e- read noise has an SNR of 4.36:1. This is a difference of about 0.2dB. That is meaningless.

If you are not clipping anything once you reach the 20x point, then you could certainly continue to expose. However, the longer your exposures, the more blur they will experience due to seeing, tracking and environmental effects (e.g. wind). The longer your exposures, the more you risk when you have to toss a sub. Such risks can be mitigated by spending more money...and if you have a highly reliable $10k (or more expensive) mount, then you might have little issue with using longer exposures because you may never have to toss any. However, at some point, you are going to start clipping information.

 

I think the bottom line is: are stacking and increasing exposure time equivalent? Both are ways to increase SNR. But does increasing exposure time beyond the minimum required to exceed 20x read noise give you something else that stacking does not?

 

The reason I ask this is because it seems that the fainter an object is, the longer you need to expose in order to capture an adequate amount of signal. I understand why you wouldn't want to go below 20x read noise. But are there instances where you need to exceed it in order to get enough signal? I belong to a club where the prevailing wisdom is to increase exposure time to "go deeper". Stacking more than, say, 25 subs is considered a waste.

Once you have sufficiently swamped the read noise, for the most part, yes. Stacking and increasing exposure are effectively equivalent. It boils down to a matter of stacking efficiency. You might have 93% stacking efficiency if you follow the 20x guideline. If you expose for longer, say twice as long, and stack half as many subs, you might get up to 95% stacking efficiency. That is, of course, unless you start losing subs for any reason. Then the shorter subs suddenly give you an advantage! Lose just one of the double-length subs, and you are now at a signal deficit vs. the single-length subs (even if you also lost one of the single-length subs, too!) :p
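One way to make "stacking efficiency" concrete is the ratio of the actual SNR to the SNR an ideal zero-read-noise camera would achieve on the same sky. This is a sketch, not Jon's exact calculation; it plugs in the ASI1600's ~3.5e- Gain 0 read noise quoted later in the thread, and the exact percentages depend on how you define efficiency:

```python
import math

def stacking_efficiency(sky_e, read_noise_e):
    """SNR relative to an ideal zero-read-noise camera seeing the same sky."""
    return math.sqrt(sky_e / (sky_e + read_noise_e**2))

rn = 3.5  # ASI1600 read noise at Gain 0, e- (quoted later in the thread)

for swamp in (20, 40):  # background sky = swamp x read noise
    sky = swamp * rn
    print(f"{swamp}x read noise: {stacking_efficiency(sky, rn):.1%}")
# 20x -> ~92%, 40x -> ~96%: doubling the exposure buys only a few percent
```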

 

However, that is also not really what I've been talking about. I've been talking about the difference between using longer exposures at a dark site vs. shorter exposures at a light-polluted site. The point is, if you expose long enough to swamp the read noise the same at either site (whatever that takes for your camera; the 20x guideline is just a means of figuring out what is long enough), then the total shot noise from the dark site will be the same as from the polluted site. For the total shot noise to be the same, the signals have to be the same, which also means the histograms have to be the same.

 

Regarding going deeper: the only reason you have to expose for "long" in the first place is read noise. If we removed read noise from the equation, then we would have shot-noise-limited subs. Dark current, skyfog, and object signals all grow with time, so it wouldn't matter whether your exposure was 3600 seconds, 1200 seconds, 300 seconds, or 100 milliseconds. You could stack 50000x100ms subs, or expose for 5000 seconds in a single sub. If you have a photon flux of 1e-/min, then you are going to end up with the same 83e- signal regardless. If you have 0.01e-/s dark current, you would end up with 50e- dark current in the end, regardless. You would end up with an SNR of 83/SQRT(83+50), or 7.2:1, regardless.
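That zero-read-noise thought experiment can be sketched directly; with read noise removed, the sub length drops out of the SNR entirely:

```python
import math

flux = 1 / 60      # object photon flux, e-/s (i.e. 1 e- per minute)
dark = 0.01        # dark current, e-/s
total_time = 5000  # total integration, seconds

def stack_snr(sub_length, read_noise=0.0):
    """SNR of a stack of subs covering total_time, sub_length seconds each."""
    n = total_time / sub_length
    signal = flux * sub_length * n                  # total object signal, e-
    noise = math.sqrt(signal + dark * total_time + n * read_noise**2)
    return signal / noise

# With zero read noise, the sub length is irrelevant:
for sub in (0.1, 300, 1200, 5000):
    print(f"{sub:>6}s subs: SNR = {stack_snr(sub):.1f}")
# every sub length gives ~7.2:1, matching 83/SQRT(83+50) from the post
```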

 

The only reason to expose longer to go deeper is because of read noise. If you have a lot, then sure, you probably need to expose longer...however, the 20x rule will tell you that. If you have very little read noise, then you don't need to expose as long, and again, the 20x rule will tell you that. It's just a tool, though. All it tells you is how much signal you need to sufficiently swamp the read noise. It doesn't have a perfectly linear relationship with different read noise levels. It actually might under-estimate a bit for cameras that have lower read noise, and over-estimate a bit for cameras that have higher read noise. At least that has been my experience. It's just a tool to give you an idea of what signal you need, and from that it just takes a bit of testing to figure out how long you need to expose. When you compare similar integration times, the differences in final SNR are usually quite small. The differences are usually only a few percent, and in my experience so far with the ASI1600, I am usually well into the 90% stacking efficiency range by using the 20x rule. At low gain, I've exposed for too long in a few cases, and ended up swamping the read noise by over 100x (around 160x, IIRC) in only 60 seconds of L exposure. :p I had around a 560e- mean background sky signal vs. 3.5e- read noise in my Andromeda L subs. I also clipped over 60 stars...so 60 seconds was clearly too long!

 

---

 

I will say this. At some point, a matter of practicality comes into play. I am always measuring how large my signals are, even while I am imaging. In the case of the Andromeda subs, I decided to stick with what I'd started with, as I did not want to end up stacking a thousand 5-10 second subs. I can't help anyone with where the subjective cutoff is. Personally, with this camera, I aim for somewhere between 200-300 subs, as I feel that's the sweet spot. It recovers enough bits to give me good precision, high quality data...but not so much that I start running into any real problems with FPN (assuming I dithered properly). Not everyone wants to stack 200+ subs. Some people don't even  want to stack 100 subs. Those are personal decisions, and have little to do with what might be statistically relevant.

 

If you still PREFER to stack only 25 subs, then more power to you. With a 16-bit camera and very long exposures, that should be fine. However, that doesn't necessarily work, statistically, for every camera. Using 30 minute subs with the ASI1600 runs into some other practicality issues. Amp glow can get quite significant with subs much longer than 10 minutes, and the additional noise can be somewhat rabid at 20 minutes and beyond. You are probably going to lose more than you'll gain by using 30 minute NB subs with an ASI1600. Plus, stacking only 25 subs of 12-bit data isn't going to do much good against posterization if you encounter any. You need to stack a good deal more than that to smooth out the quantization error from lower gain settings and the 12-bit ADC.



#29 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 15 December 2016 - 10:48 PM

 

jon,

 

The reason I ask this is because it seems that the fainter an object is, the longer you need to expose in order to capture an adequate amount of signal. I understand why you wouldn't want to go below 20x read noise. But are there instances where you need to exceed it in order to get enough signal? I belong to a club where the prevailing wisdom is to increase exposure time to "go deeper". Stacking more than, say, 25 subs is considered a waste.

Is my math right here? If I have a reasonable amount of LP then I may need to get like say 28hrs of integration time, so for LRGB I could do like L=4hrs + R=8hrs + G=8hrs + B=8hrs. Now if I am doing 30sec subs then I end up with:

 

L - 480 subs

R - 960 subs

G - 960 subs

B - 960 subs

 

So a grand total of 3,360 subs (less a bit for dithering)? Is this correct?

 

You should be able to increase exposure for the RGB filters. Additionally, unless you are chasing some very faint color, you don't need more exposure in RGB than in L. You can usually get away with significantly less.

 

The idea is to always get your background sky signal to the same level, regardless of what filter you are using or where you are imaging from. A color filter is only going to pass at most about 1/3 as much light as an L filter, so you should be able to expose about 3x longer. Not every filter is exactly 1/3 the bandpass of the L filter, and in general the flux of each color will differ a bit. In the case of AstroDon filters, I have found that I can expose G and B for 2-2.5x longer than L, and R 3x longer than L.

 

Assuming the filters you are using are similar, then you should be doubling the exposure lengths for your RGB filters. Maybe even more than double. However, you might only need about 1 hour of color data per channel. So, in your case, instead of 960  subs each for RGB, you would only need 60 each. All this stuff about posterization and such doesn't really apply to the RGB channels either. You can use much heavier noise reduction with the RGB channels, and you can even apply a small amount of blur with either a tool like MMT, or using the Convolution tool in PI. This will smooth out the data considerably in the RGB channels, effectively rendering all the concerns about noise and posterization moot.

 

What matters is your L channel. If you want to get 28 hours of integration time, then you are going to have to change something. :p The simplest solution is to just drop gain to the lowest setting, and figure out how long you can expose without overly clipping your stars. I don't know exactly how much LP you have...but IIRC from prior conversations, it was an orange zone? That is better LP than I have, and I can't really use much longer than about 60 second L exposures with my ASI1600. I'm around a red/white zone border, which is probably about 1.5mag/sq" brighter than the middle of an orange zone. You might be able to get L exposures that are 3x, maybe even 4x longer than me. I would experiment with 2-3 minute L subs, and see how many stars you are clipping. If you just see a light speckling of white pixels in an unstretched sub, then you are fine. If you see a lot of speckles and are starting to see white circles instead of just a smattering of pixels, then you are probably exposing for too long. 

 

However, considering that 28 hours is 1680 minutes...you really want to see if you can use at least 2 minute subs (which would reduce your sub count to 840), and if you can use 3 minute or longer subs, that would be better, reducing your sub count to 560 or less. Now, 28 hours is a tall order. :p It takes a LOT of dedication to get that much exposure on a single object. My guess is you might get 10-15 hours, and you'll probably have 200-300 2-3 minute subs. I think 200-300 subs is a good place to be with this camera...more than that, and you might not actually benefit, as FPN will start limiting you unless you are dithering a LOT (and the more frequently you dither, the longer it will take you to acquire 10, 15, or 30 hours of data.)
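The sub-count arithmetic here is easy to tabulate for a few candidate sub lengths:

```python
total_minutes = 28 * 60  # 28 hours of integration = 1680 minutes

for sub_min in (0.5, 2, 3, 5):
    print(f"{sub_min:g} min subs -> {total_minutes / sub_min:.0f} subs")
# 0.5 -> 3360, 2 -> 840, 3 -> 560, 5 -> 336
```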



#30 GeneralT001

GeneralT001

    Mercury-Atlas

  • *****
  • Posts: 2,751
  • Joined: 06 Feb 2012

Posted 15 December 2016 - 10:57 PM

 

 

jon,

 

The reason I ask this is because it seem that the fainter an object is, the longer you need to expose in order to capture an adequate amount of signal. I understand why you wouldn't want to go below 20x read noise. But are there instances where you need to exceed it in order to get enough signal? I belong to a club where the prevailing wisdom is to increase exposure time to "go deeper". Stacking above say 25 subs is considered a waste.

Is my math right here? If I have a reasonable amount of LP then I may need to get like say 28hrs of integration time, so for LRGB I could do like L=4hrs + R=8hrs + G=8hrs + B=8hrs. Now if I am doing 30sec subs then I end up with:

 

L - 480 subs

R - 960 subs

G - 960 subs

B - 960 subs

 

So a grand total of 3,360 subs (less a bit for dithering)? Is this correct?

 

You should be able to increase exposure for the RGB filters. Additionally, unless you are chasing some very faint color, you don't need more exposure in RGB than in L. You can usually get away with significantly less.

 

The idea is to always get your background sky signal to the same level, regardless of what filter you are using or where you are imaging from. A color filter is only going to pass at most about 1/3 as much light as an L filter, so you should be able to expose about 3x longer. Not every filter is exactly 1/3 the bandpass of the L filter, and in general the flux of each color will differ a bit. In the case of AstroDon filters, I have found that I can expose G and B for 2-2.5x longer than L, and R 3x longer than L.

 

Assuming the filters you are using are similar, then you should be doubling the exposure lengths for your RGB filters. Maybe even more than double. However, you might only need about 1 hour of color data per channel. So, in your case, instead of 960  subs each for RGB, you would only need 60 each. All this stuff about posterization and such doesn't really apply to the RGB channels either. You can use much heavier noise reduction with the RGB channels, and you can even apply a small amount of blur with either a tool like MMT, or using the Convolution tool in PI. This will smooth out the data considerably in the RGB channels, effectively rendering all the concerns about noise and posterization moot.

 

What matters is your L channel. If you want to get 28 hours of integration time, then you are going to have to change something. :p The simplest solution is to just drop gain to the lowest setting, and figure out how long you can expose without overly clipping your stars. I don't know exactly how much LP you have...but IIRC from prior conversations, it was an orange zone? That is better LP than I have, and I can't really use much longer than about 60 second L exposures with my ASI1600. I'm around a red/white zone border, which is probably about 1.5mag/sq" brighter than the middle of an orange zone. You might be able to get L exposures that are 3x, maybe even 4x longer than me. I would experiment with 2-3 minute L subs, and see how many stars you are clipping. If you just see a light speckling of white pixels in an unstretched sub, then you are fine. If you see a lot of speckles and are starting to see white circles instead of just a smattering of pixels, then you are probably exposing for too long. 

 

However, considering that 28 hours is 1680 minutes...you really want to see if you can use at least 2 minute subs (which would reduce your sub count to 840), and if you can use 3 minute or longer subs, that would be better, reducing your sub count to 560 or less. Now, 28 hours is a tall order. :p It takes a LOT of dedication to get that much exposure on a single object. My guess is you might get 10-15 hours, and you'll probably have 200-300 2-3 minute subs. I think 200-300 subs is a good place to be with this camera...more than that, and you might not actually benefit, as FPN will start limiting you unless you are dithering a LOT (and the more frequently you dither, the longer it will take you to acquire 10, 15, or 30 hours of data.)

 

Thanks Jon,

 

From the few trials I have been able to do so far (wx issues), my exposure times for an ideal DN are very similar to yours - 30sec for L and 60 sec for RGB - mind you, that testing was under an almost full moon, so those numbers should be able to go up a bit.

 

Appreciate the info above.

 

Just to confirm though, with the ASI1600,  at 0 gain you should be aiming for a DN of 20 x (3.5e- / 5e-) = 14 12bit or 224 16bit. This is correct?



#31 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 15 December 2016 - 11:05 PM

Just to confirm though, with the ASI1600,  at 0 gain you should be aiming for a DN of 20 x (3.5e- / 5e-) = 14 12bit or 224 16bit. This is correct?


Well, you need a bit more than that. Something from one of my original formulas was lost somewhere along the line. You have to account for the bias offset as well. If you just want to be able to measure the background sky in a sub loaded into say SGP, you need to use this formula:
 

MinDN16 = (((ReadNoise * 20) / Gain) + BiasOffset) * 2^16/2^Bits

WHERE:

ReadNoise is the read noise in e-
Gain is the gain in e-/ADU
BiasOffset is the ADC offset in ADU
Bits is the bit depth of the camera

So, for the ASI1600 @ Gain 0 with 3.5e- read noise and 4.88e-/ADU at 12-bit:
 

MinDN16 = (((3.5 * 20) / 4.88) + 10) * 16 = ((70 / 4.88) + 10) * 16 = (14.3 + 10) * 16 = 24.3 * 16 ≈ 390

You would need a minimum 16-bit background sky of roughly 390 DN (call it 400) @ Gain 0 with Offset 10. I guess this means that technically speaking, the cheat sheet in the original post is a bit off...
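As a reusable sketch of that formula (the helper name and `swamp` parameter are mine, not from the thread; `swamp` generalizes the constant 20, and the exact Gain 0 result lands just under 400):

```python
def min_dn16(read_noise_e, gain_e_per_adu, bias_offset_adu, bits, swamp=20):
    """Minimum 16-bit background-sky level that swamps read noise by `swamp`x.

    Implements: MinDN16 = (((ReadNoise * swamp) / Gain) + BiasOffset) * 2^16/2^Bits
    """
    return (((read_noise_e * swamp) / gain_e_per_adu) + bias_offset_adu) * 2**16 / 2**bits

# ASI1600 @ Gain 0: 3.5e- read noise, 4.88 e-/ADU, offset 10, 12-bit ADC
print(round(min_dn16(3.5, 4.88, 10, 12)))  # -> 390
```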


Edited by Jon Rista, 15 December 2016 - 11:10 PM.


#32 GeneralT001

GeneralT001

    Mercury-Atlas

  • *****
  • Posts: 2,751
  • Joined: 06 Feb 2012

Posted 15 December 2016 - 11:12 PM

 

Just to confirm though, with the ASI1600,  at 0 gain you should be aiming for a DN of 20 x (3.5e- / 5e-) = 14 12bit or 224 16bit. This is correct?


Well, you need a bit more than that. Something from one of my original formulas was lost somewhere along the line. You have to account for the bias offset as well. If you just want to be able to measure the background sky in a sub loaded into say SGP, you need to use this formula:
 

MinDN16 = (((ReadNoise * 20) / Gain) + BiasOffset) * 2^16/2^Bits

WHERE:

ReadNoise is the read noise in e-
Gain is the gain in e-/ADU
BiasOffset is the ADC offset in ADU
Bits is the bit depth of the camera

So, for the ASI1600 @ Gain 0 with 3.5e- read noise and 4.88e-/ADU at 12-bit:
 

MinDN16 = (((3.5 * 20) / 4.88) + 10) * 16 = ((70 / 4.88) + 10) * 16 = (14.3 + 10) * 16 = 24.3 * 16 ≈ 390

You would need a minimum 16-bit background sky of roughly 390 DN (call it 400) @ Gain 0 with Offset 10.

 

Yes!, that is what I have been missing. This is really good as it will allow a longer exposure....maybe my skies are a bit darker than yours :band: 

 

Thanks, slowly - ever soooo.... slooowwwllllyyyyyy.... I will get this cemented, of course by then it will be time for a new camera!! :waytogo: 



#33 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 16 December 2016 - 12:53 PM

Tim, you could probably flag it to have a mod copy your latest post over your original post. You might want to throw Gain 200 into the list as well:

 

Gain 200 Offset 50: 1690 ADU

 

 

There are also "compromise" levels for the higher gain settings. Sometimes it isn't always possible to swamp the read noise by 20x if you are working with some bright objects. The Bubble Nebula, for example, can start to clip if you use too-long exposures at Gain 200 and up. You could replace the constant 20 in my formula with 15 or 10 to generate two different compromise levels for bright objects. Not sure how you might want to factor that into your cheat sheet, but I wanted to note that sometimes you might have to do that. ;) Depends a lot on the f-ratio and image scale, though.
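Generating those compromise targets is just a matter of swapping the multiplier in the same formula. Shown here with the Gain 0 numbers quoted earlier in the thread; the Gain 200 read noise and e-/ADU would need to come from ZWO's published charts:

```python
read_noise, gain, offset, bits = 3.5, 4.88, 10, 12  # ASI1600 @ Gain 0

for swamp in (20, 15, 10):
    dn16 = (((read_noise * swamp) / gain) + offset) * 2**16 / 2**bits
    print(f"{swamp}x read noise -> {dn16:.0f} DN (16-bit)")
# 20x -> 390, 15x -> 332, 10x -> 275
```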


Edited by Jon Rista, 16 December 2016 - 01:05 PM.


#34 TimN

TimN

    Skylab

  • *****
  • Moderators
  • topic starter
  • Posts: 4,284
  • Joined: 20 Apr 2008

Posted 16 December 2016 - 02:48 PM

Thanks Jon, I took your advice and had my recent post moved to replace the original cheat sheet. For those of you reading this, the original cheat sheet didn't include bias offsets. That has been corrected, and the new calculations have now replaced the originals in my first post.



#35 jpbutler

jpbutler

    Apollo

  • *****
  • Posts: 1,150
  • Joined: 05 Nov 2015

Posted 16 December 2016 - 02:49 PM

Thanks for doing this Tim.

And I apologize for partially hijacking your thread.

 

John



#36 Seanem44

Seanem44

    Vanguard

  • *****
  • Posts: 2,293
  • Joined: 22 Sep 2011

Posted 10 April 2017 - 09:28 PM

Holy hell... I'm starting to question making the jump to the 1600 from DSLR.  The more I read the more confused I get.  I need a 1600 for Dummies book.



#37 glend

glend

    Vanguard

  • *****
  • Posts: 2,008
  • Joined: 04 Feb 2014

Posted 10 April 2017 - 09:33 PM

Holy hell... I'm starting to question making the jump to the 1600 from DSLR.  The more I read the more confused I get.  I need a 1600 for Dummies book.

Just leave it at Unity and get on with imaging.



#38 Seanem44

Seanem44

    Vanguard

  • *****
  • Posts: 2,293
  • Joined: 22 Sep 2011

Posted 10 April 2017 - 09:39 PM

 

Holy hell... I'm starting to question making the jump to the 1600 from DSLR.  The more I read the more confused I get.  I need a 1600 for Dummies book.

Just leave it at Unity and get on with imaging.

 

Check. 139/21, right? Anyhow, helpful, straightforward advice. Thanks!



#39 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 10 April 2017 - 10:50 PM

 

 

Holy hell... I'm starting to question making the jump to the 1600 from DSLR.  The more I read the more confused I get.  I need a 1600 for Dummies book.

Just leave it at Unity and get on with imaging.

 

Check. 139/21, right? Anyhow, helpful, straightforward advice. Thanks!

 

You are starting with NB filters, right? Unity is a great place to start. You don't necessarily have to worry about being terribly exact. Expose until you are measuring 700-900 ADU in the background sky, remember that exposure length, and stick with it for everything you do with NB filters. The key is sub count and integration time. You want about 80-100 subs for clean, smooth results, and more is certainly not a problem from a statistical standpoint. Then, just get as many hours of integration as you can. I always aim for 6 hours per NB channel. I usually get 2-3. My images aren't as clean as I would like, but, I'm happy with them regardless. The NB data is crisp, detailed, and high resolution compared to my light polluted DSLR data. ;) And a breeze to process. 



#40 Seanem44

Seanem44

    Vanguard

  • *****
  • Posts: 2,293
  • Joined: 22 Sep 2011

Posted 11 April 2017 - 04:51 AM




You are starting with NB filters, right? Unity is a great place to start. You don't necessarily have to worry about being terribly exact. Expose until you are measuring 700-900 ADU in the background sky, remember that exposure length, and stick with it for everything you do with NB filters. The key is sub count and integration time. You want about 80-100 subs for clean, smooth results, and more is certainly not a problem from a statistical standpoint. Then, just get as many hours of integration as you can. I always aim for 6 hours per NB channel. I usually get 2-3. My images aren't as clean as I would like, but, I'm happy with them regardless. The NB data is crisp, detailed, and high resolution compared to my light polluted DSLR data. ;) And a breeze to process.

I assume SGP automatically counts this?

#41 TimN

TimN

    Skylab

  • *****
  • Moderators
  • topic starter
  • Posts: 4,284
  • Joined: 20 Apr 2008

Posted 11 April 2017 - 07:32 AM

SGP automatically calculates the median ADU for you. It's with the rest of the statistics at the upper left of the screen.



#42 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 11 April 2017 - 03:00 PM

 

 

 

 


I assume SGP automatically counts this?

 

SGP allows you to measure the image, and it also has a basic statistics panel that shows things like the image mean and median. The simplest approach is just to expose until your median is 700-900 ADU. If you have some very bright regions and some very dark regions of the image, you might just want to spot measure the dark areas, and make sure they are 700-900 ADU.

 

As an added tip: earlier in the evening you might have a bit of twilight left, and more LP, so you would want your background sky ADU to be closer to 900. As light levels taper off throughout the night, you would expect those levels to drop. As cars get off the road and people go to sleep, LP levels will usually fall (it may not always be the case everywhere, but I think in general it is). So if you measure your subs after midnight, you might be closer to 700 ADU. This is not unusual, and it won't hurt anything...but while you are figuring out what the best exposure length is for your back yard, or driveway, or wherever you will normally be imaging from, you should be aware of these potential changes.

 

Once you figure out what exposure length gives you sufficient signal, you should stick with it. If you don't vary your exposure length for a given type of filter (NB or LRGB), then you can create one master dark and just keep reusing it. Makes things a lot easier. You also don't need to overthink each night's imaging session...once you know the right exposure for your skies, you just use that exposure, always.



#43 FiremanDan

FiremanDan

    Aurora

  • *****
  • Posts: 4,983
  • Joined: 11 Apr 2014

Posted 17 April 2017 - 01:55 PM

Jon, do you have a basic ADU goal for each gain setting? Similar to what Tim started the post with?

His numbers look pretty good to me, but you have a much better grasp on this stuff than I do. 

I think I was doing 500 (at least) for my 0 gain, but I actually end up more around 900-1200.
That's at 30s lum. 10s will get me down to 500 (all of this per SGP/16-bit), but I am not dealing with thousands of 10s subs...



#44 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 17 April 2017 - 10:52 PM

I used the math to determine where to start, and I used that as my initial goal. To be honest, I don't know if my experimentation will really be that helpful to you, as I'm always messing with things a little here or there to learn more about this camera, about mono imaging, and about how to achieve my own goals. I try to be consistent with my recommendations here on the forum; however, I have adjusted some of my recommendations given some of my experiences. In particular, I generally swamp the read noise a LOT at Gain 0 with LRGB, as I've found it helps with FPN. But for the most part, I don't really share all the details of my own experimentation...it would probably just confuse people.

 

I have experimented a bit, though, to see how varying sub lengths affects my results. I don't usually use very long exposures; however, my NB subs have varied between 90-180s at Gain 200 as I've tried different approaches. I really like the ultra faint stuff, and like to go as deep as I can. So I am usually opting for 120-180 second NB subs at Gain 200 these days. I clip some stars, but I can get really deep results in much less time than I've usually seen people need with higher-noise CCD cameras.

 

For LRGB, I have no problem at all swamping the noise. It's almost too easy. However I have found large discrepancies between L, GB and R filters, and I've been using different sub lengths for the moment to compensate. I guess an alternative would be to just get more R subs, which is something I've considered, however I simply haven't had any clear sky to experiment more. I would much prefer to use the same exposure lengths, as beyond about a minute, wind tends to cause problems with eccentricity and just outright ruining subs. I would love to stick to 60s subs for all channels, and use different gain settings to manage the read noise in each channel. However I cannot automate that with any existing software, so I've stuck with 60s L, 120s GB and 240s R, just to maintain exposure parity for each sub (same background sky level). In my most recent attempt to do some LRGB, I went with two R subs per subsequence (LLRGBRLL), and used 240s exposures for R...however my latest plan is to drop R back to 120s and get 2 or 4 subs per sub sequence. I haven't had any clear sky time to try that. 
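The "exposure parity" idea here — scaling each filter's sub length so every channel accumulates the same background sky signal — can be sketched in a few lines of Python. The per-filter sky flux values below are invented for illustration only; you would substitute values measured from your own subs:

```python
def parity_exposures(base_filter, base_exposure_s, sky_flux):
    """Scale each filter's exposure so all channels reach the same
    background sky signal (e-) as the base filter does in its base exposure."""
    target_e = sky_flux[base_filter] * base_exposure_s
    return {f: target_e / flux for f, flux in sky_flux.items()}

# Hypothetical sky flux in e-/pixel/s through each filter (not measured values)
sky_flux = {"L": 4.0, "G": 2.0, "B": 2.0, "R": 1.0}

print(parity_exposures("L", 60, sky_flux))
# -> {'L': 60.0, 'G': 120.0, 'B': 120.0, 'R': 240.0}
```

With these assumed fluxes the output reproduces the 60s L / 120s GB / 240s R ratios described above: R sees a quarter of the sky flux of L, so it needs four times the exposure to reach the same background level.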

 

Keep in mind, most of this is just me trying to optimize things for my own purposes with my scope and my skies. I don't know how useful any of this would be to other people. Too many variations in sky brightness, scope aperture, f-ratio, etc., all of which will affect each individual's results. That's why I generally espouse the 20x rule or the 3xRN^2 rule. With those basic formulas, people can figure out for themselves how long they should/need to expose for their telescopes and skies. 
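The two rules of thumb mentioned above can be sketched as a short Python snippet. The read noise and sky flux numbers here are hypothetical placeholders, not measurements — you would plug in the read noise for your gain setting and the sky flux measured for your scope and skies:

```python
def min_sky_signal_20x(read_noise_e):
    """20x rule: target background sky signal of ~20x the read noise (in e-)."""
    return 20.0 * read_noise_e

def min_sky_signal_3rn2(read_noise_e):
    """3xRN^2 rule: target sky signal whose shot-noise variance is
    ~3x the read-noise variance (in e-)."""
    return 3.0 * read_noise_e ** 2

def exposure_seconds(target_sky_e, sky_flux_e_per_s):
    """Exposure length needed to accumulate the target sky signal per pixel."""
    return target_sky_e / sky_flux_e_per_s

rn = 1.7          # hypothetical read noise in e-
sky_flux = 2.0    # hypothetical sky flux in e-/pixel/s

print(exposure_seconds(min_sky_signal_20x(rn), sky_flux))   # 20x rule
print(exposure_seconds(min_sky_signal_3rn2(rn), sky_flux))  # 3xRN^2 rule
```

Note that the two rules give different targets (here 34 e- vs. about 8.7 e- of sky signal); the 20x rule is the more conservative of the two at low read noise.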



#45 ciraxis

ciraxis

    Mariner 2

  • -----
  • Posts: 244
  • Joined: 12 Feb 2014

Posted 19 April 2017 - 08:40 AM

I need to start keeping notes on these threads.  

 

Thank you guys for being so helpful



#46 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 19 April 2017 - 02:20 PM

There is a follow link at the top. That way, you can keep track of threads you like (you can find your followed threads in your user profile area), and get notifications when they are updated. 



#47 f430

f430

    Viking 1

  • *****
  • Posts: 893
  • Joined: 29 Sep 2015

Posted 19 April 2017 - 04:19 PM

I'm trying to follow along with this, but I'm just new to this a even newer to SGP. So, in the Image Statistics box at the upper left in SGP, am I correct in assuming that the numbers following the words  Mean; Median; Minimum; Maximum; Std Deviation; are all in ADU?

 

And what does DN mean?

 

Thanks, John



#48 Coolwataz

Coolwataz

    Vostok 1

  • *****
  • Posts: 124
  • Joined: 18 May 2016

Posted 20 April 2017 - 08:01 AM

I'm trying to follow along with this, but I'm just new to this a even newer to SGP. So, in the Image Statistics box at the upper left in SGP, am I correct in assuming that the numbers following the words  Mean; Median; Minimum; Maximum; Std Deviation; are all in ADU?

 

And what does DN mean?

 

Thanks, John

Read this post: https://www.cloudyni...-exposed-right/

 

Jon and others explain it all and its application.



#49 Midnight Dan

Midnight Dan

    James Webb Space Telescope

  • *****
  • Posts: 15,914
  • Joined: 23 Jan 2008

Posted 20 April 2017 - 10:07 AM

I'm trying to follow along with this, but I'm just new to this a even newer to SGP. So, in the Image Statistics box at the upper left in SGP, am I correct in assuming that the numbers following the words  Mean; Median; Minimum; Maximum; Std Deviation; are all in ADU?

 

And what does DN mean?

 

Thanks, John

It's not in ADU unless you have a 16-bit camera.  The numbers in SGP are translated into 16-bit space, so if you have a 12-bit or 14-bit camera (most DSLRs and CMOS sensors), then the ADU output of the camera will be multiplied by the appropriate conversion factor to get to 16-bit space for display in SGP.

 

DN just stands for "Digital Number".  It's a kind of generic term for a number that's been converted to a specific digital bit-space and doesn't have any other identifying name like ADU.  It is sometimes followed by a number to indicate the bit-space, for example, DN16 for a 16-bit digital number.
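The bit-space translation Dan describes can be sketched as follows, assuming a 12-bit sensor (like the ASI1600), whose ADU values are scaled by 2^(16-12) = 16 for display in 16-bit space:

```python
def to_dn16(adu, sensor_bits=12):
    """Scale a native camera ADU value into 16-bit display space (DN16)."""
    return adu * (2 ** (16 - sensor_bits))

def from_dn16(dn16, sensor_bits=12):
    """Convert a 16-bit display value back to native camera ADU."""
    return dn16 // (2 ** (16 - sensor_bits))

print(to_dn16(1000))      # 12-bit ADU 1000 -> 16000 in 16-bit space
print(from_dn16(16000))   # back to 1000 ADU
print(to_dn16(500, 14))   # a 14-bit camera scales by 4 instead: -> 2000
```

So a mean of, say, 16000 in a capture program's statistics panel would correspond to only 1000 ADU at the 12-bit sensor itself, which is worth keeping in mind when comparing target background levels quoted in different bit-spaces.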

 

-Dan



#50 f430

f430

    Viking 1

  • *****
  • Posts: 893
  • Joined: 29 Sep 2015

Posted 20 April 2017 - 10:29 AM

Okay, I've got it!

Thanks, John



