Determining optimal exposure time

23 replies to this topic

#1 Craig_

    Lift Off

  • -----
  • topic starter
  • Posts: 23
  • Joined: 15 May 2020

Posted 30 June 2020 - 10:44 PM

I've jumped into this hobby only recently, and whilst I have managed to get out a few times for imaging, I typically haven't had much time, either because cloud blew in or because technical and user issues delayed the start of imaging. Anyway, after an imaging session last night I am left wondering how best to determine an optimal exposure, particularly when shooting under changing conditions (e.g. sometimes I will be at a dark sky site with little to no moon, other times in Sydney with high levels of LP and potentially moonlight as well). Over various imaging sessions I have been testing different exposure times, but I'm still scratching my head a bit over the best way to determine that a given exposure time (assuming unity gain) is 'best' for the conditions.

 

For example, a couple of weeks back I briefly imaged the Carina Nebula (around 16 minutes only, due to cloud blowing in) using 240 s subs, no filter. Last night I imaged the same target again from the same location, but with a lot more moonlight; this time I used the Optolong L-eNhance filter and dropped to 180 s subs, imaging the target for 90 minutes in total. The individual subs, and the final stack, show a lot less of the nebula before processing, although during post-processing one can see that a lot more data was captured. I suspect my subs were too short last night, but can't say for sure. I will show what I mean with some screenshots of the unprocessed subs below, from both last night and a couple of weeks back.

 

 

Here is a screenshot of a single untouched sub taken last night. ASI533MC Pro, SkyWatcher Esprit 80, 180 s, unity gain, Optolong L-eNhance, ~70% moon, Sydney LP skies.

 

6gpxU12.jpg

 

Here is a screenshot of a single untouched sub taken about two weeks ago. ASI533MC Pro, SkyWatcher Esprit 80, 240 s, unity gain, no filter, no moon, Sydney LP skies.

 

Jn5bBBU.jpg

 

And one more, from the same night and with the same settings as the above image, except a 180 s exposure:

 

4bP904l.jpg

 

 

Do any of these look correctly exposed? Clearly there was a significant loss of light using the Optolong, which is fine; it's all a learning process for me. Does it matter if you shoot shorter subs but stack more of them? (E.g. is 10 × 3 minutes = 30 minutes the same as 6 × 5 minutes = 30 minutes from a data acquisition perspective?)

 

 

If we then look at the stacked files, after stacking in DSS using identical settings but no further processing or adjustments: the first stack is the 16 minutes (4 × 240 s) from two weeks ago, and the second is the 90 minutes (30 × 180 s) from last night.

 

OplbzMn.jpg

 

Xwir5eW.jpg

 

What I observe in the second, 90-minute stack: less nebula is visible in the unedited stack despite the much longer integration time. The sky is far less washed out despite a 70% moon (vs 0% moon during the 16-minute stack) and heavy Sydney LP in both. Stars look like they have retained colour better (they were potentially over-exposed in the 16-minute stack with its 4-minute exposures), and far fewer stars are visible now too - due to the filter, the shorter exposure, or both, perhaps?

 

What I did find with the data gathered last night is that if I made any adjustments to the image within DSS (saturation, luminance, etc.) it would introduce horrible colour banding artifacts - as though the files were shot in 8-bit - but when I put the untouched stack into Photoshop, there was no colour banding even after significant editing. I don't recall having this problem with previous images in DSS, even with far less integration time, which makes me wonder whether DSS is struggling because the subs are too short, or whether something else was causing it.

 

Here is a quick edit of the 90-minute session. Considering this was really only the third time I've gotten out to actually shoot (several more attempts were spoiled by weather), and I really have no idea what I am doing when processing these shots, I am quite pleased with the output of the L-eNhance data, but I do question whether my subs were long enough to begin with.

 

MyE5ShW.jpg

 

I will keep working on the edit, but would love to hear some opinions on how to really nail the right exposure time in camera. Cheers



#2 bobzeq25

    ISS

  • *****
  • Posts: 20,224
  • Joined: 27 Oct 2014

Posted 30 June 2020 - 10:58 PM

If you shoot too many, too-short subs, you get too much read noise. Too few, too-long ones, you get limited dynamic range. You need to strike a balance.

 

Here's one good method.  It's not trivial to figure out how to do it the first time, it is trivial thereafter.

 

Shoot a light and a bias. Subtract (either the average value or the obvious skyfog peak, it doesn't matter) to get the corrected analog-to-digital units (ADU). Using the camera data, convert to electrons. Get the read noise, which will be in electrons, and square it. You want the first number to be between 5 and 10× the second.

 

Where you land in that range is a tweak, not a big deal. At that point you're in the ballpark, and total imaging time becomes the really important thing, _not_ subexposure time.
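That recipe can be sketched in a few lines. This is a hedged illustration, not bobzeq25's exact workflow: the function name and every number below (ADU levels, gain, read noise) are made-up placeholders - substitute your own measured frames and the figures from your camera's spec sheet.

```python
# Sketch of the "swamp the read noise" check. All values are illustrative.

def sky_swamp_ratio(mean_light_adu, mean_bias_adu, gain_e_per_adu, read_noise_e):
    """Sky signal in electrons divided by read noise squared.

    Target roughly 5-10x: lower means lengthen the subs; much higher
    means you're giving up dynamic range for nothing.
    """
    sky_electrons = (mean_light_adu - mean_bias_adu) * gain_e_per_adu
    return sky_electrons / read_noise_e ** 2

# Made-up example values:
ratio = sky_swamp_ratio(
    mean_light_adu=800,   # average ADU of one light frame (measured)
    mean_bias_adu=500,    # average ADU of one bias frame (measured)
    gain_e_per_adu=1.0,   # unity gain: 1 e-/ADU
    read_noise_e=1.5,     # read noise in electrons, from the spec sheet
)
print(f"sky / read_noise^2 = {ratio:.1f}")  # far above 10x -> subs could be shorter
```

With these invented numbers the ratio lands well above the 5-10× window, which would suggest shortening the subs; with your real measurements the conclusion may differ.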

 

This book explains why that works.  Warning - math ahead.  <smile>  It also discusses some situations (rare) where it won't work, and what to do about those.

 

https://www.amazon.c...h/dp/1138055360

 

Obviously I've glossed over some things, like how you get the ADU. There's plenty of suitable software; I use PixInsight. A key thing to look out for: you want either astro-specific programs, or terrestrial ones that do not stretch the data automatically.

 

Absolutely the most important thing.  This _has_ to be a site specific calculation.  There are so many factors in play that what others do is of very little use to you.  The good news is that it works for all sites, dark or bright, and compensates for changing Moon levels.

 

By the way, if those single subs are not stretched (which converting to JPEGs can do), they look overexposed. Too much there.


Edited by bobzeq25, 30 June 2020 - 11:06 PM.


#3 scadvice

    Vanguard

  • *****
  • Vendors
  • Posts: 2,132
  • Joined: 20 Feb 2018
  • Loc: Lodi, California

Posted 30 June 2020 - 11:01 PM

Watch this and see if it helps:

 

https://www.youtube....i4k62eI&t=1436s



#4 Craig_

    Lift Off

  • -----
  • topic starter
  • Posts: 23
  • Joined: 15 May 2020

Posted 01 July 2020 - 12:19 AM

[quote: bobzeq25, post #2 above]

Thanks for the detailed response. Math was never my strong suit, but I will look into this approach! Thanks.

Regarding your comment on the single subs being overexposed - is that your view for all three single subs, or only the latter two (which are indeed considerably more exposed than the first)?

They shouldn't be stretched, as all I did was open them in DSS and take a screenshot.

 

[quote: scadvice, post #3 above]

Thanks I will check it out!



#5 klaussius

    Viking 1

  • *****
  • Posts: 549
  • Joined: 02 Apr 2019
  • Loc: Buenos Aires

Posted 01 July 2020 - 12:44 AM

180 s or 240 s for Carina sounds OK - or rather, on the high side. Carina is very bright and your camera has very low read noise, so you may be OK with shorter subs.

 

Bob describes an accurate method, but it may be hard to fill in the blanks, especially at first when you don't even know the terminology. In my case it took me a while to make the connection between photons, electrons and ADUs, and some parts of that depend a lot on whether you can find specs for your camera (the 533 does have precise specs published, so you're fine there) and which drivers you're using (some scale ADUs from the native sensor range to a "normalized" 16-bit range, some don't, and it makes a difference).

 

But, in essence, that amount of precision isn't needed on low-noise cameras, because it's easy to "swamp the read noise" with short exposures.

 

So, if you're shopping for an easier, more "practical" description, here's my take on it:

 

If you're under heavy light pollution, a good yardstick is the background level - the average ADU. Sky brightness will be your limiting factor with LP involved, so it will dictate exposure time, all the math around read noise notwithstanding.

 

You want the average ADU not to be too high, because it eats into dynamic range. It depends on the site, but for me, placing it at about 1/3 of the maximum ADU works well. That should be transferable to other sites and equipment, within limits.

 

On very dark sites, where LP is not as strong and the sky is very dark, you will not be able to push the average ADU as high without saturating lots of stars. So the other rule is to keep an eye on the number of saturated stars. Some software will show you this, some won't; in any case, you can visually inspect a sub to figure it out. Saturating a few stars, especially around Carina, is hard to avoid. You'll have to decide every time whether you're interested in the stars or the nebulosity. If you're focusing on the nebulosity, a few saturated stars are OK - they're not your subject. If you're photographing a cluster, where stars are everything, you'd better avoid saturating many of them.

 

Saturating stars hurts color. A few of the brightest ones may have to be sacrificed for the rest, but you don't want a lot of saturated pixels because that will turn all your stars white. Saturated pixels can only be white.

 

So... TL;DR: aim for 1/3 average ADU unless you hit a high saturated-star count, in which case go as high as you can without saturating a lot. In my software I count saturated pixels (not stars), and I allow only between 0.01% and 0.03% saturated pixels, depending on where my focus is (nebulosity vs stars). That may still be too many saturated pixels; I'm still learning.
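Those two checks - background near 1/3 of full scale, saturated pixels held to roughly 0.01-0.03% - can be sketched like this. The 16-bit full-scale value and the synthetic "sub" are assumptions for illustration, not real data:

```python
import numpy as np

FULL_SCALE = 65535  # assumed 16-bit ADU range

def exposure_check(sub):
    """Return (mean ADU as a fraction of full scale, saturated pixel fraction)."""
    mean_fraction = sub.mean() / FULL_SCALE
    saturated_fraction = np.count_nonzero(sub >= FULL_SCALE) / sub.size
    return mean_fraction, saturated_fraction

# Synthetic 100x100 frame: background near 1/3 of full scale, 2 blown pixels.
rng = np.random.default_rng(0)
sub = rng.normal(FULL_SCALE / 3, 500, size=(100, 100)).clip(0, FULL_SCALE)
sub[0, 0] = sub[0, 1] = FULL_SCALE

mean_frac, sat_frac = exposure_check(sub)
print(f"background at {mean_frac:.2f} of full scale, {sat_frac:.2%} pixels saturated")
```

On a real sub you'd feed in the calibrated frame and compare the two numbers against the 1/3 target and your chosen saturation budget.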

 

About the filter: it's normal that adding a filter subtracts light. The trick is that the filter subtracts more light pollution than target. So the L-eNhance will make both the sky and the target dimmer, but it will (if successful, and it should work on Carina) make the sky a lot dimmer and the nebulosity only a little dimmer. The net effect is higher contrast, so less noise per sub. As a general rule of thumb, you need longer exposures with a filter to compensate for the decreased image brightness. How much longer depends a lot on the filter: narrowband filters may need a lot of exposure - several minutes (5, 10) is common - while light pollution filters aren't that strong, so a smaller increase is enough.

 

So, by decreasing the exposure when using the L-eNhance, you went in the wrong direction. You want more, not less, exposure - assuming you were well exposed without the filter (you were probably overexposed). But that doesn't matter much: the rule I outlined above also works with filters. The filter will make your average ADU lower and your stars a little dimmer, but the overall contrast better (and thus the detail you'll see in a single, well-exposed sub).

 

It's not uncommon to be unable to see any nebulosity in a single sub; only stacking reveals it. Carina should be visible, but only because it's so bright. Not all targets are.


Edited by klaussius, 01 July 2020 - 12:52 AM.


#6 endlessky

    Explorer 1

  • -----
  • Posts: 81
  • Joined: 24 May 2020

Posted 01 July 2020 - 02:34 AM

Hi, I have been trying to understand optimal sub-exposure length as well. In my case I use an uncooled DSLR, so it would probably be a little harder to get accurate, repeatable results for the noise calculations, but eventually I am going to give it a try.

 

Up until this point, I have been following a "rule" I came across in my various readings on the topic and astrophotography in general: expose until the unstretched histogram is completely detached from the left axis, leaving a good margin between the "left foot" of the histogram (where it goes to zero) and the axis, so you can be sure you are not clipping any data. This way you can be fairly sure that even if you exposed longer, you would only shift the histogram further right, without gaining any (or much) more detail on the subject you are trying to capture. After that, the usual rules apply: the more subs you take, and the more total integration time, the better the result.
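That histogram rule amounts to checking that no pixel sits at (or near) zero ADU. A minimal numerical sketch, where the floor value of 200 ADU and the synthetic data are illustrative assumptions rather than a universal threshold:

```python
import numpy as np

def left_foot_clear(sub, floor_adu=200):
    """True if the darkest pixel sits comfortably above the left axis."""
    return int(sub.min()) > floor_adu

rng = np.random.default_rng(1)
well_exposed = rng.normal(3000, 300, 50_000).clip(0, 65535)
clipped = (well_exposed - 2900).clip(0, 65535)  # histogram shoved against the axis

print(left_foot_clear(well_exposed))  # True: histogram detached from the edge
print(left_foot_clear(clipped))       # False: data piling up at zero
```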

 

All this is fine and has worked so far, but for me, good enough is not good enough. I want to optimize everything, and I want to be sure I am not going any longer on a single sub than I absolutely need to. After all, every n seconds longer than necessary, multiplied by m subs, could by the end of a session add up to x more subs I could have used to increase the total integration time. So I searched for answers and found a very good series of articles, on this very website, that takes a scientific approach (just what I needed - I love math! Not joking) to the issue:

 

https://www.cloudyni...ng-for-photons/

 

All of them are a good read and give you tons of useful information, but I believe the one about measuring the specific values you need for sub-exposure time is this part of the series:

 

https://www.cloudyni...ur-camera-r1929

 

As has been said above, these calculations are site specific (light pollution levels matter) and optical-train specific as well (light pollution filter vs no filter). The filter blocks part of the light - the part you do not want - and allows you to take longer sub-exposures, to collect fainter details that would otherwise have been washed out by the light-polluted background sky. So with a filter you should probably always go longer than without, everything else being equal.

 

That said, I think the end result you achieved is a very nice image!



#7 sn2006gy

    Vostok 1

  • -----
  • Posts: 138
  • Joined: 04 May 2020
  • Loc: Austin, TX

Posted 01 July 2020 - 02:48 AM

It's crazy complicated!

 

Signal rate:

 

Your sky / light pollution has a signal rate.

The DSO you're trying to image has a signal rate.

 

Your filter will change the signal rate and relationship of sky/light pollution and DSO signal rate.

 

 

Mechanical stuff:

 

Sensor has read noise and dark current.

 

 

Diminishing returns:

 

It's all about signal. (signal to noise)

 

SNR has diminishing returns in regards to sub exposure length.

 

SNR has diminishing returns with regard to total integration time.

 

So about diminishing returns...

 

On our camera, it's about getting the most from sub-exposure time and integration time, so as not to waste time chasing diminishing returns. In other words, go for maximum improvement over minimal time (a balance of sub exposure and integration time) and stop chasing improvements that just won't be humanly possible to achieve for the conditions.

 

Example Read noise Diminishing Returns

 

Our sensor has very low read noise. The 533 hits diminishing returns in 10-30 seconds with respect to read noise, whereas an old Kodak CCD takes 10 minutes. So you will see some people with Kodak imagers doing 10-15 minute exposures to overcome their read noise. We don't need to.

 

ASI533: 3 seconds 1.36, 10 seconds 1.624, 30 seconds 1.732, 90 seconds 1.773, 300 seconds 1.788

 

Chasing read noise reduction for exposures over 30 seconds is "eh"; over 90 seconds it's moot.
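One way to see this flattening: per-sub noise is roughly sqrt(sky_signal + read_noise²), so the noise penalty relative to an ideal zero-read-noise camera shrinks as the sub gets longer. The sky rate and read noise below are invented for illustration and won't reproduce the exact figures quoted above:

```python
import math

def read_noise_penalty(t_seconds, sky_rate_e_per_s=1.0, read_noise_e=1.5):
    """Total noise divided by sky-noise-only, for one sub of t_seconds."""
    sky = sky_rate_e_per_s * t_seconds  # accumulated sky electrons
    return math.sqrt(sky + read_noise_e ** 2) / math.sqrt(sky)

for t in (3, 10, 30, 90, 300):
    print(f"{t:>3} s: {read_noise_penalty(t):.3f}x noise vs ideal")
```

The penalty falls toward 1.0× with longer subs, with most of the gain already captured by the 30-second mark for these assumed rates.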

 

 

Sub exposure time

 

It's super hard to give examples here without exploring specific targets. But let's say we're imaging a very faint emission nebula in a Bortle 6 sky, and we're dealing with nasty skyglow.

 

Without filter:

 

We'll hit diminishing returns on sub exposure in ~10 seconds... but we know we want 30 seconds to overcome read noise, so 30-second subs are the recommendation. Anything longer than that and our DSO signal sees greater diminishing returns: you can take longer exposures and get more data, but it's probably better to go to a darker sky (lower skyglow signal), try a filter, or stack more shorter frames. And don't forget, there is a diminishing return on stacking (integration) too! Longer exposures may get more data, but you may also get more skyglow gradient than you want. Balance.

 

With nebula filter:

 

With an L-eNhance filter, we won't hit diminishing returns on sub exposure until 300 seconds or so. The L-eNhance is focused on dual-band nebula emission, so the LP (light pollution) signal rate is drastically reduced. Shoot for 300 seconds with this filter! But there is a trick I'll mention later...

 

Integration time

 

This gets a little weird... it's hard to show without a graph.

 

Basically, our Bortle 6 skyglow integration curve is pretty flat from the get-go. 1 hour is, say, 0.345 SNR - so we want more than an hour. 3 hours is 0.772 - still growing fast, but darn, we're talking long integrations for a weak signal. 10 hours is 1.092 - wowzers, 10 hours without a filter for 1.092? Double that to 20 hours and you're only at 1.5 - so you can see that 10-15 hours of integration may be what you need in these bad skies to still have mediocre SNR. That's a lot of integration time for faint objects!

 

Throw that l-enhance back on.

 

At 1 hour of integration you have 1.75 - already surpassing the SNR of 20 hours unfiltered! At 5 hours you have 2.23, at 10 hours 3.13, and at 20 hours 4.426. The base of the curve starts much higher because the ratio of DSO signal to other signal is much higher.
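As a rough sanity check on those numbers: under the standard assumption that stacking N equal subs improves SNR by sqrt(N), SNR scales with the square root of total integration time. Anchored on the quoted unfiltered 1-hour figure, this simple model reproduces the 10-hour and 20-hour values closely (the quoted 3-hour value grows a bit faster than sqrt scaling predicts):

```python
import math

def snr_at(hours, snr_1h=0.345):
    """Predicted SNR after `hours` of integration, assuming sqrt scaling."""
    return snr_1h * math.sqrt(hours)

for h in (1, 3, 10, 20):
    print(f"{h:>2} h -> SNR ~ {snr_at(h):.3f}")
```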

 

More to come...

 

 

I'm actually working on some Google docs and spreadsheets that have been posted, to build the graphs for the 533 and share more exact data than what I've abbreviated here while explaining the quirks of this question. I'll try to update that and share it later - I already stayed up way too late!

 

Basically tl;dr

 

Camera read noise is swamped in about 10 seconds; 30 seconds to be sure. In a bad Bortle sky like mine, diminishing returns kick in around 30 seconds for unfiltered exposures of weak signals. If you chase brighter galaxies/stars and such, that changes all of this - which is why I used a faint emission nebula to show how it can vary wildly, and how a filter tuned for that nebula can dramatically increase your signal. The signal ratio is also much more in your favor the farther you get into darker skies.

 

Plot twist

 

If you take more, shorter exposures and dither every frame, you get benefits that may outweigh chasing the far end of diminishing returns. Dithering helps smooth out OSC mottle/noise/hot pixels. It allows us to calibrate frames with just lights, flats and flat-darks (simple). Dithering can also let you use the magic of Bayer drizzle in PixInsight, effectively recovering the resolution lost to the Bayer matrix in our OSC cameras and moving us closer to what monos get.

 

So while you may want to do 300-second exposures with an L-eNhance filter, you don't want to miss out on dithering - so you may take more subs than are needed to hit your target SNR for integration time, just to get the benefits of Bayer drizzle, noise reduction, easy calibration and so on.

 

If you do lots of short unfiltered subs, you may only need to dither every 3 frames... experiment.


Edited by sn2006gy, 01 July 2020 - 02:53 AM.


#8 Umasscrew39

    Apollo

  • *****
  • Moderators
  • Posts: 1,082
  • Joined: 07 Dec 2016
  • Loc: Central Florida

Posted 01 July 2020 - 02:57 AM

There is no reason to have any of this overwhelm you or seem overly complicated.

 

If math is not your strong suit, or you have no desire to calculate it, use the Smart Histogram in SharpCap. It gives you all the information you are asking for by doing the calculations behind the scenes. Robin Glover developed it, and his YouTube videos explain it very well: he goes through the math and then explains how the Smart Histogram works.

 

 

BTW - I use the ASI533MC with the L-eNhance filter like you: a gain of 200 and 120 s exposures work great in my Bortle 6 skies, but the Smart Histogram will fine-tune things for your location.



#9 Craig_

    Lift Off

  • -----
  • topic starter
  • Posts: 23
  • Joined: 15 May 2020

Posted 01 July 2020 - 04:25 AM

[quote: klaussius, post #5 above]

Thanks for the reply. Some fantastic detail there for me to work through. I suspect you are correct that I moved the exposure in the wrong direction when adding the Optolong. I felt I was getting too many saturated stars at 4 minutes without it, so I decreased to 3 minutes, but didn't account for the loss of light the filter would impose.

 

[quote: endlessky, post #6 above]

Thanks for the reply. Great info. I am shooting in APT for now (wouldn't mind learning SGP at some stage though), but I must admit I am not familiar with reading the histogram in that tool, unlike, say, Photoshop. I will need to read up on that a bit more.

 

Its crazy complicated smile.gif  

 

Signal rate:

 

Your sky / light pollution has a signal rate

Your DSO you're trying to image has a signal rate.

 

Your filter will change the signal rate and relationship of sky/light pollution and DSO signal rate.

 

 

Mechanical stuff:

 

Sensor has read noise and dark current.

 

 

Diminishing returns:

 

It's all about signal. (signal to noise)

 

SNR has diminishing returns in regards to sub exposure length.

 

SNR has diminishing returns in regards to total integration time based. 

 

So about diminishing returns...

 

On our camera, it's about getting the most from sub exposure time and integration time as to not waste time chasing diminishing returns. In other words, go for maximum improvement  over minimal time (a balance there of in sub exposure and integration time) and stop chasing improvements that just won't be humanely possible to achieve for conditions.

 

Example Read noise Diminishing Returns

 

Our sensor is a low read noise.. Very low read noise. The 533 hits diminishing returns in 10-30 seconds in regards to read noise where as an old kodak CCD takes 10 minutes to hit diminishing returns. SO you will see some people with kodak imagers doing 10-15 minute exposures to overcome their read noise. We don't.

 

AS533:  3 seconds 1.36, 10 seconds 1.624, 30 seconds 1.732, 90 seconds 1.773, 300 seconds 1.788

 

Chasing read noise reduction for exposures over 30 seconds is "eh", over 90 seconds its moot. 

 

 

Sub exposure time

 

It's super hard to give examples here without exploring specific targets. But let's say we're imaging a very faint emission nebula DSO in a bortle 6 sky and we're dealing with nasty skyglow.

 

Without filter:

 

We'll hit diminishing returns in sub exposure in ~ 10 seconds... But we know we want 30 seconds to overcome read noise.  So 30 seconds subs is recommended.   Anything longer than this and our DSO signal has greater diminishing returns... you can take longer exposures and get more data but its probably best to go to a darker sky (lower skyglow signal) or try a filter or stack more shorter frames.  But don't forget, there is a diminishing return on stacking (integration) too! Longer exposures may get more data, but you may get more sky glow gradient than you want. balance.

 

With nebula filter:

 

With an L-eNhance filter, we won't hit diminishing returns on sub exposure until 300 seconds or so. The L-eNhance passes only the dual-band nebula emissions, so the light pollution (LP) signal rate is drastically reduced. Shoot for 300 seconds with this filter! But there is a trick I'll mention later...

 

Integration time

 

This gets a little weird and is hard to show without a graph.

 

Basically, our Bortle 6 skyglow makes the integration-time curve pretty flat from the get-go. One hour gives, say, 0.345 SNR, so we want more than an hour. Three hours is 0.772, still growing fast, but darn, we're talking long integrations for a weak signal. Ten hours is 1.092. Wowzers, 10 hours without a filter for 1.092? Double that to 20 hours and you're only at 1.5. So you can see that 10-15 hours of integration may be what you need in these bad skies just to reach mediocre SNR. That's a lot of integration time for faint objects!

 

Now throw that L-eNhance back on.

 

At 1 hour of integration you have 1.75, already surpassing the SNR of 20 hours unfiltered! At 5 hours you have 2.23, at 10 hours 3.13, and at 20 hours 4.426. The base of the curve starts much higher because the ratio of DSO signal to other signal is much greater.
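The shape of both curves follows from SNR growing with the square root of integration time once read noise is swamped; the filter wins because it cuts skyglow far more than it cuts the nebula's emission. A toy comparison (rates invented for illustration, not measured):

```python
import math

def snr(total_s, signal_rate, sky_rate):
    """Stacked SNR ignoring read noise (assumed already swamped)."""
    return signal_rate * total_s / math.sqrt((signal_rate + sky_rate) * total_s)

unfiltered_10h = snr(10 * 3600, 0.02, 3.0)  # faint target under heavy sky glow
filtered_1h = snr(1 * 3600, 0.018, 0.08)    # filter keeps ~90% of the emission
                                            # but only a sliver of the sky glow
print(round(unfiltered_10h, 2), round(filtered_1h, 2))
```

Doubling integration only buys sqrt(2) ≈ 1.41x (compare the 10 h vs 20 h figures above), while one filtered hour in this sketch already beats ten unfiltered ones, echoing the numbers in the post.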

 

More to come...

 

 

I'm actually working on writing up some Google Docs and spreadsheets that have been posted, to build the graphs for the 533 and share more exact data than what I've abbreviated here to explain the quirks of this question. I'll try to update and share that later; I already stayed up way too late.

 

Basically tl;dr

 

Camera read noise is overcome in about 10 seconds; 30 seconds to be sure. In a bad Bortle sky like mine, diminishing returns kick in around 30 seconds for unfiltered exposures of weak signals. If you chase brighter galaxies, stars, and such, all of this changes, which is why I used a weak emission nebula: to show how it can vary wildly, and how a filter tuned for that nebula can dramatically increase your signal. The signal ratio also moves further in your favor the deeper you get into dark skies.

 

Plot twist

 

If you take more, shorter exposures and dither every frame, you get some benefits that may outweigh chasing the far end of diminishing returns. Dithering helps you smooth out OSC mottle, noise, and hot pixels. It allows us to calibrate frames with just lights, flats, and flat-darks (simple). Dithering can also let you use the magic of Bayer drizzle in PixInsight, effectively recovering the resolution lost to the Bayer matrix in our OSC cameras and moving us closer to what monos get.

 

So while you may want to do 300-second exposures with an L-eNhance filter, you don't want to miss out on dithering, so you may take more subs than are needed to hit your target SNR just to get the benefits of Bayer drizzle, noise reduction, easy calibration, and so on.

 

If you do lots of short unfiltered subs, you may only need to dither every 3 frames. Experiment.

Thanks for all that info. Really appreciate it. I have a lot of reading and experimenting to do!

 

There is no reason to have any of this overwhelm you or seem overly complicated.

 

If math is not your strong suit, or you have no desire to calculate it, use the Smart Histogram in SharpCap. It gives you all of the information you are asking for by doing the calculations behind the scenes. Robin Glover developed this, and his YouTube videos here and here explain it very well. He goes through the math and then explains how the Smart Histogram works.

 

 

BTW, I use the ASI533MC with the L-eNhance filter like you do: a gain of 200 and 120 s exposures work great in my Bortle 6 skies, but the Smart Histogram will fine-tune things for your location.

Thanks for the tip. Do you have to use Sharpcap for imaging to use the Smart Histogram or can you bring a .FITS file in from another app? I am shooting in APT for now, as it is all I know currently.

Out of interest, you say you use a gain of 200 with the 533, what benefit does this offer you over the unity gain setting?

 

--------

 

One thing I do really like about the image I took last night, compared to the earlier attempt, is how few stars there are in comparison. I am unsure if this is based on my exposure or the filter cutting them out - can anyone shed any light? 


Edited by Craig_, 01 July 2020 - 04:31 AM.


#10 endlessky

endlessky

    Explorer 1

  • -----
  • Posts: 81
  • Joined: 24 May 2020

Posted 01 July 2020 - 04:42 AM

One thing I do really like about the image I took last night, compared to the earlier attempt, is how few stars there are in comparison. I am unsure if this is based on my exposure or the filter cutting them out - can anyone shed any light? 

Probably the filter cutting some light and making the stars appear smaller. I like that effect, too! I always try to "minimize" the stars in post-processing, to make the nebulae stand out more (the whole point of shooting a nebula is because it is the main target, after all, not the stars). Some people go to the extreme of using StarNet++ to remove the stars completely, but that's too unnatural for me. Smaller stars are still there, just smaller!


Edited by endlessky, 01 July 2020 - 04:42 AM.

  • cosmo59 likes this

#11 Umasscrew39

Umasscrew39

    Apollo

  • *****
  • Moderators
  • Posts: 1,082
  • Joined: 07 Dec 2016
  • Loc: Central Florida

Posted 01 July 2020 - 07:01 AM

 

Thanks for the tip. Do you have to use Sharpcap for imaging to use the Smart Histogram or can you bring a .FITS file in from another app? I am shooting in APT for now, as it is all I know currently.

 

You can use SharpCap for imaging, but if you like APT, just stick with it. To use the Smart Histogram, just connect your camera to SharpCap. The documentation on the SharpCap website, as well as the videos I mentioned, explains its use. It does not take long.

 

Out of interest, you say you use a gain of 200 with the 533, what benefit does this offer you over the unity gain setting?

 

Nothing special about unity gain. I just look for the settings that give me the maximum dynamic range with the least noise. Even the Smart Histogram isn't gospel truth. At the end of all the tinkering, look at your image; if you like the results, those are your best settings. No need to overthink or overanalyze this stuff.

 

 



#12 klaussius

klaussius

    Viking 1

  • *****
  • Posts: 549
  • Joined: 02 Apr 2019
  • Loc: Buenos Aires

Posted 01 July 2020 - 09:35 AM

Probably the filter cutting some light and making the stars appear smaller. I like that effect, too! I always try to "minimize" the stars in post-processing, to make the nebulae stand out more (the whole point of shooting a nebula is because it is the main target, after all, not the stars). Some people go to the extreme of using StarNet++ to remove the stars completely, but that's too unnatural for me. Smaller stars are still there, just smaller!

Yes, LP filters do that. Sometimes I use an LP filter just for that effect, because some nebulae are bathed in a sea of stars that quite effectively hides the nebula. The LP filter does a good job of reducing the stars, simplifying processing.

 

Star reduction in post is a possibility, but leaves too many artifacts for my taste.



#13 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 20,224
  • Joined: 27 Oct 2014

Posted 01 July 2020 - 10:13 AM

 

So, if you're shopping for an easier, more "practical" description, here's my take on it:

 

If you're under heavy light pollution, a good yardstick to use is the background level, the average ADU. Sky brightness will be your limiting factor with LP involved, so it will dictate exposure time, all the math around read noise notwithstanding.

 

You don't want the average ADU too high, because it eats into your dynamic range. It depends on the site, but for me, placing it at about 1/3 of maximum ADU works well. That should transfer to other sites and equipment, within limits.

VERY big deal here. There are two kinds of histograms: linear, as it comes from the camera, and stretched, as it appears on the back of a DSLR. They are _completely_ different.

 

Setting the skyfog peak (or the average; it doesn't matter which) 1/3 of the way over is a rule that ONLY applies to stretched histograms, generally as stretched by a DSLR. There are different degrees of stretch.

 

For a linear histogram, 1/3 over would be crazy overexposed.  It's more like 1-2% over.

 

As to whether the subs are overexposed: I can't say exactly, because they may have been stretched. DSS in particular tends to do that by itself, even though you never want it to. But if they're linear, all of them are overexposed.

 

The method I suggested is not as complicated as it may seem. It just takes some research and some software, and it's trivial to repeat once you've done it.

 

And it's as complicated as it is for good reasons.  They're not tweaks.


Edited by bobzeq25, 01 July 2020 - 10:17 AM.


#14 Madratter

Madratter

    Voyager 1

  • *****
  • Posts: 12,480
  • Joined: 14 Jan 2013

Posted 01 July 2020 - 10:30 AM

For those who like avoiding the math but have PixInsight it is easy to determine if you are sky limited or not by using the Mure Denoise Script. You will need to properly characterize your sensor (by using the MureDenoiseDetectorSettings script).

 

I go into this in part of my PixInsight Tutorial on MureDenoise.

 

Bottom line is make sure you check "Generate Method Noise Image" and then check the console afterwards. The Tooltip for "Generate Method Noise Image" goes into what this means and what you want. Suffice it to say that numbers up to about 20% lose very little in terms of efficiency, despite the 10% cited in the tooltip. I go into that in my tutorial. Basically anything between 5x and 10x is going to be just fine and you lose little in terms of time efficiency.

 

MureDenoise Method Noise 20200701.PNG

 

For those further interested, see:

 

https://astroimages....re-denoise.html


Edited by Madratter, 01 July 2020 - 01:48 PM.


#15 klaussius

klaussius

    Viking 1

  • *****
  • Posts: 549
  • Joined: 02 Apr 2019
  • Loc: Buenos Aires

Posted 01 July 2020 - 11:18 AM

VERY big deal here. There are two kinds of histograms: linear, as it comes from the camera, and stretched, as it appears on the back of a DSLR. They are _completely_ different.

 

Setting the skyfog peak (or the average; it doesn't matter which) 1/3 of the way over is a rule that ONLY applies to stretched histograms, generally as stretched by a DSLR. There are different degrees of stretch.

 

For a linear histogram, 1/3 over would be crazy overexposed.  It's more like 1-2% over.

 

I compute ADUs in linear. It's a 14-bit sensor, so the top ADU is 16383, but it has a bias of about 2K, so the usable range is more like 14K. I aim to place the average ADU between 2K and 4K. That's a linear, unstretched count.

 

It's true that the subs look way overexposed if loaded into a photo app, but the RAW data is there and it's fine.

 

It depends a lot on sky conditions, but in Bortle 9, if I place the skyfog at 1-2% I get nothing. With an Ha filter it's different, the histogram leans a lot more heavily to the left, that's why I mentioned saturated pixels as well.

 

The 1/3 skyfog assumes a sky limited exposure. But if the sky isn't the limit, the stars will be, as they will start saturating.

 

I should probably add that I use 1/3 as an upper bound: I try not to go over it, and it's OK to be a bit under. The rationale is that ceding 1/3 of the ADU range costs less than one bit of dynamic range, so I'm OK with that.
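The "less than a bit" claim checks out arithmetically; here's the quick calculation (the roughly 2K bias is the figure given above; the rest is just log math):

```python
import math

full_range = 16383 - 2048      # usable ADU above the ~2K bias on a 14-bit sensor
background = full_range / 3    # skyfog parked at the 1/3 upper bound
bits_lost = math.log2(full_range / (full_range - background))
print(round(bits_lost, 2))     # ~0.58 bits of dynamic range given up
```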

 

Something I didn't mention and that I pay close attention to, is the std deviation in a bias shot. In my sensor, at the ISO I tend to use more often, it's around 180 ADU. I guess that is quite representative of read noise. So an ADU of 2K is about 10x the read noise. With an Ha filter I can rarely get that high without saturating lots of stars, so I settle for lower numbers. They work fine, because with an Ha filter there's less skyfog, and the average ADU is more a measure of signal strength than skyfog strength.


Edited by klaussius, 01 July 2020 - 11:20 AM.


#16 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 20,224
  • Joined: 27 Oct 2014

Posted 01 July 2020 - 10:49 PM

I compute ADUs in linear. It's a 14-bit sensor, so the top ADU is 16383, but it has a bias of about 2K, so the usable range is more like 14K. I aim to place the average ADU between 2K and 4K. That's a linear, unstretched count.

If that's a 14 bit level for a typical modern CMOS camera (ie low read noise), that's _far_ higher than I'd go. 

 

In 14 bit, I'd want the light to be something like 500 ADU.  Varies some depending on offset (bias) and gain, 500 is for about unity gain, and standard offset.

 

Here's a key point.  It's for a different CMOS camera, but it applies.

 

"Don't go by the visual appearance of a sub – short broadband subs with the 183 may look very thin, but when stacked, the final result will be fine"

 

Not doing it by eye is key.  As with many things in DSO AP, things are just not intuitive.  You go by the numbers.


Edited by bobzeq25, 01 July 2020 - 10:51 PM.

  • mtc likes this

#17 klaussius

klaussius

    Viking 1

  • *****
  • Posts: 549
  • Joined: 02 Apr 2019
  • Loc: Buenos Aires

Posted 02 July 2020 - 12:04 AM

If that's a 14 bit level for a typical modern CMOS camera (ie low read noise), that's _far_ higher than I'd go.

It's not low read noise, by a long shot. It's an uncooled, unmodded DSLR (Canon 650D). It's very noisy.

 

I might give 500 or at least a lower ADU number a chance, but I haven't had good experiences in the past, when I tried.



#18 sharkmelley

sharkmelley

    Gemini

  • *****
  • Posts: 3,225
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 02 July 2020 - 02:22 AM

Something I didn't mention and that I pay close attention to, is the std deviation in a bias shot. In my sensor, at the ISO I tend to use more often, it's around 180 ADU. I guess that is quite representative of read noise. So an ADU of 2K is about 10x the read noise. 

Read noise of 180ADU !! 

Which ISO are you using?

 

Mark



#19 klaussius

klaussius

    Viking 1

  • *****
  • Posts: 549
  • Joined: 02 Apr 2019
  • Loc: Buenos Aires

Posted 02 July 2020 - 03:26 AM

Read noise of 180ADU !! 

Which ISO are you using?

 

Mark

Either 800 or 1600, depending on the target. Usually 800.

 

I just grabbed a random dark flat just in case I remember wrong, and the std dev on the raw data gives 164 ADU.

 

But, you know, you made me rethink this. Maybe that's not the read noise after all. Taking the difference with another dark, the std dev of the difference gives 25 ADU. Maybe the 164 ADU is mostly the banding, which this sensor has in spades.
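For anyone wanting to reproduce that pair-difference check, here's a sketch with synthetic frames (all numbers invented to mimic the situation described: strong row banding plus modest per-frame random noise). One caveat: the difference of two frames carries sqrt(2) times the per-frame random noise, so divide the difference's std dev by sqrt(2) to estimate the read noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bias frames: a shared fixed-pattern row banding (std ~160 ADU)
# plus independent per-frame random noise (std 25 ADU).
banding = np.tile(rng.normal(0, 160, size=(100, 1)), (1, 100))
bias_a = 2048 + banding + rng.normal(0, 25, size=(100, 100))
bias_b = 2048 + banding + rng.normal(0, 25, size=(100, 100))

single_std = bias_a.std()                        # inflated by the banding
pair_std = (bias_a - bias_b).std() / np.sqrt(2)  # banding cancels: ~read noise
print(f"{single_std:.1f} {pair_std:.1f}")
```

The single-frame std comes out far higher than the pair-difference estimate, exactly the trap discussed here.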


Edited by klaussius, 02 July 2020 - 03:32 AM.


#20 sharkmelley

sharkmelley

    Gemini

  • *****
  • Posts: 3,225
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 02 July 2020 - 03:44 AM

Either 800 or 1600, depending on the target. Usually 800.

 

I just grabbed a random dark flat just in case I remember wrong, and the std dev on the raw data gives 164 ADU.

 

But, you know, you made me rethink this. Maybe that's not the read noise after all. Taking the difference with another dark, the std dev of the difference gives 25 ADU. Maybe the 164 ADU is mostly the banding, which this sensor has in spades.

The high figure of 164 ADU could be caused by banding, hot pixels, or similar. Use the low figure instead.

 

This is an interesting example of how a simple rule of thumb can lead to the wrong conclusion.

 

Mark



#21 Craig_

Craig_

    Lift Off

  • -----
  • topic starter
  • Posts: 23
  • Joined: 15 May 2020

Posted 02 July 2020 - 06:43 AM

Probably the filter cutting some light and making the stars appear smaller. I like that effect, too! I always try to "minimize" the stars in post-processing, to make the nebulae stand out more (the whole point of shooting a nebula is because it is the main target, after all, not the stars). Some people go to the extreme of using StarNet++ to remove the stars completely, but that's too unnatural for me. Smaller stars are still there, just smaller!

Good to know, I'll make sure I use it more :) Not a fan of the sea of stars I was getting without it. I didn't have much luck removing them with StarNet++ either, but I find the Dust & Scratches filter in Photoshop, with the radius and threshold tweaked to suit the image, does a good job of toning them down.

 

 

 

 

Thanks for the tip. Do you have to use Sharpcap for imaging to use the Smart Histogram or can you bring a .FITS file in from another app? I am shooting in APT for now, as it is all I know currently.

 

You can use SharpCap for imaging, but if you like APT, just stick with it. To use the Smart Histogram, just connect your camera to SharpCap. The documentation on the SharpCap website, as well as the videos I mentioned, explains its use. It does not take long.

 

Out of interest, you say you use a gain of 200 with the 533, what benefit does this offer you over the unity gain setting?

 

Nothing special about unity gain. I just look for the settings that give me the maximum dynamic range with the least noise. Even the Smart Histogram isn't gospel truth. At the end of all the tinkering, look at your image; if you like the results, those are your best settings. No need to overthink or overanalyze this stuff.

 

 

 

Thanks I will make sure to watch the videos before my next shoot. Coming into full moon now so plenty of time to learn!

 

For those who like avoiding the math but have PixInsight it is easy to determine if you are sky limited or not by using the Mure Denoise Script. You will need to properly characterize your sensor (by using the MureDenoiseDetectorSettings script).

 

I go into this in part of my PixInsight Tutorial on MureDenoise.

 

Bottom line is make sure you check "Generate Method Noise Image" and then check the console afterwards. The Tooltip for "Generate Method Noise Image" goes into what this means and what you want. Suffice it to say that numbers up to about 20% lose very little in terms of efficiency, despite the 10% cited in the tooltip. I go into that in my tutorial. Basically anything between 5x and 10x is going to be just fine and you lose little in terms of time efficiency.

 

MureDenoise Method Noise 20200701.PNG

 

For those further interested, see:

 

https://astroimages....re-denoise.html

I'll check this out, thanks. I have a trial license of PixInsight active currently, unsure if I will buy a full license afterwards or not though. I've been pretty happy with Photoshop and it is far more familiar to me coming from a normal photography background.

 

After some further tweaking, this is where my image is currently at:

 

7jD6yvP.jpg


  • klaussius likes this

#22 Madratter

Madratter

    Voyager 1

  • *****
  • Posts: 12,480
  • Joined: 14 Jan 2013

Posted 02 July 2020 - 07:46 AM

As for PI vs Photoshop, it doesn't have to be either or.

 

I would say that, in general, they cater to different ways of thinking about how you process an image.

 

In PI, you apply processes to an image to achieve specific goals that end up creating a finished image.

In Photoshop, you use layers to adjust various aspects of the image.

 

I'm quite familiar with both. I do most of my processing in PI but occasionally will do a few specific things in Photoshop.

 

I and many others have PI tutorials. Learning it isn't as daunting a task as it was 4 or 5 years ago when there was far less material available to do so.



#23 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 20,224
  • Joined: 27 Oct 2014

Posted 03 July 2020 - 11:00 AM

It's not low read noise, by a long shot. It's an uncooled, unmodded DSLR (Canon 650D). It's very noisy.

 

I might give 500 or at least a lower ADU number a chance, but I haven't had good experiences in the past, when I tried.

OK, an older DSLR is completely different, and your levels make sense.


Edited by bobzeq25, 04 July 2020 - 09:09 AM.


#24 Craig_

Craig_

    Lift Off

  • -----
  • topic starter
  • Posts: 23
  • Joined: 15 May 2020

Posted 03 July 2020 - 10:52 PM

As for PI vs Photoshop, it doesn't have to be either or.

 

I would say that in general that they cater to different ways of thinking about how you process an image.

 

In PI, you apply processes to an image to achieve specific goals that end up creating a finished image.

In Photoshop, you use layers to adjust various aspects of the image.

 

I'm quite familiar with both. I do most of my processing in PI but occasionally will do a few specific things in Photoshop.

 

I and many others have PI tutorials. Learning it isn't as daunting a task as it was 4 or 5 years ago when there was far less material available to do so.

Thanks for the info. I've been reading your website and am certainly learning from it, cheers!



