
5 or 30 minute exposures for narrowband?

239 replies to this topic

#26 vpcirc

vpcirc

    Soyuz

  • -----
  • Posts: 3963
  • Joined: 09 Dec 2009
  • Loc: Merced CA

Posted 29 January 2013 - 09:40 AM

See Tony's research here:
http://www.astrosurf...oc/astromag.pdf

You can also email him at tonyhallas@sebastiancorp.net
He's very gracious and responds quickly.

#27 Alex McConahay

Alex McConahay

    Vanguard

  • -----
  • Posts: 2314
  • Joined: 11 Aug 2008
  • Loc: Moreno Valley, CA

Posted 29 January 2013 - 10:29 AM

I think the number 16 is taken as a cutoff point after which there is little APPRECIABLE advantage, not that there is none at all.

The point is that to see an appreciable reduction in noise you have to go from sixteen to thirty-two exposures, so you are at the point of diminishing returns and it just is not worth it.

Alex

#28 Inverted

Inverted

    Mariner 2

  • *****
  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 29 January 2013 - 10:58 AM

Thanks for the link, Mike. I definitely wish I knew more about this stuff. I know that as far as sampling goes, random noise should end up as 1/sqrt(n), where n is the number of exposures.

So at 4 exposures you'd have half the noise of 1 exposure, and to get to half the noise of 4 you'd need 16. After that the improvement really drifts off: to cut the remaining noise in half you'd need to go to 64, then to cut that in half you'd need 256, and so on.

It looks like the example is for dark noise, but the same applies to any unbiased, normally distributed random noise (read noise, etc.), in theory anyway.

It looks like he uses "8" as a start value. I'm not sure of the units, but the scale will be larger with a larger starting noise. In practice, I think there are lots of factors (sky fog, dynamic range of computer monitors, pixel size, tracking, seeing, etc.) that will limit real-world appreciation of gains after some point. I know you can see a gain after 16, though: I've added integration to images taken the previous night, for example, and watched them improve. It definitely takes more and more to see an improvement as you go, and if the image is good enough to begin with, the improvement may not be perceivable at all.
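To put rough numbers on that 1/sqrt(n) behavior, here's a quick sketch (Python, with an arbitrary starting noise of 8 echoing the "8" above; the units don't matter) that just averages frames of pure random noise and measures what's left:

```python
# Average n frames of pure Gaussian noise and measure the residual noise.
# The per-frame sigma of 8 is an arbitrary starting value; only the trend
# (noise falling as 1/sqrt(n)) matters here.
import numpy as np

rng = np.random.default_rng(0)
sigma = 8.0        # per-frame random noise, arbitrary units
npix = 50_000      # pixels per simulated frame

for n in (1, 4, 16, 64, 256):
    stack = rng.normal(0.0, sigma, size=(n, npix)).mean(axis=0)
    print(f"{n:4d} frames: measured noise {stack.std():5.2f}   "
          f"theory {sigma / np.sqrt(n):5.2f}")
```

The measured values track 8, 4, 2, 1, 0.5: every halving of the noise costs four times as many frames, which is exactly the diminishing-returns curve being discussed.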

#29 Tandum

Tandum

    Mariner 2

  • -----
  • Posts: 242
  • Joined: 01 Mar 2010
  • Loc: Brisbane, Australia

Posted 29 January 2013 - 11:54 AM

My 2 cents' worth: noise is something you deal with after you have signal. I'm sure an f/3 lens gives you lots of signal quickly with little noise, but we don't all use f/3 lenses.

Capture what you can, and if it's noisy, get more; simple as that. The longer the exposure, the deeper you will see. My exposures are limited mainly by polar alignment: longer than 15 minutes and I see rotation.

Here's an example with noise. The weather stopped me from getting more data, but it still looks OK.

#30 Inverted

Inverted

    Mariner 2

  • *****
  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 29 January 2013 - 12:36 PM

Noise is something you deal with after you have signal.


Exactly. One thing to point out, though, is that the accepted theory is that signal increases linearly with time. So, if you're using the same equipment, the signal will be the same whether you take one 60-minute exposure or sum sixty 1-minute exposures. The remaining difference between the sixty 1-minute exposures and the one 60-minute exposure is noise. Different sources of noise have different properties and are best reduced with different methods. If you reduce one source of noise to negligible levels but still don't have a nearly perfect representation of the signal, then there must be another source of noise that hasn't been reduced sufficiently.
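As a rough illustration of that (the photon rate and read noise below are made-up numbers, not measurements from any real setup), a quick simulation shows the collected signal is the same either way, while the sixty short subs carry sixty doses of read noise instead of one:

```python
# Compare one 60-minute exposure with the sum of sixty 1-minute exposures.
# The photon rate and read noise are invented values for illustration only.
import numpy as np

rng = np.random.default_rng(1)
rate = 2.0        # e-/pixel/minute from target + sky (assumed)
read = 9.0        # e- read noise per frame (assumed)
npix = 100_000

one_long = rng.poisson(rate * 60, npix) + rng.normal(0.0, read, npix)
sixty_short = sum(rng.poisson(rate, npix) + rng.normal(0.0, read, npix)
                  for _ in range(60))

print("mean signal, 1 x 60 min:", round(one_long.mean(), 1))      # ~120 either way
print("mean signal, 60 x 1 min:", round(sixty_short.mean(), 1))
print("noise, 1 x 60 min :", round(one_long.std(), 1))    # ~sqrt(120 + 9**2)    ~ 14
print("noise, 60 x 1 min :", round(sixty_short.std(), 1)) # ~sqrt(120 + 60*9**2) ~ 71
```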

Of course, I honestly don't know much about all the sources of noise in this regard and how they all fit together. And since they say this hobby is all signal to noise, and we can see that we can write signal out of the equation... all I really know is that I don't really know much of anything :tonofbricks:

#31 freestar8n

freestar8n

    Soyuz

  • *****
  • Posts: 3894
  • Joined: 12 Oct 2007

Posted 29 January 2013 - 12:53 PM

Anyways though, still in practice, you may well be better off taking longer exposures with the low e- cameras, but as mentioned, I'm sure there are a lot of factors to consider, even without considering camera noise models.


Hi-

I think your note sums up key points - and mentions the key word, "asymptote". The basic noise model used in this context - by everyone basically - is that shot noise in the signal and in the sky glow depends only on total integration time, while the read noise is independent of integration time and only depends on the number of exposures. There are other noise terms, but this works well for typical imaging sessions involving a small number of exposures - and it completely ignores other factors such as guiding and field rotation.

A more practical view would be to consider how much sky glow signal you are getting, and that is heavily dependent on f/ratio. At f/15 with narrowband you will need much longer sub-exposures than with an f/2 Hyperstar in order to make read noise unimportant.

What is "unimportant"? Well - as you make the sub-exposures longer (and fix the total integration time) there will be an improvement as you reduce the impact of read noise - but at some point you won't notice an improvement anymore - and instead your image goes bad due to other factors such as guiding and field rotation.

For anyone interested in this stuff there are many web sites that describe it reasonably well - and refer to the same basic noise model. Some web pages get misinterpreted and make people think you *need* to go longer at a dark site, for example, when in fact they are just saying you *can* go longer and still get benefit - because the sky glow is small.

I think the example given earlier by Dawziecat is great and should make several points. You can do very well with short exposures if you have a fast system - even with 3nm filters. His example stacks a very large number of subs - and I think the point of diminishing returns would be met much earlier - but the key point is that you would need to know the read noise and sky glow to determine how much of a win there is in using very long sub-exposures.
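As a sketch of that noise model (every rate below is invented for illustration; with real data you would measure the sky glow and read noise from your own frames), the usual SNR expression for a fixed total integration time shows both the asymptote and why a dark site gains more from long subs:

```python
# Standard noise model for a fixed total integration time T:
#   SNR = S*T / sqrt(S*T + B*T + N*R^2)
# where S = object rate, B = sky (+dark) rate, R = read noise per frame,
# and N = number of subs. All numbers below are assumed, not measured.
import math

S = 0.05          # object e-/pixel/s (assumed)
R = 9.0           # read noise, e- per frame (assumed)
T = 4 * 3600      # total integration time: 4 hours

def snr(sub_len_s, sky_rate):
    n = T / sub_len_s                           # subs needed at fixed total time
    variance = S * T + sky_rate * T + n * R**2  # object + sky shot noise + read noise
    return S * T / math.sqrt(variance)

for sub in (60, 300, 900, 1800, 3600):
    print(f"{sub/60:4.0f} min subs:  dark sky SNR {snr(sub, 0.02):5.2f}   "
          f"bright sky SNR {snr(sub, 2.0):5.2f}")
```

With the bright sky the curve flattens almost immediately (most of the benefit is realized by a few minutes per sub), while with the dark sky the SNR keeps climbing toward its asymptote; that is the sense in which longer subs pay off far more when the sky glow is small.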

If there are any web pages or people who disagree with my assessment here, I'm happy to address them.

Frank

#32 vpcirc

vpcirc

    Soyuz

  • -----
  • Posts: 3963
  • Joined: 09 Dec 2009
  • Loc: Merced CA

Posted 29 January 2013 - 01:09 PM

Thanks for the link, Mike. I definitely wish I knew more about this stuff. I know that as far as sampling goes, random noise should end up as 1/sqrt(n), where n is the number of exposures.

So at 4 exposures you'd have half the noise of 1 exposure, and to get to half the noise of 4 you'd need 16. After that the improvement really drifts off: to cut the remaining noise in half you'd need to go to 64, then to cut that in half you'd need 256, and so on.

It looks like the example is for dark noise, but the same applies to any unbiased, normally distributed random noise (read noise, etc.), in theory anyway.

It looks like he uses "8" as a start value. I'm not sure of the units, but the scale will be larger with a larger starting noise. In practice, I think there are lots of factors (sky fog, dynamic range of computer monitors, pixel size, tracking, seeing, etc.) that will limit real-world appreciation of gains after some point. I know you can see a gain after 16, though: I've added integration to images taken the previous night, for example, and watched them improve. It definitely takes more and more to see an improvement as you go, and if the image is good enough to begin with, the improvement may not be perceivable at all.


Since most stacking programs "average" the images, you might see an improvement in image quality because the new images may be of higher quality than the previous ones. I think the result would be even better, though, if you again measured all the images and deleted all but the best 16. I do not profess to have any understanding of the science behind this, but I can assure you the folks agreeing with Tony's findings are some of the top imagers in the country. I did ask Tony in person at PATS this year whether that theory applied to light frames. The answer was yes.

#33 vpcirc

vpcirc

    Soyuz

  • -----
  • Posts: 3963
  • Joined: 09 Dec 2009
  • Loc: Merced CA

Posted 29 January 2013 - 01:18 PM

Some web pages get misinterpreted and make people think you *need* to go longer at a dark site, for example, when in fact they are just saying you *can* go longer and still get benefit - because the sky glow is small.

Frank

I was told directly by John Smith at AIC that you do indeed need to go longer at a dark site, especially with narrowband. John is an expert in this field. For those who do not know John, you can find his many useful articles and tips here: http://www.hiddenloft.com/

#34 freestar8n

freestar8n

    Soyuz

  • *****
  • Posts: 3894
  • Joined: 12 Oct 2007

Posted 29 January 2013 - 01:33 PM

I was told directly by John Smith at AIC that indeed you do need to go longer at a dark site, especially with narrowband.



Yes - I think you misunderstood what he was telling you. I would be very surprised if he, or anyone familiar with the underlying noise model, would say you need to use longer subs at a dark site to achieve the same quality in the same time as at a bright site. The message he was probably trying to convey is - you *can* and you *should* go longer - and your results would be even better at a dark site.

The benefit you get from longer subs depends on read noise and sky glow. Short subs at a dark site will end up with a much better image than the same subs at a bright site. But going longer at a bright site won't help much and isn't really worth it - whereas at a dark site it is worth it.

It all comes down to knowing the noise terms - which in this case is read noise and sky glow - for your camera, equipment, and location.

Frank

#35 vpcirc

vpcirc

    Soyuz

  • -----
  • Posts: 3963
  • Joined: 09 Dec 2009
  • Loc: Merced CA

Posted 29 January 2013 - 01:41 PM

I don't think I "misunderstood him"; I was surprised to hear him say that, since I thought a dark site meant I could go shorter. He was helping me evaluate my images. I will find out, but it had to do with sky glow "hiding noise": since sky glow is "signal", I had very little to hide any noise in the background, and therefore needed longer exposures to overcome that noise. I was shooting 15 min LRGB and 30 min NB.

#36 vpcirc

vpcirc

    Soyuz

  • -----
  • Posts: 3963
  • Joined: 09 Dec 2009
  • Loc: Merced CA

Posted 29 January 2013 - 01:46 PM

For a great understanding of the benefit of longer exposures, and why they are much better than multiple short ones, John has an easy-to-understand write-up here:

http://www.hiddenlof...m/notes/SNR.txt

#37 freestar8n

freestar8n

    Soyuz

  • *****
  • Posts: 3894
  • Joined: 12 Oct 2007

Posted 29 January 2013 - 01:52 PM

From what you are describing, it sounds like he assessed the noise in your subs and deduced that the sky glow was small compared to your read noise - and you would benefit from longer exposures. That sounds perfectly fine and I would probably agree with it - assuming it really is read noise dominated. If you took the same setup to a place with big sky glow, or used a faster system or wider filter, you would be doubly impacted in a bad way. Your subs would be much noisier - due to sky glow - and you would not get any benefit from longer subs. All you can hope for is to take many more subs. But - as you say - beyond some number of subs, the stacking just doesn't gain much - so you are just stuck with what you get.

So dark sites are great: your subs look great, and you can make them look even better by going longer. At a bright site - you can't use that trick.

Many people are imaging from brighter sites and/or using fast systems - and they shouldn't feel a need to expose long sub-exposures if it doesn't really help - and in fact can make the end result worse due to other factors. So I just say people should be aware of the noise terms in their imaging so they can make an educated guess of how long they should go - and it all depends.

Frank

#38 Mike Wiles

Mike Wiles

    Viking 1

  • -----
  • Posts: 950
  • Joined: 04 Feb 2009
  • Loc: Goodyear, AZ

Posted 29 January 2013 - 03:41 PM

This thread has given me more to think about all by itself than the last 100 threads I've read put together. Excellent discussion. As I'm likely about to demonstrate, I'm anything but an expert on any of this topic.

In terms of getting a shot noise limited narrowband image... I have everything working against me. I have a slow-ish system (f/7.5) with narrow bandpass filters (3nm), and I image from dark to really dark skies (mag 5.4 at home, 6.5+ in the field). With a measured read noise of 9 electrons my camera isn't overly noisy either. I've done the calculations using test exposures for my own setup, and from my magnitude 5.4 backyard at home I'd have to shoot upwards of 5 hour subexposures in order to bury the read noise in the shot noise. From the dark sites that I go to, where the limiting magnitude is up around 7.5, there aren't enough dark hours in a single day to take a shot noise limited subexposure. So there is some credence to the claim that it's extremely difficult to capture shot noise limited subexposures using narrowband filters. I wouldn't say it's impossible, but it is extremely difficult and almost certainly impractical in real-world conditions. With a 7nm filter on an f/2.8 system in Manhattan... it's possibly do-able. It seems unlikely to me that you're going to get a shot noise limited narrowband image without going 20 or 30 minutes on a subexposure, regardless of the equipment configuration.
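The back-of-the-envelope version of that calculation looks like this; the background rate below is just an assumed value, chosen so the answer lands near the 5-hour figure, and in practice it has to be measured from your own frames:

```python
# Sub length needed to "bury" read noise under the background shot noise.
# If the sky+dark background accumulates at b e-/pixel/s, its shot noise
# after t seconds is sqrt(b*t); requiring that to be k times the read noise
# R gives t >= (k*R)^2 / b. Both k and b here are assumptions.
R = 9.0     # read noise in electrons (the figure quoted above)
k = 3.0     # "buried" taken as background noise >= 3x read noise (rule of thumb)
b = 0.04    # assumed sky+dark rate, e-/pixel/s, for a slow system + 3nm filter

t = (k * R) ** 2 / b
print(f"required sub length: {t:.0f} s  (~{t / 3600:.1f} hours)")
```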

Having said all that, the S/N ratio of an image is 0 if it's not attempted. As a result, a stack of 300 ten-second subexposures will always have a higher S/N than not trying at all. You work with the equipment and environment that you have. In my experience, which is limited compared to many here, narrowband exposures should be as long as you dare to take them based on your equipment, conditions and comfort level. You should also take a significant number of subexposures to gain the maximum benefit from modern statistical data rejection algorithms. I think the point of diminishing returns is somewhere up around 30 subs... but I would agree that 16 is the minimum.

Since it is impractical at best to take a shot-noise limited narrowband exposure, it seems to me that a decent alternative is to:

  • Create a master bias frame from an enormous number of bias subs. This gives a really clean master calibration frame that does as much as possible to reduce read noise.
  • Shoot a lot of subexposures for the light frames. Statistical data rejection works more in your favor (a bare-bones sketch of it follows this list), and you're going to need it for the random read noise from frame to frame.
  • Dither between subexposures. Any pattern noise and random noise left after the above two steps will, for the most part, disappear.
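Here is the bare-bones sketch of statistical data rejection mentioned in the list above. It is a plain sigma-clipped average, not any particular program's algorithm, and real stacking software is considerably more sophisticated:

```python
# Per pixel, reject samples far from the stack mean (hot pixels, cosmic
# rays, satellite trails) and average what survives.
import numpy as np

def sigma_clip_stack(frames, kappa=2.5, iters=3):
    """frames: array of shape (n_subs, height, width); returns the clipped mean."""
    data = np.asarray(frames, dtype=float)
    keep = np.ones_like(data, dtype=bool)          # True = sample still in play
    for _ in range(iters):
        masked = np.where(keep, data, np.nan)
        mean = np.nanmean(masked, axis=0)
        std = np.nanstd(masked, axis=0)
        keep &= np.abs(data - mean) <= kappa * std
    return np.nanmean(np.where(keep, data, np.nan), axis=0)

# Toy usage: 20 noisy frames, one of which has a bright "satellite trail" row.
rng = np.random.default_rng(2)
frames = rng.normal(100.0, 8.0, size=(20, 50, 50))
frames[7, 25, :] += 500.0                          # the outlier to be rejected
print(sigma_clip_stack(frames)[25, :5])            # back near 100, not ~125
```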

Mike

#39 Inverted

Inverted

    Mariner 2

  • *****
  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 29 January 2013 - 03:48 PM

Since most stacking programs "average" the images you might see an improvement in image quality as the new images may be of higher quality than the previous.


I used the term "sum" because most people seem to think of signal as something that changes and intensifies. Really, though, the signal is more conceptual and is basically fixed. I guess you could look at it as summing, but relative to the noise; the important part is the SNR. Mathematically, if we had infinite range, it works out the same whether we sum or average.

To illustrate: if I take 3 images, and the signal is 5 in each and the noise is 1, 2, 1 respectively, then the summed ratio is 15/4 and the averaged ratio is 5/1.33, but either way the ratio (SNR) is 3.75. However, our dynamic range isn't infinite, so for data storage, processing, etc. it becomes easier to average. The key, though, is that statistically speaking the signal is really a conceptual idea rather than a physical entity.

Edit: by the way, the above is a simplified example; it is intended to illustrate summation versus averaging, not how noise is actually distributed.
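As a quick numerical check of the sum-versus-average point (using the same deliberately simplified three-image numbers as above):

```python
# Same toy example as above: signal 5 in each of three frames, "noise" 1, 2, 1.
# Summing and averaging are linear scalings of each other, so the ratio is identical.
signal = [5, 5, 5]
noise = [1, 2, 1]
n = len(signal)

snr_sum = sum(signal) / sum(noise)               # 15 / 4
snr_avg = (sum(signal) / n) / (sum(noise) / n)   # 5 / 1.333...
print(snr_sum, snr_avg)                          # both 3.75
```

(As the edit above says, real random noise combines in quadrature rather than adding linearly, but the sum/average equivalence holds either way.)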

#40 freestar8n

freestar8n

    Soyuz

  • *****
  • Posts: 3894
  • Joined: 12 Oct 2007

Posted 29 January 2013 - 04:50 PM

I have everything working against me. I have a slow-ish system (f/7.5) with narrow bandpass filters (3nm) and I image from dark to really dark skies (mag 5.4 at home, 6.5+ in the field). With a measured read noise of 9 electrons my camera isn't overly noisy either. I've done the calculations using test exposures for my own setup and from my magnitude 5.4 backyard at home I'd have to shoot upwards of 5 hour subexposures in order to bury the read noise in the shot noise.



Hi-

I think the main confusing thing about these sub-exposure calculators is that they make people feel *bad* when they *have to* expose a long time. If the problem is that you have a very high read noise camera, that is indeed a bad thing. But if you are "stuck with" narrowband filters and a dark site, those are both *great* things. As long as your read noise isn't really bad, your images will be *much better* than at a bright site where you are "blessed" with short sub-exposures.

So if you are in this situation - where you have a decent camera but slow optics and a dark sky - you will still be doing much better than if you did not have a dark sky. The only downside is that you know that if your camera had lower read noise, or if you had greater aperture, a given total exposure would be slightly better. But that same total exposure at a bright site would be much worse - so be thankful.

I think for most people the general guideline would be to go as long as you can, as long as you end up with more than perhaps 5 sub-exposures. If you are using a guidescope on an SCT, you will probably be limited by flexure and have to keep it short. If you have field rotation, same thing. If your stars saturate, same thing.

But - yes - go as long as you can if the calculations suggest you should go longer - but you shouldn't feel bad about not using the optimal sub-exposure time if it isn't practical.

Frank

#41 Alph

Alph

    Surveyor 1

  • -----
  • Posts: 1755
  • Joined: 23 Nov 2006
  • Loc: Melmac

Posted 29 January 2013 - 05:38 PM

For a great understanding of the benefit of longer exposures, and why they are much better than multiple short ones, John has an easy-to-understand write-up here:

http://www.hiddenlof...m/notes/SNR.txt


Mike,
I would recommend the "Signal to Noise Connection" Part I & II articles by Mike Newberry, published in CCD Astronomy magazine (long gone) in the summer and fall of 1994. IMO a must-read for everyone.

It is true that a pitch-dark sky and long exposures might be the only option for capturing the dimmest parts of an already very dim object. However, there is no science behind the number 16, and stacking a large number of shorter exposures can produce equally good results in many cases.

Here is a quote from the cited article:

If our exposures are background limited because the readout noise is small in comparison to the noise from skyglow, it is usually preferable to stack exposures rather than make a single long one.



Mike! Thanks for the pass to last year's RTMC.

#42 Inverted

Inverted

    Mariner 2

  • *****
  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 29 January 2013 - 06:57 PM

I was thinking more about this thread on the way home from work. I realize we haven't really defined "signal" and "noise", and these aren't actually intuitive concepts.

From a statistical perspective, the signal is really the theoretical average of some parameter, from some population, given an infinite number of unbiased samples. In this case, I think the population is the photons hitting the photon collector over the integration time, so the signal is the theoretical average rate of photon arrival, since we have a count per unit of time. The bias is then anything that throws our detection of this rate off in a fixed way; this is really the "pattern noise". For example, there could be gaps between the pixels, so some photons miss the collector in a fixed way, and there are other fixed errors in counting the photons and translating them into an image for display.

Once we take out the bias, the remaining deviations from the theoretical average rate of photon arrival are random errors in our detection. If something is random and we take one sample, that random error will have a large contribution; but if we take another sample where it isn't there, and average (or sum) the two, it will have a smaller contribution to the total detected population, and so on.

So the signal is conceptual: it is the theoretical average rate of photon arrival, and the noise is everything else that affects a given sampling of this population. The signal, being conceptual, is always there; it doesn't really "increase". It's just that, through various methods, we are able to decrease the noise enough to "see" the average rate of photon arrival and translate it into a perceivable image.
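A small simulation of that idea (the true rate below is invented) shows the estimate of the underlying photon arrival rate simply converging as more unbiased samples are averaged, without the "signal" itself ever changing:

```python
# The "signal" is the underlying mean photon arrival rate; averaging more
# frames just pulls the estimate closer to it. The true rate is invented.
import numpy as np

rng = np.random.default_rng(3)
true_rate = 4.2                 # mean photons per pixel per frame (assumed)

for n in (1, 16, 256, 4096):
    counts = rng.poisson(true_rate, size=n)       # n frames of a single pixel
    est = counts.mean()
    print(f"{n:5d} frames: estimated rate {est:6.3f}   "
          f"error {abs(est - true_rate):.3f}")
```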

#43 vpcirc

vpcirc

    Soyuz

  • -----
  • Posts: 3963
  • Joined: 09 Dec 2009
  • Loc: Merced CA

Posted 29 January 2013 - 08:51 PM

Be careful about using the words sum and average. If, for example, you chose Sum in CCDStack versus Average, your result would be totally different. Adam Block gives excellent examples of the differences in the algorithms in his CCDStack tutorials.

#44 Inverted

Inverted

    Mariner 2

  • *****
  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 29 January 2013 - 10:54 PM

Be careful about using the words sum and average. If, for example, you chose Sum in CCDStack versus Average, your result would be totally different. Adam Block gives excellent examples of the differences in the algorithms in his CCDStack tutorials.


Mike, mathematically this isn't that complicated once you take out all the intimidating terminology. When we're talking about summing or averaging here, we're really just talking about a ratio.

Edit: I realize I had made this confusing, and it may not have been obvious what I was getting at as written. I'm trying not to write out too much math, so I rewrote this slightly to put the emphasis in English. The point is that summing and averaging are just linear functions of each other; if you have a perfect computer system you can always go back and forth, because one is a scaled version of the other:

average = sum(x_i)/n = x_1/n + x_2/n + ... + x_n/n

where n is the number of exposures. Whether you sum or average, one is just an equivalent linear function of the other, and in each case x is part signal and part noise. The signal part stays constant, though, while the random part of the noise changes for each exposure. So when we sum or average the exposures, one result is a scaled linear function of the other, and as long as the data aren't truncated by computational limitations, we can always go back and forth if we know n (the number of exposures).

You can also rewrite S/N as (signal mean) / (standard deviation of noise); if you google "summation of variance" and work your way through it, you'll find it works out the same, in that one (i.e. summing or averaging) is a scaled function of the other with respect to the ratio.

Again, as I alluded to in the previous post, there can be differences because of scaling, rounding, truncation of big numbers, and analog-to-digital scaling/conversion, but these are limitations of the specific hardware and software design used, not any fundamental properties.

#45 David Pavlich

David Pavlich

    Transmographied

  • *****
  • Administrators
  • Posts: 27218
  • Joined: 18 May 2005
  • Loc: Mandeville, LA USA

Posted 29 January 2013 - 11:07 PM

Just to jump into the middle of this, around the end of October or early November, the Advanced Imaging Conference takes place around San Jose, CA. If there's one astroconference you want to attend if you enjoy imaging, this is the one! All the best imagers are there and many of them have workshops. You'll learn a ton of stuff and meet some really great people in the process. And I won't mention the Vendor's Hall. Leave your credit card at home. :grin:

David

#46 mikeschuster

mikeschuster

    Vostok 1

  • -----
  • Posts: 150
  • Joined: 25 Aug 2011
  • Loc: SF Bay area

Posted 30 January 2013 - 12:36 AM

Note also dark current noise. Going longer to overcome read noise means there is even more dark current noise to overcome. In my case (f/5, 3nm Ha, dark skies, 40-minute exposures), read + dark noise is about 14e-, and typically at 40 minutes my sky background noise is about 20e-. This is *far* from being sky limited, but I can't expose any longer most of the time due to temperature-dependent focus changes.
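Putting those two numbers together (independent noise sources combine in quadrature; the factor-of-3 threshold for "sky limited" is only a common rule of thumb, not something from the measurements above):

```python
# Quadrature combination of the per-sub noise terms quoted above.
import math

read_plus_dark = 14.0   # e- per 40-minute sub (as quoted)
sky = 20.0              # e- per 40-minute sub (as quoted)

total = math.hypot(read_plus_dark, sky)           # sqrt(14^2 + 20^2)
print(f"total per-sub noise : {total:.1f} e-")    # ~24.4 e-
print(f"sky / (read + dark) : {sky / read_plus_dark:.2f}   "
      f"(being sky limited is usually taken as ~3x or more)")
```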
Mike

#47 freestar8n

freestar8n

    Soyuz

  • *****
  • Posts: 3894
  • Joined: 12 Oct 2007

Posted 30 January 2013 - 02:29 AM

Dark current is no different from sky glow - at least in terms of the noise model used in these discussions. You can lump all the noises that increase with exposure time - and are independent of the number of sub-exposures used - on one side, and on the other side put read noise - which is independent of total exposure time and *only* depends on the number of sub-exposures (for a given camera). Read noise is what drives the desired number of exposures down - but the other terms don't care.

There are many sources of noise and their impact is reduced by dithering, good flats, master frames, etc. But read noise is special because it doesn't accumulate due to total exposure time, but number of frames. Without read noise, the length of sub-exposure wouldn't matter at all. At least according to these noise models.

For the other noise sources - just increase the total integration time - by longer subs and/or more frames - and the SNR will improve. But other factors not included in the noise model may make the improvement less than expected.

Frank

#48 freestar8n

freestar8n

    Soyuz

  • *****
  • Posts: 3894
  • Joined: 12 Oct 2007

Posted 30 January 2013 - 02:52 AM

The bias is then anything that deviates our ability to detect this rate, in a fixed way, so, this is really the "pattern noise"..


I think this is a more subtle issue, and it is something that is consistently handled very badly in typical amateur-level write-ups of CCD imaging. The only purpose of a master bias and master dark is to remove the fixed pattern noise from those two terms. A dark frame will look noisy and ugly - but if you look at many such frames, the noise looks similar. The goal is to take a really clear "picture" of that pattern noise and subtract it - and it works very well to reduce the overall noise in a sub-exposure.

Bias frames also have pattern noise, but a big reason for bias frames is to set the zero of the photon signal so the calibration process is linear. Even if the bias frames were perfectly flat at a level of 1000, with no noise, you would need to subtract them (if you are using flats and you don't have flat darks).

But a single dark frame will contain the pattern noise in the dark current - plus Poisson noise also - and that Poisson noise cannot be reduced by subtracting the master dark. But there is Poisson noise in the sky glow also, and in the nebulosity signal you are trying to image. And if you expose long enough, the SNR of the nebulosity against the other noise terms will increase and you will see improvement in the image.
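Schematically, the calibration arithmetic being described is something like the following. This is one standard arrangement (a master dark taken at the same exposure and temperature as the lights, so it carries the bias pedestal), not necessarily anyone's exact workflow, and the array names are placeholders:

```python
# Standard calibration sketch: subtract the master dark (bias pedestal plus
# fixed dark-current pattern), then divide by a bias-subtracted, normalized
# flat so the correction applies to photon signal only and stays linear.
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    # Master dark matches the light's exposure/temperature, so this removes
    # both the zero offset and the fixed dark pattern (not the Poisson part).
    dark_subtracted = light - master_dark

    # The flat needs its own zero point removed before normalizing,
    # otherwise the division is no longer a linear correction.
    flat = master_flat - master_bias
    flat_norm = flat / np.median(flat)

    return dark_subtracted / flat_norm
```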

Frank

#49 vpcirc

vpcirc

    Soyuz

  • -----
  • Posts: 3963
  • Joined: 09 Dec 2009
  • Loc: Merced CA

Posted 30 January 2013 - 03:31 AM

I wish we could get Adam Block to sort the science fact from the science fiction on here. He is a true whiz at this (just watch one of his tutorials). I think the average beginner-to-intermediate user is now totally confused by all the "theory" being spread here. David has the perfect solution: attend AIC, listen to John Smith, and go tell him he doesn't know what he's talking about! Hey, let's call the MythBusters!

#50 freestar8n

freestar8n

    Soyuz

  • *****
  • Posts: 3894
  • Joined: 12 Oct 2007

Posted 30 January 2013 - 03:46 AM

"I wish we could get Adam Block to dispel all the science fact from science fiction on here. He is a true wiz at this (just watch one of his tutorials). I think the average beginner to intermediate user is now totally confused with all the "theory" being spread here. David has the perfect solution attend AIC and listen to the John Smith and go tell him he doesn't know what he's talking about! Hey let's call the Myth Busters!"

If you feel you understand this material and see flaws in what I'm saying - feel free to point out specific items of disagreement - preferably with explicit references. But once again - you are just casting aspersions without any actual content and disrupting what appears to be a fruitful discussion among people who are interested in learning and discussing the subject.

Moderators?

Frank





