
# 5 or 30 minute exposures for narrowband?

239 replies to this topic

### #76 shams42

shams42

Apollo

• Posts: 1328
• Joined: 05 Jan 2009
• Loc: Kingsport, TN

Posted 30 January 2013 - 03:23 PM

You have to distinguish between noise and unwanted signal.

The definition of noise is unwanted signal. In statistics there is random noise and bias ("pattern noise"). Sorry, this is semantic, but if we're not speaking the same language we're going to misunderstand each other...

When I say "signal", I mean that aspect that is fixed, non-stochastic, always the same each time it is measured.

When I say "noise", I mean that aspect that is random, the variable and unpredictable aspect, different each time it is measured.

The wanted signal comes from stars and DSOs. Unwanted signal comes from the CCD and camera electronics, among other sources.

Each signal brings with it shot noise: shot noise from the object, shot noise from the background, etc. Read noise is not a "tag-along" shot noise but rather the errors that occur when reading out the CCD.

Are you using the terms differently?

I hear lots of people saying that they "use darks to remove the thermal noise" or that "bias frames remove read noise." No -- bias and darks can remove certain aspects of unwanted signal but not noise. Distinguishing between true noise and unwanted signal is clarifying IMO.

### #77 pfile

pfile

Soyuz

• Posts: 3520
• Joined: 14 Jun 2009

Posted 30 January 2013 - 03:54 PM

everyone should own a copy of HAIP. reading the first few chapters of that book would answer all the above questions.

### #78 Inverted

Inverted

Mariner 2

• Posts: 212
• Joined: 19 Jan 2013
• Loc: LP Land

Posted 30 January 2013 - 04:00 PM

Are you using the terms differently?

Signal
http://www.statistic...ary&term_id=844

Noise
http://www.statistic...ary&term_id=805

Bias
http://www.statistic...ary&term_id=717

Really though, I think the "signal" definition given above is a bit vague without considering the broader context.
In practice, a more useful definition in the context of AP, I think, would be "the asymptotic mean rate of photons hitting the collector, given an infinite number of unbiased samples, collected over the integration time." The estimator of this will deviate due to noise and bias as defined above.

Usually we define noise as anything unwanted but random, and bias as anything unwanted and not random.

### #79 Peter in Reno

Peter in Reno

Cosmos

• Posts: 8149
• Joined: 15 Jul 2008
• Loc: Reno, NV

Posted 30 January 2013 - 04:10 PM

Hi Mike,

Thanks for uploading the images. I think your NB images look similar to my NB images and they appear normal to me. If you take enough subs, properly calibrate each one, and stack them, the end result will look much better.

I think it's tricky to evaluate a single NB sub because the signal is very weak. When you stretch a single NB sub heavily to reveal the DSO, it looks bad because more low-ADU bad pixels show up. Calibration and stacking will take care of this.

My bottom line is that your subs look normal to me.

Peter

### #80 Inverted

Inverted

Mariner 2

• Posts: 212
• Joined: 19 Jan 2013
• Loc: LP Land

Posted 30 January 2013 - 04:33 PM

everyone should own a copy of HAIP. reading the first few chapters of that book would answer all the above questions.

Thanks! I've been out of the hobby for a while though and am not familiar with this. What does it stand for?

### #81 Mike7Mak

Mike7Mak

Apollo

• Posts: 1277
• Joined: 07 Dec 2011
• Loc: New York

Posted 30 January 2013 - 05:43 PM

everyone should own a copy of HAIP. reading the first few chapters of that book would answer all the above questions.

Thanks! I've been out of the hobby for a while though and am not familiar with this. What does it stand for?

That would be this book...

http://www.willbell.com/aip/index.htm

...which also includes a fairly complex image processing program, the poor man's PixInsight, if you will. (Never actually seen PixInsight so I'm sorta guessing on that.) Whether you dive into the program or not, the book is well worth reading.

The problem, for me anyway, is both the book and the program rely extensively on math that quickly makes my eyes glaze over. You can skim the equations in the book and still learn a lot but the program itself expects you to know the math functions and how to enter three decimal place numerical parameters for all but the most basic processing steps.

I think I gain more real understanding from plain language conceptual discussions like this one.

Thanks, btw, for everyone's contributions to this topic, this 'beginner/intermediate' imager is less confused as a result.

### #82 freestar8n

freestar8n

Fly Me to the Moon

• Posts: 5191
• Joined: 12 Oct 2007

Posted 30 January 2013 - 06:33 PM

I hear lots of people saying that they "use darks to remove the thermal noise" or that "bias frames remove read noise." No -- bias and darks can remove certain aspects of unwanted signal but not noise. Distinguishing between true noise and unwanted signal is clarifying IMO.

Actually this interpretation of "noise" and "signal" is where I depart from typical amateur write ups and instead stick to the usage more commonly found in professional astronomical and engineering literature. I often hear amateurs insist that "noise must be random" - but I don't know where that idea comes from - and I think it makes it very hard to talk about - and understand - why people do all this calibration stuff to make their images look better.

I'm aware that many amateur web sites and even some amateur books on ccd imaging will use that "random" interpretation of noise - but I don't place much value in that if it is at odds with textbooks and journal articles in the field. Even worse - it goes against what I consider to be common sense.

I could cite many examples but here are two:

"Fixed pattern noise is removed from images by a technique called flat fielding, where a computer adjusts pixel sensitivities to be equal." Photon Transfer, James Janesick, SPIE 2007.

That is a casual summary of flat fielding in a well-regarded graduate level text on imaging sensors that has no problem talking about the removal of fixed pattern noise.

Many texts on noise and statistics don't even bother defining what noise is - because it all depends on context. My example is, if you are on the couch watching "the game" and someone starts describing a chore you should do - the game is the signal and the voice is noise. If instead the game is boring and the voice is offering a beverage - the voice becomes signal and the game is noise. Nothing about randomness here - just a thing that you want is being obfuscated by a thing you don't want.

One text that I like is Probability, Statistical Optics, and Data Testing by Frieden, in which he does attempt to give "A Definition of Noise." To me, he sums it up nicely and in a way that applies directly to the removal of pattern noise:

"This is a definition of noise which also shows its arbitrariness: the received message that is independent of one set of events may be dependent upon and describe a different set. For the latter events, the messages are not noise and do contain finite information."

Later, he says:

"The concept of noise is always defined in a specific context. As a consequence, what is considered noise in one case may be considered "signal" in another. One man's weed is another man's wildflower."

So - why do you want to subtract a master dark? To reduce the noise in it - and that noise is Fixed Pattern Noise. Flats are applied to correct vignetting - but also to make the pixel response more uniform - and reduce the noise.

So - sure - you can reduce and even remove noise, and noise need not be random. That's why there are terms like "random noise" and "noise reduction." And that's why images look better after a dark subtraction.

Frank

### #83 vpcirc

vpcirc

Fly Me to the Moon

• Posts: 5118
• Joined: 09 Dec 2009

Posted 30 January 2013 - 06:38 PM

We need Richard Crisp to chime in, he's a CCD engineer.

### #84 korborh

korborh

Apollo

• Posts: 1097
• Joined: 29 Jan 2011
• Loc: Arizona

Posted 30 January 2013 - 07:03 PM

I often hear amateurs insist that "noise must be random" - but I don't know where that idea comes from

I believe this confusion has been disseminated by one very popular book in amateur CCD imaging. Never heard this association of noise with randomness in all my years in EE.
Noise is unwanted signal, random or otherwise.

### #85 Mike7Mak

Mike7Mak

Apollo

• Posts: 1277
• Joined: 07 Dec 2011
• Loc: New York

Posted 30 January 2013 - 07:11 PM

We need Richard Crisp to chime in, he's a CCD engineer.

Good grief, enough with the name-dropping already, it's becoming embarrassing. Address the topic.

### #86 Inverted

Inverted

Mariner 2

• Posts: 212
• Joined: 19 Jan 2013
• Loc: LP Land

Posted 30 January 2013 - 07:25 PM

I often hear amateurs insist that "noise must be random"

Many texts on noise and statistics don't even bother defining what noise is

Interdisciplinary terminology is always confusing. In population statistics, we do define noise, but call it "random error," which for all intents and purposes is synonymous with "variance." Technically, you could have systematic error, but if that still exists once we get into sampling (which we always do) then it becomes "bias." So really we usually just talk about random error/variance and bias. Generally I do not think we define "noise," though, unless trying to communicate with other disciplines such as engineering LOL. Really, I've never heard a coworker mention "noise" except casually. When we do define noise, though, it really is random error/variance. So, from a population stats view (and sampling stats too, usually), in practice, noise really is random.

### #87 Inverted

Inverted

Mariner 2

• Posts: 212
• Joined: 19 Jan 2013
• Loc: LP Land

Posted 30 January 2013 - 07:26 PM

That would be this book...

http://www.willbell.com/aip/index.htm

...which also includes a fairly complex image processing program, the poor man's Pixinsight, if you will. (Never actually seen PixInsight so I'm sorta guessing on that.) Whether you dive into the program or not the book is well worth reading.

The problem, for me anyway, is both the book and the program rely extensively on math that quickly makes my eyes glaze over. You can skim the equations in the book and still learn a lot but the program itself expects you to know the math functions and how to enter three decimal place numerical parameters for all but the most basic processing steps.

I think I gain more real understanding from plain language conceptual discussions like this one.

Thanks, btw, for everyone's contributions to this topic, this 'beginner/intermediate' imager is less confused as a result.

Thanks

### #88 mikeschuster

mikeschuster

Vostok 1

• Posts: 150
• Joined: 25 Aug 2011
• Loc: SF Bay area

Posted 30 January 2013 - 08:24 PM

FPN definitely is interesting. If you walk pixel by pixel across the detector, pixel sensitivities do change, and the change does seem "random", like "noise". But on the other hand these sensitivities remain "fixed" across time, so it is also "signal" that can be removed. IMO very strange. No wonder it is a source of confusion.
Mike

### #89 vpcirc

vpcirc

Fly Me to the Moon

• Posts: 5118
• Joined: 09 Dec 2009

Posted 30 January 2013 - 10:07 PM

We need Richard Crisp to chime in, he's a CCD engineer.

Good grief, enough with the name-dropping already, it's becoming embarrassing. Address the topic.

I was kidding... This conversation has gotten well beyond my level of interest. I'm neither an engineer nor a science expert. I'll stick with what the guys who make the prettiest images tell me. Longer is better from what I've learned, but people are free to image however they want, and if they're happy with the results that's all that really matters!

### #90 orlyandico

orlyandico

Fly Me to the Moon

• Posts: 7397
• Joined: 10 Aug 2009
• Loc: Singapore

Posted 31 January 2013 - 01:09 AM

i just did some 20 minute narrowband subs last night.

compared them to last year's effort on the same subject (horsehead) where i used 10 minute subs.

the 20 minute ones are definitely cleaner.

### #91 freestar8n

freestar8n

Fly Me to the Moon

• Posts: 5191
• Joined: 12 Oct 2007

Posted 31 January 2013 - 03:00 AM

the 20 minute ones are definitely cleaner.

Hi-

Just to be clear - even if you didn't have any read noise, individual subs at 20 would look better than 10 because SNR increases with total exposure time. In terms of asking "how long should my subs be for a 3 hour total exposure time" you would need to compare the stacked versions of both images with the same total time. That is what all these "sub exposure calculators" do - or many of them anyway.

For you, or anyone, the benefit of 20 compared to 10 may be big, or negligible - it all depends on how the read noise in each sub compares to the other noise terms. Dark site, high read noise camera, slow optics - all point to expecting a win going to 20, but I don't know what you used or where. If you did both imaging sessions with the same equipment it would be nice to compare the two final results with the same total time.

If you used a fast system like hyperstar and compared a 10 minute sub to a 20 minute one - both may look great already - as long as they didn't saturate - because with such a fast system even a single sub could achieve high snr.

Frank
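The tradeoff described above, that at fixed total time the sub length only enters through the read-noise term, can be sketched numerically. All rates and the read noise below are made-up illustrative numbers, not from any real camera or site:

```python
import math

def stack_snr(object_rate, sky_rate, read_noise, sub_seconds, total_seconds):
    """SNR of a stack of equal subs at fixed total exposure time.

    Rates are in e-/s per pixel, read_noise in e- RMS per frame.
    """
    n_subs = total_seconds / sub_seconds
    signal = object_rate * total_seconds
    # Shot-noise variance depends only on total time; read-noise
    # variance grows with the number of reads, so sub length only
    # matters through that last term.
    variance = (object_rate + sky_rate) * total_seconds + n_subs * read_noise ** 2
    return signal / math.sqrt(variance)

# Faint narrowband target, dark sky, 3 hours total exposure:
snr_10min = stack_snr(0.05, 0.02, 8.0, 600, 10800)
snr_20min = stack_snr(0.05, 0.02, 8.0, 1200, 10800)
```

With these numbers the 20-minute stack wins, because at such a low sky rate the read noise still dominates each sub; with a bright sky or a very fast system, the two stacks come out nearly identical.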

### #92 freestar8n

freestar8n

Fly Me to the Moon

• Posts: 5191
• Joined: 12 Oct 2007

Posted 31 January 2013 - 03:15 AM

So, from a population stats view (and sampling stats usually I think), in practice, noise is really random.

Yes - I can believe that some disciplines would want a term like noise to be used with a specific meaning. Same for a word like "significance." But in physics and engineering it is usually used more casually and in context. But in some ways fixed pattern noise *is* random, or it can be. It is random(ish) across the pixels - but repeats in time. If a random sequence appears once, it looks totally random. But if it repeats, then you can learn what it is and remove it. That is the idea behind capturing a master dark and subtracting it.

But my main point is that graduate level texts on ccd's and image processing, and journal articles, refer to pattern noise removal - and I go by that. Plus, I think it helps people understand why the image improves after dark subtract. If you tell people the image looks better but the noise hasn't actually decreased - I think that is confusing.

On this topic, the master bias usually has an ugly pattern to it - and that ugly pattern can be subtracted away from each sub, but a random Poisson noise will remain - and that is the read noise that you are stuck with in each calibrated sub.

Frank
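The "random across pixels, fixed in time" idea is easy to simulate. The sketch below uses made-up numbers (a 50-unit fixed pattern, 5 units of per-frame random noise, 100 calibration frames) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "fixed pattern": random-looking across 1000 pixels, but identical
# in every frame (think hot pixels or amp glow). Units are arbitrary.
pattern = rng.normal(0.0, 50.0, size=1000)

def expose():
    # Each frame = the same fixed pattern + fresh random noise.
    return pattern + rng.normal(0.0, 5.0, size=1000)

# Average many frames to learn the pattern (a "master dark")...
master = np.mean([expose() for _ in range(100)], axis=0)

# ...and subtract it. The repeating part is removed; the random part
# (plus a small residual from the master) is what remains.
raw_std = float(np.std(expose()))                  # dominated by the pattern
calibrated_std = float(np.std(expose() - master))  # close to the random 5
```

The subtraction removes the pattern exactly because it repeats, which is precisely why a single-shot random sequence could not be removed this way.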

### #93 vpcirc

vpcirc

Fly Me to the Moon

• Posts: 5118
• Joined: 09 Dec 2009

Posted 31 January 2013 - 06:00 AM

i just did some 20 minute narrowband subs last night.

compared them to last year's effort on the same subject (horsehead) where i used 10 minute subs.

the 20 minute ones are definitely cleaner.

That's exactly what a U.S. filter manufacturer told me would happen!

### #94 Inverted

Inverted

Mariner 2

• Posts: 212
• Joined: 19 Jan 2013
• Loc: LP Land

Posted 31 January 2013 - 08:11 AM

I'll stick with what the guys who make the prettiest images tell me.

Just to point out, there is a difference between "prettiest" and "most accurate". I think most would agree that longer is better, as long as conditions, equipment, and various properties of the target, such as brightness, allow. The issue was saying that you "need" to go longer. The website you posted showed, under broadband conditions, that you can go shorter without losing much data, and this may even be preferable. They did not give a dark site example, and there doesn't seem to be anything in the formulas that says you "need" to go longer; they only suggest that you can. If someone has some data from a dark site, it would be fun to plug in though. And certainly if there are experts saying you "need" to go longer, it would be great to hear from them.

Anyways, the subtle issue is that, from a statistical point of view, I think the formulas should change if the pattern noise induced by reading data off the CCD becomes low enough. That could happen via technology, or via extreme read bias calibration, such as hundreds, thousands, or even tens of thousands of bias frames. That would be an interesting experiment, but if it is fixed pattern noise it should be pretty effective, when combined with a large number of image capture exposures.

### #95 Inverted

Inverted

Mariner 2

• Posts: 212
• Joined: 19 Jan 2013
• Loc: LP Land

Posted 31 January 2013 - 08:21 AM

But my main point is that graduate level texts on ccd's and image processing, and journal articles, refer to pattern noise removal - and I go by that. Plus, I think it helps people understand why the image improves after dark subtract. If you tell people the image looks better but the noise hasn't actually decreased - I think that is confusing.

I agree, with the caveat of the next point below.

On this topic, the master bias usually has an ugly pattern to it - and that ugly pattern can be subtracted away from each sub, but a random Poisson noise will remain - and that is the read noise that you are stuck with in each calibrated sub.

Frank

The reason we don't really worry about pattern noise is that, as the number of samples grows, the averaged data becomes more normally distributed and, further, the random error averages out toward zero. It is only really an issue for individual samples, not for averages of a large number of samples. So, if you can effectively remove bias from the sampling technique, then by all means shoot more, shorter images; you will actually get better pictures. We probably aren't at that point yet, but that may be changing with technology and could perhaps be sped up with more aggressive calibration techniques. As we are already seeing 3e- cameras on the hobby market starting at $1000, that suggests we could be heading toward a paradigm shift (or expansion, as noisier cameras are probably going to stick around too).

Edit: note fixed "3e-"
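The "extreme bias calibration" idea rests on averaging: the fixed pattern in a master bias is preserved exactly, while its random component shrinks as the square root of the frame count. A sketch, with an assumed 8 e- read noise that is purely illustrative:

```python
import math

def master_bias_residual(read_noise_e, n_frames):
    """Random noise left in a master bias built by averaging n_frames.

    Averaging cuts the random component by sqrt(N) while preserving
    the fixed pattern exactly, which is the part we want to subtract.
    """
    return read_noise_e / math.sqrt(n_frames)

residual_100 = master_bias_residual(8.0, 100)      # 0.8 e-
residual_10000 = master_bias_residual(8.0, 10000)  # 0.08 e-
```

So going from hundreds to tens of thousands of bias frames keeps buying you a cleaner master, though with diminishing returns per frame.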

### #96 vpcirc

vpcirc

Fly Me to the Moon

• Posts: 5118
• Joined: 09 Dec 2009

Posted 31 January 2013 - 08:28 AM

I'll repeat what I was told. "Shorter exposures will not pick up the faint nebulosity, if you have a doubt, compare equal amounts of time on the horsehead and see what happens" this is referring to narrowband. I don't think most top imagers get involved in CN forum discussions. I can only speak for myself and listening to the advice I received they were correct. I'm sure it's just my camera and my situation and in no way applies to anyone else. In my case I need to waste my time shooting longer narrowband images. I'm sure many others don't need to undertake such a frivolous approach.

### #97 Inverted

Inverted

Mariner 2

• Posts: 212
• Joined: 19 Jan 2013
• Loc: LP Land

Posted 31 January 2013 - 08:50 AM

I'll repeat what I was told. "Shorter exposures will not pick up the faint nebulosity, if you have a doubt, compare equal amounts of time on the horsehead and see what happens"

Of course, but it's because we can't see it until the SNR is high enough, not because it isn't actually there in a short exposure. Photons are hitting the sensor very quickly, so there is a signal even in a super short exposure; the SNR, though, may be near zero. The formulas on the website you linked seem to agree that we just need to expose long enough to get the weakest signal over the noise pedestal, sufficiently to be discernible from the noise. With a theoretically low enough noise pedestal, that could be a fraction of a second; in practice it is clearly a lot longer due to the relatively high noise levels of our equipment....

### #98 freestar8n

freestar8n

Fly Me to the Moon

• Posts: 5191
• Joined: 12 Oct 2007

Posted 31 January 2013 - 10:43 AM

Not really. The message of the web sites and calculators is to expose long enough for other noise sources in the sub to grow larger than the read noise in the sub. If the sub exposure is noisy due to sky glow then longer subs won't help at all. All you can do is accumulate more total exposure time. But if your sub exposure is noisy due mainly to read noise, then longer subs will help.

That is what the various web sites and calculators are trying to convey - and they treat read noise specially. You can't just tell by how noisy a sub is that it should have been longer - unless you know the noise you see is due in large part to read noise.

That's why people should know the gain, read noise, and sky glow of their equipment and site - so they can plug in the numbers. If the numbers end up being longer than practical, just go shorter and accumulate as many subs as you can.

Frank
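One common form of the calculators Frank describes picks the sub length at which sky shot-noise variance "swamps" read-noise variance by some factor. The sketch below is a generic version of that rule of thumb; the swamp factor of 5 and the sky rates are assumed illustrative values, not figures from any particular site or calculator:

```python
import math

def min_sub_seconds(sky_rate, read_noise, swamp_factor=5.0):
    """Shortest sub where sky shot-noise variance exceeds read-noise
    variance by `swamp_factor`.

    sky_rate is in e-/s per pixel, read_noise in e- RMS per frame.
    Solves swamp_factor * read_noise**2 = sky_rate * t for t.
    """
    return swamp_factor * read_noise ** 2 / sky_rate

# Bright suburban broadband sky (10 e-/s/pixel): short subs suffice.
broadband = min_sub_seconds(10.0, 8.0)    # 32 seconds
# 3nm narrowband sky (0.02 e-/s/pixel): impractically long subs.
narrowband = min_sub_seconds(0.02, 8.0)   # 16000 seconds
```

When the answer comes out longer than your mount or sky allows, the practical advice in the post above applies: go shorter and pile up more subs.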

### #99 Peter in Reno

Peter in Reno

Cosmos

• Posts: 8149
• Joined: 15 Jul 2008
• Loc: Reno, NV

Posted 31 January 2013 - 10:57 AM

This has been an interesting thread. It gave me some ideas of how long is too long for each sub exposure. Remember this thread is about narrowband, not broadband. NB filters are insanely narrow and do a great job of blocking unwanted signal (light pollution and moon). I pretty much understand all the noise talk in this thread, so let's set noise aside for now.

I imaged with an OSC camera for three years before migrating to an Atik 460EX mono last September. My new mono camera has extremely low read noise. Last September/October I captured the Bubble Nebula with LRGB, Ha, Oiii and Sii filters, which took me two months due to bad weather. I went back to look at the data, checking the histogram for each filter to see where the left edge of the graph was. Luminance was the worst because the left side of the graph was shifted too far to the right. That tells me the exposure was too long due to heavy light pollution. RGB were much better, but the left side was still shifted a little too far to the right. All LRGB sub-exposures were 10 minutes each. This gives me an idea of the maximum sub-exposure for each filter in my area.

NB filters are a different ball game. I started with Ha at 15 minutes and the left side of the histogram was almost all the way to the left. The minimum ADU was between 200 and 300, pretty much the same as the minimum ADU of a bias frame. Then I captured Oiii at 15-minute subs and once again the left side of the graph looked like Ha. Finally I captured Sii at 30 minutes and the left side of the histogram remained the same as Ha and Oiii. It tells me that NB filters allow you to image reeeeeeaaaaaallllly long sub-exposures, especially under heavy light pollution where I live.

So I believe the bottom line is to pay attention to the histogram and make sure the left side of the graph doesn't shift too far to the right. The more it shifts to the right, the greater your sky glow. You should know your camera's minimum bias ADU. So if your camera's minimum bias ADU is 1500 and the minimum background of your light sub is around 1500 to 1700 (wild guess), then you may be okay with the sub-exposure. Keep taking longer sub-exposures until the left side of the histogram starts to shift too far to the right.

Once you have established the maximum sub-exposure times for LRGB filters, use those and take as many subs as you can. For NB, go as long as your mount can handle and, again, take as many subs as you can. Low-noise cameras are nice for NB imaging. My camera's (Sony CCD) minimum bias ADU is between 200 and 300. A Kodak CCD's minimum bias ADU can be pretty high, around 1500. NB imaging with high read noise can be difficult.

If you have a better or easier suggestion for determining the maximum sub-exposure time before it hits the sky noise, please feel free to say it. I don't think you have to worry about maximum sub-exposures for NB filters because they do a fantastic job of blocking unwanted signals like light pollution or moonlight. Probably the maximum bandwidth for NB filters should be 5nm. 3nm filters are nice but quite a bit more expensive than 5nm.

See my Bubble Nebula images in "Peter's Galleries" in my signature. Look under "Nebulae". It shows NB, Bi-color, and RGB images of Bubble Nebula.

Peter
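Peter's histogram test can be reduced to a simple comparison of a sub's background level against the bias level. This is only a crude sketch of that idea; the 200 ADU margin and the sample readings are arbitrary placeholders, not calibrated thresholds:

```python
def sky_limited(background_adu, bias_adu, margin_adu=200):
    """Crude version of the histogram check: if a sub's background
    sits well above the bias level, sky glow already dominates and
    still-longer subs gain little.

    margin_adu is an arbitrary placeholder threshold.
    """
    return background_adu - bias_adu > margin_adu

ha_15min = sky_limited(280, 250)    # NB sub barely above bias: not sky-limited
lum_10min = sky_limited(2400, 250)  # broadband sub far above bias: sky-limited
```

In Peter's terms: the luminance sub has "shifted too far to the right," while the Ha sub still has room for much longer exposures.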

### #100 shams42

shams42

Apollo

• Posts: 1328
• Joined: 05 Jan 2009
• Loc: Kingsport, TN

Posted 31 January 2013 - 11:00 AM

Yes - I can believe that some disciplines would want a term like noise to be used with a specific meaning. Same for a word like "significance." But in physics and engineering it is usually used more casually and in context. But in some ways fixed pattern noise *is* random, or it can be. It is random(ish) across the pixels - but repeats in time. If a random sequence appears once, it looks totally random. But if it repeats, then you can learn what it is and remove it. That is the idea behind capturing a master dark and subtracting it.

Very interesting point, Frank. I had been considering only the temporal aspect -- the fixed pattern -- not the positional or pixel-to-pixel variability that does indeed look "random."

But my main point is that graduate level texts on ccd's and image processing, and journal articles, refer to pattern noise removal - and I go by that. Plus, I think it helps people understand why the image improves after dark subtract. If you tell people the image looks better but the noise hasn't actually decreased - I think that is confusing.

See, my position is that this is MORE confusing than attempting to distinguish between "unwanted signal" and "random noise" as I have proposed.

The fact is that people who own CCDs with very low dark current (most Sony CCDs) should NOT calibrate by subtracting a master dark. Why? Because although subtracting the master dark may improve the aesthetic quality of the image by removing hot pixels, the random noise in the calibrated sub actually increases! Instead they should use bad pixel mapping, an alternative technique which can remove hot pixels without adding noise.

It's that seeming paradox that I found so confusing -- that by subtracting noise (what I call "unwanted signal"), you actually add noise.

On this topic, the master bias usually has an ugly pattern to it - and that ugly pattern can be subtracted away from each sub, but a random Poisson noise will remain - and that is the read noise that you are stuck with in each calibrated sub.

But here, aren't you distinguishing "noise" from "random Poisson noise?" So in practice, it sounds like you too make some distinction between systematic or repeatable sources of noise which can be removed via calibration and random noise which cannot. We only seem to differ in that I call the former "unwanted signal."

I think this is largely semantics, and I will no longer belabor the point. I respect your desire to maintain consistency with professional references and discussion regarding CCDs, and there is obvious value in doing so.

I just think that defining our terms such that subtracting noise can add it makes the matter more confusing than it needs to be.
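The "subtracting noise adds noise" paradox is just quadrature addition: the random components of two independent frames combine under subtraction exactly as they would under addition. A minimal sketch with assumed illustrative values:

```python
import math

def noise_after_dark_subtraction(sub_noise, master_dark_noise):
    """Random noise in a calibrated sub.

    Independent random noises add in quadrature even when frames are
    subtracted, which is why dark subtraction can make the random
    noise worse on low-dark-current cameras.
    """
    return math.sqrt(sub_noise ** 2 + master_dark_noise ** 2)

# Low-dark-current camera: 5 e- in the sub, 2 e- residual in the master.
after = noise_after_dark_subtraction(5.0, 2.0)  # more than the original 5 e-
```

This is also why bad pixel mapping is attractive for such cameras: it removes the hot pixels without injecting the master dark's residual random noise.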
