
Signal to Noise: Understanding it, Measuring it, and Improving it (Part 1)




You've probably heard it before and if you continue to read my columns here, you'll hear it a hundred more times -- astrophotography is all about signal-to-noise ratios (SNR). But, what does that mean and can such a blanket statement be true? I mean, really. It's all about SNR? What about aperture? What about f-ratio? What about Camera X being better than Camera Y? What about monochrome vs. color cameras? What about cooled vs. uncooled and dedicated astro-cams vs. DSLRs? What about refractors vs. reflectors? What about dark skies vs. urban skies? Well, my answer to all of those is that yes, they all matter but that they can all be distilled down to some form of SNR (albeit sometimes a bit more complex than just a single SNR number).


So, to kick off this column, we're going to talk about SNR, what affects it, and how to measure it. Believe it or not, you can do a lot of tests on your camera and come up with clear, precise, repeatable results. For equipment, you need your camera, a lens cap, and a stack of paper. Optional accessories include a piece of tin foil, a rubber band, and an SLR lens or your telescope (small refractors work nicely here, although they aren't as handy as an SLR lens). You don't need fancy software either. While I use Matlab for my tests, you can use the freeware program ImageJ and arrive at the exact same results. Honestly, even if you're a Luddite, you can do it. Before you run the tests, though, you should have some notion of what SNR is and what the various sources of noise are in our images. Knowing that, and knowing how your camera behaves, will let you figure out how to get the best images and make the most of your time under the stars.


In this installment, we'll cover the basic question: What is SNR? Next time, we'll cover how to measure some of the various kinds of noise in your camera. After that, we'll cover some implications for all this on how you should go about taking your images.


SNR in a Perfect World

When we take an image of something, be it M51 or a picture of a child, photons pass through the objective (lens or mirror) and hit the sensor array (CCD or CMOS). The array has a whole bunch of individual sensors (that make the pixels) and each one does the same thing: it tries to determine how many photons struck it. This starts off as an analog signal (e.g., a voltage) and gets converted into a number by something called an analog-to-digital converter (ADC). Suppose we're taking an image of a dark gray bar and a light gray bar. We might capture 1,000 photons from the dark gray bar and 2,000 photons from the light gray bar during the time when the shutter is open and photons are allowed to pass through the objective and onto the sensor. In a perfect world, each time we did this, we'd get the same 1,000 and 2,000 photons and each time this might read out intensity values of 1,000 and 2,000 in our image. Every pixel aimed at the bars would end up with these values every time. In this perfect world, we have signals of 1,000 and 2,000 and we have no noise at all. There is no variability from pixel to pixel or from image to image, which means that we have no noise.


SNR stands for Signal to Noise Ratio and is simply the ratio of the signal (or, to be picky, often the signal plus noise) to the noise. So, we have 1,000 / 0 for the SNR on the dark gray bars and 2,000 / 0 for the SNR on the light gray bars. In both cases, the SNR is infinite. We have a perfect image in this perfect world.


Noise is a Reality

Reality is far from perfect, however, and you will always have noise. In fact, you'll always have several types of noise. We've got read noise, dark current noise, and shot noise (both target and skyglow variants) to name a few. Each of these is going to conspire to keep you from bringing out that faint bit of wispy nebulosity from the background. Before we cover these, though, it's worth going over a few visual examples of SNR. To make these examples, I used the wonderful, ultra-deep shot of M51 from the Hubble Space Telescope, removing the color, resizing, and cropping to create a very nice, high-SNR image.


Original image

Image with 5% Gaussian noise added

Image with 10% Gaussian noise added



On the top, we have the original image and below we have two images in which I've added Gaussian noise. The first one has 5% noise added (using PhotoShop) and the second has 10%. It doesn't take much to see that we'd rather have the top image than the middle and rather have the middle than the bottom. (Although, in truth, if any of my individual frames of M51 looked like the bottom one, I'd be a happy camper!).


We can even use PhotoShop (or ImageJ) to figure the SNR in parts of this image. I grabbed a small portion of even gray, roughly centered vertically and about 3/4 of the way to the right, and used PhotoShop to measure the mean signal and standard deviation. The mean is just the average and the standard deviation is a measure of the noise (how much the sampled pixels tend to vary about that mean). In the top image, the mean was 85 and the standard deviation was 6. That makes for an SNR in that area of 14.17 (85 / 6). In the middle image, it was 84.8 and 13.9 for an SNR of 6.10 and in the bottom image it was 84.5 and 26.6 for an SNR of 3.18.
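
If you'd rather script that measurement than click around, the same numbers fall out of a few lines of Python. This is just a sketch using NumPy and Pillow; the file name and patch coordinates are placeholders for wherever your own patch of even gray happens to be.

    # Sample a smooth, featureless region of the image and compute its
    # mean (signal), standard deviation (noise), and SNR.
    # "m51.png" and the patch coordinates are placeholders.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("m51.png").convert("L"), dtype=float)

    patch = img[300:340, 600:640]      # rows, cols of a featureless region
    signal = patch.mean()              # the "signal" (mean level)
    noise = patch.std()                # the "noise" (variation about that mean)

    print(f"mean = {signal:.1f}, std = {noise:.1f}, SNR = {signal / noise:.2f}")
    print(f"SNR in dB = {20 * np.log10(signal / noise):.1f}")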


Note that in each of these, the signal is the same. That is, the intensity of the image hasn't changed (it's about 85). What has changed is the amount of noise (the amount of variability around that mean). What happens if we cut the signal down? Well, if we cut the signal in half and add 5% noise like the middle example, we'll end up with something that looks dim and noisy. It'll look worse than the 5% image for sure and some might think it's worse than the 10% image. After all, it's pretty dim. Here's that image.


The mean here is now 42.5 (half of the 85 it was before) and the standard deviation is now 13. The SNR then is 3.27 - almost exactly the same SNR we had in the 10% noise case above. Why? This looks like a crummy image and it looks dim enough that many would do things like increase the gain or ISO on their cameras to boost it up and make it brighter. But, the truth is, it's just as good an image as the bright 10% noise case above.


Here is that same image, stretched to restore the image intensity (multiplied by 2).

If you compare this to the original version with 10% noise, you'll be hard-pressed to tell them apart. Sure, the precise pattern of noise here is different, but to the eye (and according to the math), they've got the same SNR. The intensity in that region is now 85 and the standard deviation is now 25.8 for an SNR of 3.29. (Note: Don't be bothered by the slightly different numbers here like 3.27 vs. 3.29. These are just rounding errors.)


Note: Here, I've been calculating SNR just by that simple ratio. You'll often see it expressed in dB (decibels). To calculate things in dB, just take 20 * log10 (ratio). So, if we had 2,000 for our signal and 2 for our noise we'd have 20*log10 (2000/2) or 60 dB.

Upshot: SNR

The upshot here is to remember that SNR has two components: the signal and the noise. If we keep the signal constant and halve the noise, we double our SNR. If we keep the noise constant and double the signal, we double the SNR. If we halve the signal but cut the noise to a quarter of what it was, we also double the SNR. It's a ratio and getting better SNR can come down to either boosting the signal or dropping the noise.


Types of Noise

Read Noise

Every time you read an image off the camera, some noise is going to be injected into the image. Even if there is no signal (no light, no appreciable dark current), you're still going to have noise in the image (it won't be perfectly even). This noise is called read noise and it comes from both the electronics on the sensor itself and from the electronics inside your camera.


Read noise comes in several flavors. Ideally, the noise has no structure to it. That is, there is no fixed pattern to it and it's just an even level of background Gaussian noise. If you were to imagine this as sound, it would be a simple hiss without any tones, clicks, pops, etc. Visually, this kind of noise is easy to overlook and when images with simple, Gaussian noise are stacked, the noise level is reduced (by a factor of the square root of the number of stacked images).
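
You can convince yourself of that square-root behavior with a quick simulation. This is just a sketch; the signal level, noise level, and frame count are arbitrary.

    # Simulate frames containing a constant signal plus pure Gaussian read
    # noise, then watch the noise in the averaged stack drop by sqrt(N).
    import numpy as np

    rng = np.random.default_rng(0)
    signal, read_noise, n_frames = 100.0, 10.0, 16

    frames = signal + rng.normal(0.0, read_noise, size=(n_frames, 512, 512))
    stack = frames.mean(axis=0)

    print(f"single frame noise: {frames[0].std():.2f}")      # ~10
    print(f"stack of {n_frames} noise:  {stack.std():.2f}")  # ~10/sqrt(16) = 2.5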


Many cameras aren't ideal, however. Some have a fixed, spatial pattern to the noise. Every time you take an image, there is a clear pattern of streaks, waves, etc. in the image. If you're going to have something sub-optimal, this is what you'd like to have as it's entirely repeatable. Yes, it will be in every light frame you take, but it will also be in every dark frame or bias frame you take, so our standard pre-processing steps can remove it.


Other cameras have noise that is at a certain frequency in the image (e.g., vertical streaks that are 100 pixels apart), but with the position varying from image to image. This is a lot tougher to deal with as each frame will have the noise pattern in it, but no two frames will be alike. So, the same pixels won't be affected in your light frames and your dark frames, making standard pre-processing ineffective (it won't subtract out). With this kind of noise, all you can really do is either try to filter it out later or try to stack enough frames so that the noise ends up being even across the final image (each frame injects the noise somewhere else). Unfortunately, that can often require a lot of frames. Worse still, since your darks and biases have these same issues, you will need a lot of darks or biases to make that pattern disappear. If it doesn't, you'll end up injecting noise into your lights when you subtract out your darks or biases.


Finally, one other kind of noise should be touched upon here. Salt-and-pepper noise looks like bright and dark spots that are scattered across your image. These spots tend to vary from image to image (we're not talking about hot pixels here). Some cameras have real issues with these and if your camera does, you'll need to take measures to remove them. Stacking based on the standard deviation across images (aka sigma-clip) can be very helpful.
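
As a rough illustration, here is a minimal, one-pass sigma-clip stack in Python. Real stacking programs iterate and handle the edge cases more carefully, and the 2.5-sigma threshold here is just a common default, not a recommendation.

    # Minimal one-pass sigma-clipped mean: for each pixel, reject values that
    # sit more than `sigma` standard deviations from that pixel's mean across
    # the frames, then average what's left. Salt-and-pepper outliers get
    # thrown out instead of being averaged in.
    import numpy as np

    def sigma_clip_stack(frames, sigma=2.5):
        frames = np.asarray(frames, dtype=float)       # shape (n_frames, H, W)
        mean = frames.mean(axis=0)
        std = frames.std(axis=0)
        keep = np.abs(frames - mean) <= sigma * std    # per-pixel accept mask
        keep_count = np.maximum(keep.sum(axis=0), 1)   # avoid divide-by-zero
        return np.where(keep, frames, 0.0).sum(axis=0) / keep_count

    # usage: master = sigma_clip_stack([frame1, frame2, frame3, ...])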



Shot Noise

If you're a doctor on the battlegrounds of Prussia in the late 1800's wondering when you'll see the next person come in having been kicked by a horse, you have something in common with an astrophotographer wondering (albeit very quickly) when the next photon from that galaxy will arrive at the sensor. Both of you are thinking about Poisson processes. Well, you're probably not overtly doing this, but a guy named Bortkiewicz was. Sometimes he'd see one come in in the morning and another that afternoon, and other times he'd go days without seeing any. The point is that there is some likelihood of an event (getting kicked by a horse or having a photon from a galaxy make it to your sensor) and, since it's just a probability and not a certainty, there will be some variation in how long it is between events. That variation was described back in the 1830s by the French mathematician Poisson.

If you take an image for a second, you might get 10 photons from that DSO to hit a pixel on your sensor. Take another and you might get 12. Take another and you might get 9. Because of the way light is, we have this variability. This variability is called shot noise (also called photon noise) and it follows that Poisson distribution.

You're never going to escape shot noise and the amount of noise you have goes up with the square root of the intensity. So, the brighter the object, the more noise you have. That sounds bad, but in truth, it's really not a problem as the signal went up as well. So, if you have 100 photons hitting your sensor in a shot in one case and 10 photons hitting in another case, your SNR is over 3x higher with the greater number of photons. Despite having higher noise, the higher signal more than overcame this (SNR would be N/sqrt(N), which is just sqrt(N)).
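
A quick simulation shows both halves of that: the noise grows as the square root of the signal, and the SNR grows right along with it. The photon counts are illustrative only.

    # Draw many Poisson samples at two mean photon rates and compare the
    # resulting noise and SNR.
    import numpy as np

    rng = np.random.default_rng(1)
    for mean_photons in (10, 100):
        counts = rng.poisson(mean_photons, size=100_000)
        print(f"mean={mean_photons:4d}  noise={counts.std():6.2f} "
              f"(~sqrt({mean_photons})={np.sqrt(mean_photons):.2f})  "
              f"SNR={counts.mean() / counts.std():5.2f}")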

So we can ignore shot noise, right? Wrong. There are two sources of shot noise. We don't mind the shot noise from the DSO itself, but what about the shot noise from the skyglow? Your sensor doesn't know if the photons are streaming in from a galaxy or from the skyglow. Both have shot noise. This is what is so evil about skyglow. If it just brightened the image, all we would need to do is slide our black point up and cut off the skyglow. But skyglow lowers the SNR by injecting shot noise into the image without injecting any signal from the target. That's one of the key reasons (that, and the way it eats into the effective dynamic range of your chip) why skyglow is so harmful.
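
One way to see the damage is to fold the sky into a simple per-pixel SNR estimate, as in the sketch below. Read noise and dark current are left out to keep the focus on shot noise, and the photon counts are made up.

    # Per-pixel SNR when both the target and the skyglow contribute shot
    # noise. Subtracting the sky level later removes its brightness, not
    # its noise.
    import math

    def snr(target_photons, sky_photons):
        # signal is only the target; shot noise comes from target *and* sky
        return target_photons / math.sqrt(target_photons + sky_photons)

    print(f"dark sky  : {snr(100, 50):.2f}")     # sky well below the target
    print(f"bright sky: {snr(100, 2000):.2f}")   # same target, heavy skyglow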



Dark Noise

Block all light from hitting your sensor and take images of varying exposure durations and you'll notice that the image gets brighter with increasing exposure. This results from dark current. The intensity should double as you double the exposure duration and it should also double for every 6 degrees Centigrade or so of rise in sensor temperature. You'll also find that some pixels brighten faster than others (hot pixels brighten very quickly), leading to a pattern of fixed, spatial noise. This is why we typically take dark frames (using the same exposure duration and temperature so that the pattern in our darks is the same as the pattern in the lights - meaning, we can subtract one from the other to arrive at a cleaner light frame).
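
As a rough model of that behavior (the doubling rules are the ballpark figures from the paragraph above; the base rate is purely illustrative and not from any particular camera):

    # Ballpark dark-signal model: proportional to exposure time, doubling
    # roughly every 6 C of sensor temperature. The base rate is invented.
    def dark_electrons(exposure_s, temp_c, base_rate=0.1, ref_temp_c=0.0):
        """Expected dark signal (e-) per pixel for a given exposure and temperature."""
        return base_rate * exposure_s * 2 ** ((temp_c - ref_temp_c) / 6.0)

    print(dark_electrons(300, 0))   # 30 e-
    print(dark_electrons(600, 0))   # double the exposure -> 60 e-
    print(dark_electrons(300, 6))   # 6 C warmer -> 60 e-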

Most of you reading this will know about dark frames, but some of you may not have considered one real implication of dark current. Remember shot noise? Photons arrive via a random process, so we never know exactly how many we'll get, and the variance in the actual count is proportional to the signal level. The same holds true for dark current. The higher the dark current, the more variability there is in our reading in an individual frame. Therefore, if we want a very good estimate of the average dark current (so we can subtract this expected value of the dark current from our lights), we need even more dark frames to average together.

Thus, the answer to the question, "How many dark frames should I use?" depends on your dark current. If you've got a lot of it and your image gets considerably brighter and noisier as you take longer dark frames, you're going to need more darks. If you don't collect enough darks, you're going to inject noise into the light frames when you pre-process the images. Any deviation in your dark stack from that expected value of the dark current for each pixel means you'll be injecting that difference into each of your lights.
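
Here's a quick simulation of why the count matters. The dark level is invented, and real darks carry read noise and hot pixels on top of this shot noise.

    # The master dark is an *estimate* of the expected dark signal. Its own
    # shot noise gets added back into every light frame you calibrate with
    # it, and it only drops by sqrt(number of darks averaged).
    import numpy as np

    rng = np.random.default_rng(2)
    dark_level = 400  # mean dark signal (e-) per pixel in one frame (illustrative)

    for n_darks in (4, 16, 64):
        darks = rng.poisson(dark_level, size=(n_darks, 256, 256)).astype(float)
        master = darks.mean(axis=0)
        # noise the master dark injects into each calibrated light frame:
        print(f"{n_darks:3d} darks -> master-dark noise ~ {master.std():.1f} e-")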



Quantization Error

When we read a voltage off of a sensor, it's an analog signal. We then turn this into a number using an analog-to-digital converter (ADC). Suppose you have an 8-bit ADC. With this setup, you can record 256 shades of intensity (2 raised to the 8th power). Now, suppose further that your CCD can store 10,000 electrons before it saturates. If you want to use the full dynamic range of the CCD, you must set up the ADC so that each integer in the scale here represents about 40 photons. So, a 10 coming off the ADC would mean that there were about 400 photons captured and an 11 would mean that there were about 440.

It doesn't take much to see the problem here. You can no longer tell the difference between 400 photons and 410 or 420 or 430 photons. They all get the same value. That problem is called quantization error and it turns similar, but not identical, intensity values (e.g., the nice subtle differences in shading in that galaxy arm you hope to pull apart) into the same value. Once it's the same, information is lost and there's no pulling it apart (at least not in one frame).

Quantization error comes about when you've got more levels of intensity to store than you've got numbers to store them in. These days, dedicated astro-cams don't suffer from quantization error, but DSLRs and lunar / planetary cams still can. For example, if you've got a CCD that has a full-well capacity of 10,000 electrons and you've got a 12-bit ADC that gives you 4,096 intensity values, you've potentially got a problem. Here, the system gain would have to be about 2.5 e-/ADU if you wanted to use the full range of the CCD. That is, each intensity step (Analog Digital Unit) would represent 2.5 electrons. Alternatively, you could potentially set the system gain to 1 e-/ADU and trade off dynamic range to get you out of the quantization error. You'd saturate the ADC at 4,096 electrons, but you'd not have any quantization error to worry about. Cameras that let you vary the system gain (e.g., DSLRs - they call this ISO) let you play both sides of this trade-off.
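
Here's a small sketch of that trade-off, quantizing a handful of example electron counts at the two gains just discussed.

    # Quantize electron counts with a 12-bit ADC at two different system
    # gains. At ~2.5 e-/ADU the full 10,000 e- well fits, but nearby counts
    # collapse to the same number; at 1 e-/ADU every electron is resolved
    # but the ADC clips at 4,095 ADU.
    import numpy as np

    def digitize(electrons, gain_e_per_adu, bits=12):
        adu = np.floor(np.asarray(electrons, dtype=float) / gain_e_per_adu)
        return np.clip(adu, 0, 2 ** bits - 1).astype(int)

    electrons = [250, 251, 255, 4000, 9000]
    print(digitize(electrons, 2.5))  # [ 100  100  102 1600 3600] -- 250 and 251 merge
    print(digitize(electrons, 1.0))  # [ 250  251  255 4000 4095] -- 9000 e- clips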

Before leaving quantization error, I should note one more thing. We should not be overly harsh on cameras that have a 12-bit ADC and full-well capacities greater than 4,096 electrons. In truth, you would be hard-pressed to see any difference between a sensor with 10k e- of full-well capacity hooked up to a 12-bit ADC and one hooked up to a 16-bit ADC (which has 65,536 intensity steps). Why? The noise. The noise is already limiting what you can resolve. Let's say that 10,000 e- sensor has 10 e- worth of noise on it. The max SNR (aka the dynamic range) there is 1,000 (aka 60 dB), as the last digit is really governed by the noise. Our 12-bit ADC with 4,096 intensity values (aka 72 dB) now doesn't look so bad. Run that same 10,000 e- sensor, though, with only 1 e- worth of noise and we have a dynamic range of 10,000 (aka 80 dB) and we're back to having a problem. The idea here is that just because you have some number of electrons' worth of full-well capacity doesn't mean you can know exactly how many electrons are in any given well (that's the noise), and that uncertainty means you don't need a discrete ADC value for each possible count. Personally, I'd not worry a minute if my 16-bit ADC (65,536 steps) were hooked to a CCD with 100k e- worth of full-well.
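
The arithmetic in that comparison is easy to reproduce (same example numbers as above):

    # Dynamic range set by the noise floor vs. the range the ADC can represent.
    import math

    def db(ratio):
        return 20 * math.log10(ratio)

    print(f"10,000 e- well, 10 e- noise: {db(10_000 / 10):.0f} dB")  # 60 dB
    print(f"12-bit ADC, 4,096 steps:     {db(4_096):.0f} dB")        # ~72 dB
    print(f"10,000 e- well, 1 e- noise:  {db(10_000 / 1):.0f} dB")   # 80 dB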

Upshot: Noise

You're going to have noise. No matter what, some will be there. The trick is, of course, to minimize it. Much of the noise will come from inside the camera and the choice of camera is going to factor in a lot here. You want something with noise that is both low in level and well-behaved. We'll cover how to measure this in Part 2. Other noise will come from what you're shooting - both the target and the sky. The target's noise you can ignore (there's nothing you'll ever do about it), but the sky's noise is something that you may be able to work against. We'll cover this more in Part 3.

Signal

Boosting SNR can come from dropping the noise or boosting the signal. So, it's worth considering where the signal comes from and how we might go about boosting it. The first answer as to where it comes from is an obvious one. It comes from that faint fuzzy you're trying to image (duh!). Moving beyond this, we realize it must get from that faint fuzzy to the pixel on your sensor. Along the way, there are a few things to consider that affect the amount of signal you're getting.

The first is the aperture of your scope. The bigger it is, the more photons it sucks in and directs towards your sensor. A variant on this, though, is the f-ratio you're working at. There has been a lot written on this on the Internet and this is something I took on in a blog post of mine and will cover here eventually. The f-ratio really determines how much signal is hitting a given pixel when you're imaging extended objects. If you keep the aperture constant and vary the f-ratio (by varying the focal length), you're trading off signal and resolution. Long focal lengths spread the image over more pixels giving you higher resolution, but cut the light hitting each pixel. The point of the "f-ratio myth" argument is that once we're above the read noise, the added resolution comes at little or no cost. Running longer focal lengths (higher f-ratios) gets you closer to that noise floor, however. But, the details of this are for another day. What's important here is that if you want to boost the SNR in a given pixel (and yes, you may not want/need to if you're already bright enough, but if you do ...) you can do so by dropping the f-ratio (increasing the aperture for a given focal length or decreasing the focal length for a given aperture).

The second thing to consider is related to this and it's the light throughput of your scope. Let's consider a Newtonian with simple Al coatings that give you about 87% reflectivity. It has two reflective surfaces (primary and secondary), each at 87%, making for a total throughput of 0.87 * 0.87 or about 76%. If we call this an 8" f/5 mirror with a 2.5" obstruction, our total throughput drops to roughly 66% of the light that came into the tube. This means we're working like an ideal 6.4" scope in terms of our light gathering. Now, let's boost those coatings to 95%. Before the obstruction, we're running about 90% and with it in place, we're running about 80% efficiency, like a perfect 7.1" scope. Hmmm... getting those old mirrors recoated with something fancy just boosted our throughput by 14 percentage points, roughly a 20% gain in pure signal.
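
Here's that back-of-the-envelope arithmetic as a sketch. This version multiplies the coating losses by the unobstructed area fraction, so the rounded figures land within a point or two of the ones above.

    # Back-of-the-envelope throughput for a two-mirror Newtonian:
    # reflectivity for each surface times the unobstructed fraction of the
    # aperture area, plus the equivalent "perfect" aperture.
    import math

    def newt_throughput(aperture_in, obstruction_in, reflectivity):
        clear_area = 1.0 - (obstruction_in / aperture_in) ** 2
        t = reflectivity ** 2 * clear_area           # primary + secondary + obstruction
        equiv_aperture = aperture_in * math.sqrt(t)  # perfect scope with same light grasp
        return t, equiv_aperture

    for r in (0.87, 0.95):
        t, d = newt_throughput(8.0, 2.5, r)
        print(f'{r:.0%} coatings: throughput {t:.0%}, acts like a {d:.1f}" perfect scope')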

The third thing to consider is the quantum efficiency (QE) of your camera. QE refers to the proportion of photons that are recorded out of the total number that hit the sensor. If you've got one camera with a QE of 40% and another with a QE of 80% and all else is equal, the one with the higher QE records twice the signal of the one with the lower QE, and with it a correspondingly higher SNR. Another way to think of it is that a boost in QE is like a boost in the aperture of your scope. Run a QE=80% camera on an 80 mm APO and you'll do as well as a QE=40% camera on a 113 mm APO (focal length and noise figures being equal). Yet one more way to think of it is that if you want to capture some number of photons to get your signal above the noise, it will take twice as much imaging time on the lower QE sensor.
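
The aperture equivalence is just a square-root scaling, as in this small sketch using the numbers from the example above.

    # Aperture a lower-QE camera needs in order to detect the same number of
    # photons per unit time as a higher-QE camera on a smaller scope.
    def matching_aperture(aperture_mm, qe_ref, qe_other):
        return aperture_mm * (qe_ref / qe_other) ** 0.5

    print(f"{matching_aperture(80, 0.80, 0.40):.0f} mm")   # ~113 mm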

One thing not to consider here is the darkness of the skies. People often think of this as a boost in the signal as the DSO sure seems brighter when you look at it. But, how would M51's photons know that you're in an urban vs. rural area? They don't, of course. The same number of photons are streaming towards you from M51 and it's not like the photons shooting up in the sky from the city lights are deflecting them. No, the signal is the same and it's just that the background level has been raised making it tougher to pull that signal out.

It's all about SNR

Remember at the outset when I said all those things come down to SNR? Aperture rules, right? Yes it does and it does because it boosts the SNR. Whether attached to a 3" or a 30" scope, your camera has the same read noise and the same thermal noise. Attached to a 30" scope, however, your camera is getting hit with 100x as many photons as when it's attached to a 3" scope (for the sake of the argument here, assume equal focal lengths.) That's quite a boost to the signal.


OK, that was easy. What about dark skies vs. urban skies? Dark skies have greater contrast and let you shoot longer, right? How's that SNR? Two things come into play here. First, we can consider the case in which you expose for the same duration under both conditions. Here, the read noise and thermal noise are the same and the number of photons from the DSO is the same. So the signal and two components of the noise are the same. But, the shot noise is different. The camera doesn't know skyglow from the DSO as photons from the sky are photons from the sky. At our urban site, we may easily have four times as many photons hitting our sensor from skyglow as we do at our dark site. While we can stretch and set a black point to make the sky dark again, all those extra photons carried all that extra shot noise. In fact, 4x the skyglow will lead to a doubling of the shot noise. Under urban skies, shot noise from the skyglow is usually the dominant source of noise, swamping out the read noise and thermal noise. So, we've kept the signal but doubled the noise and halved our SNR.
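
Plugging in some made-up but plausible per-pixel counts makes the point concrete; everything except the sky level is held fixed.

    # Same target, same exposure, same camera -- only the sky level changes.
    # Once the sky dominates, 4x the sky photons means roughly 2x the noise
    # and roughly half the SNR.
    import math

    target, read_noise = 50.0, 5.0   # e- per pixel from the DSO; camera read noise

    for label, sky in (("dark site ", 1000.0), ("urban site", 4000.0)):
        noise = math.sqrt(target + sky + read_noise ** 2)
        print(f"{label}: noise {noise:5.1f} e-, SNR {target / noise:.2f}")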


Monochrome vs. color cameras? Color filters cut your signal down to under a third of what it would have been without the color filters. You've cut the signal while the camera's own noise stays the same. Camera choice? Boost the QE and you boost the signal. Drop the noise and you've, well, dropped the noise. Either way, you've boosted the SNR.


What's Next?

Hopefully, most of the entries in this column won't be quite so long as this one. But, there was a lot of information to get through at the outset here. Coming up in Part 2 will be a tutorial on measuring the noise in your camera. It's really a lot easier than it sounds. After that, Part 3 will cover things you can do to try to improve the SNR in your images and how thinking about SNR may change the way you take your shots.



Clear skies!

Craig

