
# Signal to Noise: Understanding it, Measuring it, and Improving it - Part 4: Image Sampling

## Craig Stark

We're returning to the issue of signal-to-noise in this installment, though it'll take us a chunk of the column to get there. The topic in this round is image sampling, and what I want to impress upon readers is that there is a tradeoff here: a tradeoff between photons per pixel (our signal) and spatial resolution. We play this tradeoff all the time, but many don't really think of it as a tradeoff. Whenever we choose a pixel size for our camera, a focal length for our scope or f-ratio, a binning factor, the use of a reducer, etc., we're taking a position in this tradeoff. The goal here is to understand that trade-off. For the short answer, I'm going to argue that there's little use in going below ~1" / pixel (at least for mere mortals) and that doing so can come at the cost of a bit of SNR (of course, it also often comes at the cost of a reduced field of view). So, I'm going to suggest that if you're running a 12 megapixel DSLR with 5 micron pixels on a 12" f/10 scope with its 3 m focal length, you may want to reconsider things (as you're running at about 0.34" / pixel - a resolution your skies probably don't support and that may be costing you).

### SNR Recap (on the one hand)

Recall from Part 2 of the series that the Signal part of SNR is:

TotalSignal = Duration * (Target + Skyglow + Dark)

and that the Noise part of SNR is:

Noise = sqrt(TotalSignal + ReadNoise^2)

We can put this together and talk about the SNR of our target in any given pixel as being:

SNR = TargetSignal / sqrt(TotalSignal + ReadNoise^2)

Now, our TargetSignal here is the number of photons we capture from our DSO and the TotalSignal is the total number of photons captured (those from the DSO plus those from the sky and those from the dark current). This is the SNR for the target in a given pixel. When considering SNR, there comes a point of diminishing returns, and for bright objects like foreground stars, the SNR is always high enough that we needn't worry about it. For cores of galaxies and such, a similar case can be made. We have plenty of SNR, and a 50% boost in the SNR isn't going to be noticed.

Where we do notice a real boost in the SNR is, of course, on the dim end of the scale where the SNR is a lot lower to begin with. Here, our TargetSignal is low and its value is often getting close to the noise. This is where we need to be most concerned about SNR.

So, when thinking about a tradeoff between spatial resolution and SNR, we should keep in mind:

1. Signal comes from photons from our target. More photons, more signal, better SNR. When we're talking about SNR at the level of a pixel in our image, we're talking about photons hitting this pixel and this pixel only.

2. One source of noise is shot noise and with brighter skies and longer exposures, this will come to dominate the noise term.

3. The other source of noise is read noise and with darker skies (or very low overall photon flux rates as you get in line-filter imaging) and shorter exposures, this can be a substantial part of the noise.
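These three points can be sketched numerically. Below is a minimal Python sketch of the per-pixel SNR model described in Parts 1 and 2; the function name and all of the rates are made-up illustrative values, not measurements from any real camera:

```python
import math

def pixel_snr(target_rate, skyglow_rate, dark_rate, read_noise, duration):
    """Per-pixel SNR: shot noise on all photon/electron sources plus
    read noise, combined in quadrature.  Rates are in e-/pixel/s,
    read_noise in e- RMS, duration in seconds."""
    target_signal = duration * target_rate
    total_signal = duration * (target_rate + skyglow_rate + dark_rate)
    noise = math.sqrt(total_signal + read_noise ** 2)
    return target_signal / noise

# Bright skies, long exposure: shot noise swamps the read noise.
bright = pixel_snr(5.0, 50.0, 0.1, 8.0, 300)   # ~11.6
# Dark skies, short exposure: read noise is a real share of the total.
dark = pixel_snr(5.0, 1.0, 0.1, 8.0, 30)       # ~9.5
```

With bright skies and long exposures, the read-noise term barely matters; shorten the exposure or darken the skies and it becomes a substantial part of the total noise.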

### Spatial Resolution: A Visual Analogy (on the other hand)

On most nights, your scope may do well looking at the moon or Jupiter at 100x (if it doesn't, consider a new scope). On a good number of nights, you may be able to push it up to 250x and be getting a good bit more detail than you would see at 100x. But, you'll be noticing about now that the image is dimmer. Why? Well, the same number of photons came off of Jupiter and went through your scope's objective at both powers, but you've now spread the light out over a bigger area. The same number of photons spread over a larger area means fewer photons per bit of area (or per square arc-second of sky).

Now, if you've got, say, an 8" scope, in theory you should be able to push it up to 400x using the classic "50x per inch of aperture" rule. A 12" should get you to 600x, etc. How many nights can you actually do this and see more than you could at 250x? My guess is the answer is not many. Sure, on those perfect nights you can, but most nights, the atmosphere is blurring the image enough (the "seeing") that there just isn't enough resolution in the data - in that image coming from Jupiter - to get you anything extra.

The atmosphere is providing a spatial filter, blurring the image. It limits the resolution you can ever get. Your scope provides another spatial filter, also blurring the image. Both cut out the high frequency details and limit what can be resolved (aka, limit your spatial resolution). For your scope, the aperture provides a limit based on diffraction that you'll never exceed. Even if the spatial frequency you want to capture is below this limit, the atmosphere itself can blur things enough to limit your ability to get this.

For anyone who has tried to stick an eyeball to a scope on something bright, the above should be obvious by now. But, let's now extend this just a bit. Your eye will integrate information for about 0.1 second. Think of leaving the camera's shutter open for 100 ms as a decent analogy. Now, the seeing that causes very rapid, small scale distortions hits your eye's resolution, but seeing that happens at a slower scale (that is often larger, causing shifts of the image more than fine scale distortions) doesn't really hurt your eye's resolution. But, if you imagine keeping your eye still for 5 minutes and exposing (integrating information) over that entire time, you can bet this would limit the resolution of your eye.

Well, that's what's happening when you image. You've got the camera fixed in the same point in space (hopefully) and you've got that shutter open for a long time. Without any form of adaptive optics, there is no hope for taking the seeing factor out of your image. If you've got skies like most of us, on a pretty good night, you're looking at 2 to 3 arcseconds of blur that will be imposed on your image. That means, at best - if your guiding is perfect, if your focus is perfect, if your collimation is perfect, and if your skies are pretty darn good - you've got a 2-3" blur imposed on your image. Sure, some folks will have skies that are a bit better than this, and some nights you will too, but by and large, we've got to consider the fact that there's a good 2" or more blur that the atmosphere is giving you.

### Sampling and Image Scale Defined

When the image is focused on our CCDs, it's a nice, smooth analog image. Under perfect conditions, you'd see nice Airy disks around the stars (at least for a moment when the skies are stable). Of course, we'll never see those Airy disks in our long-exposure images (that darn atmosphere again), but let's think for the moment about perfection. What would hit our CCD would look something like the image shown here on the left. This is the output of Aberrator showing a double-star (sampled at its default of 0.1"/pixel, 8" f/5, perfect scope, 3" separation).

 Aside: The only reason the two on the right look blocky here is that I've blown them up to match the original image's image scale. If you zoom way out so that those blocks are pixels on your screen, they look like nice stars. For very faint stars, even if you're running your scope at a very high resolution, you'll still see things looking blocky as the edges of the stars get lost in the noise and only the peak shows through. If you were to look at the audio waveform coming off of your CD player, you'd see a similar, blocky representation of the waveform. Instead of smooth waves, you'd see discrete steps. All digital signals will have this as we're quantizing (turning into discrete numbers, aka digitizing) the data. What happens in your CD player though is that there is a "lowpass filter" that cuts out the very high frequency information (higher than you can hear and higher than the sampling rate). This turns those discrete steps or blocks into a smooth waveform. Why? To actually make hard edges like stair-steps takes large amounts of very high frequency information. Remove that and you're left with the lower frequency information only (still at the limits of what we can hear), which is "smoother". The more you smooth a waveform or an image, the more high-frequency bits you're removing. By resampling this image with a bicubic filter, I'm saying that there must be a smoothness to it. That's why it can reconstruct things so well (as, in truth, the real image has a smoothness to it.) If you can have this analogy in mind and think of it in terms of spatial resolution in your image, the concepts covered here may make some more sense.

In the middle and righthand panels, I've resampled this smooth star image into something our CCDs might record. Since we have individual pixels on the CCD, the recorded image will look a bit blocky. Just how blocky depends on the sampling rate. One's first reaction will be to say that we want the one on the left, or perhaps the one in the middle. That is, we want our stars sampled very well so that they don't look like blocky squares. To a real extent, this is true, but there is going to be a tradeoff here, and before we jump to running at as high a resolution as possible, we need to consider what we're gaining and what we're losing. By way of a preview of where I'll be going with this, consider the inset images above. I took those same blocky stars and just resampled them up to the original resolution (bicubic resampling in Photoshop).

The image scale of your rig is determined by two factors: 1) the focal length of your telescope, and 2) the size of pixels on your camera. We can use these to compute the image scale in arcseconds per pixel (assuming your focal length is given in millimeters and the pixel size in microns) as:

ImageScale = 206.265 * PixelSize / FocalLength

For example, my Borg 101 ED scope when run at f/4 has a focal length of about 400 mm and my QSI 540 camera has a pixel size of 7.4u. This leads to an image scale of 3.8" / pixel. So, every pixel is covering 3.8 arcseconds of sky. Were I to run this same camera on the Celestron C8 I have here (at prime focus), I'd be at 0.76" / pixel. If you don't know the image scale for your various rigs, put this down and go compute it now.

Once we know this, we can easily compute the field of view (FOV). It's just the sampling rate times the number of pixels in each direction on our sensor. My QSI 540 has a square chip with each side having 2048 pixels. So on the Borg there, I'm at about 7782 arcseconds or just over 2.1 degrees of sky in each direction.
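The image-scale formula and the FOV calculation can be put into code. This is a small sketch using the QSI 540 / Borg figures from the text, and assuming the C8's nominal 2032 mm focal length:

```python
def image_scale(pixel_size_um, focal_length_mm):
    """Arcseconds of sky per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

def fov_degrees(pixel_size_um, focal_length_mm, n_pixels):
    """Field of view along one sensor axis, in degrees."""
    return image_scale(pixel_size_um, focal_length_mm) * n_pixels / 3600

scale = image_scale(7.4, 400)        # ~3.8" / pixel (QSI 540 on the 400 mm Borg)
fov = fov_degrees(7.4, 400, 2048)    # ~2.2 degrees on a side
c8_scale = image_scale(7.4, 2032)    # ~0.75" / pixel on a C8 at prime focus
```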

### Seeing Places a Lower Bound on Useful Image Sampling

Scan across various Internet sites and groups and you'll see a number of discussions on what the "optimal" image sampling is. To begin with, there is no one optimal value. If your skies might permit sampling at one rate but covering the target requires a coarser rate, the coarser rate is the more optimal of the two. If the target is small, however, running at the finer rate will be better. The point of most discussions on this, though, is that there is a limit to how finely we should sample the image. That is, you won't gain anything by going below a certain number of arcseconds per pixel when sampling your image.

One nice treatment of this is Stan Moore's page on pixel sampling. In it, he describes things in terms of pixels per FWHM and he suggests that there is a resolution loss if you're at 1.5 pixels per FWHM and that running just a bit over 3 pixels / FWHM (3.5 seems to be a value he likes) represents about all you're going to get. So, if we plug in skies with a FWHM of 3", this leads to a value of just under 1" / pixel. Going beyond that means you're not just sampling the image well, you're really oversampling the image. That is, you're not gaining anything in terms of resolution by going at a higher rate (at the end, he gives a range of 0.5 - 1.5" / pixel for this limit which will depend on your seeing, your tracking, your gear, etc.).
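Moore's pixels-per-FWHM framing converts to an image scale with a single division. A quick sketch (3.5 pixels per FWHM is the value quoted above; it's a guideline, not a constant):

```python
def scale_for_seeing(seeing_fwhm_arcsec, pixels_per_fwhm=3.5):
    """Image scale ("/pixel) that puts the given number of pixels
    across a star's seeing-limited FWHM."""
    return seeing_fwhm_arcsec / pixels_per_fwhm

rate = scale_for_seeing(3.0)   # ~0.86" / pixel for 3" FWHM skies
```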

Overall, I'm in agreement with Stan Moore. While he tends to put this forth as "if you want the most resolution, go for ~1" / pixel and don't skimp out at say 2" / pixel", I tend to think of it in the reverse. That is to say that there's no reason for most of us to sample at rates much higher than 1" / pixel (aka with image scales much lower than 1"/pixel) and that really, much of the time even this isn't going to buy you a heck of a lot of actual resolution. One of us is a glass-half-full and the other a glass-half-empty approach (not sure which is which, but I think I'm the empty one).

 Aside: Another way to think about this, if you like, is to imagine (or actually print this out and do it) looking at a test chart that has lines at progressively finer and finer gradations. Norman Koren has some excellent ones you can print and use as targets. Now, if you image this, you'll find that there is a point at which you cannot resolve line pairs anymore. This is your spatial resolution limit. If you were to place the target across a grassy field, you'd be able to resolve a finer difference than if you place the target across a road or parking lot. Why? The blur you get from the heat rising off the road will limit your ability to resolve the lines. Try it with an eyepiece if you like to see what you're up against.

You'll find this advice in numerous places. One bit of good coverage is in Apogee's CCD University. There, they suggest as a rule of thumb dividing the typical seeing by 2, and Starizona has the same suggestion. This isn't to say that's the best you could possibly do, but it's getting the lion's share of the resolution your skies will afford. For 3" skies, then, this would amount to 1.5" / pixel. Heck, in a review I just read by Clay Sherrod of the PlaneWave CDK-17 (Astronomy Technology Today, v3(4)), he said that when the scope is run at ~2000 mm of focal length (f/4.7) it "does not match well" with the 7.4u pixels on his SBIG ST-2000 camera but that it is "an incredibly good match" when binned 2x2. Unbinned, the sampling rate is 0.75" / pixel and binned it is 1.5" / pixel. You also see professional observatories do this. The 8.2 m Subaru telescope on Mauna Kea runs at 0.2" / pixel, certainly a lot finer than 1" / pixel. They do have 261 robotic fingers morphing the shape of their mirror in real time to help considerably, and they also have skies that, quite often, are at 0.4" FWHM. Even with that, they're only sampling at about half a good night's FWHM of seeing. So, there's some reasoning, or at least tradition, behind this rule of thumb.

### Demonstrating Seeing and Sampling's Effects

It's one thing to hear these ideas discussed and it's another to really see the effects. As mentioned in previous articles in this series, I've written a CCD simulator that tries to mimic what a CCD does in building an image. It takes an effectively perfect image (Hubble's M51 in the form of a 420Mb FITS file), adds skyglow and seeing (modeled as a Gaussian blur which, for long exposures is a reasonable approximation), pixelates the image, adds read and shot noise and quantizes it, all according to the well-known models of basic CCD behavior outlined in the first two parts of this series. By using this, we can see what the effects of various sampling rates and seeing conditions have on an image to get a feel for what we should expect and for how seeing and sampling rates interact.

Here, I've used the simulator to demonstrate the effects of 2-4" worth of seeing when sampled at 0.5" - 3" / pixel under otherwise ideal conditions (perfect camera, perfect optics, perfect tracking, and perfectly dark skies).

We can see, of course, that overall, if you've got 2" FWHM worth of atmospheric blurring, you're a good bit better off than if you've got 4" FWHM worth of blurring. In addition, there's a solid gain in sharpness in the 2" FWHM condition as you move from 3" / pixel of sampling down to about 1.0" / pixel, with perhaps a touch more to be gotten out of the 0.5" image, but only a touch. In the 3" FWHM condition, I'm not picking up any more detail below 1.5" / pixel, and in the 4" FWHM condition, I'm not picking up any below 2" / pixel.

 Aside: Oddly enough, you can even argue that something like the 2" / pixel condition in the 4" FWHM looks better than the 0.5" / pixel condition. Why might this be? Even though the simulated camera has no read noise (it's a perfect camera in this regard), there is quantization error as the image is "digitized" into a 16-bit signal. The lower the signal gets, the more prominent this error, and read noise's error, become in the image. We'll return to this a bit later, but it starts to show the downside of oversampling.

What this is saying is that a spatial blur (here imposed by my simulated atmosphere) limits the effective maximum resolution in the image. We're not gaining anything by running that 4" FWHM image at 0.5" / pixel. We just don't have the spatial frequencies in the image, so there's no point in sampling at a rate that would record the high frequencies (that lead to our sharp edges).

You can get a feel for this in images yourself. Here, I've taken a shot (another Hubble shot), and blurred it by 2 pixels in Photoshop. This cuts out high frequencies in the image and softens the image a touch. I then copied the image, shrunk it down to 75% of its original size, and then enlarged it back to the original size.

If I'd done this with the image before blurring, I'd have seen a clear difference between the original and the one I shrunk and resized. By shrinking the image, I'm cutting out high frequencies (as I've now sampled the image at a lower resolution). So, when I blow it back up, those high frequencies will be gone and the image will be softer as a result. But, if I soften it ahead of time, as you can see here, there's no loss in sharpness in the shrink and re-enlarge. I never had the spatial resolution in the image to begin with, so it wasn't lost when I shrunk it. After the initial 2-pixel blur, I was now oversampled, so I could afford to do this shrink and re-enlarge without losing any detail. So, if my image were blurred like this already, there would be no need to store it at this 100% size. If I had it in this 75% size wanted it "bigger", I could just blow it up and it'd look just as sharp as if I'd had it at 100% in the first place.
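The shrink-and-re-enlarge experiment can be sketched numerically. Below is a toy 1-D version in Python: a hard Fourier lowpass stands in for the Gaussian seeing blur, and zero-padded FFT resampling stands in for bicubic interpolation. Once the signal is band-limited, downsampling by 2 and reconstructing loses essentially nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
noise = rng.normal(size=n)

# "Blur": keep only low spatial frequencies.  A hard lowpass in the
# Fourier domain stands in for the Gaussian seeing blur.
spec = np.fft.rfft(noise)
spec[n // 8:] = 0                       # discard the high frequencies
smooth = np.fft.irfft(spec, n)          # our band-limited "image"

# Downsample by 2 (every other sample), then upsample back by
# zero-padding the spectrum - an ideal band-limited reconstruction.
down = smooth[::2]
spec_d = np.fft.rfft(down)
spec_up = np.zeros(n // 2 + 1, dtype=complex)
spec_up[:spec_d.size] = spec_d * 2      # scale for the doubled length
recon = np.fft.irfft(spec_up, n)

err = np.max(np.abs(recon - smooth))    # tiny: no detail was lost
```

Because the lowpass already removed everything above the coarser sampling rate's limit, the round trip through the half-resolution version is essentially lossless, which is exactly the point about oversampled images.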

 Aside: If you want to impose a spatial blur of a known FWHM in Photoshop or ImageJ, you need to keep in mind that Photoshop's Gaussian blur (and ImageJ's) specifies the blur in terms of the standard deviation, or sigma, of the Gaussian. There's a nice formula we can use to convert the two: FWHM = 2.35 * sigma (or sigma = FWHM / 2.35). So, if our image is at, say, 1.5" / pixel and we want a FWHM blur of 2", this blur's FWHM is 1.33 pixels (2 / 1.5). To get Photoshop or ImageJ to give a FWHM of 1.33 pixels, we'd use the Gaussian blur tool and enter a "radius" value of 0.57 (sigma = 1.33 pixels FWHM / 2.35).
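The same conversion as a trivial Python helper (2.35 is the rounded constant used here; the exact value is 2 * sqrt(2 * ln 2) ≈ 2.355):

```python
def fwhm_to_sigma(fwhm):
    """For a Gaussian, FWHM ~= 2.35 * sigma, so sigma = FWHM / 2.35."""
    return fwhm / 2.35

# 2" of seeing on a 1.5"/pixel image: FWHM = 2 / 1.5 ~= 1.33 pixels,
# so the Gaussian-blur "radius" (sigma) to enter is ~0.57 pixels.
sigma = fwhm_to_sigma(2.0 / 1.5)
```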

For those of you that want actual stars from an actual telescope on an actual camera, here is some data from a quick test shot comparing unbinned and binned (0.75" / pixel vs. 1.5" / pixel) images from a Celestron C8 on my QSI 540 (this is a small crop of a small galaxy in the frame). The raw data were stacked (no pre-processing, so forgive the hot pixel trails) and stretched linearly to match the histograms. On the left, we have the unbinned data and in the middle, we have the binned data. You can see the more pixelated stars in the binned data since the image was enlarged to equate the image size by a simple zoom. If we actually resample the middle image to the original resolution and make the two images have the same pixel count, we have the image on the right. This sure doesn't look like a two to one difference in resolution to my eyes. There may be a touch of a difference between the one on the left (native 0.75" / pixel) and the one on the right (acquired at 1.5" / pixel and resampled to 0.75" / pixel), but it's certainly not huge.

Do not confuse this and think that I'm saying we should always run binned, or that nobody will ever see a difference between binned and unbinned or between 0.75" / pixel and 1.5" / pixel. What I'm saying is that here, in my skies with my gear, the spatial resolution of the image hitting my CCD on this night needed to be sampled at only ~1.5" / pixel, and that going below this to 0.75" didn't buy me much of any practical significance in terms of resolution. If I had better skies, better optics, better focus, better hair, whiter teeth, and six-pack abs, perhaps I'd be seeing a bigger effect. Heck, on some nights I do see a somewhat reasonable boost going a bit below 1.5", but most nights it's just not there and 1" / pixel would really be oversampling.

### The Downsides of Oversampling

So far, we've been coming at this by saying that there is a limit to what you can expect out of your system and that there's little point in sampling at a rate higher than that limit. At the outset, though, I said that there is a tradeoff, which means the higher sampling rate is coming at a cost. Just what is that cost?

There are three costs to consider here. One should be rather obvious: if you're sampling at a higher rate by changing the focal length of your scope, you're going to cover less sky. If I've got 2048 pixels and run at 1.5" / pixel, I'm covering 51' of sky. If I run at 0.5" / pixel, I'm covering only 17'. Now, there are a good number of small targets for which this won't matter, but there are a good number of larger targets for which it will. If you're not gaining anything resolution-wise by running at the higher rate, why sacrifice coverage of more sky? Why not give yourself more breathing room around the target? Sure, it may look smaller on the image, but that's what the crop tool is made for! (Of course, if you change the sampling by binning your CCD or by using a different CCD with the same physical dimensions but different pixel sizes, the FOV won't change.)

The second cost is that tracking and guiding errors span more pixels at higher sampling rates, making them more visible (this shows up again in the conclusions below).

The third cost is one of SNR. If we keep the aperture of our scope constant and only change the focal length (i.e., change the f-ratio by reducing or extending it), we don't change the total number of photons going into our scope. The DSO is streaming photons from space and our scope is catching them with a bucket the size of our aperture. Running at a higher sampling rate means spreading the light from the DSO across more pixels.

Thus, each pixel is getting less light, and so the signal hitting that pixel is less. Some aspects of the noise (e.g., read noise) are constant (they do not scale with the intensity of the signal the way shot noise does). Thus, as the signal gets very faint, it gets closer and closer to the read noise, and as we get closer to that noise floor, the image looks crummier and crummier. Doubling the focal length (two f-stops, aka doubling the sampling rate) means 25% as much light hitting each CCD well, putting us that much closer to the read noise. If the exposure length is long enough that the faint bits of the galaxy or nebula are still well above this noise, it matters little if at all. But, if we are pushing this and are just barely above the noise (or if our camera has a good bit of read noise), it will come into play much more rapidly. (Furthermore, who among us doesn't routinely have other faint bits that it'd be great to pull out of the image?)
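A toy calculation of that effect (shot noise plus a read-noise floor; the photon count and the 10 e- read noise are arbitrary illustrative values):

```python
import math

def well_snr(photons, read_noise):
    """SNR for a single pixel: shot noise plus read noise in quadrature."""
    return photons / math.sqrt(photons + read_noise ** 2)

# Same aperture, same exposure: doubling the focal length spreads the
# light over 4x as many pixels, so each well sees 1/4 the photons.
snr_short_fl = well_snr(40.0, 10.0)      # faint detail, shorter focal length
snr_long_fl = well_snr(40.0 / 4, 10.0)   # same detail, doubled focal length
ratio = snr_short_fl / snr_long_fl       # ~3.5x
```

With no read noise, the ratio would be exactly 2 (the square root of the 4x photon split); the nearer the signal sits to the read-noise floor, the closer the penalty creeps toward the full 4x.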

Please note that none of what I am saying here contradicts Stan Moore's "f-ratio myth" page. He makes this same point, and if you look closely at the images on his site, the lower f-ratio shot does appear to have less noise. As noted, it's not "10x better" (which some people who say f-ratio is all that matters would argue), but it's not the same either. Stan argues that, "There is an actual relationship between S/N and f-ratio, but it is not the simple characterization of the 'f-ratio myth'." What I'm arguing here is the other side: f-ratio (and therefore image sampling) doesn't rule the day and account for everything, but it also isn't entirely irrelevant.

Here, I've taken some data from Mark Keitel's site. Mark was kind enough to post FITS files of M1 taken through an FRC 300 at f/7.8 and f/5.9 and to give me permission to use them. I ran a DDP on the data and used Photoshop to match black and white points and to crop the two frames.

To get a better view, here is a crop around the red and yellow circled areas. In each of these, the left image is the one at f/7.8 and the right the one at f/5.9 (as you might guess from the difference in scale). Now, look carefully at the circled areas. You can see there is more detail recorded at the lower f-ratio (lower sampling rate). We can also see the noise in the image and that these faint bits are closer to the noise floor.

Again, the point is that it's incorrect to say that f-ratio rules all and that a 1" scope at f/5 is equal to a 100" scope at f/5, but it's also wrong to say that under real-world conditions it's entirely irrelevant. For a given aperture, f-ratio and image sampling rate are synonymous.

Is it a huge effect? No, but it's one that will be present to varying degrees and one that can hit you where it hurts. If you're running with a line filter and trying to get that faint H-alpha image and are already pushing to get 5, 10, or 15 minute shots to show much of anything, you're running down near the read noise. If you're down near the read noise, your SNR in that part of the DSO is very low. Spreading the light across more pixels will drop the SNR and make that part look crummy. Run at a lower resolution (smaller f-ratio, lower sampling rate, etc.) and you're getting more photons to hit that same CCD well, getting you further away from the read noise.

Therefore, for the same exact reasons why f-ratio matters some, image sampling matters some when it comes to target SNR. As noted in the Aside above, binning has a very similar effect here. Under the right (or maybe that should be "wrong") circumstances, your SNR will go down as you oversample. Taken to extreme levels of oversampling (e.g., 0.1" / pixel) you darn well better be able to expose individual subframes long enough to get your signal well above this.

### Conclusions

Hopefully, at this point, you've got a good idea not only of what image sampling is, but also that there is a bit of a tradeoff when trying to pick an image sampling rate. I'd like to leave off with a few basic conclusions:

1. Sampling rate is defined by the focal length of your telescope and the size of the pixels. Adjusting either (changing scopes, using focal reducers, changing cameras, using binning, etc.) will change the sampling rate.

2. There is no one, perfect, thou-shalt-always-use image sampling rate.

3. Even if you decide upon a target sampling rate of something like 1" / pixel, don't go making drastic changes to your system if you can currently hit 0.9" / pixel or 1.1" / pixel. You won't notice a difference in resolution or SNR. Values here are guidelines and they're not hard and fast numbers.

4. Your skies, equipment, and ability to get the most out of the equipment are going to place a bound on how much resolution you can get out of your image. When starting out, you and the equipment may establish that boundary. Once you've got focus and guiding down well, though, the skies are likely going to be the determining factor. Running at sampling rates much below a half or a third of your skies' FWHM isn't going to bring in much if any more resolution in your shot. Other things blurred it enough before it even got to you that there's just nothing more you can wring out of it. For most of us, then, a value of about 1" / pixel will be as high a sampling rate as we should use when trying to do high-resolution work. For a lot of us, for normal imaging, you won't be losing much (if anything) by even running at 1.5" / pixel. (I, personally, use 1.25" to 1.5" as a good target sampling rate for small targets and a lot more than this for wide targets. My favorite rig runs at over 3" / pixel.)

5. Running at very high sampling rates has several downsides. Your FOV may be more limited, tracking errors are more visible, and SNR can be reduced.

6. For many cameras, these points taken together suggest that your scope's focal length can be limited to ~1500 mm with ~1000 mm being a fine target value for high-resolution work. Most DSLRs have pixel sizes of about 5 microns. Many dedicated CCDs have pixel sizes of 6.4 or 7.4u (a few go up to 9 u). If we take 6.4u then as a fairly typical value we find that 1" / pixel is reached at 1320 mm of focal length. 1.5" / pixel is down at 880 mm of focal length. Before you put that DSLR onto a 3000 mm scope, be aware that you're solidly over on the other side of this tradeoff, asking for resolution you almost certainly don't have. In the process, you've given up a few good things: FOV (assuming you can change focal length to affect sampling), ease of guiding (and perhaps sanity or some hair), and a bit of low-level SNR. Instead, start looking at shorter focal length setups or cameras with much larger pixel sizes (the former are much easier to find). Imaging will be a lot more fun and you won't actually have given up much if anything in the resolution department.
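The arithmetic in point 6 can be inverted into a "longest sensible focal length" helper. This is a sketch using the same 206.265 constant as the image-scale formula; the 6.4u default mirrors the typical pixel size quoted above:

```python
def max_focal_length_mm(pixel_size_um, coarsest_scale_arcsec=1.0):
    """Longest focal length (mm) at which the image scale stays at or
    above the given arcsec/pixel target for this pixel size (microns).
    Any longer and you are sampling more finely than the target."""
    return 206.265 * pixel_size_um / coarsest_scale_arcsec

fl_at_1 = max_focal_length_mm(6.4, 1.0)    # ~1320 mm for 1" / pixel
fl_at_15 = max_focal_length_mm(6.4, 1.5)   # ~880 mm for 1.5" / pixel
```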

Until next time, clear and dark skies,

Craig
