Signal to Noise: Understanding it, Measuring it, and Improving it
|Aside: The only reason the two on the right look blocky here is that I've blown them up to match the original image's scale. If you zoom way out so that those blocks are pixels on your screen, they look like nice stars. For very faint stars, even if you're running your scope at a very high resolution, you'll still see things looking blocky as the edges of the stars get lost in the noise and only the peak shows through. If you were to look at the audio waveform coming off of your CD player, you'd see a similar, blocky representation of the waveform. Instead of smooth waves, you'd see discrete steps. All digital signals have this, as we're quantizing (turning into discrete numbers, aka digitizing) the data. What happens in your CD player, though, is that there is a "lowpass filter" that cuts out the very high frequency information (higher than you can hear and higher than the sampling rate). This turns those discrete steps or blocks into a smooth waveform. Why? To actually make hard edges like stair-steps takes large amounts of very high frequency information. Remove that and you're left with only the lower frequency information (still at the limits of what we can hear), which is "smoother". The more you smooth a waveform or an image, the more high-frequency bits you're removing. By resampling this image with a bicubic filter, I'm saying that there must be a smoothness to it. That's why it can reconstruct things so well (as, in truth, the real image has a smoothness to it). If you keep this analogy in mind and think of it in terms of spatial resolution in your image, the concepts covered here may make some more sense.|
In the middle and the righthand panels, I've resampled this smooth star image into something our CCDs might record. Since we have individual pixels on the CCD, the recorded image will look a bit blocky. Just how blocky it is depends on the sampling rate. One's first reaction will be to say that we want the one on the left or perhaps the one in the middle. That is, we want our stars sampled very well so that they don't look like blocky squares. To a real extent, this is true, but there is going to be a tradeoff here, and before we jump to running at as high a resolution as possible, we need to consider what we're gaining and what we're losing. By way of a preview of where I'll be going with this, consider the inset images above. I took those same blocky stars and just resampled them up to the original resolution (bicubic resampling in Photoshop).
The image scale of your rig is determined by two factors: 1) the focal length of your telescope, and 2) the size of the pixels on your camera. We can use these to compute the image scale in arcseconds per pixel (assuming your focal length is given in millimeters and the pixel size in microns) as:

image scale ("/pixel) = 206.265 × pixel size (µm) / focal length (mm)
For example, my Borg 101 ED scope when run at f/4 has a focal length of about 400 mm and my QSI 540 camera has a pixel size of 7.4 µm. This leads to an image scale of 3.8" / pixel. So, every pixel is covering 3.8 arcseconds of sky. Were I to run this same camera on the Celestron C8 I have here (at prime focus), I'd be at 0.76" / pixel. If you don't know the image scale for your various rigs, put this down and go compute it now.
Once we know this, we can easily compute the field of view (FOV). It's just the sampling rate times the number of pixels in each direction on our sensor. My QSI 540 has a square chip with each side having 2048 pixels. So on the Borg there, I'm at about 7782 arcseconds or just over 2.1 degrees of sky in each direction.
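As a sketch of this arithmetic, here are the two calculations as small Python functions (the function names are my own; 206.265 is the standard constant combining arcseconds-per-radian with the micron-to-millimeter conversion):

```python
def image_scale(focal_length_mm, pixel_size_um):
    """Arcseconds of sky covered by each pixel.

    206.265 comes from 206,265 arcsec per radian and the um-to-mm conversion.
    """
    return 206.265 * pixel_size_um / focal_length_mm

def fov_arcsec(focal_length_mm, pixel_size_um, n_pixels):
    """Field of view along one axis of the sensor, in arcseconds."""
    return image_scale(focal_length_mm, pixel_size_um) * n_pixels

# The Borg 101 ED at f/4 (~400 mm) with the QSI 540's 7.4 micron pixels:
print(image_scale(400, 7.4))              # ~3.8 "/pixel
print(fov_arcsec(400, 7.4, 2048) / 3600)  # just over 2 degrees per side
```

The same two functions reproduce the C8 numbers as well: plug in ~2000 mm of focal length and the image scale drops to around 0.76" / pixel.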
Seeing Limits Places a Lower Bound on Useful Image Sampling
Scan across various Internet sites and groups and you'll see a number of discussions on what is the "optimal" image sampling. To begin with, there is no one optimal value. If your skies might permit sampling at one rate but covering the target requires a coarser rate, well, the coarser rate is more optimal than the finer one. However, if the target is small, running at that finer rate will be better. But, the point of most discussions on this is that there is a limit to how well we should sample the image. That is, you won't gain anything by going lower than a certain number of arcseconds per pixel when sampling your image.
One nice treatment of this is Stan Moore's page on pixel sampling. In it, he describes things in terms of pixels per FWHM and he suggests that there is a resolution loss if you're at 1.5 pixels per FWHM and that running just a bit over 3 pixels / FWHM (3.5 seems to be a value he likes) represents about all you're going to get. So, if we plug in skies with a FWHM of 3", this leads to a value of just under 1" / pixel. Going beyond that means you're not just sampling the image well, you're really oversampling the image. That is, you're not gaining anything in terms of resolution by going at a higher rate (at the end, he gives a range of 0.5 - 1.5" / pixel for this limit which will depend on your seeing, your tracking, your gear, etc.).
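If you want to play with this pixels-per-FWHM framing, the conversion is a one-liner (the function name is mine):

```python
def target_image_scale(seeing_fwhm_arcsec, pixels_per_fwhm):
    """Image scale ("/pixel) that puts the given number of pixels across the seeing FWHM."""
    return seeing_fwhm_arcsec / pixels_per_fwhm

# 3" FWHM skies sampled at 3.5 pixels per FWHM:
print(target_image_scale(3.0, 3.5))  # ~0.86 "/pixel -- "just under 1 arcsec/pixel"
```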
Overall, I'm in agreement with Stan Moore. While he tends to put this forth as "if you want the most resolution, go for ~1" / pixel and don't skimp out at say 2" / pixel", I tend to think of it in the reverse. That is to say that there's no reason for most of us to sample at rates much higher than 1" / pixel (aka with image scales much lower than 1"/pixel) and that really, much of the time even this isn't going to buy you a heck of a lot of actual resolution. One of us is a glass-half-full and the other a glass-half-empty approach (not sure which is which, but I think I'm the empty one).
|Aside: Another way to think about this, if you like, is to imagine (or actually print this out and do it) looking at a test chart that has lines at progressively finer and finer gradations. Norman Koren has some excellent ones you can print and use as targets. Now, if you image this, you'll find that there is a point at which you cannot resolve line pairs anymore. This is your spatial resolution limit. If you were to place the target across a grassy field, you'd be able to resolve a finer difference than if you place the target across a road or parking lot. Why? The blur you get from the heat rising off the road will limit your ability to resolve the lines. Try it with an eyepiece if you like to see what you're up against.|
You'll find this advice in numerous places. One bit of good coverage is in Apogee's CCD University. There, they suggest as a rule of thumb dividing the typical seeing by 2, and Starizona has the same suggestion. This isn't to say that's the best you could possibly do, but it gets the lion's share of the resolution your skies will afford. For 3" skies then, this would amount to 1.5" / pixel. Heck, in a review I just read by Clay Sherrod of the PlaneWave CDK-17 (Astronomy Technology Today, v3(4)), he said that when the scope is run at ~2000 mm of focal length (f/4.7) it "does not match well" with the 7.4 µm pixels on his SBIG ST-2000 camera but that it is "an incredibly good match" when binned 2x2. Unbinned, the sampling rate is 0.75" / pixel and binned it is 1.5" / pixel. You also see professional observatories do this. The 8.2 m Subaru telescope on Mauna Kea runs at 0.2" / pixel, certainly a lot finer than 1" / pixel. They do have 261 robotic fingers morphing the shape of their mirror for real-time active optics to help considerably. They also have skies that, quite often, are at 0.4" FWHM. Even with active optics, they're only sampling at about half a good night's FWHM of seeing. So, there's some reasoning, or at least tradition, behind this rule of thumb.
Demonstrating Seeing and Sampling's Effects
It's one thing to hear these ideas discussed and it's another to really see the effects. As mentioned in previous articles in this series, I've written a CCD simulator that tries to mimic what a CCD does in building an image. It takes an effectively perfect image (Hubble's M51 in the form of a 420 MB FITS file), adds skyglow and seeing (modeled as a Gaussian blur which, for long exposures, is a reasonable approximation), pixelates the image, adds read and shot noise, and quantizes it, all according to the well-known models of basic CCD behavior outlined in the first two parts of this series. By using this, we can see what effects various sampling rates and seeing conditions have on an image, to get a feel for what we should expect and for how seeing and sampling rates interact.
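I'm not including the full simulator here, but the core idea can be sketched in a few lines of Python: a seeing-blurred (Gaussian) star integrated over discrete pixels, with shot noise, read noise, and quantization applied. All of the specific values below (grid size, star flux, noise levels) are illustrative assumptions, not the actual simulator's:

```python
import math
import random

def star_on_sensor(fwhm_arcsec, scale_arcsec_per_px, flux_e, read_noise_e, size=9):
    """Render a seeing-blurred star onto a size x size pixel grid.

    The star is a Gaussian of the given FWHM (arcsec). Each pixel's signal is
    the Gaussian integrated over that pixel's area (via erf); shot noise and
    read noise are then combined in quadrature and the result quantized to
    whole, non-negative ADU.
    """
    sigma_px = (fwhm_arcsec / 2.355) / scale_arcsec_per_px
    center = size / 2.0  # star centered on the grid

    def cdf(x):  # cumulative Gaussian along one axis, in pixel units
        return 0.5 * (1.0 + math.erf((x - center) / (sigma_px * math.sqrt(2))))

    img = []
    for y in range(size):
        row = []
        for x in range(size):
            frac = (cdf(x + 1) - cdf(x)) * (cdf(y + 1) - cdf(y))
            signal = flux_e * frac
            noise_sigma = math.sqrt(signal + read_noise_e ** 2)
            row.append(max(0, round(random.gauss(signal, noise_sigma))))
        img.append(row)
    return img

random.seed(42)
# Illustrative values: 3" seeing sampled at 1"/pixel, 20,000 e- star, 8 e- read noise
img = star_on_sensor(3.0, 1.0, 20000, 8)
```

Re-running this with different `scale_arcsec_per_px` values gives exactly the kind of blocky-versus-smooth comparison shown in the panels here.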
Here, I've used the simulator to demonstrate the effects of 2-4" worth of seeing when sampled at 0.5" - 3" / pixel under otherwise ideal conditions (perfect camera, perfect optics, perfect tracking, and perfectly dark skies).
We can see, of course, that overall, if you've got 2" FWHM worth of atmospheric blurring, you're a good bit better off than if you've got 4" FWHM worth of blurring. In addition, there's a solid gain in sharpness in the 2" FWHM condition as you move from 3" / pixel of sampling down to about 1.0" / pixel, with perhaps a touch more to be gotten out of the 0.5" image, but only a touch. In the 3" FWHM condition, I'm not picking up any more detail below 1.5" / pixel, and in the 4" FWHM condition, I'm not picking up any below 2" / pixel.
|Aside: Oddly enough, you can even argue that something like the 2" / pixel condition in the 4" FWHM looks better than the 0.5" / pixel condition. Why might this be? Even though the simulated camera has no read noise (it's a perfect camera in this regard), there is quantization error as the image is "digitized" into a 16-bit signal. The lower the signal gets, the more prominent this error (and, in a real camera, read noise) becomes in the image. We'll return to this a bit later, but it starts to show the downside of oversampling.|
What this is saying is that a spatial blur (here imposed by my simulated atmosphere) limits the effective maximum resolution in the image. We're not gaining anything by running that 4" FWHM image at 0.5" / pixel. We just don't have the spatial frequencies in the image, so there's no point in sampling at a rate that would record the high frequencies (that lead to our sharp edges).
You can get a feel for this in images yourself. Here, I've taken a shot (another Hubble shot), and blurred it by 2 pixels in Photoshop. This cuts out high frequencies in the image and softens the image a touch. I then copied the image, shrunk it down to 75% of its original size, and then enlarged it back to the original size.
If I'd done this with the image before blurring, I'd have seen a clear difference between the original and the one I shrunk and resized. By shrinking the image, I'm cutting out high frequencies (as I've now sampled the image at a lower resolution). So, when I blow it back up, those high frequencies will be gone and the image will be softer as a result. But, if I soften it ahead of time, as you can see here, there's no loss in sharpness in the shrink and re-enlarge. I never had the spatial resolution in the image to begin with, so it wasn't lost when I shrunk it. After the initial 2-pixel blur, I was oversampled, so I could afford to do this shrink and re-enlarge without losing any detail. So, if my image were blurred like this already, there would be no need to store it at this 100% size. If I had it at this 75% size and wanted it "bigger", I could just blow it up and it'd look just as sharp as if I'd had it at 100% in the first place.
|Aside: If you want to impose a spatial blur of a known FWHM in Photoshop or ImageJ, you need to keep in mind that Photoshop's Gaussian blur (and ImageJ's) specifies the blur in terms of the standard deviation, or sigma, of the Gaussian. There's a nice formula we can use to convert the two: FWHM = 2.35 * sigma (or sigma = FWHM / 2.35). So, if our image is at, say, 1.5" / pixel and we want a FWHM blur of 2", this blur's FWHM is 1.33 pixels (2 / 1.5). To get Photoshop or ImageJ to give a FWHM of 1.33 pixels, we'd use the Gaussian blur tool and enter in a "radius" value of 0.57 (sigma = 1.33 pixels FWHM / 2.35).|
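The same conversion as a tiny Python helper (using the 2.35 approximation; the exact constant is 2*sqrt(2*ln 2) ≈ 2.355):

```python
def fwhm_to_sigma(fwhm):
    """Convert a Gaussian's FWHM to its standard deviation (FWHM ~= 2.35 * sigma)."""
    return fwhm / 2.35

# A 2" FWHM blur applied to a 1.5 "/pixel image:
fwhm_px = 2.0 / 1.5              # ~1.33 pixels FWHM
radius = fwhm_to_sigma(fwhm_px)  # ~0.57 -> the "radius" to enter in the blur tool
print(fwhm_px, radius)
```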
For those of you that want actual stars from an actual telescope on an actual camera, here is some data from a quick test shot comparing unbinned and binned (0.75" / pixel vs. 1.5" / pixel) images from a Celestron C8 on my QSI 540 (this is a small crop of a small galaxy in the frame). The raw data were stacked (no pre-processing, so forgive the hot pixel trails) and stretched linearly to match the histograms. On the left, we have the unbinned data and in the middle, we have the binned data. You can see the more pixelated stars in the binned data since the image was enlarged to equate the image size by a simple zoom. If we actually resample the middle image to the original resolution and make the two images have the same pixel count, we have the image on the right. This sure doesn't look like a two to one difference in resolution to my eyes. There may be a touch of a difference between the one on the left (native 0.75" / pixel) and the one on the right (acquired at 1.5" / pixel and resampled to 0.75" / pixel), but it's certainly not huge.
Do not confuse this and think that I'm saying we should always run binned, or that nobody will ever see a difference between binned and unbinned or between 0.75" / pixel and 1.5" / pixel. What I'm saying is that here in my skies with my gear, the spatial resolution of the image hitting my CCD on this night needed to be sampled at only ~1.5" / pixel and that going down below this to 0.75" didn't buy me much of any practical significance in terms of resolution. If I had better skies, better optics, better focus, better hair, whiter teeth, and six-pack abs, perhaps I'd be seeing a bigger effect. Heck, on some nights I do see a somewhat reasonable boost going a bit below 1.5", but most nights it's just not there and 1" / pixel would really be oversampling.
|Aside: What does binning do? Ideally (if your binning is happening inside the hardware of the camera), a group of pixels on your CCD have their charge combined before being read out. For example, in 2x2 binning, a total of 4 pixels have their charge combined. This will, of course, cut your spatial resolution in half. For it, you can get a boost in the maximum dynamic range and you get a bit of a boost in the SNR. You don't double your SNR, however. The shot noise from the target, the dark current, and the skyglow is still there and since it's driven by the signal intensity (which went up 4x), it's going up as well. Where you can get a real win is in the fact that in this bigger pixel, there is only one element of read noise. Unbinned, each pixel will have independent read noise added. When the signal is very faint and when your noise is dominated by read noise (you've got dark skies or are running with a line filter), binning can help boost the SNR on extended targets. But, as the target brightens overall (you're getting away from the read noise and quantization error) and as the noise shifts to shot noise from things like the skyglow, binning isn't really boosting the SNR much. Sure, it looks "brighter", but stretch the unbinned image and you can brighten it up. Some of the same reasons why oversampling can hurt you are the reasons why binning can help you. But, some of the same reasons that don't make oversampling too horrible, make binning a bit less useful than many might think.|
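The read-noise argument in the aside above can be put into numbers with a toy model (the signal and noise values are made up for illustration; SNR here is per output pixel, with shot noise and read noise combined in quadrature):

```python
import math

def snr(signal_e, read_noise_e):
    """Per-pixel SNR: shot noise and read noise combined in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

def snr_binned_2x2(signal_e_per_px, read_noise_e):
    """Hardware 2x2 binning: four pixels' charge combined, but only one read."""
    return snr(4 * signal_e_per_px, read_noise_e)

# Faint, read-noise-dominated case (10 e-/pixel signal, 10 e- read noise):
faint_gain = snr_binned_2x2(10, 10) / snr(10, 10)         # ~3.5x
# Bright, shot-noise-dominated case (10,000 e-/pixel signal):
bright_gain = snr_binned_2x2(10000, 10) / snr(10000, 10)  # ~2.0x
print(faint_gain, bright_gain)
```

In the faint case, the gain is well above the factor of 2 that the 4x signal alone would provide; that excess is the read-noise win. In the bright case, the gain converges to 2, i.e., nothing beyond what the extra signal itself buys, which is the "binning isn't really boosting the SNR much" regime.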
The Downsides of Oversampling
So far, we've been coming at this by saying that there is a limit to what you can expect out of your system and that there's little point in going below this and sampling at a higher rate. At the outset, though, I said that there is a tradeoff, which means the higher sampling rate comes at a cost. Just what is that cost?
There are three costs to consider here. One should be rather obvious: if you're sampling at a higher rate by changing the focal length of your scope, you're going to cover less sky. If I've got 2048 pixels and run at 1.5" / pixel, I'm covering about 51' of sky. If I run at 0.5" / pixel, I'm covering about 17'. Now, there are a good number of small targets for which this won't matter, but there are a good number of larger targets for which it will. If you're not gaining anything resolution-wise by running at the higher rate, why sacrifice coverage of more sky? Why not give yourself more breathing room around the target? Sure, it may look smaller on the image, but that's what the crop tool is made for! (Of course, if you change the sampling by binning your CCD or by using a different CCD with the same physical dimensions but different pixel sizes, the FOV won't change.)
A second cost is more psychological with its physical manifestations coming out as self-induced hair loss. I started out giving the example of a DSLR run on a 12" f/10 system at prime focus. Here, you're looking at 0.34" / pixel. Let's say that you've gotten your guiding accuracy down pretty darn well and your RMS error in RA is under an arcsecond. You should be happy and if you're imaging at 1 or 1.5"/pixel your stars will look nice and round. Image at very high resolutions though and your residual guiding error will still come through. At the "lower resolution" (but still high enough for what your skies can support), you'd never know about this error, you'd like your shots and be proud of what you're doing. In short, you'd be happy and get to enjoy the hobby and take pride in the level of accuracy you've achieved. You've gotten guiding down well enough that everything the atmosphere allows you to have resolution wise, you've gotten and your stars are still round. You'd still have your hair (well, you would if you started with it). But, run at very high resolutions by oversampling and this all goes away. That ignorance being bliss thing goes out the window and it does so for no good reason. You're frustrated and spend nights trying to fine tune your guiding so that the stars come out round even at this level of magnification. You lose valuable imaging time and gain what? Round stars at an image scale your skies can't support anyway. For me, life's too short to worry about that. Give me the happy-imager recipe.
The third cost is one of SNR. If we keep the aperture of our scope constant and only change the focal length (i.e., change the f-ratio by reducing or extending it), we don't change the total number of photons going into our scope. The DSO is streaming photons from space and our scope is catching them with a bucket the size of our aperture. Running at a higher sampling rate means spreading the light from the DSO across more pixels.
Thus, each pixel is getting less light and so the signal hitting that pixel is less. Some aspects of the noise (e.g., read noise) are constant (they don't scale with the intensity of the signal the way shot noise does). Thus, as the signal gets very faint, it gets closer and closer to the read noise. As we get closer and closer to the noise floor, the image looks crummier and crummier. Doubling the focal length (two f-stops, aka doubling the sampling rate) means 25% as much light hitting each CCD well, meaning we will be that much closer to the read noise. If the exposure length is long enough such that the bits of the galaxy or nebula are still well above this noise, it matters little if at all. But, if we are pushing this and just barely above the noise (or if our camera has a good bit of noise), this will more rapidly come into play. (Furthermore, who among us doesn't routinely have other faint bits that it'd be great to pull out from the image?)
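To put numbers on this, here is a toy per-pixel SNR model (all values are illustrative; shot noise and read noise are combined in quadrature):

```python
import math

def pixel_snr(photons_e, oversample_factor, read_noise_e):
    """Per-pixel SNR when the same total light is spread over more pixels.

    Doubling the sampling rate (oversample_factor=2) spreads the flux over
    4x as many pixels, so each pixel collects 1/4 of the photons.
    """
    s = photons_e / oversample_factor ** 2
    return s / math.sqrt(s + read_noise_e ** 2)

# Illustrative: a faint nebula giving 100 e-/pixel at the base scale, 8 e- read noise
base = pixel_snr(100, 1, 8)    # ~7.8
double = pixel_snr(100, 2, 8)  # ~2.65 -- much closer to the read-noise floor
print(base, double)
```

Note that the drop is worse than the factor of 2 you'd see if shot noise were the only player; the constant read noise is what makes the finely sampled, faint signal suffer disproportionately.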
Please note that none of what I am saying here contradicts Stan Moore's "f-ratio myth" page. He makes this same point, and if you look closely at the images on his site, the lower f-ratio shot does appear to have less noise. As noted, it's not "10x better" (which some people who say f-ratio is all that matters would argue), but it's not the same either. Stan argues that, "There is an actual relationship between S/N and f-ratio, but it is not the simple characterization of the 'f-ratio myth'." What I'm arguing here is to try to make clear that other side. F-ratio (and therefore image sampling) doesn't rule the day and account for everything, but it also isn't entirely irrelevant.

Here, I've taken some data from Mark Keitel's site. Mark was kind enough to post FITS files of M1 taken through an FRC 300 at f/7.8 and f/5.9 and to give me permission to use them. I ran a DDP on the data and used Photoshop to match black and white points and to crop the two frames.
To get a better view, here is a crop around the red and yellow circled areas. In each of these, the left image is the one at f/7.8 and the right at f/5.9 (as you might guess from the difference in scale). Now, look carefully at the circled areas. You can see there is more detail recorded at the lower f-ratio (lower sampling rate). We can also see the noise here in the image and that these bits are closer to the noise floor.
Again, the point is that it's incorrect to say that the f-ratio rules all and that a 1" scope at f/5 is equal to a 100" scope at f/5, but it's also wrong to say that under real-world conditions, it's entirely irrelevant. For a given aperture, f-ratio and image sampling rate are synonymous.
Is it a huge effect? No, but it's one that will be present to varying degrees and one that can hit you where it hurts. If you're running with a line filter and trying to get that faint H-alpha image and are already pushing to get 5, 10, or 15 minute shots to show much of anything, you're running down near the read noise. If you're down near the read noise, your SNR in that part of the DSO is very low. Spreading the light across more pixels will drop the SNR and make that part look crummy. Run at a lower resolution (smaller f-ratio, lower sampling rate, etc.) and you're getting more photons to hit that same CCD well, getting you further away from the read noise.
Therefore, for the same exact reasons why f-ratio matters some, image sampling matters some when it comes to target SNR. As noted in the Aside above, binning has a very similar effect here. Under the right (or maybe that should be "wrong") circumstances, your SNR will go down as you oversample. Taken to extreme levels of oversampling (e.g., 0.1" / pixel) you darn well better be able to expose individual subframes long enough to get your signal well above this.
Hopefully, at this point, you've got a good idea not only of what image sampling is, but also that there is a bit of a tradeoff when trying to pick an image sampling rate. I'd like to leave off with a few basic conclusions:
Sampling rate is defined by the focal length of your telescope and the size of the pixels. Adjusting either (changing scopes, using focal reducers, changing cameras, using binning, etc.) will change the sampling rate.
There is no one, perfect, thou-shalt-always-use image sampling rate.
Even if you decide upon a target sampling rate of something like 1" / pixel, don't go making drastic changes to your system if you can currently hit 0.9" / pixel or 1.1" / pixel. You won't notice a difference in resolution or SNR. Values here are guidelines and they're not hard and fast numbers.
Your skies, equipment, and ability to get the most out of the equipment are going to place a bound on how much resolution you can get out of your image. When starting out, you and the equipment may establish that boundary. Once you've got focus and guiding down well, though, the skies are likely going to be the determining factor. Running at sampling rates much below a half or a third of your skies' FWHM isn't going to bring in much if any more resolution in your shot. Other things blurred it enough before it even got to you that there's just nothing more you can wring out of it. For most of us then, a value of about 1" / pixel will be as high a sampling rate as we should use when trying to do high-resolution work. For a lot of us, for normal imaging, you won't be losing much (if anything) by even running at 1.5" / pixel. (I, personally, use 1.25" to 1.5" as a good target sampling rate for small targets and a lot more than this for wide targets. My favorite rig runs at over 3" / pixel.)
Running at very high sampling rates has several downsides. Your FOV may be more limited, tracking errors are more visible, and SNR can be reduced.
For many cameras, these points taken together suggest that your scope's focal length can be limited to ~1500 mm, with ~1000 mm being a fine target value for high-resolution work. Most DSLRs have pixel sizes of about 5 microns. Many dedicated CCDs have pixel sizes of 6.4 or 7.4 µm (a few go up to 9 µm). If we take 6.4 µm as a fairly typical value, we find that 1" / pixel is reached at 1320 mm of focal length. 1.5" / pixel is down at 880 mm of focal length. Before you put that DSLR onto a 3000 mm scope, be aware that you're solidly over on the other side of this tradeoff, asking for resolution you almost certainly don't have. In the process, you've given up a few good things: FOV (assuming you can change focal length to affect sampling), ease of guiding (and perhaps sanity or some hair), and a bit of low-level SNR. Instead, start looking at shorter focal length setups or cameras with much larger pixel sizes (the former are much easier to find). Imaging will be a lot more fun and you won't actually have given up much if anything in the resolution department.
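These focal lengths fall out of inverting the image-scale formula from earlier; as a quick sketch:

```python
def focal_length_for_scale(target_arcsec_per_px, pixel_size_um):
    """Focal length (mm) that yields the target image scale for a given pixel size."""
    return 206.265 * pixel_size_um / target_arcsec_per_px

# Typical 6.4 micron pixels:
print(focal_length_for_scale(1.0, 6.4))  # ~1320 mm
print(focal_length_for_scale(1.5, 6.4))  # ~880 mm
```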
Until next time, clear and dark skies,