Good show that you get it!!!
It is something people all too often forget about when choosing and using cameras, so maybe a quick review is in order.
When working with this concept we need to think in terms of Arc-seconds Per Pixel (APP). The term is mostly self-explanatory: as calculated from your equipment, it is the number of arc-seconds of sky each of your pixels will "see" through your telescope.
It is easy to forget that this is a mathematical concept which doesn't necessarily match reality when you are out under the skies...
The concept of "sampling" which we tend to use is at least roughly based on the Nyquist Theorem, and I think Starizona gives one of the best explanations of it: https://starizona.co...harold-nyquist/ Some people swear that the Nyquist Theorem is the be-all and end-all and must dictate where you go. I'm one of those who believes the theorem is technically inappropriate to astrophotography, but it is still a useful ROT (Rule-Of-Thumb) to keep in mind - as long as we feel free to range outside its dictates.
The net effect is that we tend to think of needing a "box" of 2x2 or 3x3 pixels in the camera devoted to imaging each hypothetically pinpoint star. With fewer pixels you end up with unnecessarily blocky and/or smeared stars and details. This is under-sampling, because you are using too few pixels to "sample" each star.
If you use more pixels than strictly necessary to "sample" each star, you also degrade your image. This is because every time a pixel is "read" there is associated "noise", and this noise increases the statistical uncertainty as to what is signal and what is not.
So what if you used a 4x4 block of pixels to image/sample a star instead of just a 2x2 block? That means you used 4x4=16 pixels instead of 2x2=4 pixels, and since 16/4=4 you used 4 times as many pixels and picked up 4 times as much read-noise variance. Because independent noise adds in quadrature, that works out to the square root of 4, i.e. twice the read noise in the combined signal. If 2x2 pixels would have been enough, then 4x4 pixels means an unnecessary doubling of the read noise.
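To make the noise arithmetic concrete, here is a minimal Python sketch comparing the two blocks (the 1.5 e- per-pixel read noise is just an assumed example value, not anything particular to your camera):

```python
import math

read_noise = 1.5  # assumed per-pixel read noise in electrons (example value only)

for n in (2, 4):
    pixels = n * n
    # independent read noise adds in quadrature: total RMS = sigma * sqrt(N)
    total = read_noise * math.sqrt(pixels)
    print(f"{n}x{n} block: {pixels} pixels, combined read noise {total:.1f} e- RMS")

# 2x2 -> 3.0 e-, 4x4 -> 6.0 e-: four times the variance, twice the RMS noise
```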
Since one can reasonably construe AP as being almost solely about building a great SNR (remember, that means Signal to Noise Ratio), unnecessary noise is something to avoid - and that means avoiding over-sampling.
Calculate your sampling by doing the following: 206.265x(pixel size in microns)/(focal length of the OTA in millimeters). So if you are using a camera with 4.63 micron pixels and an OTA with a 2000mm focal length? 206.265x4.63/2000=0.48 APP.
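If you'd rather let the computer do the arithmetic, here is a minimal Python version of the same formula (the 206.265 constant is just 206,265 arc-seconds per radian divided by 1,000 to reconcile microns with millimeters):

```python
def arcsec_per_pixel(pixel_um: float, focal_length_mm: float) -> float:
    """Image scale in arc-seconds per pixel (APP)."""
    return 206.265 * pixel_um / focal_length_mm

print(f"{arcsec_per_pixel(4.63, 2000):.2f} APP")  # -> 0.48 APP
```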
Simple, right? Well, it turns out that the reality is messy, which is why you shouldn't necessarily get too excited about this under a lot of conditions.
- Assuming that the Nyquist Theorem is roughly applicable (not a bad assumption, even if imperfect), you have to remember that the stars tend to twinkle. That is refraction from atmospheric turbulence, which among other things actually changes the apparent position of the star in the heavens. Over a 20-minute sub-exposure this means the star's light gets spread around a bit, and that alone will tend to make the star appear to be something other than pinpoint. So we have to consider the "seeing" at our location - probably closer to 1.5-2 arc-seconds for most of us - which means we should expect our stars to show an apparent diameter of 1.5-2 arc-seconds at the focal plane of our OTA (see the sketch after this list).
- Even if a target such as that jet is only 1 arc-second wide, that doesn't mean you cannot image it if your seeing and/or your sampling is a bit worse than 1 arc-second. You may still be able to tell that the feature is there, but it won't appear to be just 1 arc-second and will thus be an inaccurate portrayal of the target feature. "Inaccurate" does not necessarily mean "bad" when you are trying to make a pretty picture or are just trying to detect an object.
- Even if you have your sampling just right for the OTA, camera, and conditions, you may be sabotaged by your mount. If the mount is jerking around over a 20-minute sub, it will be distorting your star's shape and size over that sub-exposure. Fortunately, most of us in this sub-forum are reducing our exposure to this problem by doing relatively short exposures.
- Even if everything else is done perfectly, the processing may turn out to be an issue. It may be that the processing algorithms tend to spread out your stars - there is some discussion about this. And even if your post-processing isn't effectively enlarging the stars, the way a file format such as TIFF handles the data can smear it out a bit (FITS doesn't do that). If you believe the processing is an issue, then over-sampling may be helpful.
- If your OTA doesn't have great optics, then the star's light may be aberrated at the focal plane, and this will tend to make the star occupy more pixels.
- If your display won't show all the pixels you are trying to see, then there may be limited value to proper sampling. Say you are trying to display an un-zoomed 4K image of your target on a 640x480 monitor: you aren't going to see much detail anyway, and you could be grossly under-sampled without ever being able to tell the difference. If you have a great 4K image with great sampling displayed on a 4K display which is 20 feet away from you, you won't see the finer details either, and you shouldn't be bothered. This forum limits the size of the image you can display, which means you can only show limited detail unless you show an un-zoomed portion of that big, beautiful image (but you can link to a larger image, so this really isn't a big deal).
- If your camera is built around a modern low-noise CMOS sensor, your read noise may be so low that even major over-sampling may add very little to the overall noise level of the image. So the SNR hit from over-sampling may be negligible.
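As promised back in the "seeing" bullet, here is a quick Python sketch of what seeing does to the pixel budget (the 1.8 arc-second seeing FWHM and the two image scales are just assumed example numbers):

```python
seeing_fwhm = 1.8  # arc-seconds, an assumed "typical" seeing value

for app in (0.48, 0.96):  # example image scales in arc-seconds per pixel
    span = seeing_fwhm / app
    print(f"At {app} APP, a {seeing_fwhm}\" star spans roughly {span:.1f} pixels")

# At 0.48 APP the seeing disk already covers ~3.8 pixels - well past the
# Nyquist-style 2-pixel ROT - so the atmosphere, not the camera, sets the limit.
```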
Some other things to think about?
Planetary AP is all about lucky imaging at long focal lengths with a system that is, by the usual reckoning, clearly over-sampled. It actually works pretty well with our modern CMOS sensors...
CMOSImager is being developed by CygnusBob and I believe will someday be bundled into KStars: https://www.cloudyni...1/#entry8918799 The idea is to get very sharp images through very short exposures combined with very precise autoguiding and processing. It is not really very fast imaging as it stands, but with lots of computing horsepower and bandwidth it could someday be of interest to us. Right now it is an Apple computer project, so I cannot try it out.
If one considers that over-sampling with a modern CMOS sensor has little impact on the SNR, one might think there is no downside to over-sampling. That is not necessarily the case, depending on your goals.
So let's assume you were using the Hyperstarred 8" SCT with an IMX183 camera. Even if it gives you sampling you find quite good, you are also talking about a whole lot of data to process. I remember someone doing a really nice image of M1 (IIRC) using hundreds of short sub-exposures; it took several days for the computer to get through all the stacking. The point is that a huge amount of data means a huge amount of storage space and a huge amount of processing time. That wouldn't mean you couldn't do OAP (Observational AP) of a target with the IMX183 and the Hyperstarred 8" SCT, but you might want to locate the target and then define a small ROI (Region of Interest) so you collect much less data to process.
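To put some rough numbers on the data-volume problem, here is a back-of-the-envelope Python sketch. The 5496x3672 frame and 16 bits per pixel match the IMX183; the 500-sub count and the 1000x1000 ROI are just assumed examples:

```python
bytes_full = 5496 * 3672 * 2   # one 16-bit IMX183 full frame
bytes_roi  = 1000 * 1000 * 2   # one 16-bit frame from an assumed 1000x1000 ROI

subs = 500                     # assumed number of short sub-exposures
print(f"Full frames: {subs * bytes_full / 1e9:.1f} GB")  # ~20.2 GB
print(f"ROI frames:  {subs * bytes_roi / 1e9:.1f} GB")   # ~1.0 GB
```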
One other thing to keep in mind, although it doesn't apply too strongly to what most of us do: those doing CAP (Conventional AP) will sometimes change their sampling in the course of their LRGB imaging. They may shoot their luminance with lots of sampling in order to capture the detail, but then bin the camera for the color (RGB) at reduced sampling, because putting in the color just doesn't need as much detail. The key for us is that for some things we just don't need that much detail.
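Here is a quick Python sketch of what 2x2 binning does to the numbers, reusing the example camera and OTA from earlier (the effective pixel doubles in size, so the image scale doubles and the pixel count drops 4-fold):

```python
def arcsec_per_pixel(pixel_um: float, focal_length_mm: float) -> float:
    return 206.265 * pixel_um / focal_length_mm

pixel_um, fl_mm = 4.63, 2000  # the example camera and OTA from above

for binning in (1, 2):
    app = arcsec_per_pixel(pixel_um * binning, fl_mm)
    print(f"bin {binning}x{binning}: {app:.2f} APP, "
          f"{100 / binning**2:.0f}% of the unbinned pixel count")
```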
So yes, there is a lot going on with sampling. Avoiding under-sampling and over-sampling is an appropriate goal, but sometimes it is difficult to sort out exactly what counts as over- or under-sampling in a given situation. Nyquist gives us a good ROT to use, but we shouldn't be too concerned about staying strictly within the constraints it would seem to impose.