Noah4x4,

I have to disagree with the guy with a Ph.D. in Optical Sciences. There is __not__ always "a benefit of pixels that are very small" and here's why. It's reasonable to conclude that a camera with 3 micron pixels might work well with a diffraction-limited system in space, but let's look at a real system under a real sky. In most locations, it is very difficult to experience 1" seeing conditions, but that's a good number to use as a guess at the "best possible" conditions __using long exposures__ for many locations. For a C11 at F/10, the diameter of a 1" blur spot in the focal plane will be 13.5 microns, so 3 micron pixels will give 4.5 samples across the spot. With the C14, F/11 system that the OP asked about, the spot size will be about 19 microns, so the same camera would provide a sampling rate of 6.3x across the spot. So under the very best conditions, a C11 will be sampled at 4.5x and a C14 will be sampled at 6.3x. Under more common conditions, the sampling rate with a 3 micron camera will be __even higher__.
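If you want to check those numbers yourself, here's a quick Python sketch using the standard plate-scale relation (spot size = focal length × seeing / 206265). The focal lengths are the nominal published values for these scopes:

```python
# Linear size of a seeing-blurred spot at the focal plane.
ARCSEC_PER_RAD = 206265.0

def blur_spot_microns(focal_length_mm, seeing_arcsec):
    """Diameter of the seeing disk at the focal plane, in microns."""
    return focal_length_mm * 1000.0 * seeing_arcsec / ARCSEC_PER_RAD

def samples_per_spot(focal_length_mm, seeing_arcsec, pixel_um):
    """Number of pixels spanning the seeing disk."""
    return blur_spot_microns(focal_length_mm, seeing_arcsec) / pixel_um

# C11 at f/10: 2800 mm focal length; C14 at f/11: 3910 mm (nominal values)
print(round(blur_spot_microns(2800, 1.0), 1))      # ~13.6 micron spot
print(round(samples_per_spot(2800, 1.0, 3.0), 1))  # ~4.5 samples
print(round(blur_spot_microns(3910, 1.0), 1))      # ~19.0 micron spot
print(round(samples_per_spot(3910, 1.0, 3.0), 1))  # ~6.3 samples
```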

So what's wrong with over-sampling? First, the SNR of the measurement due to photon noise is given by the square root of the signal strength. If you compare the signals given by two otherwise identical square pixels with dimensions of 3 microns and 9 microns, the 9 micron pixel will produce a signal that is *9x stronger* and an image with __3x better SNR__. No problem, you say! The read noise is so low with the 3 micron CMOS camera that we can simply bin the data to get the same result as the camera with the larger pixels. Sure, that works, but a hypothetical 16 Mpx, 4096 x 4096 camera with 3 micron pixels then becomes a 1365 x 1365, 1.8 Mpx camera with a field of view that's 1/3 the width of a 4096 x 4096 camera with 9 micron pixels. Both the C14 and the C11 have outstanding field correction and can produce very sharp imaging out to a 52 mm circle. The penalty of using such small pixels is that you waste most of that field. Ah, but you say, I've got much better sampling so I'll be able to extract more information from the data to produce a sharper image. Unfortunately, that's unlikely to work for long exposure imaging, so let's consider that issue next.
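The SNR scaling and the binning trade-off above are easy to demonstrate numerically. This sketch uses an arbitrary illustrative photon flux; only the ratios matter:

```python
import math

def photon_snr(flux_per_um2, pixel_um):
    """Shot-noise-limited SNR for a square pixel: sqrt(collected signal)."""
    signal = flux_per_um2 * pixel_um ** 2
    return math.sqrt(signal)

flux = 100.0  # photons per square micron (arbitrary illustrative value)
snr_3um = photon_snr(flux, 3.0)
snr_9um = photon_snr(flux, 9.0)
print(snr_9um / snr_3um)  # 3.0 -- the 9 micron pixel has 3x the SNR

# Binning the 3 micron camera 3x3 recovers the per-pixel signal,
# but collapses the array dimensions:
n = 4096
print(n // 3)  # 1365 pixels per side, i.e. ~1.8 Mpx after binning
```

The SNR ratio is independent of the flux value chosen, since both pixels see the same illumination.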

When we talk about sampling, we are really talking about how much information can be recovered from the image about the original object. The foundation of sampling theory goes back to Fourier theory, and for a diffraction-limited optical system it's pretty easy to work out the optimum sampling rate (I'll spare you the math here). In the case of a system with atmospheric blurring, it's a bit harder to come up with an exact value, but it's relatively easy to show that a rough rule of thumb using about 2-3 pixels across the point spread function (PSF) of the optical system works well. In the case of an atmospherically blurred system, the form of the PSF becomes close to something called a Moffat distribution. The Moffat distribution does a better job of describing the wings of a blurred star profile than a simple Gaussian function and you can read about it here:
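For reference, the Moffat profile has the peak-normalized form I(r) = [1 + (r/α)²]^(−β). A quick sketch comparing it to a Gaussian matched to the same FWHM shows the heavier wings (β = 2.5 is a typical atmospheric value, used here as an assumption):

```python
import math

def moffat(r, alpha, beta):
    """Peak-normalized Moffat profile."""
    return (1.0 + (r / alpha) ** 2) ** (-beta)

def gaussian(r, sigma):
    """Peak-normalized Gaussian profile."""
    return math.exp(-r * r / (2.0 * sigma * sigma))

beta = 2.5   # typical seeing value (assumption for illustration)
alpha = 1.0
# Match the Gaussian to the Moffat FWHM
fwhm = 2.0 * alpha * math.sqrt(2.0 ** (1.0 / beta) - 1.0)
sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# Two FWHM out in the wings, the Moffat profile is far brighter:
r = 2.0 * fwhm
print(moffat(r, alpha, beta) > 10.0 * gaussian(r, sigma))  # True
```

Despite the different wing behavior, the two profiles have nearly the same core width, which is why the choice between them barely affects the sampling question, as discussed below.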

https://en.wikipedia...at_distribution

https://www.ltam.lu/.../star_prof.html

Whether the distribution is Moffat or Gaussian is a minor point in determining the PSF for a seeing-blurred system, which is what leads to some uncertainty about the best possible sampling rate. In reality, achieving a higher sampling rate to better define the form of the PSF will do very little to improve resolution because the frequency content difference between these various functions is extremely small. Just remember that the form of the image is given by the irradiance of the object convolved with the PSF, which is __always__ a smoothing operation, so oversampling the PSF does virtually nothing to improve image resolution. The only way to effectively improve resolution is to reduce the extent of the PSF itself, which is the goal of short exposure, lucky imaging. We can put aside the math and simply apply the simple rule of thumb for sampling to the C11 to find that the optimum pixel size should be 4.5-6.7 microns. For the C14, it comes out to 6.3-9.5 microns. In my experience at DSW, I've never once been able to reach 1" seeing and I've found that the best sampling limit is virtually always closer to 2x than 3x across an estimated 1" seeing blur disk. Remember that smaller pixels produce a lower SNR, so __unless you are really gaining something in resolution__, it is better to err on the side of larger pixels. On the other hand, if you are using lucky imaging with __very short exposures__ to filter out seeing effects, it's better to err on the side of smaller pixels even though it produces more noise. You'll just have to gather more data to reach an acceptable SNR level. Ultimately there is no free lunch. To get higher sampling (and maybe better resolution), you sacrifice SNR and vice versa.
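Those optimum pixel ranges follow directly from the 2-3 samples rule of thumb. Here's a short sketch that derives them (nominal focal lengths again assumed):

```python
ARCSEC_PER_RAD = 206265.0

def pixel_range_um(focal_length_mm, seeing_arcsec, samples=(2.0, 3.0)):
    """Pixel sizes giving 2-3 samples across the seeing disk (lo, hi)."""
    spot_um = focal_length_mm * 1000.0 * seeing_arcsec / ARCSEC_PER_RAD
    return round(spot_um / samples[1], 1), round(spot_um / samples[0], 1)

print(pixel_range_um(2800, 1.0))  # C11 at f/10: ~(4.5, 6.8) microns
print(pixel_range_um(3910, 1.0))  # C14 at f/11: ~(6.3, 9.5) microns
```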

In my view, if you are doing __long exposure imaging__, you are better off starting with a sensor that covers the whole field of your telescope with appropriately sized pixels. For the C11 and C14 at the native Cassegrain focal plane, the optimum pixel size is in the range of 6-9 microns. I have direct experience on my C14 clearly demonstrating that 9 micron pixels work much better than 6 micron pixels in terms of resolution, SNR, and FOV. Unfortunately, these large sensors are expensive, so cost is a major consideration and that's the real reason that most folks hang small-sensor cameras on these slow telescopes. It works, but it's not optimum.

John

**Edited by jhayes_tucson, 03 November 2018 - 01:48 PM.**