There have been a number of topics lately covering mono and color sensors, resolution, and filters. I had done some preliminary testing to see whether processing one of the color channels from my full-color captures with a QHY 183C camera would yield better resolution than using a synthetic 'L' channel. I decided to run a couple of tests again to confirm my earlier informal findings, and they may be of some benefit to others on this forum.
Since the L, R, G, and B channels are all extracted from the same data set from a single imaging run, differences in seeing between captures are negated. The different color channels will, of course, still be affected differently by the seeing due to their different wavelengths. I performed some of my analysis on my best video run, taken under the best seeing I have personally experienced, along with a run from one of the worst nights of seeing I have bothered to image under.
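For anyone who wants to try this kind of comparison themselves, here is a minimal Python/NumPy sketch of pulling the individual channels out of a debayered capture and building a synthetic 'L'. The file names are hypothetical, and the simple three-channel average is just one illustrative way to make an 'L'; real stacking software may use a weighted sum instead:

```python
import numpy as np
import cv2  # OpenCV, commonly used to read and debayer captures

# Load an already-debayered RGB frame (file name is hypothetical).
frame = cv2.imread("jupiter_stack.png")   # OpenCV loads images as BGR
b, g, r = cv2.split(frame)                # the individual color channels

# One simple way to build a synthetic 'L' channel: average the three
# channels. A weighted sum (e.g. favoring green) is also common.
synthetic_L = ((r.astype(np.float32) +
                g.astype(np.float32) +
                b.astype(np.float32)) / 3.0).astype(np.uint8)

# Each channel (and the synthetic L) can now be sharpened or measured
# separately to compare resolution between them.
cv2.imwrite("red_channel.png", r)
cv2.imwrite("synthetic_L.png", synthetic_L)
```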
How a program debayers an image is interesting, and the algorithms seem to do a pretty good job in many respects. In a topic I posted a while back, I asked what the resolution potential of a color sensor is; some said it should be judged by the green channel, while others said it was determined mostly by the sampling. Based on these tests, I think it is evident that seeing conditions, and their wavelength-dependent effects, play the primary role in the resolution limits of color sensors, with sampling playing a supporting role.
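To illustrate what the simplest form of debayering actually does, here is a minimal bilinear demosaic sketch in Python/NumPy for an RGGB pattern. This is only the most basic approach, shown for intuition; the algorithms in real capture and stacking software are considerably more sophisticated (edge-aware interpolation and so on):

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_debayer_rggb(raw):
    """Bilinear demosaic of a 2-D RGGB Bayer mosaic into an RGB image.

    Each missing color value is filled in by averaging the nearest
    pixels of that color. Real software does much better than this.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True  # red sites
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True  # blue sites
    g_mask = ~(r_mask | b_mask)                                 # green sites

    # Kernel for red/blue: averages 2 or 4 neighbors at missing sites,
    # and leaves measured sites unchanged (center weight 1.0).
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    # Kernel for green: averages the 4 orthogonal green neighbors.
    k_g  = np.array([[0.0,  0.25, 0.0 ],
                     [0.25, 1.0,  0.25],
                     [0.0,  0.25, 0.0 ]])

    raw = raw.astype(np.float32)
    out = np.zeros((h, w, 3), np.float32)
    out[..., 0] = convolve(raw * r_mask, k_rb)  # red plane
    out[..., 1] = convolve(raw * g_mask, k_g)   # green plane
    out[..., 2] = convolve(raw * b_mask, k_rb)  # blue plane
    return out
```

Note that even in this simple version, two of every four output red values are averages of only two real measurements, and one of every four is an average of four diagonal neighbors, which is part of why the channels behave differently for resolution.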
To better understand how my QHY 183C sensor differs from the mono version, I found a chart that shows the quantum efficiency across wavelengths for both the color and mono versions of my sensor. You can clearly see why the mono version needs filters to produce effective images, as that is quite a range of light to capture at one time. Even when producing a simple black-and-white image, narrowing the band of wavelengths is important to getting a sharp result, since the seeing affects each wavelength differently.
It is also apparent that you generally wouldn't want to use a filter for planetary or lunar imaging with the color sensor. Since the Bayer matrix uses two green, one blue, and one red pixel per 2x2 cell, I don't think an additional blocking filter would be useful. For example, with a filter around 650 nm, essentially only the red pixels, roughly 1/4 of the pixels on the camera, would be collecting any meaningful light.
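To make that 1/4 figure concrete, here is a trivial sketch counting the red-sensitive sites in an RGGB mosaic. The dimensions are the nominal IMX183 pixel counts and are only an assumption for illustration:

```python
import numpy as np

# Tile an RGGB pattern across a sensor-sized array and count the
# fraction of pixels that would see light behind a ~650 nm filter
# (i.e. only the red sites). 5496 x 3672 is the nominal IMX183 size.
h, w = 3672, 5496
red_mask = np.zeros((h, w), bool)
red_mask[0::2, 0::2] = True        # one red pixel per 2x2 Bayer cell

print(f"Fraction of pixels collecting light: {red_mask.mean():.2f}")  # 0.25
```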