
Further Testing and Thoughts on Resolution With Full Color Sensors

6 replies to this topic

#1 aeroman4907

    Apollo

  • topic starter
  • Posts: 1054
  • Joined: 23 Nov 2017
  • Loc: Castle Rock, Colorado

Posted 22 September 2019 - 08:44 AM

There have been a number of topics lately covering mono and color sensors, resolution, and filters.  I had previously done some preliminary testing to see whether processing one of the color channels from my full-color captures with a QHY 183C camera would yield better resolution than using a synthetic 'L' channel.  I decided to repeat a couple of those tests to confirm my initial findings, and the results may be of some benefit to others on this forum.

 

Since the L, R, G, and B channels being compared are all extracted from the same data set from a single imaging run, differences in seeing between the channels are negated.  Of course, the different color channels will still be affected differently by the seeing because of their different wavelengths.  I performed my analysis on the video run with the best seeing I have personally experienced, along with one from among the worst nights of seeing I have bothered to image under.

 

How a program debayers an image is interesting, and the algorithms seem to do a pretty good job in many respects.  In a topic I posted a while ago, I asked what the resolution potential of a color sensor was; some stated that judging resolution based upon green light might be correct, while others stated it was mostly a matter of sampling.  Based on my results, I think it is evident that the seeing conditions, and their differing effects on the various wavelengths of light, play the primary role in setting resolution limits with color sensors, with sampling playing a supporting role.
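As a rough illustration of why wavelength matters so much, here is a small sketch using the standard wavelength^(6/5) scaling of the Fried parameter r0 (a common measure of seeing); the 10 cm value at 500 nm is an assumed example, not a measurement from my site.

```python
# The Fried parameter r0 scales with wavelength as lambda**(6/5), so longer
# wavelengths see a steadier atmosphere.  The baseline r0 of 10 cm at 500 nm
# below is an assumed example value.

def fried_parameter_cm(wavelength_nm, r0_at_500nm_cm=10.0):
    """Scale an assumed Fried parameter from 500 nm to another wavelength."""
    return r0_at_500nm_cm * (wavelength_nm / 500.0) ** 1.2

for nm in (450, 550, 650, 850):  # roughly B, G, R, and near-IR
    print(f"{nm} nm: r0 ~ {fried_parameter_cm(nm):.1f} cm")
```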

 

To better understand how my QHY 183C sensor differs from the mono version, I found a chart that shows the Quantum Efficiency across wavelengths for both the color and mono versions of my sensor.  You can clearly see why the mono version of the sensor needs filters to produce effective images, as that is quite a range of light to capture at one time.  Even when producing a simple black-and-white image, narrowing the range of wavelengths would be important to getting a well-resolved image.

 

It is also apparent that you generally wouldn’t want to use a filter for planetary or lunar imaging with the color sensor.  Since the Bayer matrix uses two green, one blue, and one red pixel per 2x2 array, I don’t think a further blocking filter would be useful.  For example, if one used a narrow filter around 650 nm, only about ¼ of the pixels on the camera would collect any meaningful light, as the sketch below illustrates.
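Here is a quick sketch of that pixel accounting, assuming an RGGB mosaic (the exact layout varies by sensor):

```python
import numpy as np

# One 2x2 cell of the Bayer mosaic (RGGB assumed here for illustration).
bayer_cell = np.array([["R", "G"],
                       ["G", "B"]])

colors, counts = np.unique(bayer_cell, return_counts=True)
for c, n in zip(colors, counts):
    print(f"{c}: {n}/4 of all pixels")

# A narrow ~650 nm (deep red) filter would leave only the single R site
# per cell receiving meaningful signal, i.e. about 1/4 of the sensor.
```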

 

183C vs 183M.png


  • james7ca and Jenz114 like this

#2 aeroman4907

    Apollo

  • topic starter
  • Posts: 1054
  • Joined: 23 Nov 2017
  • Loc: Castle Rock, Colorado

Posted 22 September 2019 - 08:47 AM

The first example is from a night of my best imaging.  The seeing wasn’t great by any means, but it was the best I have experienced in my area.  I would say the Pickering rating was about 5, and it wasn’t too difficult to confirm the scope was collimated and properly focused.  The example is from around Rima Hadley, heavily cropped from the original panel.  I used PI to extract the Luminance and respective color channels.  I then determined the best processing workflow settings with the Luminance channel and applied the exact same settings to the color channels.  My processing involved Lucy-Richardson deconvolution with AstraImage (AI) as a plugin to PS, followed by TGV noise reduction in PI.  The next steps were curves adjustments in PS, some contrast enhancement with AI, and a final round of light sharpening in PS.
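For anyone who wants to experiment with this kind of channel comparison outside of PI and PS, here is a rough Python approximation of the extract-and-deconvolve step; the filename, PSF width, and iteration count are placeholder assumptions rather than my actual settings.

```python
import numpy as np
from skimage import color, io
from skimage.restoration import richardson_lucy

# Load the stacked result from AS!3; "stack_rgb.tif" is a placeholder name,
# and a 16-bit TIFF is assumed.
rgb = io.imread("stack_rgb.tif").astype(np.float64) / 65535.0

# Extract the individual color channels plus a synthetic luminance channel
# (rgb2gray applies a green-heavy weighted average, similar in spirit to
# a synthetic L).
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
lum = color.rgb2gray(rgb)

# Build a Gaussian PSF for the deconvolution.  The width (sigma = 1.5 px)
# is an assumed placeholder, not a measured value.
ax = np.arange(-7, 8)
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 1.5**2))
psf /= psf.sum()

# Apply identical Lucy-Richardson settings to every channel, as in the test.
channels = {"L": lum, "R": r, "G": g, "B": b}
deconvolved = {name: richardson_lucy(chan, psf, 20)
               for name, chan in channels.items()}
```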

 

The first series of images shows how they appear prior to deconvolution or any other processing (aside from the initial stack with AS!3).  The images appear quite similar, but there are some differences that can be seen upon close inspection.  When I first performed my testing a few months ago, I had hoped that using one of the color channels (such as red or green) might provide better results than processing the L channel.  For the images below, I saw a very slight increase in contrast and an ever-so-slight increase in resolution in R and G versus L.  This is better viewed by blinking the images when they are stacked, but for presentation purposes in this thread, I have shown the images side by side.  Clearly the blue channel has less contrast and is not as well resolved, which would be expected unless the seeing were excellent, which never happens where I live.

 

Good-Comparison-Unprocessed.jpg

 

After applying the processing, things get a bit more interesting.  The processed L channel is the sharpest, with the least amount of noise; its sharpness is just slightly better than the G channel's.  Red looks pretty sharp as well, but suffers from excessive noise.  Blue is interesting to me: where details do come through, such as Rima Hadley, they are finer, but have less contrast.  Still, the overall resolution is lower, and of course the noise is quite high as well.

 

Good-Comparison-Processed.jpg

 

So with regard to this image, I think the Bayer array shows its weakness in the R channel: only ¼ of the pixel array actually imaged in red, and the remaining 75% of the pixels were interpolated for the R channel.  I think this explains the ‘noise’ in the processed image.  The same can be said for the B channel, which also has significant noise, in addition to the fact that the B channel is more affected by less-than-ideal seeing conditions.  Green has double the pixels in the array, so there is less interpolation and thus less ‘noise’.  I think seeing on that night was sufficiently good for my 8” scope that if I had had a mono camera, a green filter probably would have been the best choice as well.

 

Finally, since the L channel pulls information from all the color channels, the effects of debayering are reduced, and you get a result that is slightly better than the best single color channel you can obtain from the color sensor.  Given the larger number of green pixels in the array, I would say the performance of the system will skew toward the performance of the G channel.
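As a back-of-the-envelope illustration of that skew (assuming a synthetic L is roughly an average over the Bayer sites, which is a simplification):

```python
# Fraction of Bayer sites per color: green's 2x site count gives it a
# double weight in a simple site-averaged synthetic L.
w_r, w_g, w_b = 1/4, 2/4, 1/4
print(f"Bayer-weighted: L = {w_r:.2f}*R + {w_g:.2f}*G + {w_b:.2f}*B")

# Standard luminance formulas weight green even more heavily, e.g. Rec. 709:
print("Rec. 709:       L = 0.2126*R + 0.7152*G + 0.0722*B")
```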

 


  • Jenz114 likes this

#3 aeroman4907

    Apollo

  • topic starter
  • Posts: 1054
  • Joined: 23 Nov 2017
  • Loc: Castle Rock, Colorado

Posted 22 September 2019 - 08:49 AM

For additional comparison, I performed a second series of tests on a night of quite poor seeing.  The only reason I imaged on this night was to tune the color levels on my camera more accurately.  As I recall, the Pickering rating was about 2.  Determining proper collimation and focus was nigh impossible, so I simply made my best effort.

 

Comparing the different channels below prior to processing, it is difficult to see which image might deconvolve better, mostly because the seeing, and therefore the capture results, were quite poor.  Perhaps I see R having the slightest advantage, due to what I perceive as a slight increase in the contrast of features.

 

LQ-Comparison-Unprocessed.jpg

 

Now reviewing the processed images below, I would again give the nod to the L channel: better detail than the G channel with less noise overall, as well as better contrast.  Here the R channel appears to have the most detail and contrast, but noise is very problematic.  Blue, of course, is dead last.  These results are to be expected considering the very poor seeing conditions, which dictate that channels closer to the IR end of the spectrum are less affected by the seeing.  Clearly, if I had imaged with a mono camera, I would have needed to go deep into the IR end of the spectrum to get a decent image.  The flip side of the coin is the commensurate decrease in resolution that comes with imaging at longer wavelengths, as a quick diffraction calculation shows below.

 

LQ-Comparison-Processed.jpg
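To put rough numbers on that tradeoff, here is a simple Rayleigh-criterion calculation for an 8-inch (203 mm) aperture; the wavelengths are just illustrative choices:

```python
import math

# Rayleigh criterion (theta = 1.22 * lambda / D) for a 203 mm aperture,
# showing the resolution cost of retreating toward the IR in poor seeing.
D = 0.203  # aperture diameter in meters

for nm in (450, 550, 650, 850):  # roughly B, G, R, and near-IR
    theta_arcsec = math.degrees(1.22 * nm * 1e-9 / D) * 3600
    print(f"{nm} nm: {theta_arcsec:.2f} arcsec")
```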

 

One more thought about processing the R channel alone: to combat the noise in that image, I had to increase the noise reduction to the point that it reduced the details again and looked excessively processed.  So once more, the L channel produced the best overall result.

 

As a last thought, I would say that a color sensor can compete in resolution with a mono camera using a filter.  I think that is a fair statement except where seeing is poor enough that a mono camera requires an IR filter.  I also think that if a mono camera uses an IR filter when green would have been the appropriate choice, then a color sensor would perform better, but that would be due to user error on the mono camera.  Of course, a mono camera with the properly selected filter will not have any Bayer effects, so I expect it to have an edge overall; still, I think the color camera could come close to those results, especially if the appropriate filter for the mono camera would have been green.

 


  • Jenz114 likes this

#4 Jenz114

    Explorer 1

  • Posts: 54
  • Joined: 15 Aug 2019
  • Loc: Southwest Missouri, USA

Posted 22 September 2019 - 06:36 PM

Wow! Thank you for such an interesting presentation. I feel like it was written especially for me, as this topic has been on my mind a lot lately.

I would also be interested in working with the separate color channels captured by my color camera if possible. What is the "PI" program you used? If I wanted to work with the green channel, for example, my first idea (which is probably misguided) was to zero out the red and blue using RGB balance in Registax after processing the avi file in AutoStakkert.


  • aeroman4907 likes this

#5 Jenz114

    Explorer 1

  • Posts: 54
  • Joined: 15 Aug 2019
  • Loc: Southwest Missouri, USA

Posted 22 September 2019 - 07:07 PM

Another thought here about the color channels. For better results, lunar imagers often use an orange filter, which I'm assuming is something like a Wratten #21 or Baader Orange. Both of these example filters seem to reject wavelengths below about 550 nm, which is approximately where QE drops off in the blue channel. Would ignoring or deleting the blue channel from the data have approximately the same effect as using an orange filter, by discarding this lower-sharpness blue data that wouldn't get through such a short-wavelength rejection filter anyway?

Edit: I am going to say a filter would still be the superior method, as the green and red channels would still be sensitive below the 550 nm mark in my example. Channel rejection, though, would have the advantage of allowing old data to be reprocessed, or improving results for those who don't already have the appropriate filter.


Edited by Jenz114, 22 September 2019 - 07:24 PM.

  • aeroman4907 likes this

#6 aeroman4907

    Apollo

  • topic starter
  • Posts: 1054
  • Joined: 23 Nov 2017
  • Loc: Castle Rock, Colorado

Posted 23 September 2019 - 06:53 AM

Jenz114 wrote:

Wow! Thank you for such an interesting presentation. I feel like it was written especially for me, as this topic has been on my mind a lot lately.

I would also be interested in working with the separate color channels captured by my color camera if possible. What is the "PI" program you used? If I wanted to work with the green channel, for example, my first idea (which is probably misguided) was to zero out the red and blue using RGB balance in Registax after processing the avi file in AutoStakkert.

I am glad this helped you some, Anthony, and thanks for the feedback and the compliment!

 

PI stands for PixInsight.  It is more of a program for processing deep sky object (DSO) images like galaxies, nebulae, etc., but it has some features that I like to use when processing lunar images.  If you have a program like Photoshop (PS), you can also easily do the same thing by going to the Channels tab and copying the data from there.  I just happen to like using PI for these functions and for creating the synthetic luminance channel.  If you don't have PS either, I would imagine most photo processing programs have some ability to separate the color channels.
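For example, here is about the smallest possible version of that channel split in Python, using the Pillow library (the filename is just a placeholder):

```python
from PIL import Image

# Split a stacked RGB image into its three color channels.
# "stacked.png" is a placeholder name for the AutoStakkert output.
rgb = Image.open("stacked.png").convert("RGB")
r, g, b = rgb.split()               # three true single-channel images
g.save("green_channel.png")         # e.g., sharpen this alone in Registax
```

That gives you a true single-channel image to work with, which is cleaner than zeroing out red and blue inside an RGB image.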



#7 aeroman4907

    Apollo

  • topic starter
  • Posts: 1054
  • Joined: 23 Nov 2017
  • Loc: Castle Rock, Colorado

Posted 23 September 2019 - 06:57 AM

Jenz114 wrote:

Another thought here about the color channels. For better results, lunar imagers often use an orange filter, which I'm assuming is something like a Wratten #21 or Baader Orange. Both of these example filters seem to reject wavelengths below about 550 nm, which is approximately where QE drops off in the blue channel. Would ignoring or deleting the blue channel from the data have approximately the same effect as using an orange filter, by discarding this lower-sharpness blue data that wouldn't get through such a short-wavelength rejection filter anyway?

Edit: I am going to say a filter would still be the superior method, as the green and red channels would still be sensitive below the 550 nm mark in my example. Channel rejection, though, would have the advantage of allowing old data to be reprocessed, or improving results for those who don't already have the appropriate filter.

I didn't include a test showing the combination of green and red data with blue excluded, but without going into a further presentation on the matter, I found that the combined green-and-red result also did not yield a better outcome than simply processing the Luminance data.  Keep in mind that the color information is still useful, but I use Luminance to process for fine details.  I process the color information separately and then combine it with the Luminance data in PS, which is similar to the method used to process DSO images.  A rough sketch of the green-plus-red combination is below.
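Something like this sketch is what I mean by combining green and red while excluding blue; the filenames and the equal 50/50 weighting are assumptions for illustration only:

```python
import numpy as np
from skimage import io

# Combine the R and G channels of a stacked RGB image, dropping B.
# "stack_rgb.tif" is a placeholder name; a 16-bit stack is assumed.
rgb = io.imread("stack_rgb.tif").astype(np.float64)
green_red = 0.5 * rgb[..., 0] + 0.5 * rgb[..., 1]   # R and G, no B
io.imsave("green_red_only.tif", green_red.astype(np.uint16))
```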



