
CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Modern Color vs Mono Sensor Sensitivity for Narrowband?

12 replies to this topic

#1 MalVeauX

MalVeauX

    Cosmos

  • *****
  • topic starter
  • Posts: 7,509
  • Joined: 25 Feb 2016
  • Loc: Florida

Posted 03 July 2020 - 09:21 AM

Hey all,

 

Interesting that more and more color cameras seem to be pushed lately rather than mono. Not just for DSO (cooled color, no amp glow, used with narrowband filters, but still a color sensor), but also for solar system work. The 462MC, for example, is color yet more sensitive at long wavelengths, judging by all the chatter comparing that new sensor to the classic 224 and 290 sensors.

 

So here's the question....

 

Is there a relatively good metric or means to show the sensitivity difference between the modern color sensor versus the mono sensor?

 

I ask because while these color sensors are arriving with better long-wavelength sensitivity, the whole idea is that we're using narrowband filters to even get to the IR wavelengths. So it raises the question.... what's the sensitivity difference, real world, not theoretical, of say a 290MM with a 742nm IR filter compared to something like the upcoming 462MC with the same 742nm IR filter? I realize the graphs show a sensitivity difference, but it also makes me wonder.... using the same sensor, such as the 290MM and 290MC, both with a 742nm IR filter, what is the actual real-world difference in sensitivity, i.e., how much gain difference will be needed to get the same histogram?

 

Things I'm very curious about:

 

IMX183MM and IMX183MC with narrowband filters (610nm to 742nm): what's the real-world difference in sensitivity (i.e., how much gain is needed to compensate to get the same histogram)?

290MM and 290MC with the same filter and question as above?

 

This is leading into questions on sensors like the 462MC and the 533MC, which are more sensitive than earlier color sensors. So I'm curious how they compare to a mono sensor for narrowband filter captures.
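To put the "how much gain to match the histogram" question in numbers: assuming a ZWO-style gain scale where one unit is 0.1 dB (so 60 units is 6 dB, a factor of two in signal), a rough conversion from a signal ratio to a gain offset might look like this (a sketch, not camera-specific advice):

```python
import math

def extra_gain_units(mono_signal, color_signal, db_per_unit=0.1):
    """Extra gain (in 0.1 dB units, ZWO-style) the color camera would
    need to match the mono camera's histogram, treating gain as a
    plain linear amplifier."""
    ratio = mono_signal / color_signal   # how much weaker the color signal is
    db = 20.0 * math.log10(ratio)        # linear ratio -> decibels
    return db / db_per_unit              # decibels -> camera gain units

# If the Bayer matrix halved the narrowband signal, matching would
# cost roughly 60 gain units (6 dB):
print(round(extra_gain_units(1.0, 0.5)))  # 60
```

Whether one gain unit really is 0.1 dB depends on the camera; check the vendor's gain chart before trusting the conversion.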

 

Thanks!

 

Very best,



#2 jmorales21

jmorales21

    Vostok 1

  • -----
  • Posts: 101
  • Joined: 21 May 2015
  • Loc: Chicago, IL

Posted 03 July 2020 - 11:07 AM

The sensors are really the same, they just add micro red, green and blue filters to create the Bayer matrix.
When you use narrowband filters, only those pixels under a micro-filter that passes that wavelength will record any data at all. So whatever narrowband filter you use (H-alpha, OIII or any other), you will only register useful data on a fraction of the pixels in your array: in a standard RGGB pattern, a quarter of the pixels for a line landing in red or blue, and half for green. This will definitely be noticeable in your images.
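As a sanity check on those fractions: in a standard RGGB Bayer cell, one pixel in four sits under a red micro-filter, two under green, and one under blue. A trivial sketch:

```python
# Standard RGGB Bayer cell: one red, two green, one blue per 2x2 block.
cell = ["R", "G",
        "G", "B"]

# Fraction of pixels under each colour of micro-filter
fractions = {ch: cell.count(ch) / len(cell) for ch in "RGB"}
print(fractions)  # {'R': 0.25, 'G': 0.5, 'B': 0.25}
```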

 

Hope I was able to explain myself.

 

Clear skies. Jose



#3 ButterFly

ButterFly

    Apollo

  • *****
  • Posts: 1,003
  • Joined: 07 Sep 2018

Posted 03 July 2020 - 11:09 AM

You won't get the same histogram, ever.  The QE of the underlying pixel is the same, but there is a Bayer matrix over the color ones.  The "net QE" for that wavelength at each pixel thus varies.  The resulting signal with the narrowband filter will be based on the transmission of the Bayer matrix over that pixel at that wavelength.

 

Many of those graphs conveniently end at both sides of the visible spectrum.  That makes the kind of planning you're after very difficult.  The IR response of green can be huge.  If you had the transmission info for all the channels at some wavelength, the best you can do is scale (with some average over the band for each Bayer element), superpixel, and dither like crazy.

 

Your task is not impossible.  You just need the Bayer characteristics for those wavelengths and a lot of data so you can regain some lost resolution with drizzle.  To get a sensible gain difference, you need to compare with binned mono.
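The superpixel approach mentioned above can be sketched in a few lines; the per-site weights here are hypothetical placeholders for the Bayer transmission corrections, which would have to be measured for a real sensor:

```python
import numpy as np

def superpixel(raw, weights=(1.0, 1.0, 1.0, 1.0)):
    """Collapse each 2x2 RGGB cell of a raw Bayer frame into a single
    'superpixel' by a weighted sum.  The weights stand in for per-channel
    transmission corrections at the filter's wavelength (hypothetical
    values -- measure your own sensor)."""
    r  = raw[0::2, 0::2] * weights[0]   # red sites
    g1 = raw[0::2, 1::2] * weights[1]   # first green site
    g2 = raw[1::2, 0::2] * weights[2]   # second green site
    b  = raw[1::2, 1::2] * weights[3]   # blue sites
    return r + g1 + g2 + b              # half resolution in each axis

raw = np.ones((4, 6))                   # toy 4x6 raw frame
out = superpixel(raw)
print(out.shape)  # (2, 3)
```

The output has half the resolution in each axis, which is why the post above pairs this with heavy dithering and drizzle, and compares against binned mono.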



#4 MalVeauX

MalVeauX

    Cosmos

  • *****
  • topic starter
  • Posts: 7,509
  • Joined: 25 Feb 2016
  • Loc: Florida

Posted 03 July 2020 - 11:14 AM

Thanks all,

 

This is sort of what I'm trying to get information on. In theory, only a third of the signal is reaching the sensor due to the Bayer matrix, as you pointed out. But in reality, it doesn't seem to correlate to this with any precision. For example, set an exposure time of 10ms in narrowband, let's say 656nm: if the mono sensor is at 0 gain, what gain would the color sensor need to accomplish the same histogram with that exposure time? Or if the gain on the mono sensor was 100 or even 200, what would the real-world histogram-matching gain be on the color sensor?

 

If I had two identical cameras, one mono and one OSC, I would do some tests myself, but I don't have them; I have all mono sensors. I know a lot of people have both mono and color versions of the same camera. So I'm curious about their real-world results and thoughts on the color sensor relative to the mono sensor with narrowband use (such as red and IR). Again, in theory, 1/3rd of the information is coming through the color sensor with a narrowband filter most likely. But does that really result in that difference in gain, with other parameters being equal, being that linear? So far, I've not seen a linear relationship; more often the difference seems smaller than theory predicts. I may be wrong there. That's why I'm asking. I'm curious how close a modern color sensor is, in sensitivity, knowing the matrix is present, compared to a mono sensor, both receiving a narrowband signal (say red or IR) and all other parameters being equal. I'm curious about the actual gain difference needed to get a similar, if not identical, histogram in that signal wavelength.

 

Very best,


  • jmorales21 likes this

#5 Tom Glenn

Tom Glenn

    Mercury-Atlas

  • -----
  • Posts: 2,637
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 03 July 2020 - 11:34 AM

If the sensor is the same, then the sensitivity of that sensor is the same no matter what camera you put it in.  When you use a color camera, you see the histogram for each RGB channel in the raw live window.  Light of a particular wavelength is only reaching a fraction of the pixels in the sensor because of the overlying Bayer matrix filters, but the sensitivity of those pixels to a particular wavelength of light is the same no matter where they are located, so whether or not you get a response depends upon the transmission characteristics of the filters in the Bayer matrix.

Also consider that the debayering process uses interpolation, so that an algorithm guesses what the response of each pixel to RGB would have been, even though each pixel only sees one color and not all 3.  This introduces the possibility of a lack of precision compared to a mono camera, but this is not the same as the sensitivity of the sensor itself.  Although each pixel on a color camera only responds to one color, the final image has three RGB values for each pixel, because of the debayering calculations.

In practice, the loss of precision here is usually less than the errors introduced by other limiting factors in imaging, most notably the seeing, which is why color cameras produce images that are nearly identical to mono cameras.  However, because the Bayer matrix is fixed in place, color cameras do lack versatility for the sake of simplifying the imaging process.
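As a toy illustration of that interpolation step, here is a minimal bilinear-style reconstruction of the red channel from an RGGB frame. Real debayering algorithms (including AS!3's) are far more sophisticated; this only shows that non-red sites receive guessed values:

```python
import numpy as np

def interpolate_red_bilinear(raw):
    """Toy red-channel reconstruction from an RGGB raw frame (even
    dimensions assumed).  Only every other pixel on every other row
    measured red; every pixel gets the mean of measured red values in
    its 3x3 neighbourhood (np.roll wraps at the edges -- fine for a toy)."""
    h, w = raw.shape
    red = np.zeros((h, w))
    red[0::2, 0::2] = raw[0::2, 0::2]    # the only sites that measured red
    weight = np.zeros((h, w))
    weight[0::2, 0::2] = 1.0
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for dy in (-1, 0, 1):                # accumulate 3x3 neighbourhoods
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(red, dy, axis=0), dx, axis=1)
            cnt += np.roll(np.roll(weight, dy, axis=0), dx, axis=1)
    return acc / cnt                     # guessed value where no red exists

frame = np.full((4, 4), 100.0)           # uniform scene: guesses match reality
res = interpolate_red_bilinear(frame)
print(res[0, 0], res[1, 1])              # 100.0 100.0
```

On a uniform scene the guesses are exact; on real detail they are not, which is the precision loss described above.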



#6 jmorales21

jmorales21

    Vostok 1

  • -----
  • Posts: 101
  • Joined: 21 May 2015
  • Loc: Chicago, IL

Posted 03 July 2020 - 11:37 AM

You will also lose resolution due to only using pixels that are more spaced out. The software may interpolate data for the missing chunks, but you want real data, not a guess based on a mathematical function that may not be close to the real deal.



#7 Tom Glenn

Tom Glenn

    Mercury-Atlas

  • -----
  • Posts: 2,637
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 03 July 2020 - 11:49 AM

You will also lose resolution due to only using pixels that are more spaced out. The software may interpolate data for the missing chunks, but you want real data, not a guess based on a mathematical function that may not be close to the real deal.

This is true, although because of the common limitations in planetary imaging, typically the seeing conditions, this loss in resolution is almost never noticeable in final image outcomes.  This speaks to the particular limitations we encounter in planetary imaging.  If we were always imaging under ideal conditions above the atmosphere, without any turbulence, or clouds to contend with, or elevation above the horizon to worry about, we would probably all want to use monochrome sensors to improve the outcome.  But given the real world we find ourselves in, the color cameras do have much to offer.


  • jmorales21 likes this

#8 ButterFly

ButterFly

    Apollo

  • *****
  • Posts: 1,003
  • Joined: 07 Sep 2018

Posted 03 July 2020 - 12:32 PM

Again, in theory, 1/3rd of the information is coming through the color sensor with a narrowband filter most likely. But does that really result in that difference in gain, with other parameters being equal, being that linear?

You can't just say 1/3 in theory either.  If the green of the Bayer matrix passed only green, then the green pixels would contribute nothing at all in the IR.  But the green of the Bayer does NOT only pass green.  That is where the data on the green filter's transmission within the narrowband filter's band in the IR would be used.

 

For example, there is some underlying QE of the sensor at the filter's band.  That will not change.  However, if the Bayer matrix's green response at the filter's band can be approximated by some straight-line fit, you can get an average transmission over the filter's band for green pixels.  The green pixel would then show photons * QE * average transmission, instead of photons * QE for mono.  The gain difference for green is whatever it takes to get the same ADUs out.  The average transmissions in the filter's band are likely different for red and blue than for green.  If the filter's band is too broad, a straight-line fit may not be appropriate at all.  The total gain difference would then be some average of the average transmissions for two greens, a red, and a blue, as compared to four pixels in mono, to get the same ADUs.

 

The data may be out there for a particular Bayer matrix.  If you can find that, you can get ballpark estimates.
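A worked example of that averaging, using made-up transmission numbers for a 656nm line (the real values must come from the sensor's actual Bayer curves):

```python
import math

# Hypothetical Bayer micro-filter transmissions at a 656 nm line
# (illustrative numbers only -- not from any datasheet):
t_red, t_green, t_blue = 0.85, 0.10, 0.02

# Mean signal of one 2x2 RGGB cell relative to four mono pixels:
# two greens, a red, and a blue, averaged together.
color_rel = (t_red + 2 * t_green + t_blue) / 4.0   # 0.2675

# Gain offset needed to get the same ADUs back out, in decibels:
extra_db = 20.0 * math.log10(1.0 / color_rel)      # ~11.5 dB

print(round(color_rel, 4), round(extra_db, 1))
```

With these particular (invented) numbers the cell collects about 27% of what four mono pixels would, costing roughly 11.5 dB of gain to match; different assumed transmissions move that figure a lot, which is exactly why the real Bayer data matters.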



#9 DMach

DMach

    Apollo

  • -----
  • Posts: 1,473
  • Joined: 21 Nov 2017
  • Loc: The most light-polluted country in the world :(

Posted 04 July 2020 - 11:54 PM

Something that has always bubbled away in the back of my mind:

 

Whilst for any given still frame the debayering process will have to make a best guess at the missing pixels, in planetary work at least:

  1. We stack hundreds or thousands of individual frames, and
     
  2. There will be a natural dithering of data across the sensor. (The exact same pixel region on your target is extremely unlikely to land on the same position on the sensor from one frame to the next.)

Would this not also help to improve the accuracy of the interpolation in the final result?



#10 Tom Glenn

Tom Glenn

    Mercury-Atlas

  • -----
  • Posts: 2,637
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 05 July 2020 - 12:08 AM

Darren, you are 100% correct.  In fact, AS!3 uses a "smart debayering" algorithm that makes use of the dithered frames, and this is why it is always recommended to let AS!3 debayer the raw files rather than use some other program to do so.  

 

Anyone who follows terrestrial photography may be aware of Sony's "pixel shift" technology (and similar methods) which has been around for a few years now.  This is found in some of Sony's mirrorless cameras, and the sensor shifts by 1 pixel in four directions and takes an image for each position, then combines the outcomes in proprietary software.  The idea is precisely to get real color information in each channel for each pixel, and avoid any imprecision from the bayer matrix.  In practice, for terrestrial photography, this technology is really more of a marketing gimmick, with the exception of product placement photography, because real world subjects move (even landscapes).  But the idea is valid for static subjects, and with planetary imaging we have thousands of frames, and so there is very accurate information for each pixel. 


  • DMach likes this

#11 coralaholic

coralaholic

    Vostok 1

  • -----
  • Posts: 173
  • Joined: 05 May 2019

Posted 05 July 2020 - 01:14 AM

One point that may be worth highlighting here is that IR can pass through the Bayer matrix of the 462C, so at IR wavelengths it can actually be considered mono... the metric you are looking for to compare the sensitivity of modern "colour" sensors needs to cater for this scenario.


  • MalVeauX likes this

#12 MalVeauX

MalVeauX

    Cosmos

  • *****
  • topic starter
  • Posts: 7,509
  • Joined: 25 Feb 2016
  • Loc: Florida

Posted 05 July 2020 - 10:55 AM

Thanks,

 

I realize this seems simple for several people but it's so hard to find good examples of real world results.

 

I'm mainly trying to see: if I take a 533MC sensor, for example, and sample at 5ms at F10, and compare it to the same sensor but mono, with gain as the only floating variable, I really would like to know the real-world gain difference needed to achieve the same (or similar) histogram at one wavelength (let's say 656nm for simplicity), using it as a narrowband sensor with single-wavelength imaging through narrowband filters. I realize there is not a mono version of this sensor available at this time, but its tech is heavily based on the IMX183 sensor from what I understand, and there are the 183MC and 183MM, so I'm very curious if anyone has real-world examples of this. I get that it cannot be calculated accurately for a real-world result; an actual test needs to be done on the same system with equal parameters and components but simply two different sensors.

 

I'm mainly just trying to see how much gain it would cost me, and what noise/dynamic range, to use a modern color sensor instead of a mono sensor for narrowband imaging. My primary application is solar, but it's essentially the same for planets or the moon from a narrowband standpoint and result, other than, obviously, the sun being much brighter, so transmission is totally different. I am used to using zero gain for solar. I'm curious though whether using a modern color sensor that is fairly sensitive to red and IR in general, like the 533MC, would cost me a lot of gain due to the lowered end transmission through the matrix, or if it's negligible.

 

There's of course lots of people using both color & mono. I'm just looking to see the difference in real world results. If it costs me 60~80 gain, that's not even worth losing sleep over. I'm just curious what it actually is.

 

I currently use:

 

290 Mono

IMX183 Mono

IMX174 Mono

224MC Color

 

I'm considering a new sensor that is 1" or larger, such as the 533MC, or the 1600MM or MC, as I want the larger FOV potential and I want the larger pixels (the IMX183 has issues with ultra-narrowband just like the IMX178 does). So I'm weighing color options as well as mono options, simply because modern color sensors are not the same as before, and it's interesting that a lot of the latest sensor releases have been cooled color cameras meant for narrowband imaging with no mono version.

 

Very best,


Edited by MalVeauX, 05 July 2020 - 10:57 AM.


#13 MalVeauX

MalVeauX

    Cosmos

  • *****
  • topic starter
  • Posts: 7,509
  • Joined: 25 Feb 2016
  • Loc: Florida

Posted 07 July 2020 - 08:41 AM

Hrm,

 

So, here's an example that was just posted in another thread, doing something similar, which is interesting:

 

https://www.cloudyni...ars/?p=10313737

 

This is the 462MC full spectrum capture values vs 290MM with Red filter capture values that were directly compared in real world on Mars. Parameters were the same (same pixel size, same focal-ratio, same 1.6ms exposure time, same FPS, same region of interest size, etc, the only floating variable was gain). While they're not the same sensor, it's still comparable in many ways, as the 462MC is supposed to be a more sensitive and efficient sensor than older sensor tech, especially color. 462MC used 275 gain (45%). 290MM used 304 gain (50%).

 

Again, I realize this is not a one-to-one comparison; it's not the same sensor. But this is the kind of interesting comparison I'm looking for, as newer cameras with newer sensors, color or not, are coming out rapidly, and color sensors are dropping a lot more commonly right now with very interesting sensor sizes, pixel sizes, efficiency and cooling. The above difference in exposure values is insignificant to me, which is very interesting, because again, theory suggests it should be significantly different, but it usually is not, which is why I'm looking for more comparisons.
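For a sense of what that 29-unit gain difference amounts to, assuming ZWO's convention of 0.1 dB per gain unit:

```python
# Gain values quoted from the linked Mars comparison:
gain_color = 275   # 462MC, full spectrum
gain_mono  = 304   # 290MM, red filter

# Assuming ZWO-style gain where one unit is 0.1 dB:
diff_db = (gain_mono - gain_color) * 0.1   # 2.9 dB
ratio = 10.0 ** (diff_db / 20.0)           # linear signal ratio, ~1.4x

print(round(diff_db, 1), round(ratio, 2))
```

Under that assumption, the mono-plus-red-filter setup needed about 1.4x more amplification than the color camera in that particular test, i.e. a small difference, consistent with the point being made here.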

 

It would be nice to compare a 290MC to 290MM or 183MC to 183MM or 1600MC to 1600MM, so I'm hoping someone eventually sees this and can show real world data.

 

Very best,






Cloudy Nights LLC
Cloudy Nights Sponsor: Astronomics