Hi Fellow Imagers,
This post will be another marathon-length explanation of some work I have done to try to dispel many of what I consider misconceptions in the ongoing argument over “which is better – Mono or OSC?” I’ll start with a heads-up: there is no clear winner in all scenarios. It all depends on your purposes and, surprisingly, on how you shoot with each camera type. In many cases, Monochrome plus filters will be the clear winner. However, it may come as a surprise to some that OSC has traits that let it outperform the Monochrome approach in certain scenarios.
I am going to try my best to avoid the typical “hand waving” posts that characterize this ongoing battle of opinions. Instead, I will present the method, logic, and tools I used to look at the question from many different angles. I present this information step by step, much like a tutorial, so that hopefully everyone can follow my reasoning.
This analysis was started several months back in the Cloudy Nights thread “Dedicated LRGB v. OSC Filter Transmission” and recently resurfaced in the ongoing thread “Matching a Camera to my Scope.”
Feel free to point out any logic errors I have made or to question the assumptions that underlie my methodology. It is quite possible I have overlooked some key factors that come into play. There may also be bugs or errors in the spreadsheet I built to do all the calculations. However, please frame such comments in a way that lets me correct the underlying mathematical operations of the spreadsheet tool used for the computations.
Assumptions:
The following are key assumptions in my calculations:
- The sky background is spectrally neutral. The sky is some shade of pure gray / black / white.
For the portion of the spectrum from 400 nm to 700 nm, an equal number of photons (a constant rate) falls on the telescope aperture during each unit of time regardless of wavelength. This may not be completely valid under significant (colored) light pollution, but it represents what I think is a reasonable starting point. An arbitrary set of units is chosen so that the measurements are given in photons per second per square arc-second of sky per nanometer of wavelength. These can be scaled later if desired; I ignore any scaling since the choice of photon rate was arbitrary.
- The wavelength response curves published by sensor and filter vendors are reasonably accurate.
I have digitized the sensor response (QE) curves for a number of different cameras for inclusion in the spreadsheet, and have done the same for a variety of filters commonly used in astrophotography. In both cases, the spreadsheet is structured so that it is easy to add more sensor and filter data. My digitization of the plots is subject to the underlying accuracy of those published graphical representations.
- All of my evaluations use a “Quad” or SuperPixel group of pixels so that direct comparisons between the two camera types can be made more easily. For an OSC camera, I look at one red, two green, and one blue pixel, typical of RGGB Bayer Matrix cameras. This Quad is the same as would be used for a “true color” binned SuperPixel DeBayer operation without color interpolation. For Monochrome sensors, I look at a two-by-two group of pixels (without Bayer filters, of course). This allows direct comparison between the two sensor types by counting the photons captured by the respective Quads on each sensor. (A small code sketch of this setup follows below.)
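For those who follow code more easily than spreadsheet cells, here is a minimal Python sketch of the setup these assumptions describe. The names and the flat spectrum value are illustrative stand-ins for my spreadsheet layout, not actual vendor data:

```python
import numpy as np

# Wavelength grid: 400 nm to 700 nm in 5 nm increments (inclusive).
wavelengths = np.arange(400, 705, 5)          # 61 sample points

# Spectrally neutral sky: the same arbitrary photon rate at every
# wavelength, in photons / second / arcsec^2 / nm. The value 1.0 is
# arbitrary; only ratios between scenarios matter.
sky_photon_rate = np.ones_like(wavelengths, dtype=float)

# A "Quad" is a 2x2 pixel group. For an OSC sensor the four pixels
# carry the R, G, G, B Bayer filters; for a Monochrome sensor all
# four pixels share the sensor's unfiltered QE curve.
OSC_QUAD  = ("R", "G", "G", "B")
MONO_QUAD = ("MONO", "MONO", "MONO", "MONO")
```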
Methodology:
Here is the general methodology used by the spreadsheet:
- Grab the response curve for the selected sensor. This is the same as an Absolute QE curve published for the sensor. The data is digitized in 5 nm increments between 400 nm and 700 nm.
- Grab the response (transmission) curve for the filter selected for evaluation. For a filter, this is the transmission response curve published by the vendor.
- At each 5 nm wavelength increment, multiply the sensor’s response at that wavelength by the filter’s response at the same wavelength. Repeat this for the full wavelength band-pass under consideration. (This is normally the full range of 400 nm to 700 nm, although the spreadsheet can be forced to consider a subset.)
- Plot the resulting product of sensor response times filter transmission for each 5 nm wavelength increment.
- Calculate the accumulated response for each pixel in the Quad. Since I have assumed that each pixel receives 1 photon per nanometer of wavelength from the sky, the pixel accumulates 5 photons from the sky for each 5 nm wavelength increment from 400 nm to 700 nm. This is, of course, modulated by the combined response calculated in the previous step.
For a Monochrome camera, each pixel in the Quad receives the same number of photons since the filter is external to the camera. For an OSC camera, each pixel receives a different number of photons, determined by the transmission of that pixel’s Bayer Matrix color.
In both cases, once the pixel’s photons are counted per wavelength increment, they are summed (numerically integrated) over all wavelengths under consideration. (These steps are illustrated in the code sketch following this list.)
- Finally, the pass-band total photon counts are displayed in the spreadsheet. These totals let me begin evaluating how each sensor type responded to the light that entered the telescope aperture.
- To do the comparisons, I use a separate external “scratch” spreadsheet. I select a scenario in the main calculation spreadsheet and copy the results to the external sheet, run additional calculations, and copy those results over as well. The actual sensor comparisons are then made in the external sheet. I will show an example of this later in the thread.
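In code form, the core of that pipeline reduces to a few lines. This is a minimal sketch under the assumptions above; the QE and filter curves here are crude, made-up boxcar stand-ins, not the digitized vendor curves in the actual spreadsheet:

```python
import numpy as np

wavelengths = np.arange(400, 705, 5)          # 5 nm increments, 400-700 nm
bin_width = 5.0                                # nm per increment
sky_rate = np.ones_like(wavelengths, dtype=float)  # neutral sky assumption

def quad_photons(sensor_qe, pixel_filters):
    """Total photons captured by a 2x2 Quad per second per arcsec^2.

    sensor_qe     -- array of QE values (0..1) at each wavelength sample
    pixel_filters -- list of 4 transmission curves, one per Quad pixel
                     (for Mono, the same external filter repeated 4x;
                     for OSC, the R, G, G, B Bayer transmissions)
    """
    total = 0.0
    for filt in pixel_filters:
        # Step 3: combined response = sensor QE x filter transmission.
        combined = sensor_qe * filt
        # Steps 5-6: photons per 5 nm bin, summed over the pass-band
        # (a simple rectangle-rule numerical integration).
        total += np.sum(sky_rate * combined * bin_width)
    return total

# Illustrative stand-in curves (NOT real vendor data): a flat 80% QE
# sensor, a 95% luminance filter, and crude boxcar Bayer responses.
qe    = np.full_like(wavelengths, 0.80, dtype=float)
lum   = np.full_like(wavelengths, 0.95, dtype=float)
red   = np.where(wavelengths >= 600, 0.85, 0.0)
green = np.where((wavelengths >= 500) & (wavelengths < 600), 0.85, 0.0)
blue  = np.where(wavelengths < 500, 0.85, 0.0)

mono_L = quad_photons(qe, [lum] * 4)
osc    = quad_photons(qe, [red, green, green, blue])
print(mono_L, osc)  # compare photons per Quad for each approach
```

The multiply-and-total loop mirrors the spreadsheet’s column-by-column calculation; substituting real digitized curves for qe, lum, red, green, and blue would reproduce the spreadsheet’s pass-band totals.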
Imaging Methodologies:
I think part of the confusion regarding the often raging Monochrome vs One-Shot-Color camera debate comes from differing ways of using each sensor type. There are distinct differences between using an OSC and a Monochrome camera when one considers how the imaging is done. With Monochrome cameras, some of the basic methods used are:
- LRGB:
Here the main differences are in how the photographer decides to apportion camera / filter time.
He may choose to take 1 hour with each filter, ending up with a 4-hour session of L, R, G, and B filtered images.
He may choose to shoot predominantly L and a lesser amount of RGB color, ending up with, say, 3 hours of L and 20 minutes each of R, G, and B data.
There are endless possible ratios of L to RGB data, and each will differ in the number of photons gathered during the session.
- RGB:
In this case, the photographer chooses to shoot only the color components. He may or may not choose to create an artificial L layer from the RGB data during later post-processing. Shooting only through color filters also changes the total number of photons that are gathered by the sensor / filter combination.
- LRGB or RGB Plus Narrow-Band:
Either of the previous methods can be supplemented with additional data gathered through a narrow-band filter. The extra data is often combined in post-processing only after an RGB-processed image is created.
- One-Shot-Color:
There is only one way to do this in a broad-band imaging session. You end up with the same amount of imaging time for each of the four pixels in a Quad. The photons gathered will differ between pixels depending on each color’s Bayer Matrix transmission.
OSC Data is sometimes supplemented with narrow-band or multi-band data in much the same way as LRGB+NB.
Perhaps surprisingly to some, the exact imaging methodology used influences the result in terms of photons gathered in a Quad during an imaging session. As might be expected, the greater the percentage of time spent shooting L with a Monochrome camera, the more photons can be captured. This happens, of course, because L on a Monochrome camera leaves the sensor’s response curve essentially unmodulated by any filtering. (A round-number worked example follows below.)
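To make that concrete, here is a hypothetical, round-number example. The capture rates below are invented purely for illustration (a color filter is assumed to pass roughly a third of the 400 nm to 700 nm band); they are not measured from any real sensor:

```python
# Hypothetical capture rates per Quad, in arbitrary photon-units/hour.
# R, G, and B each run at about a third of the L rate since each color
# filter passes roughly a third of the band.
L_RATE, RGB_RATE = 100.0, 33.0

# Scenario A: 1 hour each of L, R, G, B (4-hour session).
scenario_a = 1.0 * L_RATE + 3 * (1.0 * RGB_RATE)          # ~199 units

# Scenario B: 3 hours of L, 20 minutes each of R, G, B (same 4 hours).
scenario_b = 3.0 * L_RATE + 3 * ((20 / 60) * RGB_RATE)    # ~333 units

print(scenario_a, scenario_b)
```

Under these made-up rates, the L-heavy session gathers roughly two-thirds more photons in the same 4 hours, which is the sense in which the shooting ratio itself changes the outcome.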
That's it for this post. Another will follow with a worked example of some comparisons between Monochrome and One-Shot-Color cameras of the same family.
John