This may sound a little strange. I am interested in analyzing the actual spatial resolution degradation associated with the deBayering process. My plan is to start with a high-resolution color image, extract the R, G, and B channel images, apply the proper Bayer pattern to each of the extracted images, then use MergeCFA to form a synthetic Bayer-pattern image. Are you following this? The goal is to create a deBayered image that can be directly compared to the original color image to measure the relative resolution. I think I know how to do all of the steps I outlined except applying the Bayer patterns to the extracted color channel images. Does anyone know of an (easy?) way to apply a checkerboard pattern to an image, preferably using processes in PixInsight?
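(For anyone wanting to try this outside PixInsight: the mosaicing step described above can be sketched in a few lines of NumPy. This is just an illustrative sketch assuming an RGGB layout, not what PixInsight does internally.)

```python
import numpy as np

def make_rggb_mosaic(rgb):
    """Build a single-channel synthetic CFA image from an RGB array.

    rgb: float array of shape (H, W, 3) with H and W even.
    Returns an (H, W) mosaic laid out as RGGB.
    """
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return cfa
```

The result can then be fed to any deBayer algorithm and the output compared against the original RGB image.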

Synthetic Bayer Pattern?
#1
Posted 08 April 2021 - 12:47 PM
- AstroAug likes this
#2
Posted 08 April 2021 - 01:10 PM
Zebenelgenubi,
I must not be following where you are having the problem.
The process you outlined should work as you described. The MergeCFA process does the required reconstruction of the Bayer matrix. You only need to assign your red image to CFA0, assign the green image to both CFA1 and CFA2, and the blue image to CFA3. Then the MergeCFA process will give you an RGGB un-deBayered image. Did that not work for you?
The only complication is that the starting R, G, and B images need to be half the dimensions of the desired CFA image.
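(As a rough illustration of why the inputs must be half-size, here is a NumPy sketch of what MergeCFA effectively does with its four planes. This is my own sketch of the interleaving, not PixInsight's actual implementation.)

```python
import numpy as np

def merge_cfa(cfa0, cfa1, cfa2, cfa3):
    """Interleave four half-size planes into a full-size RGGB mosaic.

    cfa0 = R, cfa1 = cfa2 = G, cfa3 = B, each of shape (H/2, W/2).
    The output mosaic has shape (H, W) -- twice each input dimension.
    """
    h, w = cfa0.shape
    mosaic = np.empty((2 * h, 2 * w), dtype=cfa0.dtype)
    mosaic[0::2, 0::2] = cfa0  # R at even rows, even cols
    mosaic[0::2, 1::2] = cfa1  # G at even rows, odd cols
    mosaic[1::2, 0::2] = cfa2  # G at odd rows, even cols
    mosaic[1::2, 1::2] = cfa3  # B at odd rows, odd cols
    return mosaic
```

Each input plane supplies only one sample per 2x2 output cell, which is exactly why the four inputs must be half the dimensions of the desired CFA image.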
John
- Zebenelgenubi likes this
#3
Posted 08 April 2021 - 01:49 PM
If I understand what you are trying to do, another way of making this comparison (this works in Astroart, of which you can download a free trial) might be to load the image but specify not to deBayer on load, so you just get a grayscale image. Then re-load the image but deBayer it; you then get an image of the same size but in 'colour'. Then extract L, and you have another grayscale image of exactly the same size. There might be a scaling function between the two gray images (depending, maybe, on the color balance chosen?); maybe just find the value at a certain (unsaturated!) pixel in both and scale accordingly. Or just divide one image by the other (pixel arithmetic).
My understanding of deBayering is that the algorithm takes the blue pixel values (for example) on either side of a red pixel (for example) and 'interpolates' between them to give a calculated 'blue' value at the red pixel location. Same with green; it then uses the 'real' red value and the two calculated G and B values to give us the RGB. You can usually choose the 'interpolation' formula, although in some DSLR cameras I believe even the apparently 'Bayered' RAW images have already been processed internally, e.g. for noise reduction or colour balance. Anybody know more about this?
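(A toy sketch of that interpolation step, assuming bilinear interpolation and an RGGB layout, where the missing blue at a red site is the average of the four diagonal blue neighbours. Just an illustration of the idea, not any particular camera's algorithm.)

```python
import numpy as np

def blue_at_red_site(cfa, y, x):
    """Bilinear estimate of the missing blue value at a red pixel (y, x)
    of an RGGB mosaic: in RGGB, red sits at even row/even col, and its
    four diagonal neighbours are all blue samples."""
    return np.mean([cfa[y - 1, x - 1], cfa[y - 1, x + 1],
                    cfa[y + 1, x - 1], cfa[y + 1, x + 1]])
```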
Steve
Edited by splash, 08 April 2021 - 02:00 PM.
- Zebenelgenubi likes this
#4
Posted 08 April 2021 - 03:55 PM
Zebenelgenubi,
I must not be following where you are having the problem.
The process you outlined should work as you described. The MergeCFA process does the required reconstruction of the Bayer matrix. You only need to assign your red image to CFA0, assign the green image to both CFA1 and CFA2, and the blue image to CFA3. Then the MergeCFA process will give you an RGGB un-deBayered image. Did that not work for you?
The only complication is that the starting R, G, and B images need to be half the dimensions of the desired CFA image.
John
Yes, I found that if I follow the process I outlined, the resulting synthetic CFA image was twice the size of the original color image. I simply applied an integer down sample to get back to the original image size. That seemed like a cheat step that I wanted to avoid. I had forgotten that the images used in MergeCFA need to be half the dimensions of the original image. I will try using the integer down sample on the extracted color image. Thanks for your insightful help.
#5
Posted 08 April 2021 - 04:05 PM
I will try using the integer down sample on the extracted color image. Thanks for your insightful help.
I think you just answered your original question. The de-Bayering process is the equivalent of taking an image at 1/2 the resolution and up-sampling it (by interpolation) to the full original resolution.
- Zebenelgenubi likes this
#6
Posted 08 April 2021 - 05:38 PM
I've done this many times in my experimentations. No channel extraction or MergeCFA required. Here's a PixelMath expression that can be applied directly to a full colour image:
v=0;
v+=iif( x()%2 == 0 && y()%2 == 0, $T[0], 0 );
v+=iif( x()%2 == 1 && y()%2 == 0, $T[1], 0 );
v+=iif( x()%2 == 0 && y()%2 == 1, $T[1], 0 );
v+=iif( x()%2 == 1 && y()%2 == 1, $T[2], 0 );
v
Just add 'v' to the symbols before executing and convert to grayscale after executing.
To debayer back to a colour image, the Bayer pattern is "RGGB".
Mark
Edited by sharkmelley, 08 April 2021 - 05:57 PM.
- Zebenelgenubi likes this
#7
Posted 08 April 2021 - 06:34 PM
I've done this many times in my experimentations. No channel extraction or MergeCFA required. Here's a PixelMath expression that can be applied directly to a full colour image:
v=0;
v+=iif( x()%2 == 0 && y()%2 == 0, $T[0], 0 );
v+=iif( x()%2 == 1 && y()%2 == 0, $T[1], 0 );
v+=iif( x()%2 == 0 && y()%2 == 1, $T[1], 0 );
v+=iif( x()%2 == 1 && y()%2 == 1, $T[2], 0 );
v
Just add 'v' to the symbols before executing and convert to grayscale after executing.
To debayer back to a colour image, the Bayer pattern is "RGGB".
Mark
Thanks Mark (I think). I am not sure what you mean by "Just add 'v' to the symbols..." Can you explain that a bit further?
#8
Posted 08 April 2021 - 06:38 PM
Thanks Mark ( I think). I am not sure what you mean by "Just add 'v' to the symbols..." Can you explain that a bit further?
PixelMath has a "Symbols" tab. Just type "v" into the symbols box. Otherwise you get the error "Invalid symbol identifier" when executing the script.
- Zebenelgenubi likes this
#9
Posted 08 April 2021 - 06:42 PM
I think you just answered your original question. The de-Bayering process is the equivalent of taking an image at 1/2 the resolution and up-sampling it (by interpolation) to the full original resolution.
Gabe, that is obviously true for deBayering algorithms that create superpixels without interpolation. It isn't obvious to me that some of the interpolation algorithms (VNG, for example) would actually halve the resolution of the image relative to the camera pixel size. That's the point of my experiment.
#10
Posted 08 April 2021 - 06:43 PM
PixelMath has a "Symbols" tab. Just type "v" into the symbols box. Otherwise you get the error "Invalid symbol identifier" when executing the script.
Duh! Sorry for the stupid question. Thank you.
#11
Posted 09 April 2021 - 11:05 AM
If anyone is interested, here are some preliminary results of my deBayer resolution analysis. The first pic shows the original RGB image and the synthetic CFA (not deBayered) image (thanks to help from John Upton and Mark). The second pic shows details of the two images. The third pic compares the results of three deBayer algorithms using the PixInsight Debayer process. It looks to me like the degradation in image resolution is small with the Bilinear and VNG algorithms, and a bit larger with the SuperPixel algorithm. I am still investigating. Comments?
Edited by Zebenelgenubi, 09 April 2021 - 11:07 AM.
- jdupton likes this
#12
Posted 09 April 2021 - 12:41 PM
What happens if you up-scale the SuperPixel image? This should yield similar results to the VNG and Bilinear.
#13
Posted 09 April 2021 - 12:48 PM
#14
Posted 09 April 2021 - 03:48 PM
What happens if you up-scale the SuperPixel image? This should yield similar results to the VNG and Bilinear.
The SuperPixel algorithm resamples the R, G, and B CFA images to form an RGB superpixel. The resulting image is one half the size of the original, so I did an integer resample to scale that image back up to match the original image and the images from the other deBayer algorithms.
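(For reference, the SuperPixel idea can be sketched as follows, assuming an RGGB layout. This is my own minimal sketch, not PixInsight's exact code.)

```python
import numpy as np

def superpixel_demosaic(cfa):
    """Collapse each 2x2 RGGB cell into one RGB pixel; output is half-size.

    The two green samples in each cell are averaged; R and B are taken as-is.
    No interpolation across cells is performed.
    """
    r = cfa[0::2, 0::2]
    g = 0.5 * (cfa[0::2, 1::2] + cfa[1::2, 0::2])
    b = cfa[1::2, 1::2]
    return np.dstack([r, g, b])
```

The half-size output is why an integer resample back up is needed before comparing against the full-size originals.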
Edited by Zebenelgenubi, 09 April 2021 - 03:56 PM.
#15
Posted 09 April 2021 - 03:54 PM
Your original image is very well sampled, probably even over-sampled. In such a case, Bayer sampling won’t damage the resolution much. A real test would be on critically sampled or even under-sampled images. You may down-sample your original image by 2x or even 3x, and repeat the same operations. Then I believe you will see some damage done by Bayer sampling.
If the original image were under-sampled, then I would think that the SuperPixel approach would give the best resolution result, since it naturally down-samples the CFA channels. The interpolation algorithms work best with well-sampled data. In either case, the resolution loss in deBayering is less than or equal to the resolution loss in 2x2 down-sampling (binning).
#16
Posted 09 April 2021 - 07:46 PM
If the original image were under-sampled, then I would think that the SuperPixel approach would give the best resolution result, since it naturally down-samples the CFA channels. The interpolation algorithms work best with well-sampled data. In either case, the resolution loss in deBayering is less than or equal to the resolution loss in 2x2 down-sampling (binning).
Maybe it's because I just woke up this morning, but what you said doesn't make sense to me. SuperPixel bins two G pixels into one without interpolation; it's the one with the largest resolution penalty. And when the original image is under-sampled, Bayer sampling loses much more information than when the original image is well sampled.
Edited by whwang, 09 April 2021 - 07:59 PM.
#17
Posted 09 April 2021 - 08:34 PM
Maybe it's because I just woke up this morning, but what you said doesn't make sense to me. SuperPixel bins two G pixels into one without interpolation; it's the one with the largest resolution penalty. And when the original image is under-sampled, Bayer sampling loses much more information than when the original image is well sampled.
My thinking was bad as far as SuperPixel improvement goes. You are correct sir.