In the spirit of the title of this topic, I wanted to summarize a 10-step workflow (based on the earlier discussions here) to get to a "consistent" color for a stacked frame, i.e. apply the camera-specific white balance and color conversion matrices. The goal is to reach a stage where the color is in line with a pleasant/accurate terrestrial image developed through the same process.
This is meant to be done after stacking (and possibly background extraction), while the data are still linear.
There is not much new here in terms of process, but I personally could not easily find all the details and nuances together in one place. Tested with a Canon R6. It might seem like a lot, but it only needs to be done once per camera, ever.
This can be done in 5 minutes in e.g. Astro Pixel Processor, which includes camera-specific white balance and color space conversions. The issues I am trying to improve on are:
- Not being tied to AstroPP's stacking tools. They are good, but I wanted a more generic workflow that also works with e.g. Siril or PixInsight.
- Not being limited to the AstroPP or DxO camera matrices. For example, DxO often supplies camera matrices for the D50 illuminant, which is warmer than D65 (noon daylight), so the process often produces bluer hues. This workflow extracts the information directly from the raw files.
Tools needed: ExifTool, Adobe DNG Converter, ChatGPT (for the calculations), and PixelMath (e.g. in Siril or PixInsight) for the conversions.
1. Take a RAW camera file (e.g. pic.CR3) and convert to Adobe DNG.
2. Run exiftool on the pic.DNG file and save the output to a text file.
For example, in a Mac terminal the command is: exiftool -a -u -g1 pic.DNG > pic_metadata.txt
3. Parse the txt file and extract the parameters below (you can eyeball them, ask ChatGPT to retrieve them, search for the terms in a text editor, or use the parsing sketch after this step). The values shown are examples of what my R6 produces.
Calibration Illuminant 2: D65
Color Matrix 2 : 0.8293 -0.1611 -0.1132 -0.4759 1.2711 0.2275 -0.1013 0.2415 0.5509
Camera Calibration 2 : 1.0037 0 0 0 1 0 0 0 1.0242
Analog Balance : 1 1 1
As Shot Neutral : 0.530295 1 0.641604
The first two items are 3x3 matrices (the nine values are the rows in sequence) for D65. ColorMatrix2 converts from the XYZ color space to the reference camera RGB. CameraCalibration2 is a minor tweak that converts from the reference camera color space to the color space of the specific camera in your hands (the raw data).
We will also need the sRGB2XYZ matrix that converts from sRGB to XYZ color spaces for D65:
sRGB2XYZ = [0.4124564 0.3575761 0.1804375; 0.2126729 0.7151522 0.0721750; 0.0193339 0.1191920 0.9503041];
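If you prefer a script to eyeballing or ChatGPT, here is a minimal Python sketch of the extraction (a sketch only: it assumes numpy is installed, that the exiftool dump is named pic_metadata.txt as in step 2, and that the tag names are spelled exactly as shown above; adjust if your exiftool prints them differently). It also defines the sRGB2XYZ matrix for later use.

import numpy as np

# Tags we need from the exiftool dump of step 2 (names as printed by exiftool -a -u -g1)
wanted = {"Calibration Illuminant 2", "Color Matrix 2", "Camera Calibration 2",
          "Analog Balance", "As Shot Neutral"}
tags = {}
with open("pic_metadata.txt") as f:
    for line in f:
        name, sep, value = line.partition(":")
        if sep and name.strip() in wanted:
            tags[name.strip()] = value.strip()

# The 3x3 matrices are printed as nine numbers, row by row
ColorMatrix2 = np.array(tags["Color Matrix 2"].split(), float).reshape(3, 3)
CameraCalibration2 = np.array(tags["Camera Calibration 2"].split(), float).reshape(3, 3)
AnalogBalance = np.array(tags["Analog Balance"].split(), float)
AsShotNeutral = np.array(tags["As Shot Neutral"].split(), float)
print("Illuminant:", tags["Calibration Illuminant 2"])

# sRGB -> XYZ (D65) matrix from above
sRGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                     [0.2126729, 0.7151522, 0.0721750],
                     [0.0193339, 0.1191920, 0.9503041]])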
4. Calculate the white balance diagonal [wb1 wb2 wb3] = AnalogBalance / AsShotNeutral, element by element. In this case wb1 = 1/0.530295 ≈ 1.886, wb2 = 1/1 = 1, wb3 = 1/0.641604 ≈ 1.559.
5. Use PixelMath to white balance the image, so each pixel goes from (R, G, B) to (R*wb1, G*wb2, B*wb3).
If you are using a terrestrial image, the pedestal must be subtracted (e.g. with a bias frame) and the data debayered before this step.
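For reference, steps 4 and 5 as a short numpy sketch (the AsShotNeutral/AnalogBalance values are the R6 examples above; img is a placeholder for your debayered, linear stack as an H x W x 3 array):

import numpy as np

AnalogBalance = np.array([1.0, 1.0, 1.0])
AsShotNeutral = np.array([0.530295, 1.0, 0.641604])

# Step 4: element-wise white balance multipliers
wb = AnalogBalance / AsShotNeutral              # ~[1.886, 1.000, 1.559] here

# Step 5: apply to the debayered, linear data as an H x W x 3 array
# (in PixelMath this is simply R*wb1, G*wb2, B*wb3, one expression per channel)
img = np.random.rand(100, 100, 3)               # placeholder for your linear stack
img_wb = img * wb                               # broadcasts over the channel axis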
6. Calculate the (not normalized) color conversion matrix from sRGB to camera color space:
sRGB2cam = CameraCalibration2*ColorMatrix2*sRGB2XYZ
Note that the above is a matrix multiplication; the order is important, and it is best done with suitable software (PixelMath or online tools can do it, I prefer ChatGPT; a numpy sketch is also given after step 8).
7. Normalise the sRGB2cam matrix rows by dividing the elements of each row by the sum of the elements of that row.
This ensures that there is no further white shift. The new matrix can be called e.g. sRGB2cam_norm.
For example, if the elements of the first row are [a b c], the normalized row will be [a/(a+b+c) b/(a+b+c) c/(a+b+c)]. Repeat for the other two rows.
8. Invert the matrix (same tools as above) so that instead of converting sRGB to camera RGB, it converts camera data to sRGB.
This produces cam2sRGB_norm = inverse(sRGB2cam_norm). For the R6 example here, the elements of the matrix are:
M11 = +1.74; M12 = -0.84; M13 = +0.10;
M21 = -0.17; M22 = +1.65; M23 = -0.47;
M31 = +0.00; M32 = -0.60; M33 = +1.60;
As a comparison, DxO supplies the matrix for the R6 at D50:
2.12 -1.24 0.12;
-0.21 1.7 -0.49;
0.04 -0.65 1.61;
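To make steps 6 to 8 concrete, here is the same calculation as a short numpy sketch (values hard-coded from the R6 example above; ChatGPT, PixelMath or any matrix calculator gives the same result):

import numpy as np

ColorMatrix2 = np.array([[ 0.8293, -0.1611, -0.1132],
                         [-0.4759,  1.2711,  0.2275],
                         [-0.1013,  0.2415,  0.5509]])
CameraCalibration2 = np.diag([1.0037, 1.0, 1.0242])   # diagonal, from the exif values
sRGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                     [0.2126729, 0.7151522, 0.0721750],
                     [0.0193339, 0.1191920, 0.9503041]])

# Step 6: sRGB -> camera (matrix product, order matters)
sRGB2cam = CameraCalibration2 @ ColorMatrix2 @ sRGB2XYZ

# Step 7: normalise each row to sum to 1 so white is not shifted
sRGB2cam_norm = sRGB2cam / sRGB2cam.sum(axis=1, keepdims=True)

# Step 8: invert, camera data -> sRGB
cam2sRGB_norm = np.linalg.inv(sRGB2cam_norm)
print(np.round(cam2sRGB_norm, 2))   # reproduces the M11..M33 values listed above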
9. Multiply the white-balanced pixels by the matrix.
In PixInsight PixelMath, this looks like (one expression per channel):
Red: $T[0]*M11 + $T[1]*M12 + $T[2]*M13
Green: $T[0]*M21 + $T[1]*M22 + $T[2]*M23
Blue: $T[0]*M31 + $T[1]*M32 + $T[2]*M33
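If you are working on the data in Python instead of PixelMath, the same multiplication is one line (continuing from the sketches above, where img_wb is the white-balanced H x W x 3 array from step 5 and cam2sRGB_norm the matrix from step 8):

# Applies the 3x3 matrix to every pixel's (R, G, B) vector
img_srgb = img_wb @ cam2sRGB_norm.T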
10. Finally, assign a linear sRGB profile to the image, e.g. sRGB-elle-V4-g10.icc from Elle Stone, to ensure the data are interpreted as linear.
...
Once processing is done, convert (not assign!) the image to a normal sRGB profile for export.
Bonus points if you use a linear working profile; your edits will be that much more accurate in linear space.
As a comparison, below is the image developed in PixInsight using the DxO matrices. The neutral grey square is (R G B) = (0.42 0.50 0.55) after converting to gamma-stretched sRGB.
The same image developed with the steps above yields (R G B) = (0.50 0.51 0.53).
Original debayered image (link to RAW file):