I have been trying to stack and stretch data from my Canon R6 and 80D while maintaining "accurate" color, and am seeking some clarification/advice on certain steps.
I have read through relevant material, specifically:
DSLR Processing - The Missing Matrix
Nightscape and Astrophotography Image Processing Basic Work Flow (R Clark) (+other pages there)
M31 Andromeda Galaxy in natural colour
The absolutely necessary key steps seem to be:
- Calibration (even if it's just a bias subtraction)
- Demosaicing
- Stacking
- White balance
- Color correction matrix
- Color-preserving stretching
- sRGB tone response curve
I have used Siril, Astro Pixel Processor, and more recently PixInsight. Considering now some of the key steps:
Color Correction Matrix
I understand that multiplying the debayered data by the CCM is a must-have step. So far I have managed to do this in i) Astro Pixel Processor (on by default), ii) RawTherapee, by converting to 16-bit TIFFs with a custom linear tone curve (so that the data remain linear after conversion), and iii) PixelMath in Siril or PI.
Options (i) and (ii) apply the CCM to each photo before stacking. Option (iii) could be done on the lights or on the stacked image. My understanding is that, so long as the data are linear, it should not matter whether the CCM is applied before or after stacking: both the CCM and stacking (a weighted average) are linear operations, so they commute. A quick sanity check of that claim is sketched below.
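Here is a small numpy check of the before/after-stacking claim (the CCM values are made up for illustration, not a real camera matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
ccm = np.array([[ 1.8, -0.6, -0.2],
                [-0.3,  1.6, -0.3],
                [ 0.0, -0.7,  1.7]])   # made-up 3x3 CCM, rows sum to 1

# Ten fake linear "subs", each a 4x4 RGB image.
subs = rng.uniform(0.0, 1.0, size=(10, 4, 4, 3))

# Apply CCM to each sub, then average (stack).
a = np.einsum('ij,nhwj->nhwi', ccm, subs).mean(axis=0)
# Average (stack) first, then apply CCM.
b = np.einsum('ij,hwj->hwi', ccm, subs.mean(axis=0))

print(np.allclose(a, b))   # True: the two orders agree for linear data
```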
I also understand that using a "normal" RAW converter will not work, as the output data will be stretched by the tone response curve.
Finding (or measuring) the CCM is a dark art; the hope is that DxO has published a matrix for the camera in question.
White Balance Matrix/Vector
Another matrix multiplication. The simplest approach is to use the Daylight White Balance numbers for the camera. I have also seen that running photometric color calibration has a similar effect. I understand what those tools are aiming to do (matrix multiplications so that the stars' RGB colors match a reference database), however it's not clear to me whether they apply a white-balance-type multiplication, a CCM-type multiplication, or a combination. Any ideas?
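For what it's worth, the structural difference between the two kinds of multiplication is easy to state: white balance is a diagonal matrix (independent per-channel gains, no channel mixing), while a CCM is a full 3x3 that mixes channels. A hedged sketch with made-up numbers:

```python
import numpy as np

# Made-up daylight multipliers (green-normalized), NOT actual Canon values;
# the real numbers live in the raw metadata (e.g. exiftool reports
# WB_RGGBLevelsDaylight for Canon files).
wb_gains = np.array([2.0, 1.0, 1.6])   # R, G, B gains
wb_matrix = np.diag(wb_gains)          # diagonal: scales each channel independently

ccm = np.array([[ 1.8, -0.6, -0.2],
                [-0.3,  1.6, -0.3],
                [ 0.0, -0.7,  1.7]])   # full 3x3 (made up): mixes channels

def apply_color_matrix(rgb, m):
    # rgb: (H, W, 3) linear image; m: 3x3 matrix acting on each pixel's color.
    return np.einsum('ij,hwj->hwi', m, rgb)

# A combined tool could fold both into a single matrix: ccm @ wb_matrix.
```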
Color-Preserving Stretch
Again, I (think I) get the concept: stretch by multiplying the R, G, B values of a pixel by the *same* number (which depends on the luminance L of that pixel), as opposed to scaling the R, G, B channels separately. I have seen that PixInsight's ArcsinhStretch conforms to that, using a scale factor ~asinh(b*L)/(asinh(b)*L). I presume it can be coded in PixelMath too (a numpy sketch follows). rnc-color-stretch uses (I think) a different function, L^(1/p), which fits the same template with a per-pixel scale of L^(1/p-1). Are there other ways? Is GeneralizedHyperbolicStretch compatible with this constraint?
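The ArcsinhStretch factor is straightforward to reproduce; here it is as a minimal numpy sketch, assuming linear data scaled to [0, 1] and a simple mean-of-channels luminance (ArcsinhStretch's actual luminance weighting may differ):

```python
import numpy as np

def arcsinh_stretch(rgb, b=50.0):
    # rgb: (H, W, 3) linear data in [0, 1]; b is the stretch factor.
    # Every channel of a pixel is multiplied by the SAME scale, so the
    # R:G:B ratios (hue/saturation) are preserved.
    L = np.maximum(rgb.mean(axis=-1, keepdims=True), 1e-12)
    scale = np.arcsinh(b * L) / (np.arcsinh(b) * L)
    return np.clip(rgb * scale, 0.0, 1.0)
```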
Tone response curve - help!
This is my weakest point and I could use some education. There is usually a step to (manually) apply the tone response curve, e.g. for sRGB. I understand this is what normal imaging software (or the OS) does to adjust an image's stored pixels before sending them to the monitor for display. Why is this step necessary? Doesn't the software (PixInsight, Siril, Photoshop) apply the (let's say sRGB) tone response curve to display the stretched data on screen throughout this whole process? And when exporting the image to e.g. JPG, aren't these values baked into the image file (along with the sRGB profile used)?
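For reference, the sRGB tone response curve being applied in that manual step is the standard piecewise encoding; a minimal sketch:

```python
import numpy as np

def srgb_encode(linear):
    # Standard sRGB transfer function: linear light in [0, 1] -> encoded [0, 1].
    # Linear segment near black, ~2.4-power law elsewhere.
    c = np.clip(linear, 0.0, 1.0)
    return np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * np.power(c, 1.0 / 2.4) - 0.055)
```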