
DSLR Processing - The Missing Matrix

86 replies to this topic

#1 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 7,246
• Joined: 19 Feb 2013
• Loc: UK

Posted 02 March 2016 - 03:17 PM

Everyone knows how to process their DSLR images, don't they?

Using traditional techniques you calibrate your raws with darks, flats and bias frames. You demosaic them into colour, then stack them.  You then apply your white balance, followed by some kind of data stretch such as Photoshop curves.  Great, you've finished!  Or have you?

If I apply the same sequence to a normal terrestrial image this is an example of what I get:

However the JPG that came straight out of the camera looks like this, which is a far more lifelike result:

If I process the RAW using Photoshop (with Adobe Camera Raw) then I get something almost identical to the in-camera JPG.

So why has the usual astro-processing sequence left a dull lifeless result?

The answer is a missing matrix!

Such a matrix can often be found at DXOMark - e.g. for the Canon 600D:  http://www.dxomark.c...s#measuretabs-7

So for the 600D the colour matrix is:

2.12  -1.28  0.16

-0.24  1.63  -0.38

0.04  -0.69  1.65

This must be applied to the white balanced data to transform the image into something that will render correctly on your display device.

Applying this matrix to the above lifeless image gives me this:

If you're not familiar with matrix maths then it works like this.  If a pixel in your white balanced image has values r,g,b then they must be transformed to RGB as follows:

R = 1.879574 * r   -  1.03263 * g  + 0.153055 * b

G =  -0.21962 * r  + 1.715146 * g -  0.49553 * b

B = 0.006956 * r   -  0.51487 * g  + 1.507914 * b

Those numbers are for the Canon 600D, so you need to substitute the numbers given for your own camera.  Also, this transformation should be done before you apply your stretch.  I manually applied a 1.6 gamma stretch to the images above.  The ACR stretch is slightly different.
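For anyone who prefers code to equations, the per-pixel transform above can be sketched in a few lines of numpy (illustrative only - the function name and array shapes are my own):

```python
import numpy as np

# Colour matrix for the Canon 600D (substitute the values for your own camera)
M = np.array([
    [ 1.879574, -1.03263,   0.153055],
    [-0.21962,   1.715146, -0.49553 ],
    [ 0.006956, -0.51487,   1.507914],
])

def apply_colour_matrix(img, matrix):
    """Apply a 3x3 colour matrix to an H x W x 3 linear, white-balanced image.
    Each output channel is a linear combination of the input r, g, b values."""
    # einsum multiplies every pixel's (r, g, b) vector by the matrix
    return np.einsum('ij,hwj->hwi', matrix, img)

# Because each row of M sums to ~1.0, white (1, 1, 1) stays white:
white = np.ones((1, 1, 3))
print(apply_colour_matrix(white, M))  # ~[[[1. 1. 1.]]]
```

Note the transform is applied to linear (unstretched) data, as described above.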

ACR is applying a very similar matrix automatically when it converts the image from a raw file.

The only problem I've hit so far is that this matrix transformation doesn't seem to work well for H-alpha modified cameras - the red becomes too overpowering.

Enjoy!

Mark

Edited by sharkmelley, 02 March 2016 - 03:26 PM.

• Jim Waters, Charlespl, spacemech and 1 other like this

#2 bobzeq25

bobzeq25

ISS

• Posts: 34,132
• Joined: 27 Oct 2014

Posted 03 March 2016 - 04:18 AM

An alternative is to not use white balance at all, and adjust color in your processing program, since you have the RAW data in separate rgb channels.  I find that different targets, different exposures, different stretches, require different color balances.  For me one size does not fit all.

There are other methods, such as basing color on g2v stars = white.  Getting there still requires different settings for different images processed differently.

Edited by bobzeq25, 03 March 2016 - 04:21 AM.

• Lightning and Jon Rista like this

#3 Jerry Lodriguss

Jerry Lodriguss

Voyager 1

• Posts: 7,798
• Joined: 19 Jul 2008
• Loc: Voorhees, NJ

Posted 03 March 2016 - 01:56 PM

Everyone knows how to process their DSLR images, don't they?

ACR is applying a very similar matrix automatically when it converts the image from a raw file.

Hi Mark,

If you shoot a custom white balance with a modified camera, it will get you as close as you are going to get, provided your processing program honors the white balance information, such as with the "as shot" setting in ACR.

But there are more adjustments than a single transform being applied to the JPEG, depending on a bunch of settings in the camera such as picture style, contrast, saturation, etc.

It also depends on the white point and the color space you want to convert the raw data into.

Matching the JPEG has a lot of variables.

ACR lets you muck about changing a bunch of variables that will change the color also.

Jerry

• Jon Rista likes this

#4 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 03 March 2016 - 02:14 PM

An alternative is to not use white balance at all, and adjust color in your processing program, since you have the RAW data in separate rgb channels.  I find that different targets, different exposures, different stretches, require different color balances.  For me one size does not fit all.

There are other methods, such as basing color on g2v stars = white.  Getting there still requires different settings for different images processed differently.

I have been using G2V calibration with eXcalibrator and PI's ColorCalibration tool lately myself. It took some time to figure out the right settings for the NOMAD color compensation settings in eXcal, but now that I have, I like the results. This image demonstrates pretty accurate color with G2V calibration, and it doesn't lack for saturation:

• jimsmith, Lightning, sharkmelley and 3 others like this

#5 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 7,246
• Joined: 19 Feb 2013
• Loc: UK

Posted 04 March 2016 - 01:09 AM

That's a great Seagull Jon, the best I've ever seen.

When we talk about performing a G2V calibration, all we are doing is scaling the R, G, B channels to give the correct white balance.  But what I'm talking about is the possibility of performing a colour calibrated workflow without using the traditional tools such as Adobe Camera Raw.
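To make the distinction concrete, here is a small numpy sketch (with made-up pixel and matrix values) contrasting a pure per-channel scaling, which is all a G2V white balance does, with a full colour matrix that also mixes the channels:

```python
import numpy as np

rgb = np.array([0.40, 0.55, 0.35])   # one pixel from the camera (made-up values)

# G2V-style white balance: a pure per-channel scaling (a diagonal matrix)
wb = np.diag([1.9, 1.0, 1.4])
print(wb @ rgb)   # each channel scaled independently: [0.76 0.55 0.49]

# A colour matrix mixes the channels as well as scaling them
M = np.array([[ 1.88, -1.03,  0.15],
              [-0.22,  1.72, -0.50],
              [ 0.01, -0.51,  1.51]])
print(M @ rgb)    # each output channel is a blend of all three inputs
```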

The problem is that the data in the RGB channels coming from the DSLR camera are not pure. See this response chart for my modded A7S (taken using white cloud as a reference, hence the valleys corresponding to the Fraunhofer lines):

What this tells us is that our red channel contains some green and blue, our green channel contains some red and blue, etc.  This effect is quite unlike the RGB filters typically used for data acquisition on a mono astro-CCD. The result is that displaying camera RGB image data on a screen produces a warping of the colours even after the white balance is set correctly.  The purpose of the colour transformation matrix (unique to each camera) is to better match the camera colours to those seen on a standard display device.  So it maps the camera's unique colour space to sRGB or CIE XYZ.  Jim Kasson describes the kind of process employed to generate such a matrix in his recent blogs:

http://blog.kasson.com/?p=12486

http://blog.kasson.com/?p=12489

If I plot the effects of the 600D colour matrix (ColorMatrix2 taken from the header of a 600D Adobe digital negative file) I see the following, plotted in standard CIE xy space:

The effect is certainly to increase the saturation of the colours but it also applies some twisting to the hues.  It does this to better match the colours in a standard colour chart.

Ultimately, this may not have a lot of use in astro image processing but I've always been curious to know what was going on.  It has always bugged me that when I process a photo of the children walking the dog in the woods using my traditional astro processing sequence, the colours never come out right.  Now I know why.   Conversely, I knew that the colours of the non-G2V stars in my processed astro-images must also be "wrong".

It's probably something that most of you knew all along - I admit I'm often a bit slow on the uptake. But now I've found the "missing matrix" I'm a lot happier.   It wasn't missing at all - it's just that I wasn't aware of its existence!

So my next step is to buy a colour chart and attempt to generate a compromise colour transformation matrix for my modified camera.  This must be a similar process to what Nikon had to do with their D810a camera, to make terrestrial daylight photos look reasonable.

Mark

Edited by sharkmelley, 04 March 2016 - 01:24 AM.

• Jon Rista likes this

#6 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 04 March 2016 - 02:53 AM

First, thank you for the compliment. Very much!

Mark, I would be interested in seeing how applying the necessary matrix to a DSLR integration affects the subsequent workflow.

Something I have noticed with both G2V calibration as well as standard PI ColorCalibration is the resulting images usually contain a greenish color cast. With the PI ColorCalibration routine, there is a true greenish cast, almost like greenish haze over the entire image. With G2V (which also uses ColorCalibration, but white point is in manual mode), there is also a greenish cast, but it presents more as a green noise in the data than a haze or fog over the entire image.

This is normally corrected with SCNR (Subtractive Chromatic Noise Reduction). I used it in my Seagull image above, which nuked the green noise and left me with a pretty even distribution of color noise throughout all three channels. The thing about SCNR is...it is kind of a guesswork tool. I have to experiment with the amount to get what I FEEL is the right amount of green noise reduction. Sometimes it is not so easy to get it right, and particularly with the color cast of standard PI CC, it often doesn't really do enough.
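For those unfamiliar with SCNR, the average-neutral variant is commonly described as clamping green toward the mean of red and blue, blended by an amount parameter. A rough numpy sketch of that idea (illustrative only - not PixInsight's actual implementation):

```python
import numpy as np

def scnr_average_neutral(img, amount=1.0):
    """Rough sketch of average-neutral green SCNR (illustrative, not
    PixInsight's actual code): clamp green toward the mean of red and
    blue, blended by `amount` (0 = no change, 1 = full clamp)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    limit = np.minimum(g, (r + b) / 2)          # neutral ceiling for green
    g_new = g * (1 - amount) + limit * amount   # blend by amount
    out = img.copy()
    out[..., 1] = g_new
    return out

# A green-cast pixel: green exceeds the red/blue average
pixel = np.array([[[0.30, 0.60, 0.40]]])
print(scnr_average_neutral(pixel, amount=1.0))  # green clamped to 0.35
```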

I wonder if all of that could be avoided by applying this matrix transformation early on in the process. Say just after integration, before any other processing, including before color calibration? I ask because one of the things I usually do with my data, the first step after integrating, is to run Linear Fit on my data. I do this for two reasons. First because my data usually has a significant color cast, from either airglow at my dark site, light pollution in my back yard, or the use of an LP filter which totally skews things across the whole rainbow (I've had deep red, yellow, orange, bright green, deep blue and purple images before linear fit O_o.) Fitting entirely eliminates the cast and gives me more neutral and workable data. Second, because fitting the data in each channel to each other gives me something expected and deterministic to work with. My signals all generally look the same in each image after linear fit, and the signals all generally behave the same after it as well. This makes it a lot easier to apply the same general workflow to each image with only minor tweaks to accommodate the unique traits of each integration.

I wonder if your matrix transformation would be an alternative way to start? I am not sure if it would be as deterministic...and, I am also not sure if it would really be capable of dealing with the strong color casts that I usually deal with. Maybe it would be, though? I think a G2V routine could still be used, which should rebalance the color, but at that point the non-pure mixing of colors due to the design of the sensor CFA would have been corrected...and maybe that would result in a more accurate G2V calibration? I did have to tweak the NOMAD correction factors for a while across many images to get it to generate results that I felt looked more accurate than PixInsight's with CC. I wonder if I could stick with the defaults if this matrix was applied to the integration.

Which brings me to another question...does integrating the data mess with the matrix transformation? Does it have to be run on the individual calibrated subs before registration and integration?

#7 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 7,246
• Joined: 19 Feb 2013
• Loc: UK

Posted 04 March 2016 - 02:08 PM

Lots of questions I don't know the answers to.

The one thing I do know is that the matrix transformation does not mess with the data.  It creates new R,G,B channels by combining the old R,G,B channels in a completely linear manner.  So this transformation can be done any time after demosaicing the data but before any non-linear stretching.  So the transformation could be done to each frame individually before stacking, or done after stacking or done after subtraction of light pollution.
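This linearity is easy to check numerically: because the transform is linear, applying the matrix to each frame and then averaging gives the same result as averaging (stacking) first and applying the matrix afterwards. A small numpy check with random stand-in frames:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((10, 4, 4, 3))   # ten tiny stand-in "subs"

M = np.array([[ 1.88, -1.03,  0.15],
              [-0.22,  1.72, -0.50],
              [ 0.01, -0.51,  1.51]])

def transform(img):
    # Multiply every pixel's (r, g, b) vector by the matrix
    return np.einsum('ij,...j->...i', M, img)

# Matrix-then-stack equals stack-then-matrix, because the transform is linear
a = transform(frames).mean(axis=0)
b = transform(frames.mean(axis=0))
print(np.allclose(a, b))  # True
```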

Mark

#8 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 04 March 2016 - 02:30 PM

So, I've started experimenting with this. I applied your transformation with PixelMath by first splitting the original image into RGB channels. I renamed each Seagull_R/G/B according to the channel. I then ran the PixelMath as so:

This is the resulting image (linear, stretched with STF):

#9 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 04 March 2016 - 02:30 PM

And for reference, this is the original image, stretched with the same STF:

It should be noted that this was done AFTER an initial linear fit and DBE. I am going to load my original integration before any processing, and try the same procedure, and see what I get.

Edited by Jon Rista, 04 March 2016 - 02:32 PM.

#10 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 04 March 2016 - 02:37 PM

The original integration, linked STF, before applying the matrix:

And after applying the matrix, identical STF:

#11 calypsob

calypsob

Cosmos

• Posts: 8,800
• Joined: 20 Apr 2013
• Loc: Virginia

Posted 04 March 2016 - 04:32 PM

An alternative is to not use white balance at all, and adjust color in your processing program, since you have the RAW data in separate rgb channels.  I find that different targets, different exposures, different stretches, require different color balances.  For me one size does not fit all.

There are other methods, such as basing color on g2v stars = white.  Getting there still requires different settings for different images processed differently.

I have been using G2V calibration with eXcalibrator and PI's ColorCalibration tool lately myself. It took some time to figure out the right settings for the NOMAD color compensation settings in eXcal, but now that I have, I like the results. This image demonstrates pretty accurate color with G2V calibration, and it doesn't lack for saturation:

Jon, can G2V be done in PS? I would like to try a more accurate form of calibration.

#12 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 04 March 2016 - 04:39 PM

@Calypsob: Yes, it should be. Search for eXcalibrator, and read the docs. All it really does is give you RGB adjustments that you apply to the white point. I believe it explains how with PS.

@Mark: I have reprocessed my image by extracting the processing container from my original images history (ProcessContainer is one of the more awesome features of PI! ;P) These are the steps reapplied:

This is the result on the image with the matrix transformation:

Personally, I feel the color is much too saturated now, and the data is more red-shifted. I did not re-run the G2V calibration on the data after running the matrix, so my guess is the color calibration is probably off now. However, it seems quite clear that applying the matrix is certainly one way of boosting saturation, beyond what I've ever been able to achieve in the past with PI.

Edited by Jon Rista, 04 March 2016 - 04:40 PM.

#13 entilza

entilza

Soyuz

• Posts: 3,826
• Joined: 06 Oct 2014

Posted 04 March 2016 - 05:44 PM

Jon, weren't those matrix numbers for a 600D?

Mark, where does the 1.879574 number come from? I see the 2.12 from the dxomark website.

Thanks.

#14 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 04 March 2016 - 06:30 PM

Martin, I am reprocessing with the 5D III matrix, however that enhances red even more (which may be why Mark's version reduces the red coefficient to 1.879574). It will take a while to re-run the processing container, but I'll share the final result when it's done.

I am also going to tweak the processing container contents to remove a lot of the saturation steps. With the matrix, the color is clearly getting plenty saturated, and those steps, or most of them, should now either be unnecessary, or should be run at a much lower power.

#15 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 04 March 2016 - 06:51 PM

Reprocessed with the 5D III matrix from DXO. This is with the same as the original processing:

This has a lot more color noise than any previous version. And this is without most of the saturation boosting steps:

The blues of my original process have shifted more blue-green.

Note, this is still with the original G2V. I do want to try one more time by redoing the G2V after applying the matrix.

Edited by Jon Rista, 04 March 2016 - 06:52 PM.

#16 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 7,246
• Joined: 19 Feb 2013
• Loc: UK

Posted 05 March 2016 - 02:11 AM

Jon, weren't those matrix numbers for a 600D?

Mark, where does the 1.879574 number come from? I see the 2.12 from the dxomark website.

Thanks.

Good point!  I didn't explain this in the original post.  For the 600D, DXOMark (http://www.dxomark.c...s#measuretabs-7) gives the matrix for the D50 illuminant as:

2.12  -1.28  0.16

-0.24  1.63  -0.38

0.04  -0.69  1.65

Meanwhile for the 600D another matrix (ColorMatrix2) for the D65 illuminant appears in the Adobe digital negative file:

0.6461 -0.0907 -0.0882

-0.4300 1.2184 0.2378

-0.0819 0.1944 0.5931

DCRaw has the same Adobe matrix hardcoded in its source code (each figure needs dividing by 10000):

{ "Canon EOS 600D", 0, 0x3510,
{ 6461,-907,-882,-4300,12184,2378,-819,1944,5931 } },

The Adobe DNG specification can be found here:

The spec explains that ColorMatrix2 is the matrix that goes from XYZ colour space (Google it if you want to know more about XYZ) to the Camera's RGB.

There is another standard matrix that goes from RGB to XYZ for the D65 illuminant (see for instance http://www.easyrgb.c...ATH&H=02#text2)

This matrix is:

0.4124 0.3576 0.1805

0.2126 0.7152 0.0722

0.0193 0.1192 0.9505

Multiplying the first matrix by the second (in that order, i.e. the first matrix pre-multiplies the second) gives a matrix that goes from standard RGB to the Camera's RGB for the 600D:

0.245467   0.155663   0.026238

0.086289   0.745977   0.236382

0.01900    0.180445    0.562994

So to go from the 600D CameraRGB to standard RGB we want the inverse of this matrix. This will be applied to our white balanced data, so we need to make sure that the matrix will not change the colour of white. This is done by scaling each row of the above matrix so that each row sums to 1.0:

0.574368   0.364237   0.061395

0.080746   0.698056   0.221197

0.024921   0.236668   0.738411

Now we can invert the matrix giving us:

1.879574  -1.03263   0.153055

-0.21962   1.715146  -0.49553

0.006956  -0.51487   1.507914
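The whole derivation above can be reproduced in a few lines of numpy (matrix values taken from the post; the result should match the inverse quoted above to within rounding):

```python
import numpy as np

# Adobe ColorMatrix2 for the 600D: XYZ -> camera RGB (D65 illuminant)
xyz_to_cam = np.array([[ 0.6461, -0.0907, -0.0882],
                       [-0.4300,  1.2184,  0.2378],
                       [-0.0819,  0.1944,  0.5931]])

# Standard sRGB -> XYZ matrix for the D65 illuminant
srgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# Chain them: sRGB -> camera RGB
srgb_to_cam = xyz_to_cam @ srgb_to_xyz

# Normalise each row to sum to 1.0 so white maps to white
srgb_to_cam /= srgb_to_cam.sum(axis=1, keepdims=True)

# Invert to get camera RGB -> sRGB: the "missing matrix"
cam_to_srgb = np.linalg.inv(srgb_to_cam)
print(np.round(cam_to_srgb, 6))
```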

So now we have two similar matrices that can be used:

The DXOMark for the D50 illuminant:

2.12  -1.28  0.16

-0.24  1.63  -0.38

0.04  -0.69  1.65

The Adobe matrix for the D65 illuminant:

1.879574  -1.03263   0.153055

-0.21962   1.715146  -0.49553

0.006956  -0.51487   1.507914

Part of the difference is the different illuminants but I also think that both DXOMark and Adobe may have created these matrices using their own proprietary camera testing methods. If so, then both will be slightly different to the matrix that Canon uses in its camera and Canon's DPP uses in its raw converter.

You can also find a DXOMark matrix for the CIE-A illuminant on their website and you can find ColorMatrix1 for the Standard-A illuminant in the Adobe DNG file.

Mark

Edited by sharkmelley, 05 March 2016 - 02:13 AM.

• Jon Rista, RareBird and Der_Pit like this

#17 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 7,246
• Joined: 19 Feb 2013
• Loc: UK

Posted 05 March 2016 - 02:26 AM

Reprocessed with the 5D III matrix from DXO. This is with the same as the original processing:

...

This has a lot more color noise than any previous version. And this is without most of the saturation boosting steps:

...

The blues of my original process have shifted more blue-green.

Note, this is still with the original G2V. I do want to try one more time by redoing the G2V after applying the matrix.

Very interesting results Jon!

I found a similar saturation problem when I applied the matrix to some of my own data.  It ended up far too saturated for my usual astro workflow.  You are also right that it increases the colour noise.   The hue shift is designed into the matrix - it is intentional, in order to get a better match to reference colour charts.

I reckon it might be possible to "scale down" the matrix in some way so it doesn't boost the saturation so much and would therefore slot much better into the usual astro workflow.

BTW is your camera modified or unmodified?

Mark

Edited by sharkmelley, 05 March 2016 - 02:30 AM.

#18 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 05 March 2016 - 02:54 AM

My camera is unmodified. Which is why I am a little surprised at how powerful the reds become after applying the matrix. It's almost like I am looking at modded or CCD data...the reds are so powerful. It even brought out another little circular blob of Ha underneath the blue of the upper right reflection, which I didn't even see at all in my original process.

I still have not yet tried redoing the G2V calibration after applying the matrix. I suspect doing that would probably mitigate the red power a bit. I'll try that tomorrow...tonight, I am imaging for the first time in two months...and this may be my only opportunity for a while. And, my first opportunity with the AT8RC in a very long time, and really my first true use of it as previously, I had serious vignetting problems from my previous OAG.

#19 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 7,246
• Joined: 19 Feb 2013
• Loc: UK

Posted 05 March 2016 - 03:03 AM

My camera is unmodified. Which is why I am a little surprised at how powerful the reds become after applying the matrix. It's almost like I am looking at modded or CCD data...the reds are so powerful. It even brought out another little circular blob of Ha underneath the blue of the upper right reflection, which I didn't even see at all in my original process.

I still have not yet tried redoing the G2V calibration after applying the matrix. I suspect doing that would probably mitigate the red power a bit. I'll try that tomorrow...tonight, I am imaging for the first time in two months...and this may be my only opportunity for a while. And, my first opportunity with the AT8RC in a very long time, and really my first true use of it as previously, I had serious vignetting problems from my previous OAG.

Very interesting that you are getting those reds from an unmodified camera, though it probably won't surprise those who use the Tony Hallas or Roger Clark processing methods.

Good luck with tonight's imaging

I've found some further useful technical information here for those who are maybe still struggling with the meaning of this matrix:

http://www.dxomark.c...lor-sensitivity

Edited by sharkmelley, 05 March 2016 - 03:24 AM.

#20 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 05 March 2016 - 03:29 AM

I do image at a pretty dark site most of the time. With the RC, I am in my back yard again, but the Seagull image was from my dark site. There is not much LP out there, on a good night very little indeed. I also get pretty long subs, 7-12 minutes these days. So, every sub is more richly exposed, and I probably get more Ha than most people do with unmodded DSLRs.

My images tonight are unfiltered in 18.5 mag/arcsec² skies...solid red zone. They will probably be FAR less colorful, even if/when I am able to gather about 10 hours or more of data on each.

#21 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 15 March 2016 - 04:02 PM

Some more testing. I finally had ONE clear night a couple nights ago, and imaged three targets. The first is one I've been trying to image for a while, and one I think may still need additional time: the NGC2264 region, Cone, Fox Fur, a couple of clusters, and a whole bunch of interesting stuff in the field. I like this image as a test because the whole region has Ha in the background, and that kind of fainter background Ha is more difficult for me to bring out with regular PI processing. I did my standard preliminary calibration and normalization processing in PI, then copied the image and applied Mark's original matrix (it's not exactly right for my 5D III, but I prefer the way it handles the reds).

Original:

After Matrix:

There has been no other processing on these images outside of: Linear Fit, DBE, BN, CC, and a final pass of TGV.

#22 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 15 March 2016 - 05:50 PM

Fully processed images:

With Matrix (fewer color enhancements post-stretch):

• sharkmelley likes this

#23 sharkmelley

sharkmelley

Fly Me to the Moon

• topic starter
• Posts: 7,246
• Joined: 19 Feb 2013
• Loc: UK

Posted 16 March 2016 - 01:08 AM

Interesting results Jon.  There's not much visible difference between the two - with and without matrix.  It seems to confirm that the main effect of the matrix is an increased saturation of colours.  The hue shifts don't seem to be making a great deal of difference, at least not in this astro-image.

Mark

#24 Jon Rista

Jon Rista

ISS

• Posts: 26,019
• Joined: 10 Jan 2014

Posted 16 March 2016 - 01:14 AM

Oh, I'd say it's more than that. The only thing I used Curves for on the matrix version was a slight contrast boost. Outside of that, the color was 100% from the matrix transform. I did fairly extensive color processing at multiple stages, including carefully tuned SCNR to nuke the green cast, with the traditional version.

I think the biggest benefit from applying the matrix is that it moves blue and red out of green, and green out of blue and red. That basically fixes the green color cast, saturates the color, and it also seems to improve contrast a little bit. Without the matrix, I have to do all of those things manually.

The downside is that it does seem to increase noise, particularly color noise, quite a bit. So instead of color processing, I instead had to do more extensive noise reduction with the matrix version. Even with additional passes and extra NR steps, the matrix version is still noisier. I may be able to resolve that with improved NR techniques.

There is also a slight difference in color between the two. The matrix version is more magenta/purple, while the traditional version is slightly redder. Personally, I still prefer the traditional version...I feel more varied color nuances were brought out with it, and I like that.

#25 wasyoungonce

wasyoungonce

Viking 1

• Posts: 544
• Joined: 07 Jun 2007
• Loc: Land Downunder

Posted 17 March 2016 - 04:39 AM

Look, can I ask a stupid question?

I've noticed my JPGs can be enhanced better than my Canon CR2s (Canon 450D) - I have seen this...way too much.  But where can I apply these RGB offsets?  In DSS? Photoshop?  Sorry for the ignorance!  I'm struggling with processing!

Brendan
