
DSLR Processing - The Missing Matrix


#51 KLWalsh

KLWalsh

    Vanguard

  • *****
  • Posts: 2,010
  • Joined: 19 Feb 2014
  • Loc: North Georgia, USA

Posted 20 September 2017 - 09:55 AM

t_image,
My main point is that, at the end of the day, an astronomical image made after a long exposure with an optic system that differs greatly from the human eye will never satisfy the CIE criteria, so using extreme measures to get 'the right' color can be a fascinating exercise in math and color theory, but cannot be considered 'correct' in the CIE scheme.
Having said that, yes you CAN get something approximating what the eye might see if some neutral magnifying aid were used. 'Neutral' being the key. Because to achieve 'neutral', the optical system must be calibrated.
When a spectroradiometer is calibrated, it is first calibrated with a wavelength standard like a Hg-Ar lamp, then it is calibrated for spectral intensity using a NIST-traceable spectral luminance standard like a 'Source A' halogen lamp. If a telescope + camera system could be calibrated in this manner, then the photos taken would represent a 'neutral magnifying' aid.

Clear skies!

#52 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 2,933
  • Joined: 09 Aug 2016
  • Loc: Ellensburg, WA, USA

Posted 20 September 2017 - 12:21 PM

t_image,
My main point is that, at the end of the day, an astronomical image made after a long exposure with an optic system that differs greatly from the human eye will never satisfy the CIE criteria, so using extreme measures to get 'the right' color can be a fascinating exercise in math and color theory, but cannot be considered 'correct' in the CIE scheme.
Having said that, yes you CAN get something approximating what the eye might see if some neutral magnifying aid were used. 'Neutral' being the key. Because to achieve 'neutral', the optical system must be calibrated.
When a spectroradiometer is calibrated, it is first calibrated with a wavelength standard like a Hg-Ar lamp, then it is calibrated for spectral intensity using a NIST-traceable spectral luminance standard like a 'Source A' halogen lamp. If a telescope + camera system could be calibrated in this manner, then the photos taken would represent a 'neutral magnifying' aid.

Clear skies!

I'm pretty sure that the point and shoot camera I have isn't NIST-traceable. I don't agree with this idea that we can't have accurate colors. Especially for things like planets. Those are objects illuminated by sunlight. Like flowers and people. Why do we have this notion that because our camera is on a telescope, this becomes an impossible problem?



#53 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 2,933
  • Joined: 09 Aug 2016
  • Loc: Ellensburg, WA, USA

Posted 22 September 2017 - 02:12 AM

Here is the flow for this test image, using IRIS:

 

120 second exposure, ISO3200, f/22, moonlight.

in IRIS:

1) digital photo > decode RAW file

2) Subtract 2048 (which is basically the bias level)

3) RGB Balance red: 2.0 green: 1.0 blue: 2.0

4) subtract my RGB dark master, which was processed the same way as steps 1-3

5) bin 2x

6) find the max in the image and multiply by a constant to bring that to about 16,000

7) save as 48 bit TIFF

8) open in Photoshop and linearly stretch.
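
For reference, here is a minimal numpy sketch of the arithmetic behind those IRIS steps (it is not IRIS itself); the file names, the rawpy/tifffile usage and the simple superpixel debayer standing in for the RAW decode are assumptions for illustration only.

```python
import numpy as np
import rawpy
import tifffile

def superpixel_debayer(cfa):
    """Collapse an RGGB mosaic (assumed pattern) into one RGB pixel per 2x2 block."""
    cfa = cfa[:cfa.shape[0] // 2 * 2, :cfa.shape[1] // 2 * 2]   # trim to even dimensions
    r = cfa[0::2, 0::2]
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2]
    return np.dstack([r, g, b])

raw = rawpy.imread("light.CR2")                        # step 1: decode the RAW file
rgb = superpixel_debayer(raw.raw_image.astype(np.float64))

rgb -= 2048.0                                          # step 2: subtract the bias level
rgb *= np.array([2.0, 1.0, 2.0])                       # step 3: RGB balance R/G/B = 2/1/2

dark = np.load("rgb_dark_master.npy")                  # step 4: dark master processed like steps 1-3
rgb -= dark

h, w, _ = rgb.shape                                    # step 5: bin 2x2
rgb = rgb[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))

rgb *= 16000.0 / rgb.max()                             # step 6: bring the maximum to ~16,000
rgb = np.clip(rgb, 0, 65535).astype(np.uint16)
tifffile.imwrite("test_image_48bit.tif", rgb)          # step 7: save as 48-bit TIFF
# step 8: open in Photoshop and stretch linearly
```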

 

One picture is a short exposure f/2 camera jpg and the other is the test image.

 

The one with two yellow squares whose corners touch is the long-exposure test image. The lime green color is incorrect but the others are pretty close; the color mixer in Photoshop (which is the same as the matrix multiply) could, I think, compensate, but it's okay for the most part and most color values can be equalized with a little more saturation. I think doing this is a good check of one's color process.

color_checker_compare.jpg


Edited by 555aaa, 22 September 2017 - 02:13 AM.


#54 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 22 September 2017 - 01:58 PM

Which DSLR camera are you using?  I think your RGB balance has too much red and not enough blue.

 

If you are really interested in this stuff there are a couple of excellent practical articles here:

http://www.odelama.c...-Photo-by-hand/

http://www.odelama.c...by-hand_Part-2/

 

Your tip about the channel mixer in Photoshop is a good one (Image->Adjustments->Channel Mixer)

 

Mark



#55 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 26 September 2017 - 12:31 AM

I'm making some good progress with this now.  I can apply the colour balance changes caused by the colour matrix and gamma as a single twist to the colour space.  This can be done before the main stretch in the processing sequence or after the main stretch, as long as the main stretch is what I call a "colour preserving" stretch, i.e. one where, for each individual pixel, the proportions of R, G & B are left unchanged by the stretch operation.  This is why I generally use the Arcsinh Stretch.
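
For anyone wondering what a "colour preserving" stretch means in practice, here is a minimal numpy sketch: every pixel's three channels are multiplied by the same factor, so the R:G:B proportions are untouched. The particular reference level and stretch factor below are illustrative assumptions, not necessarily identical to the ArcsinhStretch implementation.

```python
import numpy as np

def arcsinh_stretch(rgb, stretch=100.0):
    """rgb: float array (H, W, 3) of linear data normalised to roughly 0..1."""
    ref = np.maximum(rgb.mean(axis=2, keepdims=True), 1e-12)   # per-pixel reference level
    factor = np.arcsinh(stretch * ref) / (np.arcsinh(stretch) * ref)
    return rgb * factor                                        # same factor on R, G and B
```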

 

The effect of this "colour twist" is pretty subtle in most cases but does have quite an obvious effect on H-alpha regions as in the NGC7000 example below:

 

Before:

NGC7000Before.jpg

 

After:

NGC7000After.jpg

 

When the "colour twist" is performed on a colour chart, again the effect is fairly subtle:

 

Before:

ColourChartBefore.jpg

 

After:

ColourChartAfter.jpg

 

The one example where I'm still finding problems is the image of a spectrum:

 

Before:

SpectrumBefore.jpg

 

After:

SpectrumAfter.jpg

 

The "After" result has a ugly block of solid blue which is not apparent when processing the raw with Adobe Camera Raw and Photoshop.  The spectrum is deliberately a very severe test because every colour on the spectrum is outside the sRGB gamut.  When I apply the "colour twist", I clip to zero any colour component that goes negative.  It is quite possible that professional raw converters take a more sophisticate approach to mapping the out-of-gamut colours.

 

You may notice that the results I show above are less contrasty and less saturated than you might be used to.  This is because in-camera JPG conversion and raw processing in Photoshop typically apply contrast and saturation as standard, which I haven't done with any of the above examples, including the NGC7000.

 

I'm working on a PixInsight script that will apply the "colour twist" as a single processing operation once you know the matrix for your own camera.  I have found it can also be done using Channel Mixing and Gamma operations in Photoshop.  The problem with Photoshop is that I have not found a way to apply a stretch operation similar to Arcsinh Stretch.  PS non-linear stretching operations are not colour preserving.

 

Mark


Edited by sharkmelley, 26 September 2017 - 12:35 AM.

  • t_image likes this

#56 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 26 September 2017 - 02:22 AM

Mark, on my wide gamut screen, I'm seeing more than just an ugly wide block of solid blue. There is variation in that darker blue in the after version of the spectrum. The after spectrum seems to have a wider variety of colors in it as well. It's actually the before spectrum that has a more smooth, "solid" block of lighter blue on my wide gamut screen. The before also seems to be missing the violets, most of the oranges, and the reds seem to mostly be more of a rusty reddish-orange. 



#57 t_image

t_image

    Gemini

  • -----
  • Posts: 3,499
  • Joined: 22 Jul 2015

Posted 27 September 2017 - 10:28 PM

Hey everyone,

So in my further pursuit of advanced color correction,

I put together the linked 24bit PNG (8 bit for R, 8 for G, 8 for B),

of color patches and black-white tones.

If you download them and open them up in PS, using the color picker you should see they range from 0-255 in R,G,B as perfect primaries.

 

If you don't get why this would be helpful, then you may not find it useful.

 

 

It also helps in understanding what we often do to visually find details among similar tones that our eyes can't distinguish (i.e. contrast)....

However, you'd be hard-pressed to try to color correct this so each tone can be distinguished, without having a very wide-gamut and 4000nit HDR monitor with Open EXR or something.....

It shows we often have to make choices if we want to distinguish details...

However, if you "look" at the tones on a wimpy monitor, your eyes may not betray the changes.

But good tools will show you what's up.

It's also a quick way (looking at a waveform monitor) to understand what Lift, Gamma, Gain are all about and how they modify the values.....
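Conventions differ between applications, but one common formulation of lift/gamma/gain (broadly similar to the ASC CDL slope/offset/power model) looks like the sketch below; the exact definitions here are an illustrative assumption, not a standard that any particular app is guaranteed to follow.

```python
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """x: linear values in 0..1.  Lift raises the blacks, gain scales the
    whites, and gamma bends the mid-tones."""
    y = gain * x + lift * (1.0 - x)          # lift affects shadows most, gain highlights
    return np.clip(y, 0.0, 1.0) ** (1.0 / gamma)
```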

Try also different apps' concepts of what brightness/saturation etc. do....

 

Additionally, it's a quick way to see whether color transforms have weaknesses.....

 

Here's the RGB waveform to convey what I mean:
waveform.jpg

 

link:

https://imgur.com/a/e5C4X

 

Enjoy!


Edited by t_image, 27 September 2017 - 10:36 PM.

  • sharkmelley likes this

#58 Mabula

Mabula

    Explorer 1

  • -----
  • Vendors
  • Posts: 69
  • Joined: 21 Mar 2016

Posted 28 September 2017 - 07:48 AM

Hi Mark and everyone else in this topic,

 

I was pointed to this discussion by Rudy Pol, so I hope you don't mind that I jump into this discussion.

 

I am the developer of Astro Pixel Processor or APP.

 

Currently APP doesn't do the color matrix conversion from camera to XYZ to sRGB/Adobe1998. APP just uses the linear sensor data. APP users can apply the camera white balance or not, or apply a 3-parameter white balance of their choice.

 

But... the steps needed to use the color matrix conversion are already implemented in APP. For all supported DSLR cameras the camera-specific matrices are stored in APP. Furthermore, in APP, you can view DSLR images in non-linear sRGB/Adobe1998 colorspaces using the image viewer mode: image. This uses the two 3x3 matrix conversions needed.

 

Based on your testing, how would you recommend implementing this? Just apply the conversion to the data in the final step of loading the image? Or apply the conversion to the integration result?

 

Kind regards,

Mabula


Edited by Mabula, 28 September 2017 - 12:35 PM.


#59 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 28 September 2017 - 06:17 PM

Mark, on my wide gamut screen, I'm seeing more than just an ugly wide block of solid blue. There is variation in that darker blue in the after version of the spectrum. The after spectrum seems to have a wider variety of colors in it as well. It's actually the before spectrum that has a more smooth, "solid" block of lighter blue on my wide gamut screen. The before also seems to be missing the violets, most of the oranges, and the reds seem to mostly be more of a rusty reddish-orange. 

I think I know what's happening here.  The problem is that I'm using an H-alpha modified camera.  Without adjusting the colour balance, an image therefore generally appears too red.  So to make white objects appear white, the red channel has to be scaled down.  But for an image of a spectrum, it means that each wavelength of the spectrum then has too little red.  So the spectral lines no longer appear in their usual colour.

 

So when using a modified camera it's impossible to achieve the usual colour for broadband objects such as stars and diffuse white simultaneously with the usual colour for narrowband objects such as spectral lines.  It might be possible to tweak the colour matrix to give some kind of compromise between the two.  I wonder what Nikon do for their D810A camera?  The D810A is reported to work reasonably well for normal daylight photography except for the most demanding users.  It looks like Adobe Camera Raw supports the D810A so it might be possible to extract a matrix from a DNG file to compare with the D810 matrix.

 

Mark



#60 dayglow

dayglow

    Ranger 4

  • -----
  • Posts: 343
  • Joined: 15 Aug 2013

Posted 29 September 2017 - 11:30 PM

I looked into the DCRAW code, which has the CCM for the D810, but it has no support for the D810A and thus no matrix to compare values against.  The DCRAW code indicates that these matrices come from Adobe DNG Converter, but it does not say whether they are the entire set of cameras supported by the DNG converter.

 

-- David F.



#61 dayglow

dayglow

    Ranger 4

  • -----
  • Posts: 343
  • Joined: 15 Aug 2013

Posted 29 September 2017 - 11:39 PM

DCRAW does support both Canon 20D and 20Da and the matrix values for those two cameras are quite different.

 

from that code:

 

Canon EOS 20D
6599     -537     -891
-8071   15783      242
-1983    2234     7462

 

Canon EOS 20Da
14155   -5065    -1382
-6550   14633     2039
-1623    1824     6561

 

I believe the matrix ordering to be RGB left to right across columns and RGB top to bottom down rows.
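
For anyone who wants to make use of these numbers, one common interpretation (worth verifying against dcraw's own cam_xyz_coeff() code) is that they are XYZ-to-camera coefficients scaled by 10,000, so a camera-to-sRGB matrix is obtained by normalising and inverting. A minimal numpy sketch under that assumption:

```python
import numpy as np

# Canon EOS 20D values quoted above, taken here as XYZ-to-camera * 10000 (assumption)
xyz_to_cam = np.array([[ 6599,  -537,  -891],
                       [-8071, 15783,   242],
                       [-1983,  2234,  7462]], dtype=float) / 10000.0

# Standard sRGB (D65) to XYZ matrix
srgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

srgb_to_cam = xyz_to_cam @ srgb_to_xyz
srgb_to_cam /= srgb_to_cam.sum(axis=1, keepdims=True)   # rows sum to 1: white maps to white
cam_to_srgb = np.linalg.inv(srgb_to_cam)                # apply to white-balanced camera RGB
```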

 

-- David F.


  • t_image likes this

#62 tommy_nawratil

tommy_nawratil

    Sputnik

  • -----
  • Posts: 38
  • Joined: 03 Jul 2010
  • Loc: Austria/Europe

Posted 30 November 2017 - 05:44 PM

hi,

 

I just don't get the point: why do you want to use a color scheme meant for daytime terrestrial photography on deep-sky objects?

In a normal processing workflow, color calibration of some kind is an important step: G2V, by catalogue, or defining white from all the stars present combined.

Why do color adjustments in two different and probably contradictory ways?

We normally don't use the daytime RGB color space parameters in astro image processing, for good reason.

Example: our pictures show H-alpha in strong red, but our eyes have almost no sensitivity to it.

There should be none in our photos if you apply the sensitivity curve.

 

Tommy


Edited by tommy_nawratil, 30 November 2017 - 05:59 PM.


#63 t_image

t_image

    Gemini

  • -----
  • Posts: 3,499
  • Joined: 22 Jul 2015

Posted 01 December 2017 - 02:53 PM

hi,

 

I just don't get the point: why do you want to use a color scheme meant for daytime terrestrial photography on deep-sky objects?

In a normal processing workflow, color calibration of some kind is an important step: G2V, by catalogue, or defining white from all the stars present combined.

Why do color adjustments in two different and probably contradictory ways?

We normally don't use the daytime RGB color space parameters in astro image processing, for good reason.

Example: our pictures show H-alpha in strong red, but our eyes have almost no sensitivity to it.

There should be none in our photos if you apply the sensitivity curve.

 

Tommy

????? ^There seems to be a mess of confusion here. I can't get your point.

Are you asking or telling us?

Your example does not make sense. Do you mean UV light? Do you mean IR light?

H-alpha (656.3nm) is strong red. Have you never peered into a hydrogen-alpha telescope?

Do you mean to tell me the red that people see through such a scope's eyepiece isn't 656.3nm wavelength?

Somehow magically the filter converts the light into some other frequency that our eyeballs can see?

Do you mean the person is really seeing some brighter red at 580nm or so and couldn't possibly be seeing 656.3nm red?

FWIW, an H-alpha scope lets through only a very narrow band centred on H-alpha, not any other shade of red.

Better yet, take an NB H-alpha filter, hold it up to your eye and look towards a light source.

Even better, do it at night towards the Moon. What color do you see?

 

Additionally,

it's a little absurd to argue about how something appears to the eye at night,

when we are doing astrophotography,

since the images represent hours' worth of exposure time.

None of that faint light, rendered in an image,

is something we can see immediately with our eyes... hence a camera.

Celestial objects really do radiate light that passes through differently colored filters and falls on the sensor pixels at particular intensities and in particular combinations.....

It is a valid process to use methods to calibrate the values off the sensor and equate them with corresponding color frequency values......

And a process to objectively quantify them and represent them in an image can be done in more than one way.....

 

Color spaces and the color response of our eye are about signal efficiency, not about whether a color should be represented....The point is to not waste bits and effort rendering out-of-gamut colors.

 

To say there should be no h-alpha in images is absurd,

you might as well say since some colors in the real world are out of gamut of sensor/encoding/display, that we might as well not try to image at all......

 

You should perhaps have addressed the issue of proportionality more precisely,

however that point in AP is just as absurd,

since such must be disregarded due to the limited dynamic range of encoding/displays (for example).

In the same way we "stretch",

and render shadow details in the same image with bright highlights,

in much flatter range than reality,

it is equally valid to render a color that in your "perception" would be low in proportion compared with another (let's say a green comet transiting across a red nebula).....so that both the green of the comet and the red of the H-alpha nebula are perceivable to the viewer of the image......

We do this all the time.....

Try again with your objection.......



#64 NorbertG

NorbertG

    Explorer 1

  • -----
  • Posts: 86
  • Joined: 02 Mar 2015

Posted 02 December 2017 - 04:14 AM

Maybe there is some misunderstanding here. The rods are barely sensitive to H-alpha light, but the cones are. Matching the response of the imaging train to the rods makes no sense at all. Some people might try to get an impression of how the object would look if they were close enough to see it with their eyes, i.e. with the cones. But this "truth" would be very colorless and pale anyway.


Edited by NorbertG, 02 December 2017 - 04:16 AM.


#65 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 02 December 2017 - 05:25 AM

I just don't get the point: why do you want to use a color scheme meant for daytime terrestrial photography on deep-sky objects?

In a normal processing workflow, color calibration of some kind is an important step: G2V, by catalogue, or defining white from all the stars present combined.

 

We're talking about DSLR (or one shot colour) cameras here. 

 

The RGB filters on the sensor are unlike the sharp cut off filters normally used in astronomy. Instead the DSLR filters are designed to have a response curve better matched to human vision or at least to have a good match once a linear transformation has been applied.  Since this is the way a DSLR has been purposefully designed, we might just as well use it in the way it was designed and produce images that in some broad sense match human vision. 

 

However I don't want to use the "Tony Hallas/Roger Clark" Adobe Camera Raw style workflow (which automatically applies this transformation) because I want to apply proper calibration frames.  The transformation would normally be applied after the G2V or catalogue-based white balance.

 

Mark


Edited by sharkmelley, 02 December 2017 - 05:51 AM.

  • poobie likes this

#66 Guest_11558

Guest_11558

    Skylab

  • -----
  • Posts: 4,276
  • Joined: 06 Sep 2017

Posted 06 December 2017 - 02:34 PM

This matrix is quite an old story, invented in the 1960s. That was when color TV came along, and the selection of chemicals that shine in the base colors red, green and blue was limited (rare earths). The goal of color TV was to produce a broadcast signal that could drive the channels in the viewer's TV set directly (to keep the technology simple). The cameras picked up the image with tubes that had a completely different spectral sensitivity. Basically, there is no way to fully restore color using only 3 channels, and once the spectral information is reduced to only 3 signals you can't do much. The little you can do to improve the situation is to create each signal for the TV set from a mix of all 3 camera signals. So the matrix was invented. Most cameras that claim to output a signal according to the old TV standards still use a matrix today. When using cameras for color measurement in industry, it caused me some gray hair to get the signal out before it was multiplied. Little surprise that image processing programs still use it. There are many ways to manipulate color. This is one of them.

Up to this point this has nothing to do with white balance. Using the old TV story again: if the user's TV set lights up its green, blue and red with the same intensity (photometrically speaking, not the same physical radiation energy, as our eyes have their own sensitivity curves), this appears as pure white. Now the question is: which object, under what sort of light in the studio, shall create that white in the TV set? Broadcasters agreed on a certain type of typical studio illumination to be white. Now when broadcasting outside under sunlight, things change color. To make the reporter's white shirt white again, a white balance is needed. This is done only by changing the relation of the three channels, i.e. reducing two of them and keeping the level of the faintest channel. This time there is no mixing between colors. This is still the case today.

Now if you have 5 channels like LRGBHa there is a lot you can mix.... What most of us do here is aesthetic photography, so we all mix to taste and some like it and some don't. One can debate about G2V or the PI color-agnostic approach. One can follow a lot of strategies to create "natural" colors. It is important to see the difference between a method that changes (or corrects or enhances) color and the strategy or goal, like saying "G2V is white". Any goal can be achieved by many different methods, and different philosophies may all create nice or ugly results. Beauty is always in the eye of the beer holder :-)

Nevertheless, a good idea to post that concept here! Thank you, Mark!


  • bobzeq25 and t_image like this

#67 Guest_11558

Guest_11558

    Skylab

  • -----
  • Posts: 4,276
  • Joined: 06 Sep 2017

Posted 11 December 2017 - 02:10 PM

"It is quite possible that professional raw converters take a more sophisticate approach to mapping the out-of-gamut colours."

 

Mark,

 

I ran into a problem of colors being not only out of gamut but outside the CIE xy horseshoe when applying a simple linear color calibration that was generated from a smaller gamut (due to technical limits of the mixing light source). Having the CIE curve (you can download an Excel file from CIE), my approach was to draw a straight line from the out-of-real-world point to the white point and push the color back along that line until it hits the horseshoe. That way I avoid non-existing colors. The second thing I found is that the matrix for RGB->CIE and back may lead to out-of-range values if you have an integer representation. So I always divide by 2 (a bit shift, for performance; I work in real time) when moving from RGB to CIE to avoid that.
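
For illustration only, here is a greatly simplified RGB-space analogue of that idea (the real method described above works in CIE xy against the spectral locus): blend each offending pixel towards a neutral grey of the same brightness until no channel is negative. The Rec.709 luminance weights are an assumption.

```python
import numpy as np

def desaturate_into_gamut(rgb):
    """rgb: float array (..., 3), linear, possibly with negative components."""
    lum = np.maximum(rgb @ np.array([0.2126, 0.7152, 0.0722]), 1e-12)[..., None]
    # smallest blend factor t per pixel such that (1-t)*rgb + t*lum >= 0 everywhere
    need = np.where(rgb < 0, -rgb / np.maximum(lum - rgb, 1e-12), 0.0)
    t = np.clip(need.max(axis=-1, keepdims=True), 0.0, 1.0)
    return (1.0 - t) * rgb + t * lum        # pushed back towards neutral, never past it
```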

A camera's spectral sensitivity is always different from our eyes', but there is one very remarkable point: our eyes' red sensitivity comes back in deep blue, making the border to UV appear deep purple to us. Some cameras have this as well, others do not. This is a major deep-blue issue. Another point that really ruins things is how extraordinarily vivid the combinations #00FFFF, #FF00FF and #FFFF00 are. When I use a rainbow color scale for false-color images (L mapped to blue->green->yellow->red) I never use the pure colors. Using 0x10 instead of 0x00 avoids that shiny vividness that appears like a line in a mathematically continuous spectrum. (Not to speak of monitor calibration.) No easy ground you are moving on...

 

The attachment was taken with my unmodded EOS 600D using a monochromator in my lab. The output is about 6nm wide, in 10nm steps. I placed the sensor right at the output slot without any optics. You must imagine the missing dots. I started at 380nm, but the first wavelength the camera reacts to (5th position in the first row) is 420nm. I went on to 700nm, but the last one you see is 680nm. H-alpha is well included, at about 35% of the maximum sensitivity. No need to mod it, I think. (Of course the monochromator does not produce a constant level. I measured the levels with a calibrated spectrometer afterwards and did some math.)

Attached Thumbnails

  • 20170727.jpg

Edited by the Elf, 11 December 2017 - 02:29 PM.


#68 dsochaser

dsochaser

    Messenger

  • -----
  • Posts: 425
  • Joined: 06 Oct 2016

Posted 12 December 2017 - 08:42 AM

Hi Mark, I've been working on this problem with a friend because we use Nebulosity and Pentax and we have had to work out the colour matrix you're talking about.

After reading this whole thread, the only issue where I think you went wrong is the topic of gamma. The gamma is applied at the "output" to compensate for display->eye response. You do not need it in your processing.

Not only that, you do not need to convert to sRGB because that is also based on your output.

You only need to convert to the “connection space”, CIE XYZ

Your image processing will manipulate the data in this colour space, output in sRGB to the screen, and your OS will adjust the sRGB using gamma for the screen.

So in my case, I’ve settled on this equation:

RGBxyz = (CMdxo * WBdxo) * RGBraw

Where CMdxo and WBdxo are the colour matrix and white balance provided on DXOMARK’s camera/sensor testing results.

If you have an un-modded camera, this will give you true colour images that require no further colour balancing, even after stacking and stretching.

If I’m wrong on this I welcome a correction.
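
For what it's worth, here is a minimal numpy sketch of that equation. The matrix and white-balance numbers must come from DxOMark's measurements for the actual camera; the values below are placeholders for illustration, not real data.

```python
import numpy as np

CM_dxo = np.array([[ 1.9, -0.7, -0.2],      # placeholder colour matrix (assumption)
                   [-0.2,  1.6, -0.4],
                   [ 0.0, -0.5,  1.5]])
WB_dxo = np.diag([2.1, 1.0, 1.4])           # placeholder white-balance scales (assumption)

M = CM_dxo @ WB_dxo                         # combined matrix, applied once per pixel

def raw_to_working(rgb_raw):
    """rgb_raw: float array (H, W, 3) of debayered, bias-subtracted linear data."""
    return rgb_raw @ M.T                    # equivalent to M @ pixel for every pixel
```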

Edited by gbeaton, 12 December 2017 - 08:44 AM.


#69 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 12 December 2017 - 11:26 AM

After reading this whole thread, the only issue where I think you went wrong is the topic of gamma. The gamma is applied at the "output" to compensate for display->eye response. You do not need it in your processing.

Not only that, you do not need to convert to sRGB because that is also based on your output.

You only need to convert to the “connection space”, CIE XYZ

Your image processing will manipulate the data in this colour space, output in sRGB to the screen, and your OS will adjust the sRGB using gamma for the screen.

So in my case, I’ve settled on this equation:

RGBxyz = (CMdxo * WBdxo) * RGBraw

Where CMdxo and WBdxo are the colour matrix and white balance provided on DXOMARK’s camera/sensor testing results.

 

What an interesting idea - I've never thought of that.  I'll give it a try.  I've never worked in alternate colour spaces.

 

If I understand you correctly, what you're saying is that I just need to set my working colour space to CIE RGB then I can open the raw data, debayer, subtract the bias and apply the WB and CM.  The image will be correct in CIE RGB and it will appear on the monitor just like a normal photo?

 

Then how do I save it as an sRGB jpg so I can share the image with other people?

 

Mark


Edited by sharkmelley, 12 December 2017 - 11:28 AM.


#70 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 12 December 2017 - 02:12 PM

 

After reading this whole thread, the only issue where I think you went wrong is the topic of gamma. The gamma is applied at the "output" to compensate for display->eye response. You do not need it in your processing.

Not only that, you do not need to convert to sRGB because that is also based on your output.

You only need to convert to the “connection space”, CIE XYZ

Your image processing will manipulate the data in this colour space, output in sRGB to the screen, and your OS will adjust the sRGB using gamma for the screen.

So in my case, I’ve settled on this equation:

RGBxyz = (CMdxo * WBdxo) * RGBraw

Where CMdxo and WBdxo are the colour matrix and white balance provided on DXOMARK’s camera/sensor testing results.

 

What an interesting idea - I've never thought of that.  I'll give it a try.  I've never worked in alternate colour spaces.

 

If I understand you correctly, what you're saying is that I just need to set my working colour space to CIE RGB then I can open the raw data, debayer, subtract the bias and apply the WB and CM.  The image will be correct in CIE RGB and it will appear on the monitor just like a normal photo?

 

Then how do I save it as an sRGB jpg so I can share the image with other people?

 

Mark

 

 

My mistake, you said CIE XYZ not CIE RGB (CIE RGB being a colour space with gamma of 2.2 so it's broadly similar to sRGB.)

 

I think I now understand what you're saying and there are a couple of flaws that I see in your logic.  Firstly the DXO colour matrix is a matrix designed to have sRGB as its destination colour space so it's not the correct one to use for CIE XYZ.  Secondly there is little point working in the CIE XYZ colour space because it won't display correctly until it has been transformed into a colour space that your monitor understands such as sRGB, proPhoto, AdobeRGB etc. - all of which need gamma to be applied.

 

See http://www.brucelind...gSpaceInfo.html

 

Of course, I'm assuming here you don't have CIE XYZ as a colour working space in your processing software. 

 

Mark


Edited by sharkmelley, 12 December 2017 - 02:13 PM.


#71 dsochaser

dsochaser

    Messenger

  • -----
  • Posts: 425
  • Joined: 06 Oct 2016

Posted 12 December 2017 - 07:20 PM

 

 

After reading this whole thread, the only issue where I think you went wrong is the topic of gamma. The gamma is applied at the "output" to compensate for display->eye response. You do not need it in your processing.

Not only that, you do not need to convert to sRGB because that is also based on your output.

You only need to convert to the “connection space”, CIE XYZ

Your image processing will manipulate the data in this colour space, output in sRGB to the screen, and your OS will adjust the sRGB using gamma for the screen.

So in my case, I’ve settled on this equation:

RGBxyz = (CMdxo * WBdxo) * RGBraw

Where CMdxo and WBdxo are the colour matrix and white balance provided on DXOMARK’s camera/sensor testing results.

 

What an interesting idea - I've never thought of that.  I'll give it a try.  I've never worked in alternate colour spaces.

 

If I understand you correctly, what you're saying is that I just need to set my working colour space to CIE RGB then I can open the raw data, debayer, subtract the bias and apply the WB and CM.  The image will be correct in CIE RGB and it will appear on the monitor just like a normal photo?

 

Then how do I save it as an sRGB jpg so I can share the image with other people?

 

Mark

 

 

My mistake, you said CIE XYZ not CIE RGB (CIE RGB being a colour space with gamma of 2.2 so it's broadly similar to sRGB.)

 

I think I now understand what you're saying and there are a couple of flaws that I see in your logic.  Firstly the DXO colour matrix is a matrix designed to have sRGB as its destination colour space so it's not the correct one to use for CIE XYZ.  Secondly there is little point working in the CIE XYZ colour space because it won't display correctly until it has been transformed into a colour space that your monitor understands such as sRGB, proPhoto, AdobeRGB etc. - all of which need gamma to be applied.

 

See http://www.brucelind...gSpaceInfo.html

 

Of course, I'm assuming here you don't have CIE XYZ as a colour working space in your processing software. 

 

Mark

 

Pardon me, you understand it well. I used RGB and XYZ loosely. I really meant working space. (XYZ could be thought of as a working space since it can be converted to any other colour space.)

 

But if you read closely, you'll see that DXO is converting from RGBraw to sRGB "primaries", which is not the same thing as sRGB space. The DXO matrices will convert the RGBraw to a "working space" which has the same primaries as sRGB but a far greater gamut than sRGB and no gamma encoding. When the image is converted to JPEG or displayed on your screen, only then is it converted to sRGB space or display gamma applied (http://www.cambridge...-correction.htm).

 

Once in this working space, you can just process it to satisfaction and you get out what you see on the screen.

 

EDIT: By the way, I tested this and got my image in Nebulosity matching what is shown in Adobe Lightroom.


Edited by gbeaton, 12 December 2017 - 07:26 PM.


#72 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 13 December 2017 - 03:14 PM

 

But if you read closely, you'll see that DXO is converting from RGBraw to sRGB "primaries", which is not the same thing as sRGB space. The DXO matrices will convert the RGBraw to a "working space" which has the same primaries as sRGB but a far greater gamut than sRGB and no gamma encoding. When the image is converted to JPEG or displayed on your screen, only then is it converted to sRGB space or display gamma applied (http://www.cambridge...-correction.htm).

 

Once in this working space, you can just process it to satisfaction and you get out what you see on the screen.

 

EDIT: By the way, I tested this and got my image in Nebulosity matching what is shown in Adobe Lightroom.

 

What you say is correct.  If you apply the colour matrix followed by the gamma then you are in the sRGB working space and the image "looks correct" on your screen.  This works fine for terrestrial photos.

 

However, there is a subtlety here.  Both the colour matrix (CM) and the gamma affect the proportions of RGB in each pixel. In order for the colours to look correct on your screen then both operations need to be performed.  This causes a problem for astro-image processing where we typically have very high dynamic ranges.  In theory it is possible to apply the CM and gamma, then stretch the data somehow to make the whole dynamic range visible i.e. faint objects not lost in the shadows.  But such a stretch is very difficult to achieve in practice.  Also this final stretch must be a colour preserving stretch.

 

So what we typically do in astro-image processing is to perform a colour preserving stretch such as the arcsinh stretch.  We now have the whole dynamic range visible but the colours aren't quite correct because the CM hasn't been applied.  But if we apply the CM we also have to apply the gamma, in order to see the correct colours.  But the gamma will wreck the stretching we have carefully done.

 

I see 2 possible solutions if we want to perform an arbitrary stretch to the astro-image but still use the CM:

Either:

1)  Take the linear data and apply the CM and gamma to get the correct colours.  Now strip off the colour and save it.  Go back to the linear data and perform the arbitrary stretch to make the whole dynamic range visible.  Finally combine the stripped off colour with the luminance from the stretched data.

Or:

2)  Take the linear data and apply a colour preserving arbitrary stretch to make the whole dynamic range visible.  Now calculate the effect on the RGB colour ratios of applying the CM and gamma - I  call this a twist of the colour space.  Apply this twist to colour preserved stretched image.
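
A minimal numpy sketch of option 2, under some illustrative assumptions (a simple channel sum as the brightness measure and a 2.2 gamma): derive the R:G:B proportions from the colour-matrixed, gamma-encoded version of the linear data, then re-impose those proportions on the colour-preserving stretched image.

```python
import numpy as np

def colour_twist(stretched, linear, cm, gamma=2.2):
    """stretched: colour-preserving stretched image; linear: the same data before
    stretching; cm: 3x3 colour correction matrix.  All arrays are (H, W, 3)."""
    twisted = np.clip(linear @ cm.T, 0, None) ** (1.0 / gamma)          # CM then gamma
    ratios = twisted / np.maximum(twisted.sum(axis=2, keepdims=True), 1e-12)
    total = stretched.sum(axis=2, keepdims=True)                        # keep stretched brightness
    return total * ratios                                               # re-impose the colour ratios
```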

 

Mark


Edited by sharkmelley, 13 December 2017 - 03:21 PM.


#73 Theo950

Theo950

    Lift Off

  • -----
  • Posts: 1
  • Joined: 28 Feb 2021

Posted 28 February 2021 - 11:06 AM

Hi Mark. I am relatively new to AP, but have been reading with great interest about the debate regarding a standardised colour calibrated workflow when processing (astro-) images (cfr Roger Clark's method, the threads about colour matrix correction on CN, Dpreview, APP,...).

 

I find all those points of view very interesting and must say I would like to attempt developing an accurate colour calibrated workflow. On the other hand, as a newbie this is far from easy, and there is conflicting information to be found online.

 

As far as I understand, applying the colour matrix only makes sense if we apply a gamma-curve afterwards. But this curve messes up the linear data...

 

So what we typically do in astro-image processing is to perform a colour preserving stretch such as the arcsinh stretch.  We now have the whole dynamic range visible but the colours aren't quite correct because the CM hasn't been applied. But if we apply the CM we also have to apply the gamma, in order to see the correct colours.  But the gamma will wreck the stretching we have carefully done.

...

2)  Take the linear data and apply a colour preserving arbitrary stretch to make the whole dynamic range visible.  Now calculate the effect on the RGB colour ratios of applying the CM and gamma - I  call this a twist of the colour space.  Apply this twist to colour preserved stretched image.

So we would like to apply the colour matrix and gamma curve, and afterwards correct the new RGB pixel values by original_luminance/new_luminance (I found this on the APP thread about this topic). After doing this, we can perform an arcsinh stretch.
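
A minimal sketch of that luminance-rescaling step, assuming Rec.709 weights for the per-pixel luminance and a 2.2 gamma (both are illustrative assumptions; the APP thread may define them differently):

```python
import numpy as np

def cm_gamma_luminance_preserving(linear, cm, gamma=2.2):
    """linear: (H, W, 3) linear data; cm: 3x3 colour correction matrix."""
    w = np.array([0.2126, 0.7152, 0.0722])                   # Rec.709 luminance weights (assumption)
    out = np.clip(linear @ cm.T, 0, None) ** (1.0 / gamma)   # colour matrix, then gamma
    scale = (linear @ w) / np.maximum(out @ w, 1e-12)        # original_luminance / new_luminance
    return out * scale[..., None]                            # ready for an arcsinh stretch
```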

 

I'm working on a PixInsight script that will apply the "colour twist" as a single processing operation once you know the matrix for your own camera.  I have found it can also be done using Channel Mixing and Gamma operations in Photoshop. 

I was wondering if this "colour twist" script has been developed yet, and if so, where we could find it. This would be a great help for developing a colour calibrated workflow.

 

If not, I think I've understood the main principles and could try to apply them using the existing processes in PixInsight. However, one question remains: when using the AutoHistogram process, how do you know which gamma-exponent setting to apply to achieve the intended gamma tonal curve?

 

I know there are a lot of questions, and chances are high I haven't understood the whole topic quite well. Sorry for that, just trying to get a better understanding of the whole topic.

Clear Skies!

Theo


Edited by Theo950, 28 February 2021 - 11:10 AM.


#74 sharkmelley

sharkmelley

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 7,369
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 28 February 2021 - 01:01 PM

Hi Mark. I am relatively new to AP, but have been reading with great interest about the debate regarding a standardised colour calibrated workflow when processing (astro-) images (cfr Roger Clark's method, the threads about colour matrix correction on CN, Dpreview, APP,...).

 

I find all those points of view very interesting and must say I would like to attempt developing an accurate colour calibrated workflow. On the other hand, as a newbie this is far from easy, and there is conflicting information to be found online.

My thinking on this has evolved quite considerably since this thread started. 

 

With linear data everything is actually very easy if you are working with a linear profile.  With a linear profile the display device is driven so that the brightness of a pixel on the screen is directly proportional to the data value, and the image therefore appears correct to the eye.  Photoshop and Affinity Photo both use linear profiles in 32bit mode, e.g. sRGB Linear or AdobeRGB Linear.  PixInsight can also be set to use a linear profile in its ColorManagement options (as long as you have an appropriate linear ICC Profile file available).

 

With a linear profile, as soon as you have white balanced the data and applied the colour correction matrix appropriate for the camera and the working colour space then the colours you see on your display device are the correct colour-managed colours.  You can carry on working with the data in the linear domain and the colours still display correctly as long as non-linear stretching is avoided. Colour preserving stretches such as ArcsinhStretch can be used to change the brightness of the data without affecting hue or saturation.

 

The transformation to a standard non-linear ICC Profile (such as sRGB or AdobeRGB) can be done as a final step in the processing and will apply the relevant colour space gamma without changing the appearance of the image in any way.  After this transformation the data are non-linear, of course.
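
For an sRGB output, that final step amounts to applying the standard sRGB transfer curve to the linear data; a minimal sketch is below (the piecewise curve is the published sRGB encoding, the numpy wrapper is mine).

```python
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB encoding of linear values in 0..1 - applied only as the final step."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)
```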

 

My current processing workflow in PixInsight uses an AdobeRGB Linear profile but as I said earlier, 32bit modes in both Photoshop and Affinity Photo can be used instead. Sometime I'll get around to providing detailed instructions for those interested.

 

Mark


Edited by sharkmelley, 28 February 2021 - 02:13 PM.

  • Alen K and galacticinsomnia like this

#75 galacticinsomnia

galacticinsomnia

    Gemini

  • *****
  • Posts: 3,285
  • Joined: 14 Aug 2020
  • Loc: Pacific Northwest - Oregon

Posted 01 March 2021 - 09:40 PM

My thinking on this has evolved quite considerably since this thread started. 

 

With linear data everything is actually very easy if you are working with a linear profile.  With a linear profile the display device is driven so that the brightness of a pixel on the screen is directly proportional to the data value, and the image therefore appears correct to the eye.  Photoshop and Affinity Photo both use linear profiles in 32bit mode, e.g. sRGB Linear or AdobeRGB Linear.  PixInsight can also be set to use a linear profile in its ColorManagement options (as long as you have an appropriate linear ICC Profile file available).

 

With a linear profile, as soon as you have white balanced the data and applied the colour correction matrix appropriate for the camera and the working colour space then the colours you see on your display device are the correct colour-managed colours.  You can carry on working with the data in the linear domain and the colours still display correctly as long as non-linear stretching is avoided. Colour preserving stretches such as ArcsinhStretch can be used to change the brightness of the data without affecting hue or saturation.

 

The transformation to a standard non-linear ICC Profile (such as sRGB or AdobeRGB) can be done as a final step in the processing and will apply the relevant colour space gamma without changing the appearance of the image in any way.  After this transformation the data are non-linear, of course.

 

My current processing workflow in PixInsight uses an AdobeRGB Linear profile but as I said earlier, 32bit modes in both Photoshop and Affinity Photo can be used instead. Sometime I'll get around to providing detailed instructions for those interested.

 

Mark

Definitely interested. 



