DSLR/Mirrorless Consistent Color Processing


#26 vidrazor

vidrazor

    Fly Me to the Moon

  • *****
  • Posts: 6,819
  • Joined: 31 Oct 2017
  • Loc: North Bergen, NJ, USA

Posted 25 April 2025 - 08:48 PM

Instead, in AP you get the dreaded purple star cores from the differing dynamic ranges between the channels. The Siril developers recommended simply desaturating the cores, but the Highlights reconstruction is a RawTherapee function I wish existed in Siril (or that RawTherapee could apply to a TIFF and not just a raw file).

You can recover blown stars in Siril. Go to the Star Processing tab, hit the gear icon to open the Dynamic PSF box, then click on Detect Stars. Once detection finishes, any stars circled in purple are saturated. Click on Desaturate Stars, and Siril will recover the star cores.
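For anyone curious what that kind of operation looks like numerically, here is a rough numpy sketch of the general idea only — this is not Siril's actual implementation, and the clip threshold and blend strength are made-up parameters: where any channel is near saturation, all three channels are pulled toward the pixel's brightest channel so the core trends to white rather than a false colour.

import numpy as np

def desaturate_clipped_cores(rgb, clip=0.999, strength=1.0):
    # Rough sketch, not Siril's algorithm: find pixels with a saturated channel
    # and blend every channel toward that pixel's maximum so the core goes white.
    rgb = rgb.astype(np.float64)
    maxc = rgb.max(axis=-1, keepdims=True)   # brightest channel per pixel
    clipped = maxc >= clip                   # pixels flagged as saturated
    blended = rgb + strength * (maxc - rgb)  # pull R, G and B up toward the max
    return np.where(clipped, blended, rgb)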

Attached Thumbnails

  • desat stars.jpg

  • timaras likes this

#27 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 26 April 2025 - 05:11 AM

Yep, that's the function Adrian showed me.



#28 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 26 April 2025 - 11:21 AM

 

  • The tone response curve of the colour space is an interesting problem.  The end goal is for the brightness of each pixel on the display to be proportional to the brightness of the corresponding point in the scene being imaged.  But the display chain for non-colour-managed images assumes that the image is sRGB, including the sRGB tone response curve (the variable gamma curve) and so the display chain "undoes" this transformation before displaying it on screen.  The net result is that linear data is not displayed correctly because the display chain is stretching the data, making it appear far too dark and contrasty.  The best way to avoid this is use a linear working profile or if the processing software is not colour-managed you can apply the TRC (tone response curve) to the data.  But if you apply a TRC that is variable gamma (like for sRGB) then subsequent colour preserving stretches will actually distort colour because brightening the pixel moves it to a part of the TRC where the gamma is different.  However a colour space such as AdobeRGB has a constant gamma TRC and so subsequent colour preserving stretches are OK.  But the CCM for the AdobeRGB colour space is different to the CCM for the sRGB colour space (since AdobeRGB and sRGB have different colour primaries) and working with AdobeRGB will only display correctly in proper colour managed software.

 

@sharkmelley Let me see if I understand this well. Let's assume we work with PI that supports Color Profiles (Siril would be similar), and that I am at the stage where the data are stacked and still linear (unstretched), and an sRGB monitor.

The default PI working profile is sRGB, I presume with the normal γ=2.2. So even though the image data are linear, PI will treat them as sRGB, reverse the sRGB gamma TRC, then send it to the GPU/display which will apply the monitor's TRC (a similar gamma?) for drawing. So they should appear linear at the end. The displayed image will be dark, but not darker than pure linear, right? Or are you implying that the displayed data are squeezed darker via the inverse TRC?



#29 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 26 April 2025 - 11:30 AM

The default working space of all image processing apps is gamma-compressed sRGB. Even though raw data is linear, when you place linear data in gamma-compressed space, the display chain will gamma correct it, i.e., (RGB)^2.2. Since the RGB values are on the interval [0,1], they get smaller when squared.
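A one-liner makes the darkening concrete (purely illustrative numbers):

import numpy as np

linear = np.array([0.5, 0.2, 0.05])   # honest linear intensities in [0, 1]
displayed = linear ** 2.2             # what the sRGB-assuming display chain effectively shows
print(displayed)                      # ~[0.22, 0.03, 0.001] -> far darker and more contrasty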



#30 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 8,288
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 26 April 2025 - 11:35 AM

@sharkmelley Let me see if I understand this well. Let's assume we work with PI that supports Color Profiles (Siril would be similar), and that I am at the stage where the data are stacked and still linear (unstretched), and an sRGB monitor.

The default PI working profile is sRGB, I presume with the normal γ=2.2. So even though the image data are linear, PI will treat them as sRGB, reverse the sRGB gamma TRC, then send it to the GPU/display which will apply the monitor's TRC (a similar gamma?) for drawing. So they should appear linear at the end. The displayed image will be dark, but not darker than pure linear, right? Or are you implying that the displayed data are squeezed darker via the inverse TRC?

A simpler way to look at it is that in your example you have linear data but you've told the system it is sRGB.  Therefore it will be displayed incorrectly.  That's all you need to know.  Don't worry about which processing steps happen at which points in the display chain because I don't know what the sequence of steps is, either.



#31 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 26 April 2025 - 06:01 PM

The default working space of all image processing apps is gamma-compressed sRGB. Even though raw data is linear, when you place linear data in gamma-compressed space, the display chain will gamma correct it, i.e., (RGB)^2.2. Since the RGB values are on the interval [0,1], they get smaller when squared.

 

A simpler way to look at it is that in your example you have linear data but you've told the system it is sRGB.  Therefore it will be displayed incorrectly.  That's all you need to know.  Don't worry about which processing steps happen at which points in the display chain because I don't know what the sequence of steps is, either.

 

OK this makes sense. Presumably that is why, in a non-color managed application, you need to apply the sRGB TRC manually (eg via pixelmath). But until that manual TRC step is done, the displayed image will be off. I guess in that linear stage maybe it does not matter as much, as the operations we may do (background removal, noise reduction, etc.) do not rely on how it looks on the display?

I presume that's the benefit of setting a linear gamma working color space, the linear image data will display correctly.

Assuming we are working in the default variable-gamma sRGB profile with the linear stacked data (initially), we then start applying (hopefully color-preserving) stretches. So what we see on the screen is always....incorrect?

Let's say we apply histogram transformations to the image data so that the brightness of the image content is satisfactory to our eyes as displayed. What happens then when we save/export as a JPG and assign the sRGB profile? Should the TRC still be manually applied, even though the image as displayed looks fine?

 

 



#32 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 8,288
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 26 April 2025 - 11:09 PM

Let's say we apply histogram transformations to the image data so that the brightness of the image content is satisfactory to our eyes as displayed. What happens then when we save/export as a JPG and assign the sRGB profile? Should the TRC still be manually applied, even though the image as displayed looks fine?
 

Maybe the image will look fine in terms of brightness, but the colours from the original scene will be reproduced incorrectly because the display chain is deliberately applying a gamma transformation to undo the sRGB TRC that it assumes has been applied. You can see this effect with the raw file from a daylight terrestrial image.



#33 martz

martz

    Explorer 1

  • -----
  • Posts: 82
  • Joined: 26 Aug 2020

Posted 27 April 2025 - 04:23 AM

OK this makes sense. Presumably that is why, in a non-color managed application, you need to apply the sRGB TRC manually (eg via pixelmath). But until that manual TRC step is done, the displayed image will be off. I guess in that linear stage maybe it does not matter as much, as the operations we may do (background removal, noise reduction, etc.) do not rely on how it looks on the display?

I presume that's the benefit of setting a linear gamma working color space, the linear image data will display correctly.

Assuming we are working in the default variable-gamma sRGB profile with the linear stacked data (initially), we then start applying (hopefully color-preserving) stretches. So what we see on the screen is always....incorrect?

Let's say we apply histogram transformations to the image data so that the brightness of the image content is satisfactory to our eyes as displayed. What happens then when we save/export as a JPG and assign the sRGB profile? Should the TRC still be manually applied, even though the image as displayed looks fine?

 

 

Incidentally, Siril 1.4.0 Beta 1 was released yesterday.  I again encourage you to review its color management documentation because it will help your understanding of all this.  For instance, its workflow page has the following suggestion regarding what to do once you get to a nonlinear stage:

 

When you're ready to stretch your image, it's time to think about your color space again. Stretching changes the image from linear data to non-linear data so that it looks pleasing to the human eye. You're going to make your data non-linear now, so before stretching is a good time to convert the image to your chosen nonlinear color space, be it sRGB or Rec2020 or another color space of your preference. You can either do it yourself manually[*], or you can set a preference for Siril either to prompt you to convert the image color space to your preferred color space, or to do it automatically.

You can now carry on and finish any post-stretch editing of your image.

 

[* Here "manually" does not mean using pixel math.]  

 

As BQ suggested, you can instead use the square-root preview in Siril once you begin stretching since "square-root preview (gamma = 2) is the closest to simulating a display chain gamma correction of 2.2."  That is what I personally do because I asinh stretch in Siril but then jump into Affinity Photo for layer-based editing and convert to sRGB there as the very last step in the workflow.

 

Note that different editing software may treat your data differently for display purposes, so it is difficult to address your questions with answers that will have universal application.  For instance, see James Ritson's explanation of what Affinity Photo does here.        



#34 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 27 April 2025 - 04:26 AM

OK this makes sense. Presumably that is why, in a non-color managed application, you need to apply the sRGB TRC manually (eg via pixelmath). But until that manual TRC step is done, the displayed image will be off. I guess in that linear stage maybe it does not matter as much, as the operations we may do (background removal, noise reduction, etc.) do not rely on how it looks on the display?

I presume that's the benefit of setting a linear gamma working color space, the linear image data will display correctly.

Assuming we are working in the default variable-gamma sRGB profile with the linear stacked data (initially), we then start applying (hopefully color-preserving) stretches. So what we see on the screen is always....incorrect?

Let's say we apply histogram transformations to the image data so that the brightness of the image content is satisfactory to our eyes as displayed. What happens then when we save/export as a JPG and assign the sRGB profile? Should the TRC still be manually applied, even though the image as displayed looks fine?

 

Yes, setting the color-managed space as linear will indeed allow you to see the mathematical transforms as they were intended. I did a bit of analysis on the MTF function to show the effect of ignoring the display gamma correction here:

 

https://www.cloudyni...mes/?p=12992666

 

And indeed, when ignoring gamma, asinh becomes a saturation-boosting transformation—which I've found many astrophotographers confuse with the definition of color preserving. Asinh on linear data maintains the color proportions or chroma—assuming you don't fiddle with the black point, which will increase the resulting proportions.
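If it helps, here is a minimal numpy sketch of what "maintains the color proportions" means in practice — the stretch factor k is an arbitrary illustration and no black point is touched: one factor is computed per pixel from its intensity and applied identically to R, G and B.

import numpy as np

def asinh_stretch(rgb, k=100.0):
    # Sketch of a proportion-preserving asinh stretch: one scale factor per
    # pixel, derived from its mean intensity, applied to all three channels.
    I = rgb.mean(axis=-1, keepdims=True)
    scale = np.arcsinh(k * I) / (np.arcsinh(k) * np.maximum(I, 1e-12))
    return rgb * scale

pix = np.array([[0.02, 0.01, 0.005]])                      # R:G:B = 4:2:1
out = asinh_stretch(pix)
print(out[0] / out[0, 0])                                  # still 1 : 0.5 : 0.25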

 

In the same thread, Mark showed how data is normally stored in the image file in gamma-compressed sRGB:

 

https://www.cloudyni...mes/?p=12985371

 

But you can store the data in whatever scheme you want, so long as the embedded icc is consistent. Not all extant systems can handle this deviation from the norm, though—so linear data may get displayed as squared data (^2.2, so very dark) on some browsers and image processors.

 

BQ


Edited by BQ Octantis, 27 April 2025 - 06:11 AM.


#35 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 27 April 2025 - 04:59 AM

Incidentally, Siril 1.4.0 Beta 1 was released yesterday.   

 

Requires Mac OS 11.3 or later…

 

EDIT: Has anyone tried OpenCore?

 

BQ


Edited by BQ Octantis, 27 April 2025 - 05:14 AM.


#36 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 27 April 2025 - 03:29 PM

Thanks for that distinction, Frankie! Per your clarification, I think I use a pedestal for various transforms with Colour-Science for Python. Some of the functions don't like zeroes or negative values, so I truncate to 1/65535 instead of 0. That's not a bias…but is that a pedestal?

 

BQ

No, I wouldn't call that a pedestal because it isn't a global adjustment. A pedestal is functionally similar to a camera offset in that a single value is applied to all pixels. However, a pedestal is applied by the image processing software, typically during calibration, rather than by the camera hardware. Pedestals are not commonly needed when processing broadband images. More details here.
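To make the distinction concrete, a toy numpy example of a pedestal during calibration (all values here are made up):

import numpy as np

light = np.array([[100., 101.], [ 99., 300.]])   # hypothetical light frame (ADU)
dark  = np.array([[102., 100.], [101., 102.]])   # hypothetical master dark (ADU)
pedestal = 50.0                                  # one global value added to every pixel by the software
calibrated = light - dark + pedestal             # cannot clip at zero where dark > light
print(calibrated)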



#37 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 27 April 2025 - 04:17 PM

The default working space of all image processing apps is gamma-compressed sRGB. Even though raw data is linear, when you place linear data in gamma-compressed space, the display chain will gamma correct it, i.e., (RGB)^2.2. Since the RGB values are on the interval [0,1], they get smaller when squared.

BQ, I don't really understand what you mean here. In colour managed software, the default working profile is rarely gamma compressed sRGB. For example, the default working profiles for Rawtherapee and darktable are linear ProPhoto and Rec2020 respectively. Non-colour managed software, like Siril v1.2.6, doesn't use colour profiles at all, so data isn't transformed to any particular colour space, and that's the root of the problem—image data is sent to the display without being correctly transformed to the display's colour profile.



#38 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 27 April 2025 - 06:22 PM

Here is a very nice color from the Cloudy Nights text color picker:

 

#cc3399

 

Working out the hex, that's

 

R = 0xCC = 12*16^1 + 12*16^0 = 204

G = 0x33 =  3*16^1 +  3*16^0 = 51

B = 0x99 =  9*16^1 +  9*16^0 = 153
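(The same arithmetic in one line of Python, for anyone who wants to check it:)

r, g, b = (int("cc3399"[i:i + 2], 16) for i in (0, 2, 4))
print(r, g, b)   # 204 51 153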

 

The color picker in Photoshop—a highly color managed imaging app—also says that translates to RGB coordinates of (204,51,153):

 

Screen Shot 2025-04-27 at 7.20.03 PM.png

 

If I save that color to a TIFF file with the default Photoshop settings, RawTherapee (with the Working Profile set to ProPhoto) reports the RGB color as (80%, 20%, 60%). Multiplying those percentages by 255, I get (204,51,153)—and the L*a*b* coordinates also match those from Photoshop:

 

Screen Shot 2025-04-27 at 7.29.20 PM.jpg

 

Siril—this version being non color managed—agrees, too…(204.0, 51.0, 153.0):

 

Screen Shot 2025-04-27 at 7.39.07 PM.jpg

 

In what color space are those fully consistent RGB coordinates?


Edited by BQ Octantis, 27 April 2025 - 07:22 PM.


#39 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 27 April 2025 - 07:41 PM

In what color space are those fully consistent RGB coordinates?

The answer is sRGB because that's the colour space of your colour pickers in Photoshop and RawTherapee. Set your colour picker to the working profile in RawTherapee (with the Working Profile set to ProPhoto) and tell me what values you get.



#40 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 27 April 2025 - 08:02 PM

Set your colour picker to the working profile in RawTherapee (with the Working Profile set to ProPhoto) and tell me what values you get.

That would no longer be default settings, which was my point.



#41 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 28 April 2025 - 07:58 AM

I should point out that the output from the screen to your eyes is linear. I use the term gamma compressed sRGB to simplify what it takes to remove the sRGB bit compression (aka "TRC", tone reproduction curve) from what's passed to the display chain to get to the underlying linear primary (BT.709/sRGB) values of the displayed JPEG-out-of-the-camera you know and love:

 

Accurate but tedious:

 

R_linear = iif(R<=0.04045,R/12.92,((R+0.055)/1.055)^2.4)

G_linear = iif(G<=0.04045,G/12.92,((G+0.055)/1.055)^2.4)

B_linear = iif(B<=0.04045,B/12.92,((B+0.055)/1.055)^2.4)

 

Simple:

 

R_linear = R^2.2

G_linear = G^2.2

B_linear = B^2.2
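For anyone who wants to compare the two numerically, here are the same formulas in Python/numpy rather than the iif() notation above:

import numpy as np

def srgb_to_linear_exact(v):
    # The accurate-but-tedious piecewise sRGB decoding.
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_to_linear_simple(v):
    # The simple gamma-2.2 approximation.
    return np.asarray(v, dtype=np.float64) ** 2.2

v = np.linspace(0.0, 1.0, 11)
print(np.abs(srgb_to_linear_exact(v) - srgb_to_linear_simple(v)).max())   # small but non-zero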

 

The difference between sRGB and a simple gamma compression of 2.2 after the display chain performs a gamma correction of 2.2 is subtle but perceptible. Effectively, the sRGB TRC is an s-curve contrast enhancement to the linear output, with the tipping point at about 12%:

 

gammas.png

 

If you just dump (r_linear,g_linear,b_linear) out of the camera straight into the sRGB-compressed color space (which is what happens when you do nothing but color balance), you get

 

R_linear = r_linear^2.2

G_linear = g_linear^2.2

B_linear = b_linear^2.2

 

I'll leave removing the sRGB TRC as an exercise for the reader.

 

Cheers,

 

BQ


Edited by BQ Octantis, 28 April 2025 - 08:03 AM.


#42 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 28 April 2025 - 10:57 PM

OK I am getting a better hang of this. Let me see if I can summarize, please step up if you see something off (I am skipping the steps of calibration, CCM, white balance, etc.)

  • An image in the standard sRGB color space has a tone response curve (TRC) applied to the (originally) linear light data. This has altered the stored RGB data values of the pixels. It stores the (originally linear) data more efficiently for human eye consumption.

Linear stage

  • When working in a standard sRGB profile, and viewing linear data without any color profile assigned, the software assumes that the RGB data have the TRC of that working profile already baked in (they will be displayed wrong, but probably does not matter as much due to the extreme stretching that is coming up).
     
  • To enable accurate display of the data, one could:

1. Assign a linear profile to the image at this linear stage (Siril recommends that). This will keep the RGB data intact, but the displayed colors will change (as the software now knows it's linear and will not try to undo a TRC curve).
 

2. Work in a linear color space for the imaging software - The software will then treat the untagged linear data as linear.
 

3. Have a square root preview enabled (in a standard sRGB working profile). This almost cancels out the assumption of the software that the data have the sRGB TRC baked into them. 

 

Nonlinear stage

 

The purpose here is to i) apply the nonlinear stretches to visually enhance the image, as well as ii) the nonlinear transform of the (let's say sRGB) color space for exporting. Below are 3 example workflows, along with what I understand happens along the way. 

 

A0. Start with image 1 above, i.e. linear data with an assigned linear profile in a standard sRGB working color space.

A1. Convert (not assign!) the color profile of the image with linear sRGB profile assigned (option 1 above) to standard sRGB. The RGB data will change (sRGB TRC), however the displayed colors will not change (the software knows the image has TRC). 

A2. Stretches are applied (hopefully color preserving!). New RGB pixel values are written to the image, and what is displayed accurately represents the nonlinear image data. The stretches are applied to the TRC-encoded RGB values.

A3. sRGB Export. The final image should not change brightness or color, as it already has the sRGB TRC baked in and the software knew about it.

 

Workflow A above always displays accurately but the stretches are applied on TRC-encoded data (less accurate).


B0. Start with linear data but unassigned color profile (sRGB working color space).

B1. Stretches are applied to the linear data. Displayed image is off, as the imaging software treats the data as TRC encoded.

B2. Manually (pixelmath) apply the sRGB TRC to the image data. This will alter the RGB values and bake in the sRGB TRC. The image appearance will change, but now (finally) the displayed colors will be accurate (the software was probably treating the data the same as its sRGB working space, but now the TRC has actually been applied).

B3. Export (along with an sRGB profile assignment). 

 

Workflow B above applies the stretches in linear data (good) but the software thinks the data were TRC-encoded, and the stretches happen with inaccurate color display.

 

 

C0. Start with linear data in a linear sRGB working space (image could also be assigned a linear color profile)

C1. Stretches applied to the data. Displayed image is accurate.

C2. Convert the image's color profile from linear sRGB to standard sRGB. The pixel values are rewritten to bake in the TRC, but the displayed image does not change.

C3. Export (no change, as the sRGB tag is already applied and the data are TRC encoded).

 

Workflow C seems the best? Stretches happen in linear space, and the display is always accurate.
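As a sanity check of the ordering in Workflow C, here is a minimal numpy sketch (the asinh parameter is arbitrary and file I/O is omitted): the stretch operates on linear values, and the sRGB TRC is only baked in at the conversion step before export.

import numpy as np

def srgb_encode(v):
    # Standard sRGB TRC, applied only at the C2 conversion step.
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1.0 / 2.4) - 0.055)

def workflow_c(linear_rgb, k=50.0):
    I = linear_rgb.mean(axis=-1, keepdims=True)
    scale = np.arcsinh(k * I) / (np.arcsinh(k) * np.maximum(I, 1e-12))
    stretched = linear_rgb * scale      # C1: stretch while the data is still linear
    return srgb_encode(stretched)       # C2: convert to standard sRGB, then export (C3)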

 

 

I'm exhausted. 



#43 martz

martz

    Explorer 1

  • -----
  • Posts: 82
  • Joined: 26 Aug 2020

Posted 29 April 2025 - 04:04 AM

OK I am getting a better hang of this.  I think you are as well; any imprecision I believe is not in your general understanding but in the fact that this topic is difficult to express in terminology that others will perceive as you've meant it.  For instance, when you refer to "software" I would only caution that it depends on which software because the one used may treat data differently in terms of display. 

 

All that remains is to test through practice in your chosen editor, which should have its own documentation on the topic.  

 

I'm exhausted.  I sympathize; we're all learning.


  • timaras and FrankieT like this

#44 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 29 April 2025 - 05:58 AM

That is exhausting.

 

If you want to color manage in Siril (or PI), you should assign a linear color space to the data—which the display chain should convert to your display profile. Everything else is just workarounds—which with Siril 1.4 are unnecessary. The stretch mathematics are only accurate in linear space—and asinh is the only stretch that preserves color proportions.

 

Alternatively, you could treat luminance and chromaticity separately. This is what I do. Once my L is good enough on screen (I work in square-root mode for the much more accurate math, but convert and save to gamma-compressed sGray, i.e., the output image profile), I remove the cctf, apply xy (from the atmosphere-corrected CCM applied to the camera primaries), convert to sRGB, truncate to [0,1], and reapply the cctf. And Bob's your uncle.
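Roughly, in numpy—a sketch of that recombination step, not the exact script, assuming L is the finished luminance with the cctf already removed and (x, y) are the per-pixel chromaticities from the colour-corrected camera data; the XYZ-to-sRGB matrix is the standard one and the re-applied cctf is a plain 2.2 gamma:

import numpy as np

# Standard XYZ -> linear sRGB (D65) matrix.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def recombine(L, x, y):
    # L: stretched luminance with the cctf removed (linear, in [0, 1]);
    # x, y: per-pixel chromaticities from the colour-corrected camera data.
    Y = np.asarray(L, dtype=np.float64)
    ys = np.maximum(y, 1e-12)             # guard against y == 0
    X = x * Y / ys
    Z = (1.0 - x - y) * Y / ys
    XYZ = np.stack([X, Y, Z], axis=-1)
    rgb = XYZ @ XYZ_TO_SRGB.T             # xyY -> XYZ -> linear sRGB
    rgb = np.clip(rgb, 0.0, 1.0)          # truncate to [0, 1]
    return rgb ** (1.0 / 2.2)             # re-apply a simple 2.2 cctf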

 

BQ

 

P.S. I dispensed with the sRGB TRC long ago. The display chain from gamma-compressed sRGB to the photons off the screen is just gamma 2.2. It has no idea how the data was compressed.

 

P.P.S. Note that Starnet++ is designed for 16-bit gamma-compressed sRGB TIFFs.


Edited by BQ Octantis, 29 April 2025 - 06:13 AM.

  • timaras likes this

#45 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 29 April 2025 - 06:00 AM

That would no longer be default settings, which was my point.

For the record, I get (148,70,128) for ProPhoto in RawTherapee, and even that is not correct because the colour picker does not perform a chromatic adaptation between colour spaces. The point is that the colour picker module operates in its own colour space and reports pixel values at the end of the processing pipeline. It is usually set to sRGB by default but it can be changed to any other colour space without affecting the working profile (linear ProPhoto in this case). Take care not to infer details about the working space of an image processing app purely based on the results of the colour picker. For example, the statistics module in your version of Siril reports 204,51,153 simply because those are the values in the input file. They are just integers to Siril—it's not colour managed and knows nothing about colour spaces, so it's meaningless to state that the working space is sRGB.



#46 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 29 April 2025 - 06:12 AM

For the record, I get (148,70,128) for ProPhoto in RawTherapee, and even that is not correct because the colour picker does not perform a chromatic adaptation between colour spaces. The point is that the colour picker module operates in its own colour space and reports pixel values at the end of the processing pipeline. It is usually set to sRGB by default but it can be changed to any other colour space without affecting the working profile (linear ProPhoto in this case). Take care not to infer details about the working space of an image processing app purely based on the results of the colour picker. For example, the statistics module in your version of Siril reports 204,51,153 simply because those are the values in the input file. They are just integers to Siril—it's not colour managed and knows nothing about colour spaces, so it's meaningless to state that the working space is sRGB.

I totally get it. But sometimes you have to be forceful in a statement (hyperbole?) to make a point understood.

 

I would wager >99% of all astroimagers are oblivious to the fact that their workflow is entirely non-color-managed, performed in a [0,1] linear workspace that is dumped straight into the display buffer, which assumes it is sRGB. And they save their images to JPEGs without an icc, which the entire digital Universe then must assume to be sRGB. They then claim that their asinh-stretched image is color preserved and natural or true color.

 

In 2025, what we should be talking about is how to expand the output gamut to take advantage of the enhanced cyans of the now-ubiquitous DCI-P3…and even start migrating the digital Universe to that format. W3C has done this for HTTP…but my Android phone still clamps everything to sRGB.


  • FrankieT likes this

#47 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 29 April 2025 - 05:49 PM

@timaras, 

 

Color management concepts become somewhat easier to understand once you grasp the basic terminology. Color-managed software (CMS) typically relies on four key (.icc) profiles:

  1.     Input profile: Defines the color space of the incoming data.
  2.     Working profile: Specifies the internal color space used by the software’s processing modules.
  3.     Display profile: Represents the color space of the monitor.
  4.     Output profile: Indicates the color space of the data saved to a file.

The basic CMS workflow can be described as follows. When an image is loaded, the software converts the data, as described by the input profile, to the working color space. This working space is usually linear with a wide gamut, such as Rec2020 or ProPhoto, serving as a bridge between various image processing modules, which may or may not share the same working space. For display, the software transforms the working data using the display profile, leaving the original working data unchanged. The monitor then applies an additional transformation before rendering the image. When saving to file, the working data is encoded based on the output profile, and the CMS embeds the corresponding .icc profile in the metadata.
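A rough numpy sketch of those four hand-offs may help (everything here is a stand-in: the TRCs are all sRGB for simplicity, and the primaries matrix a wide-gamut working space would need is omitted):

import numpy as np

def srgb_decode(v):   # inverse of the input profile's TRC
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):   # TRC of the display or output profile
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1.0 / 2.4) - 0.055)

def cms_pipeline(file_data, file_is_linear):
    # 1. Input profile: bring the file's data into the (linear) working space.
    working = file_data if file_is_linear else srgb_decode(file_data)
    # 2. Working profile: processing modules operate on `working`.
    # 3. Display profile: a transformed copy goes to the screen; `working` is untouched.
    for_display = srgb_encode(working)
    # 4. Output profile: encode the saved file and embed the matching .icc profile.
    for_file = srgb_encode(working)
    return for_display, for_file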

 

I've made some comments below based on my basic understanding of the profiles and workflow described above. I hope it helps clear up some confusion.

 

OK I am getting a better hang of this. Let me see if I can summarize, please step up if you see something off (I am skipping the steps of calibration, CCM, white balance, etc.)

  • An image in the standard sRGB color space has a tone response curve (TRC) applied to the (originally) linear light data. This has altered the stored RGB data values of the pixels. It stores the (originally linear) data more efficiently for human eye consumption. 

Linear stage

  • When working in a standard sRGB profile, and viewing linear data without any color profile assigned, the software assumes that the RGB data have the TRC of that working profile already baked in (they will be displayed wrong, but probably does not matter as much due to the extreme stretching that is coming up). When the input file lacks a colour profile, many CMS use sRGB as the default input profile. The CMS then converts the assumed sRGB data to the working color space, which may differ from sRGB. If the working profile is linear, as is common, the CMS applies the inverse of the sRGB TRC during conversion. If the input data is actually linear, the working data will then be non-linear and not display correctly, despite the linear working profile. Darktable and Rawtherapee are examples of colour-managed software that use a linear working profile by default. However, the colour managed version of Siril (v1.4.0 beta) defaults to an sRGB working profile. In this case, no transformation is applied to the linear input data since the input and working profiles are both sRGB. The working data will remain linear, despite the sRGB working profile, so the image will still not display correctly.
  •  
  • To enable accurate display of the data, one could:

1. Assign a linear profile to the image at this linear stage (Siril recommends that). This will keep the RGB data intact, but the displayed colors will change (as the software now knows it's linear and will not try to undo a TRC curve). Yes, this is the rigorous colour-managed approach. Be sure to set the display mode to "Linear" to visualize the image correctly.
 

2. Work in a linear color space for the imaging software - The software will then treat the untagged linear data as linear. No, I don't think this approach will work (see above).
 

3. Have a square root preview enabled (in a standard sRGB working profile). This almost cancels out the assumption of the software that the data have the sRGB TRC baked into them. This workflow should work in Siril provided the default working profile settings have not been changed. However, this approach is sub-optimal, prone to error and defeats the purpose of using a CMS. It's also unnecessary; simply use the rigorous approach recommended in (1).

 

Nonlinear stage

 

The purpose here is to i) apply the nonlinear stretches to visually enhance the image, as well as ii) the nonlinear transform of the (let's say sRGB) color space for exporting. Below are 3 example workflows, along with what I understand happens along the way. 

 

A0. Start with image 1 above, i.e. linear data with an assigned linear profile in a standard sRGB working color space.

A1. Convert (not assign!) the color profile of the image with linear sRGB profile assigned (option 1 above) to standard sRGB. The RGB data will change (sRGB TRC), however the displayed colors will not change (the software knows the image has TRC). Typically, a CMS will prompt the user or automatically convert the linear input data to the working colour space so A0 and A1 are done in a single step.
A2. Stretches are applied (hopefully color preserving!). New RGB pixel values are written to the image, and what is displayed accurately represents the nonlinear image data. The stretches are applied to the TRC-encoded RGB values.

A3. sRGB Export. The final image should not change brightness or color, as it already has the sRGB TRC baked in and the software knew about it.

 

Workflow A above always displays accurately but the stretches are applied on TRC-encoded data (less accurate). Correct, but I wouldn't recommend this approach. A better workflow is to set a linear working profile rather than sRGB. The image will still display correctly, provided of course the appropriate display profile for your monitor is set.

B0. Start with linear data but unassigned color profile (sRGB working color space).

B1. Stretches are applied to the linear data. Displayed image is off, as the imaging software treats the data as TRC encoded.

B2. Manually (pixelmath) apply the sRGB TRC to the image data. This will alter the RGB values and bake in the sRGB TRC. The image appearance will change, but now (finally) the displayed colors will be accurate (the software was probably treating the data the same as its sRGB working space, but now the TRC has actually been applied).

B3. Export (along with an sRGB profile assignment). 

 

Workflow B above applies the stretches in linear data (good) but the software thinks the data were TRC-encoded, and the stretches happen with inaccurate color display. This workflow  works when the CMS uses sRGB as the default input profile and the colour space of your monitor is also sRGB.

 

 

C0. Start with linear data in a linear sRGB working space (image could also be assigned a linear color profile). An appropriate linear colour profile should be assigned to the input file otherwise a conversion from sRGB might take place.

C1. Stretches applied to the data. Displayed image is accurate.

C2. Convert the image's color profile from linear sRGB to standard sRGB. The pixel values are rewritten to bake in the TRC, but the displayed image does not change. This step might not be necessary because a CMS usually lets you specify the output profile. For example, Siril lets you specify the output profile for 8-bit and high-bit files in the preferences.

C3. Export (no change, as the sRGB tag is already applied and the data are TRC encoded).

 

Workflow C seems the best? Stretches happen in linear space, and the display is always accurate. I think C is the best workflow when using a CMS. You might also consider using a linear working profile with a larger gamut, e.g. Rec2020, rather than sRGB.

 

 

I'm exhausted. I think you might be overthinking this a little.

 


  • timaras likes this

#48 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 29 April 2025 - 06:19 PM

I totally get it. But sometimes you have to be forceful in a statement (hyperbole?) to make a point understood.

OK, got it now.

 

I would wager >99% of all astroimagers are oblivious to the fact that their workflow is entirely non-color-managed, performed in a [0,1] linear workspace that is dumped straight into the display buffer, which assumes it is sRGB. And they save their images to JPEGs without an icc, which the entire digital Universe then must assume to be sRGB. They then claim that their asinh-stretched image is color preserved and natural or true color.

Another aspect often overlooked is that colour management is also about maintaining colour consistency across devices, so that printed colours look the same as they appear on the display, or so that I view your image as you intended. I understand that colour management is not for everyone, but I think it's great what the Siril team has managed to achieve.



#49 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 29 April 2025 - 06:43 PM

and asinh is the only stretch that preserves color proportions.

I am only seeing asinh mentioned, but any stretch that multiplies the three R, G, B values of a pixel by the same number should preserve color. The asinh part just determines what that value is, depending on the brightness of the pixel, but it is just one of many possible choices. Asinh says "leave the darks intact but compress the brights" (similar to the sRGB tone response curve).


Edited by timaras, 29 April 2025 - 06:44 PM.


#50 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,717
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 29 April 2025 - 06:50 PM

I am only seeing asinh mentioned, but any stretch that multiplies the three R, G, B values of a pixel by the same number should preserve color. The asinh part just determines what that value is, depending on the brightness of the pixel, but it is just one of many possible choices. Asinh says "leave the darks intact but compress the brights" (similar to the sRGB tone response curve).

True…if you define changing the gain as a "stretch". That is not what MTF or GHT do. Applying some other gamma than the screen gamma also distorts colors.
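A tiny numpy check of both points (illustrative values only): one common per-pixel factor leaves the R:G:B proportions alone, while a per-channel non-linearity other than the one the display chain undoes does not.

import numpy as np

pix = np.array([0.04, 0.02, 0.01])    # linear pixel, ratios 4 : 2 : 1

gained = 10.0 * pix                   # same gain on every channel
print(gained / gained[2])             # [4.  2.  1.]  -> proportions preserved

gammaed = pix ** 0.5                  # a gamma the display chain won't undo
print(gammaed / gammaed[2])           # [2.  1.41 1.] -> proportions distorted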



