
DSLR/Mirrorless Consistent Color Processing


#51 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 29 April 2025 - 06:55 PM

 

OK, I am getting a better handle on this. I think you are as well; any imprecision, I believe, is not in your general understanding but in the fact that this topic is difficult to express in terminology that others will read the way you meant it. For instance, when you refer to "software" I would only caution that it depends on which software, because the one used may treat data differently in terms of display.

 

All that remains is to test through practice in your chosen editor, which should have its own documentation on the topic.  

 

I'm exhausted.  I sympathize; we're all learning.

 

It's true that different software may implement the colour management framework differently, including the default settings, so it's certainly good advice to read the documentation. However, colour-managed software should at least handle the data consistently, provided of course it has been implemented correctly. If you aren't referring to a CMS, then I agree, all bets are off.


  • timaras likes this

#52 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 29 April 2025 - 07:13 PM

I am only seeing asinh around, but any stretch that multiplies the three R, G, B values of a pixel by the same number should preserve color.

You might also want to look at GHS with the "Equal Weighted Luminance" stretching model in Siril, which is also designed to preserve colour proportions.
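For illustration, here is a minimal numpy sketch of the principle (not Siril's implementation; the function name is made up): the stretch factor is computed per pixel from a simple luminance estimate and then applied to all three channels, so the R:G:B ratios are unchanged.

import numpy as np

def colour_preserving_asinh(rgb, k=100.0):
    # rgb: (H, W, 3) linear data scaled to [0, 1]
    lum = rgb.mean(axis=2, keepdims=True)                          # simple luminance estimate
    scale = np.arcsinh(k * lum) / (np.arcsinh(k) * (lum + 1e-12))  # per-pixel stretch factor
    return np.clip(rgb * scale, 0.0, 1.0)                          # same factor on R, G and B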


  • timaras likes this

#53 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,744
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 29 April 2025 - 07:25 PM

You might also want to look at GHS with the "Equal Weighted Luminance" stretching model in Siril, which is also designed to preserve colour proportions.

Oooo…I'll have to run it through the gamut of color tests!

 

BQ


Edited by BQ Octantis, 30 April 2025 - 05:48 AM.


#54 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 30 April 2025 - 04:01 AM

Incidentally, Siril 1.4.0 Beta 1 was released yesterday.  I again encourage you to review its color management documentation because it will help your understanding of all this.  For instance, its workflow page has the following suggestion regarding what to do once you get to a nonlinear stage:

 

When you're ready to stretch your image, it's time to think about your color space again. Stretching changes the image from linear data to non-linear data so that it looks pleasing to the human eye. You're going to make your data non-linear now, so before stretching is a good time to convert the image to your chosen nonlinear color space, be it sRGB or Rec2020 or another color space of your preference

This is odd advice. In general, stretching should be performed on linear data; however, converting the data to sRGB or Rec2020 before stretching will apply a non-linear TRC. It would be more accurate to keep a linear working profile for the processing pipeline and apply the display profile only to the output data that is passed to the monitor.
 


  • martz likes this

#55 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,744
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 30 April 2025 - 05:51 AM

This is odd advice. In general, stretching should be performed on linear data; however, converting the data to sRGB or Rec2020 before stretching will apply a non-linear TRC. It would be more accurate to keep a linear working profile for the processing pipeline and apply the display profile only to the output data that is passed to the monitor.

At what point would you apply the CCM? As I've mentioned before, most AP functions want their inputs on the interval [0,1]—Siril 1.4 beta even complained to me that my raw Carina stack has negative values and offered to fix it in multiple ways…

 

BQ

 

P.S. I got Mac OS Monterey running on my 2009 Mac mini—the oldest mini supported by OpenCore. 8 GB of RAM are on their way here now!



#56 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 30 April 2025 - 06:02 AM

This is odd advice. In general, stretching should be performed on linear data; however, converting the data to sRGB or Rec2020 before stretching will apply a non-linear TRC. It would be more accurate to keep a linear working profile for the processing pipeline and apply the display profile only to the output data that is passed to the monitor.
 

Indeed. I think Siril assumes (probably correctly) that most people will not bother to work in a linear working profile. Then the options are to i) perform the stretches on TRC-converted data (which will at least display consistently), or ii) assign the sRGB profile at the end (but then the images are displayed incorrectly during stretching, up to that point). They seem to be steering people towards the former.


  • martz likes this

#57 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 30 April 2025 - 08:36 AM

Indeed. I think Siril assumes (probably correctly) that most people will not bother to work in a linear working profile. Then the options are to i) perform the stretches on TRC-converted data (which will at least display consistently), or ii) assign the sRGB profile at the end (but then the images are displayed incorrectly during stretching, up to that point). They seem to be steering people towards the former.

In order to convert to sRGB or Rec2020 at that point in the workflow, when the data is still linear, i.e. before stretching, the user must:

  1. assign a (linear) colour profile, if one hasn't already been assigned
  2. convert to sRGB or Rec2020.

Two things: (i) the user is already working in a linear space by default, even if a linear profile wasn't explicitly assigned, and (ii) the user has to first assign a (linear) colour profile anyway, otherwise it's not possible to convert to another colour space. Once the linear profile is assigned, the image will display correctly.

 

It would be more logical, and accurate, to recommend that the user just assign a linear colour profile, since they need to do this step anyway, and omit the conversion to a non-linear profile, which seems unnecessary.

 

Another related oddity: Siril will auto-assign the working colour profile (sRGB by default) on stretching, which means that the data will not display correctly after a stretch if the working profile is not linear. I assume that this workflow maintains the legacy behavior of previous non-colour-managed versions of Siril.

 

These default settings can be changed in the preferences so it's not really a problem, but something to be aware of.


  • martz likes this

#58 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 30 April 2025 - 08:58 AM

At what point would you apply the CCM? As I've mentioned before, most AP functions want their inputs on the interval [0,1]—Siril 1.4 beta even complained to me that my raw Carina stack has negative values and offered to fix it in multiple ways…

 

BQ

 

P.S. I got Mac OS Monterey running on my 2009 Mac mini—the oldest mini supported by OpenCore. 8 GB of RAM are on their way here now!

Good question; I'm not sure I have a good answer. Somewhere after debayering and white balance but before stretching. Normally, I apply it after white balancing, but there might be some valid reasons to apply the CCM after background extraction.



#59 martz

martz

    Explorer 1

  • -----
  • Posts: 82
  • Joined: 26 Aug 2020

Posted 30 April 2025 - 10:38 AM

Good question; I'm not sure I have a good answer. Somewhere after debayering and white balance but before stretching. Normally, I apply it after white balancing, but there might be some valid reasons to apply the CCM after background extraction.

Hello Frankie and BQ,

 

My understanding from reading BQ's past posts (here is one) is to white balance before background extraction, and since background extraction is a nonlinear operation (?), it should be done after the color processing steps that should be performed on linear data, correct?  Since the order of operations matters, could you please comment on your recommended order -- assuming, of course, that the intent is to arrive at an image that reasonably approximates the captured objects in terms of color fidelity?

 

Thank you.  



#60 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 30 April 2025 - 04:50 PM

Hello Frankie and BQ,

 

My understanding from reading BQ's past posts (here is one) is to white balance before background extraction, and since background extraction is a nonlinear operation (?), it should be done after the color processing steps that should be performed on linear data, correct?  Since the order of operations matters, could you please comment on your recommended order -- assuming, of course, that the intent is to arrive at an image that reasonably approximates the captured objects in terms of color fidelity?

 

Thank you.  

Hello martz,

 

You are correct: the CCM should be applied to linear data, so order matters. I suggest the following:

 

WB → BE → CCM

 

A couple of comments:

 

Strictly, colour correction should be performed before white balance (WB), but this isn't what occurs in practice. The CCM for a DSLR camera, like those from DxOMark, is designed to be applied to white-balanced data.

 

Generally, background extraction (BE) is a linear operation that (in theory) yields a neutral background reference for accurate colour correction, provided of course that it is done correctly.

 

The order is just a suggestion for the typical case rather than a definitive statement for all images. This topic can get quite complicated. For example, in the presence of strong gradients, it might be better, even necessary, to remove background before white balancing.
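To make the suggested order concrete, here is a minimal numpy sketch under simplifying assumptions: white balance as per-channel gains, background extraction reduced to subtracting a flat per-channel background estimate, and the CCM as a plain 3x3 matrix. Real tools do each step with far more care; this only illustrates the sequence.

import numpy as np

def wb_be_ccm(img, wb, ccm):
    # img: (H, W, 3) linear, debayered data; wb: three white-balance factors; ccm: 3x3 matrix
    img = img * np.asarray(wb)                          # 1. white balance (per-channel gain)
    bg = np.median(img, axis=(0, 1), keepdims=True)     # 2. crude flat background estimate,
    img = np.clip(img - bg, 0.0, None)                  #    subtracted per channel
    return img @ np.asarray(ccm).T                      # 3. colour correction matrix, on linear data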


Edited by FrankieT, 30 April 2025 - 05:50 PM.

  • timaras and martz like this

#61 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 03 May 2025 - 08:47 AM

As an aside, I have been stacking with Astro Pixel Processor because it does apply the white balance and CCM to the camera raw files (simple checkboxes). The background subtraction tool is quite nice. Very happy with that software.



#62 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 05 May 2025 - 02:42 PM

A simpler way to look at it is that in your example you have linear data but you've told the system it is sRGB.  Therefore it will be displayed incorrectly.  That's all you need to know.  Don't worry about which processing steps happen at which points in the display chain because I don't know what the sequence of steps is, either.

 

This was bugging me ever so slightly, so I ran some tests to confirm. I had the ColorChecker handy and waited for midday on a sunny day to capture a RAW (+1 bias frame) with my Canon R6 (EF 24-70mm @ 70mm).

 

I used Astro Pixel Processor to apply the sRGB white balance and Color Correction Matrix to the debayered file (after bias subtraction), then saved it as a 16-bit integer linear TIFF and opened it in Photoshop. After processing, the files were converted to gamma sRGB and saved as PNG.

 

As a reference, this is the image without WB and CCM (linear sRGB profile assigned):
 

[Image: sRGBg10 sRGBg24 sRGB (no CCM)]

 

With WB and CCM applied, in Photoshop with the default gamma-sRGB working profile, the image appears dark, as the software assumes the data have the sRGB tone response curve applied and hence inverts the TRC for display, squeezing the linear data towards 0:

 

[Image: sRGBg24 sRGBg24 sRGB]
 
When assigning a linear sRGB profile to the image, it is displayed correctly (we have told the software that the data is linear, so it will not apply any inverse TRC):
 
[Image: sRGBg10 sRGBg24 sRGB]
 

If we set a linear sRGB working space, the image is displayed the same as earlier (whether or not we assign a linear profile to the image):

 

[Image: sRGBg10 sRGBg10 sRGB]
 
I now understand that:
  • The number one priority is to assign a linear color profile to the image containing the linear data, so that color-managed software can display it correctly.
  • Having a linear working space will not alter the image display, but it ensures that the calculations we apply to the image are more accurate.
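To put numbers on the darkening described above, here is a small Python sketch assuming the standard sRGB transfer function: when linear values are (mis)interpreted as sRGB-encoded, the pipeline effectively linearises them a second time, which pushes the mid-tones towards 0.

import numpy as np

def srgb_to_linear(v):
    # standard sRGB "decode"; applying it to already-linear data darkens the image
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

print(srgb_to_linear([0.1, 0.2, 0.5, 0.8]))   # -> approx [0.010, 0.033, 0.214, 0.604]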

 

Below is the result of processing the same photo in Lightroom Classic. It's close enough, but it still looks better.

 

[Image: Lightroom]

 


Edited by timaras, 05 May 2025 - 02:50 PM.

  • FrankieT and martz like this

#63 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 8,294
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 05 May 2025 - 03:28 PM

Lightroom's default is to boost brightness and saturation. It is possible to force it to use neutral processing by selecting a much earlier processing version in the options (e.g. from 2010) and setting the Brightness, Black and Contrast sliders to zero and the tone curve to linear.
  • timaras and FrankieT like this

#64 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 05 May 2025 - 04:03 PM

I now understand that:
  • The number one priority is to assign a linear color profile to the image containing the linear data, so that color-managed software can display it correctly.
  • Having a linear working space will not alter the image display, but it ensures that the calculations we apply to the image are more accurate.

I think you've got it!

 

 


Below is the result of processing the same photo in Lightroom Classic. It's close enough, but it still looks better.

 

Most raw converters apply a non-linear tone curve to the raw file by default to mimic the appearance of the camera jpg image. This tone curve shouldn't be confused with the sRGB transfer function, which you have applied to your images above. Normally, it's possible to disable this tone curve or switch to a linear profile, which should then provide a better match to your reference image.

 

[EDIT: My post crossed Mark's. I think we are saying the same thing]
 


Edited by FrankieT, 05 May 2025 - 04:06 PM.

  • sharkmelley and timaras like this

#65 pedit

pedit

    Sputnik

  • -----
  • Posts: 30
  • Joined: 04 Jan 2019

Posted 07 May 2025 - 03:15 AM

I use Lightroom to process my star trail sequences and start with neutral processing as suggested in the previous posts. Iliah Borg offers a detailed discussion on this approach at the link below.

https://www.rawdigge...tments-settings

Joe


  • martz likes this

#66 wongataa

wongataa

    Explorer 1

  • -----
  • Posts: 76
  • Joined: 29 Jan 2023
  • Loc: UK

Posted 07 May 2025 - 06:20 AM

I think you've got it!

 

 

Most raw converters apply a non-linear tone curve to the raw file by default to mimic the appearance of the camera jpg image. This tone curve shouldn't be confused with the sRGB transfer function, which you have applied to your images above. Normally, it's possible to disable this tone curve or switch to a linear profile, which should then provide a better match to your reference image.

 

[EDIT: My post crossed Mark's. I think we are saying the same thing]
 

Internally LR/ACR use a working colour space (for editing) that has the same primaries as ProPhotoRGB but has a gamma of 1 - it is linear. On top of that there are various profiles that can be applied that can change colours and contrast but they are nothing to do with the working colour space.

 

When you export an image it will be converted into the colour space (and usual tone curve for that colour space) specified in the settings.



#67 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 07 May 2025 - 11:21 AM

Internally LR/ACR use a working colour space (for editing) that has the same primaries as ProPhotoRGB but has a gamma of 1 - it is linear. On top of that there are various profiles that can be applied that can change colours and contrast but they are nothing to do with the working colour space.

 

When you export an image it will be converted into the colour space (and usual tone curve for that colour space) specified in the settings.

Yes, that's how most colour-managed raw converter software functions; see also post #47. In case it's not clear, the non-linear tone curve I mention in the post that you referenced has nothing to do with the working profile or the output profile. Darktable refers to these as basecurve presets, while RawTherapee incorporates non-linear tone curves in its default processing profiles.


Edited by FrankieT, 07 May 2025 - 04:59 PM.

  • timaras likes this

#68 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 08 May 2025 - 05:37 PM

In the spirit of the title of this topic, I wanted to summarize a 10-step workflow (based on the earlier discussions here) to get to a "consistent" color for a stacked frame, i.e. to apply the camera-specific white balance and color conversion matrices. The goal is to reach a stage where the color is in line with what a pleasant/accurate terrestrial image would look like if the same process were followed.

 

This is meant to be done after stacking (and potentially background extraction) while the data are still linear.

 

There is not much new here in terms of process, but I personally could not easily find all the details and nuances together in one place to follow. Tested on a Canon R6. It might seem like a lot, but it only needs to be done once per camera, ever.

 

This can be done in 5 minutes in e.g. Astro Pixel Processor, which includes camera-specific white balance and color space conversions. The issues I am trying to improve on further are:

 

- Not being tied to AstroPP's stacking tools. They are good, but I wanted to be able to incorporate a more generic workflow, e.g. for Siril or PixInsight.

- Not being limited to AstroPP or DxO camera matrices. For example, DxO often supplies camera matrices for the D50 illuminant, which is warmer than D65 (noon daylight), so the process often produces bluer hues. This workflow extracts the information directly from the raw files.

 

Tools needed: exiftool, the Adobe DNG Converter, ChatGPT (for the calculations), and PixelMath (e.g. in Siril or PixInsight) for the conversions.

 

1. Take a RAW camera file (e.g. pic.CR3) and convert to Adobe DNG.

 

2. Run exiftool on the pic.DNG file and save the output to a text file.

 

For example, in a Mac terminal the command is: exiftool -a -u -g1 pic.DNG > pic_metadata.txt

 

3. Parse the text file and extract the parameters below (you can eyeball them, ask ChatGPT to retrieve them, or search for the terms in a text editor). The values below are examples from my R6.

 

Calibration Illuminant 2 : D65
Color Matrix 2           : 0.8293 -0.1611 -0.1132 -0.4759 1.2711 0.2275 -0.1013 0.2415 0.5509
Camera Calibration 2     : 1.0037 0 0 0 1 0 0 0 1.0242
Analog Balance           : 1 1 1
As Shot Neutral          : 0.530295 1 0.641604

ColorMatrix2 and CameraCalibration2 are 3x3 matrices (rows listed in sequence) for D65. ColorMatrix2 converts from the XYZ color space to the reference camera RGB. CameraCalibration2 is a minor tweak that converts from the reference camera color space to the color space of the specific camera in your hands (raw data).

 

We will also need the sRGB2XYZ matrix that converts from sRGB to the XYZ color space for D65:
sRGB2XYZ = [0.4124564 0.3575761 0.1804375; 0.2126729 0.7151522 0.0721750; 0.0193339 0.1191920 0.9503041];

 

4. Calculate the white balance diagonal matrix [wb1 wb2 wb3] = (1/AsShotNeutral) * AnalogBalance, element by element (in this case, wb1 = (1/0.530295)*1, wb2 = (1/1)*1, wb3 = (1/0.641604)*1).

 

5. Use pixelmath to white balance the original image, which should go from R,G,B to R*wb1, G*wb2, B*wb3.

 

If you are using a terrestrial image, the pedestal must be subtracted (e.g. with a bias frame) and the data debayered before this step.
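As a numerical cross-check of steps 4 and 5, a minimal numpy sketch using the R6 example values above (the variable names are just illustrative):

import numpy as np

as_shot_neutral = np.array([0.530295, 1.0, 0.641604])   # from the DNG metadata above
analog_balance  = np.array([1.0, 1.0, 1.0])
wb = analog_balance / as_shot_neutral                    # -> approx [1.886, 1.000, 1.559]

# img: (H, W, 3) debayered, pedestal-subtracted linear data
# img_wb = img * wb    (same operation as the PixelMath R*wb1, G*wb2, B*wb3)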

 

6. Calculate the (not normalized) color conversion matrix from sRGB to camera color space:

 

sRGB2cam = CameraCalibration2*ColorMatrix2*sRGB2XYZ

 

Note that the above is a matrix multiplication; the order is important, and it is best done with suitable software (PixelMath can do it, online tools can do it, I prefer ChatGPT).

 

7. Normalise the sRGB2cam matrix rows by dividing the elements of each row by the sum of the elements of that row.

 

This ensures that there is no further white shift. The new matrix can be called e.g. sRGB2cam_norm.

 

For example, if the elements of the first row are [a b c], the normalized row will be [a/(a+b+c) b/(a+b+c) c/(a+b+c)]. Repeat for the other two rows.

 

8. Invert the matrix (same tools as above) so that instead of converting sRGB to camera RGB, it converts camera data to sRGB.

 

This produces cam2sRGB_norm = inverse(sRGB2cam_norm). For the R6 example here, the elements of the matrix are:

M11 = +1.74; M12 = -0.84; M13 = +0.10;
M21 = -0.17; M22 = +1.65; M23 = -0.47;
M31 = +0.00; M32 = -0.60; M33 = +1.60;

As a comparison, DxO supplies the matrix for the R6 at D50:

2.12 -1.24 0.12;
-0.21 1.7 -0.49;
0.04 -0.65 1.61;
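For steps 6 to 8, here is a minimal numpy sketch of the same linear algebra (an alternative to doing it with ChatGPT or an online tool), using the R6 example values quoted above:

import numpy as np

color_matrix_2 = np.array([[ 0.8293, -0.1611, -0.1132],
                           [-0.4759,  1.2711,  0.2275],
                           [-0.1013,  0.2415,  0.5509]])      # XYZ (D65) -> reference camera RGB
camera_calibration_2 = np.diag([1.0037, 1.0, 1.0242])         # reference camera -> this camera
srgb_to_xyz = np.array([[0.4124564, 0.3575761, 0.1804375],
                        [0.2126729, 0.7151522, 0.0721750],
                        [0.0193339, 0.1191920, 0.9503041]])   # sRGB -> XYZ (D65)

srgb2cam = camera_calibration_2 @ color_matrix_2 @ srgb_to_xyz    # step 6
srgb2cam_norm = srgb2cam / srgb2cam.sum(axis=1, keepdims=True)    # step 7: row-normalise
cam2srgb_norm = np.linalg.inv(srgb2cam_norm)                      # step 8: invert
print(np.round(cam2srgb_norm, 2))   # close to the M11..M33 values listed above

# Step 9 in numpy (equivalent to the PixelMath below): img_srgb = img_wb @ cam2srgb_norm.T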
 

9. Multiply the white-balanced pixels by the matrix.

In PixInsight PixelMath, this looks like:
Red:   $T[0]*M11 + $T[1]*M12 + $T[2]*M13
Green: $T[0]*M21 + $T[1]*M22 + $T[2]*M23
Blue:  $T[0]*M31 + $T[1]*M32 + $T[2]*M33

 

10. Finally, assign a linear sRGB profile to the image, e.g. sRGB-elle-V4-g10.icc from Elle Stone, to ensure the data are interpreted as linear.

 

...

 

Once processing is done, convert (not assign!) the image to a normal sRGB profile for export.

Bonus points if you use a linear working profile; your edits will be that much more accurate in linear space.

 

 

As a comparison, below is the image developed in PixInsight using the DxO matrices. The neutral grey square is (R G B) = (0.42 0.50 0.55) after converting to gamma-stretched sRGB.
 

[Image: DxO]
 

The same image developed with the above steps yields (R G B) = (0.50 0.51 0.53).

 

[Image: DNG]
 
 
Original debayered image (link to RAW file):

[Image: sRGBg10 sRGBg24 sRGB (no CCM)]

 

 


Edited by timaras, 08 May 2025 - 05:47 PM.

  • martz likes this

#69 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 8,294
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 08 May 2025 - 06:46 PM

 

As a comparison, below is the image developed in PixInsight using the DxO matrices. The neutral grey square is (R G B) = (0.42 0.50 0.55) after converting to gamma-stretched sRGB.

I don't understand why, using the DxO matrices, the neutral grey square ends up as (0.42 0.50 0.55), which is rather blue.  What causes this blue?  It's not caused by the DxO colour correction matrix (CCM), since the DxO CCM rows each sum to one, which has a neutral effect on white.
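For reference, the rows of the DxO matrix quoted earlier in the thread do sum to one: 2.12 - 1.24 + 0.12 = 1.00, -0.21 + 1.70 - 0.49 = 1.00, and 0.04 - 0.65 + 1.61 = 1.00, so an equal R = G = B input comes out unchanged.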


  • FrankieT likes this

#70 timaras

timaras

    Vostok 1

  • -----
  • topic starter
  • Posts: 142
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 09 May 2025 - 02:41 AM

I don't understand why, using the DxO matrices, the neutral grey square ends up as (0.42 0.50 0.55), which is rather blue.  What causes this blue?  It's not caused by the DxO colour correction matrix (CCM), since the DxO CCM rows each sum to one, which has a neutral effect on white.

I did not include it in the analysis, but the WB factors are also different: the DxO (D50) ones are [1.85 1 1.64] while the DNG-embedded (D65) ones are [1.89 1 1.56].


  • sharkmelley likes this

#71 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 8,294
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 09 May 2025 - 03:24 AM

One other point of interest is that Adobe's colour engine differs from DxO's colour engine.  The Adobe processing sequence assumes that the Adobe Hue/Saturation maps (found in the DNG header) will be applied after the Adobe colour matrix. So the Adobe colour matrix does a partial job of converting from the camera's colour space to CIE XYZ D50, and the remainder is performed by applying the camera's Adobe Hue/Sat adjustments.  This is something I have not tried to do (yet), but for some cameras it is necessary because the Adobe colour matrix alone is visibly insufficient.  It's why I prefer to use the DxO matrix (when available).


Edited by sharkmelley, 09 May 2025 - 03:43 AM.


#72 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,744
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 09 May 2025 - 04:54 AM

If you have a MacBeth color checker, why are you not creating your own CCM?



#73 FrankieT

FrankieT

    Mariner 2

  • -----
  • Posts: 237
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 09 May 2025 - 10:06 AM

I did not include it in the analysis, but the WB factors are also different: the DxO (D50) ones are [1.85 1 1.64] while the DNG-embedded (D65) ones are [1.89 1 1.56].

Those white balance factors are for a D50 illuminant, but the captured illuminant of the scene is closer to daylight D65. For the best results, you ideally want a CCM optimized for the captured illuminant. However, camera CCMs, like those from DxO, are usually designed to be applied to white-balanced data, which makes them relatively stable under different illuminants—at least for a typical DSLR and relatively small changes in illuminant (e.g. D65 and D50). Therefore, the DxO CCM should still yield a reasonably accurate/neutral result, but you need to white balance the image correctly for the captured daylight illuminant. Perhaps try the same white balance factors that you used with ColorMatrix2.


Edited by FrankieT, 09 May 2025 - 04:49 PM.


#74 wongataa

wongataa

    Explorer 1

  • -----
  • Posts: 76
  • Joined: 29 Jan 2023
  • Loc: UK

Posted 09 May 2025 - 10:53 AM

- Not being limited to AstroPP or DxO camera matrices. For example, DxO often supplies camera matrices for the D50 illuminant, which is warmer than D65 (noon daylight), so the process often produces bluer hues. This workflow extracts the information directly from the raw files.

Look at the source code for libraw.  In there you can find the matrices for your camera.

 

I would also not convert to sRGB.  Why limit your colour gamut when you have no need to?  Convert to a larger colour space.  All photo editing of any type should be carried out in a large working colour space, not the small sRGB.



#75 vidrazor

vidrazor

    Fly Me to the Moon

  • *****
  • Posts: 6,835
  • Joined: 31 Oct 2017
  • Loc: North Bergen, NJ, USA

Posted 09 May 2025 - 03:29 PM

I would also not convert to sRGB.  Why limit your colour gamut when you have no need to?  Convert to a larger colour space.  All photo editing of any type should be carried out in a large working colour space, not the small sRGB.

Because nobody looks at images in anything other than sRGB. If you color correct in that gamut, you will have the best shot at having people see your work as you intended.

 

Years ago I used to work in an ad agency where we had to match product color, and we would always have problems when working off AdobeRGB or ProPhoto. We switched to sRGB, and the vast majority of our color work bottlenecks were eliminated.


  • primeshooter likes this

