DSLR/Mirrorless Consistent Color Processing


#1 timaras

timaras

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 22 April 2025 - 06:49 PM

I have been trying to stack and stretch data from my Canon R6 and 80D while maintaining "accurate" color, and am seeking some clarification/advice on certain steps.

 

I have read through relevant material, specifically:

 

DSLR Processing - The Missing Matrix

Nightscape and Astrophotography Image Processing Basic Work Flow (R Clark) (+other pages there)

M31 Andromeda Galaxy in natural colour

"True color" again

 

The absolutely necessary key steps seem to be:

 

- Calibration (even if it's just a bias subtraction)

- Demosaicing

- Stacking

- White balance

- Color correction matrix

- Color-preserving stretching

- sRGB tone response curve

 

 

I have used Siril, Astro Pixel Processor, and more recently PixInsight. Now considering some of the key steps:

 

Color Correction Matrix

 

I understand that multiplying the debayered data by the CCM is a must-have step. So far I have managed to do this i) in Astro Pixel Processor (on by default), ii) by converting with RawTherapee to 16-bit TIFFs using a custom linear tone curve (so that the data remain linear after conversion), and iii) with PixelMath in Siril or PI.

 

Options (i) and (ii) will apply the CCM to each photo before stacking. Option (iii) could be done on the lights or on the stacked image. My understanding is that so long as the data are linear, it should not matter whether the CCM is applied before or after stacking.
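As a concrete sketch of option (iii), this is all the PixelMath route needs to do, shown here in NumPy with an illustrative matrix (the real CCM must come from a camera profile or a source such as DxOMark; the numbers below are placeholders whose rows sum to 1 so whites are preserved):

```python
import numpy as np

# Illustrative CCM only -- substitute the measured/published matrix
# for your camera (rows: output primaries, columns: camera RGB).
CCM = np.array([
    [ 1.66, -0.56, -0.10],
    [-0.21,  1.42, -0.21],
    [ 0.05, -0.55,  1.50],
])

def apply_ccm(img_linear, ccm):
    """Multiply each pixel's (R,G,B) vector by the 3x3 CCM.

    img_linear: HxWx3 float array of *linear* (unstretched) data.
    Returns an HxWx3 array; values may go negative or above 1.
    """
    h, w, _ = img_linear.shape
    out = img_linear.reshape(-1, 3) @ ccm.T
    return out.reshape(h, w, 3)
```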

 

I also understand that using a "normal" RAW converter will not work, as the output data will be stretched by the tone response curve.

 

Finding (or measuring) the CCM is a dark art; we hope DxO has the matrix.

 

White Balance Matrix/Vector

 

Another matrix multiplication. The simplest approach is to use the Daylight White Balance numbers for the camera. I have also seen that running photometric color calibration has a similar effect. I understand what the tools are aiming to do (matrix multiplications so that the stars' RGB colors match what's in a reference database), however it's not clear to me whether they apply a white-balance-type multiplication, a CCM-type multiplication, or a combination. Any ideas?
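For comparison with the CCM above, white balance on linear data is just a per-channel scale, i.e. a diagonal matrix; a sketch with made-up daylight multipliers (the real ones live in the raw file's metadata):

```python
import numpy as np

# Hypothetical daylight multipliers (green-normalised); the real values
# come from the camera metadata for your specific body.
wb = np.array([2.0, 1.0, 1.6])  # R, G, B

def apply_white_balance(img_linear, wb):
    """Scale each channel of linear data; equivalent to diag(wb) @ pixel."""
    return img_linear * wb  # broadcasts over an HxWx3 array
```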

 

Color-Preserving Stretch

 

Again, I (think I) get the concept: stretch by multiplying the R,G,B values of a pixel by the *same* number (that depends on the luminance L of that pixel), as opposed to scaling the R,G,B channels separately. I have seen that PixInsight's ArcsinhStretch conforms to that, using a function ~asinh(b*L)/(asinh(b)*L). I presume it can be coded in PixelMath too. rnc-color-stretch uses (I think) a different function, L^(1/p). Are there other ways? Is GeneralizedHyperbolicStretch compatible with this constraint?
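For anyone wanting to experiment, a sketch of that arcsinh scaling in NumPy, assuming luminance is a simple channel mean (ArcsinhStretch's exact luminance weighting and black-point handling may differ):

```python
import numpy as np

def arcsinh_stretch(img_linear, b=100.0):
    """Colour-preserving stretch: scale R, G and B of each pixel by the
    same factor k(L) = asinh(b*L) / (asinh(b)*L), so R:G:B ratios
    (and hence the pixel's colour) are unchanged.

    img_linear: HxWx3 linear data in [0,1]; b: stretch strength.
    """
    L = img_linear.mean(axis=2, keepdims=True)   # per-pixel luminance
    k = np.where(L > 0,
                 np.arcsinh(b * L) / (np.arcsinh(b) * np.maximum(L, 1e-12)),
                 1.0)
    return img_linear * k
```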

 

Tone response curve - help!

 

This is my weakest point and I could use some education. There is usually a step to (manually) apply the tone response curve, e.g. for sRGB. I understand this is what normal imaging software (or the OS) does to adjust the image's stored pixels before sending them to the monitor for display. Why is this step necessary? Doesn't the software (PixInsight, Siril, Photoshop) apply the (let's say sRGB) tone response curve to display the stretched data on the screen throughout this whole process? Then, when exporting the image to e.g. JPG, aren't these values baked into the image file (along with the sRGB profile used)?


Edited by timaras, 23 April 2025 - 08:31 AM.


#2 vidrazor

vidrazor

    Fly Me to the Moon

  • *****
  • Posts: 6,767
  • Joined: 31 Oct 2017
  • Loc: North Bergen, NJ, USA

Posted 22 April 2025 - 08:20 PM

OK, reality check.

 

There is no such thing as "natural" color in astrophotographic image data processing, regardless of what anyone ever tells you.

 

Therefore, just process the image until it meets your criteria of what looks good to you.

 

That's it.


  • michael8554 and xonefs like this

#3 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 36,570
  • Joined: 27 Oct 2014

Posted 22 April 2025 - 08:30 PM

The absolutely necessary key steps seem to be:
 
- Calibration (even if it's just a bias subtraction)
- Demosaicing
- Stacking
- White balance
- Color correction matrix
- Color-preserving stretching
- sRGB tone response curve

You're making this a lot harder than it needs to be by including some less widely used (and less useful) steps, while completely omitting a FAR more important thing: flats.

Absolutely necessary key steps

Calibration. MUST include flats (and bias to calibrate the flats).
Demosaicing
Stacking
Color correction of one sort or another. If you're using PI, some version of photometric color calibration works well. THIS COMPLETELY REPLACES "white balance", which is a terrestrial photography thing, a canned and rigid adjustment of the three channels, useful FOR TERRESTRIAL light sources. Color calibration is the proper, highly adjustable tool for astrophotography.
Color preserving stretching OR increased color saturation after stretching.

If you feel something further is necessary, PI's Curves Correction can tweak color just about any way you want.

Some people use some of the other things you listed. MANY others don't. It's all too easy in this business to be swayed by someone pushing their own idiosyncratic methods as "absolutely necessary", when they're not. At most, they're tweaks.

Edited by bobzeq25, 22 April 2025 - 08:42 PM.

  • timaras, vidrazor and triplemon like this

#4 wongataa

wongataa

    Explorer 1

  • -----
  • Posts: 75
  • Joined: 29 Jan 2023
  • Loc: UK

Posted 23 April 2025 - 01:57 AM

I have used my Canon cameras for astrophotography.  I have got good images that are accurate enough colour-wise by doing the following:

Stacking and stretching in Astro Pixel Processor (using the option to use the camera's white balance)

When stretching in APP using the remove light pollution option

In APP using the calibrate star colours option can be useful

 

Save as a 32-bit TIFF using the ProPhotoRGB colourspace.

 

Open the TIFF in Photoshop, convert into my working space, and then save (I use ProPhotoRGB; you would think the export from APP would make this step unnecessary, but for some reason the ProPhotoRGB file triggers the different-colourspace warning, and if you don't convert colours things don't look right).

 

Next I use Star Xterminator to get a starless version and a version with just the stars.

 

Then I will perform noise reduction on the starless image.

 

I do further processing in Lightroom.

 

Finally I combine the starless image with the stars in Photoshop.

 

This results in pleasing images to my eyes.

 

If you are using flat frames you will need the bias value for your cameras.  You can find this in the MakerNotes section of the image metadata.  It is the black point value.  In Siril you can use this value directly.  In APP you can't do this, so you need to convert a raw file to FITS and then, using software with pixel math (I have done it in Siril), set every pixel to the bias value.  Now you have a master bias for software that requires bias frames.
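If you would rather script the synthetic bias than use PixelMath, a minimal sketch with NumPy and astropy (the black level and frame dimensions below are assumptions; read yours from the MakerNotes and your raw files):

```python
import numpy as np
from astropy.io import fits

BLACK_LEVEL = 2048           # assumed value; read the real one from the MakerNotes
HEIGHT, WIDTH = 4000, 6000   # assumed raw sensor dimensions

# Every pixel set to the camera's black point: a constant-valued
# "master bias" usable by software that insists on bias frames.
bias = np.full((HEIGHT, WIDTH), BLACK_LEVEL, dtype=np.uint16)
fits.PrimaryHDU(bias).writeto("synthetic_master_bias.fits", overwrite=True)
```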

 

Personally I don't feel the need to get more complex with CCMs etc.  If you do want to go this route I would advise not going anywhere near the sRGB colour space.  It is small and will quite possibly limit colours.  Use a wider working colour space so no colours are limited.  Of course, finding the conversion matrices to go from the camera raw data to something like ProPhotoRGB is not simple.  You can find the conversions in the libraw libraries.  It is a two-step process: camera to XYZ, then XYZ to working colour space.
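As a sketch of that two-step composition: the camera-to-XYZ numbers below are placeholders (the real per-camera values live in libraw/dcraw's colour tables), while the XYZ-to-linear-sRGB matrix is the standard D65 one. For ProPhotoRGB you would swap in its XYZ-to-RGB matrix plus a D65-to-D50 chromatic adaptation, since ProPhotoRGB is defined for D50.

```python
import numpy as np

# Hypothetical camera-to-XYZ matrix -- placeholders only; the real
# per-camera values come from libraw/dcraw's colour tables.
CAM_TO_XYZ = np.array([
    [0.52,  0.29, 0.14],
    [0.26,  0.88, -0.14],
    [0.03, -0.19, 1.25],
])

# Standard XYZ (D65) -> linear sRGB matrix (IEC 61966-2-1).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# The two steps compose into one effective CCM for the working space.
CAM_TO_SRGB = XYZ_TO_SRGB @ CAM_TO_XYZ
```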


  • timaras likes this

#5 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 8,226
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 23 April 2025 - 04:18 AM

If you are using a stock camera for night-sky imaging then it is a worthy goal to process the data to give good colour reproduction i.e. to match the colours as far as possible to the way the human eye would see the scene if the eye's colour receptors had sufficient sensitivity.  After all, this is how images are processed for daytime terrestrial photography.  Such processing involves extra steps which are automatically applied when the camera produces JPG output or when the raw file is opened in Photoshop, Lightroom, RawTherapee or the camera manufacturer's own software but these steps are ignored by the typical astro-processing workflow.

 

Here are some brief answers to some of the questions you posed.

  • Photometric colour calibration and spectrophotometric colour calibration are sophisticated ways of applying the white balance by scaling the colour channels.  But they don't apply the relevant colour correction matrix.
  • The colour correction matrix is a transformation from the camera sensor's colour primaries to the colour primaries of the working colour space (typically sRGB).  It must be applied to linear data, before or after stacking.
  • In general, GeneralizedHyperbolicStretch is not colour preserving.  But the implementation in PixInsight allows you to set the Mode to "Colour" in the Colour Options panel, which will scale the pixel's RGB channels by the same multiplier, thus preserving the pixel's colour.
  • The tone response curve of the colour space is an interesting problem.  The end goal is for the brightness of each pixel on the display to be proportional to the brightness of the corresponding point in the scene being imaged.  But the display chain for non-colour-managed images assumes that the image is sRGB, including the sRGB tone response curve (the variable gamma curve), and so the display chain "undoes" this transformation before displaying it on screen.  The net result is that linear data is not displayed correctly because the display chain is stretching the data, making it appear far too dark and contrasty.  The best way to avoid this is to use a linear working profile, or, if the processing software is not colour-managed, you can apply the TRC (tone response curve) to the data.  But if you apply a TRC that is variable gamma (like for sRGB) then subsequent colour-preserving stretches will actually distort colour, because brightening the pixel moves it to a part of the TRC where the gamma is different.  However, a colour space such as AdobeRGB has a constant gamma TRC, so subsequent colour-preserving stretches are OK.  But the CCM for the AdobeRGB colour space is different to the CCM for the sRGB colour space (since AdobeRGB and sRGB have different colour primaries), and working with AdobeRGB will only display correctly in proper colour-managed software.
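To make that last point concrete, here is the "variable gamma" sRGB tone response curve as a function, a minimal NumPy sketch of the piecewise encode defined by the sRGB standard:

```python
import numpy as np

def srgb_encode(linear):
    """Apply the sRGB TRC: a linear segment near black, then a
    2.4-exponent curve (overall behaviour close to gamma ~2.2)."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)
```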

  • timaras and primeshooter like this

#6 martz

martz

    Sputnik

  • -----
  • Posts: 45
  • Joined: 26 Aug 2020

Posted 23 April 2025 - 05:08 AM

Finding (or measuring) the CCM is a dark art; we hope DxO has the matrix. . . .

 

Tone response curve - help!

 

This is my weakest point and I could use some education. There is usually a step to (manually) apply the tone response curve, e.g. for sRGB. I understand this is what normal imaging software (or the OS) does to adjust the image's stored pixels before sending them to the monitor for display. Why is this step necessary? Doesn't the software (PixInsight, Siril, Photoshop) apply the (let's say sRGB) tone response curve to display the stretched data on the screen throughout this whole process? Then, when exporting the image to e.g. JPG, aren't these values baked into the image file (along with the sRGB profile used)?

Hello,

 

I downloaded Siril v1.3.5 (a beta of Siril's future v1.4.0) a while ago but can't find the download link at the moment.  In it, there is no need to manually apply the tone response curve.  Once done with your linear steps, you can just assign a colour space, or you can convert a file into your desired colour space.  The documentation goes through this.  As to why this is necessary, this Wikipedia article has the background.  Regarding the "dark art", you may try profiling your camera/lens combination using DCamProf (free), or Lumariver (paid) if, like me, you don't use command-line tools.

 

Regards.     


  • timaras likes this

#7 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,412
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 23 April 2025 - 07:18 AM

It doesn't matter what you tell RawTherapee about the TRC…it applies the color space TRC to the data before it's recorded to the file, so that what gets displayed by any other app is linear off the screen. So you have to decode the cctf in the AP app (or at least remember to turn on color management if it has it). Also, the lingua franca between apps is cctf-encoded sRGB, which truncates cyans (even ones within Pointer's gamut, including a blue Uno card) to the blue-green rail. The cyans of OIII-rich targets require negative values of red in sRGB to reproduce:

 

https://www.cloudyni...9#entry12793113

 

I would advise the use of AdobeRGB if you're going to pass data in 16-bit positive integer TIFF. But if the AP app doesn't support color management, you'll need to decode the cctf and apply the AdobeRGB to sRGB transform to get back to linear sRGB with negative reds to accurately represent OIII. For visualization, square-root preview (gamma = 2) is the closest to simulating a display chain gamma correction of 2.2.
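A sketch of that decode-and-transform, assuming the pure-power TRC of Adobe RGB (1998) and the standard D65 primaries matrices; the result is deliberately left unclipped so the negative reds survive:

```python
import numpy as np

ADOBE_GAMMA = 563 / 256          # Adobe RGB (1998) pure-power TRC (~2.2)

# Standard D65 matrices: Adobe RGB -> XYZ, then XYZ -> linear sRGB.
ADOBE_TO_XYZ = np.array([
    [0.5767, 0.1856, 0.1882],
    [0.2974, 0.6273, 0.0753],
    [0.0270, 0.0707, 0.9911],
])
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def adobe_tiff_to_linear_srgb(img16):
    """16-bit Adobe RGB TIFF data -> linear sRGB, negatives preserved."""
    linear_adobe = np.power(img16 / 65535.0, ADOBE_GAMMA)  # decode the cctf
    m = XYZ_TO_SRGB @ ADOBE_TO_XYZ
    h, w, _ = linear_adobe.shape
    return (linear_adobe.reshape(-1, 3) @ m.T).reshape(h, w, 3)
```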

 

Cheers,

 

BQ


Edited by BQ Octantis, 23 April 2025 - 08:28 AM.

  • timaras likes this

#8 timaras

timaras

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 23 April 2025 - 08:42 AM

OK, reality check.

 

There is no such thing as "natural" color in astrophotographic image data processing, regardless of what anyone ever tells you.

 

Therefore, just process the image until it meets your criteria of what looks good to you.

 

That's it.

I totally get this point. It's fine. Let me rephrase the goal of my study:

Is there a DSLR AP workflow that, when I input RAW files of terrestrial targets and omit the AP-specific steps (stacking, background subtraction, additional stretching), will produce output JPGs similar to what the camera generates? (But one that will allow me to insert the AP-specific steps if I want.)

 



#9 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,412
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 23 April 2025 - 12:39 PM

Is there a DSLR AP workflow that, when I input RAW files of terrestrial targets and omit the AP-specific steps (stacking, background subtraction, additional stretching), will produce output JPGs similar to what the camera generates?

 

Yes.


  • sharkmelley and timaras like this

#10 martz

martz

    Sputnik

  • -----
  • Posts: 45
  • Joined: 26 Aug 2020

Posted 23 April 2025 - 01:23 PM

Is there a DSLR AP workflow that, when I input RAW files of terrestrial targets and omit the AP-specific steps (stacking, background subtraction, additional stretching), will produce output JPGs similar to what the camera generates? (But one that will allow me to insert the AP-specific steps if I want.)

 

I'm a novice, and this two-part tutorial (1, 2) has been helpful since it discusses the concepts of a workflow to develop a terrestrial raw file "by hand". You can use those ideas to practice with a terrestrial raw file using pixel math in Siril.  An AP workflow can be based on that.  Unfortunately this is a complex area, a true rabbit hole, so it just takes time.  Reading old posts from BQ and Mark is essential.


  • timaras likes this

#11 timaras

timaras

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 23 April 2025 - 04:48 PM

I have used my Canon cameras for astrophotography.  I have got good images that are accurate enough colour-wise by doing the following:

Stacking and stretching in Astro Pixel Processor (using the option to use the camera's white balance)

When stretching in APP using the remove light pollution option

In APP using the calibrate star colours option can be useful

 

Save as a 32-bit TIFF using the ProPhotoRGB colourspace.

 

Open the TIFF in Photoshop, convert into my working space, and then save (I use ProPhotoRGB; you would think the export from APP would make this step unnecessary, but for some reason the ProPhotoRGB file triggers the different-colourspace warning, and if you don't convert colours things don't look right).

 

Next I use Star Xterminator to get a starless version and a version with just the stars.

 

Then I will perform noise reduction on the starless image.

 

I do further processing in Lightroom.

 

Finally I combine the starless image with the stars in Photoshop.

 

This results in pleasing images to my eyes.

 

If you are using flat frames you will need the bias value for your cameras.  You can find this in the MakerNotes section of the image metadata.  It is the black point value.  In Siril you can use this value directly.  In APP you can't do this, so you need to convert a raw file to FITS and then, using software with pixel math (I have done it in Siril), set every pixel to the bias value.  Now you have a master bias for software that requires bias frames.

 

Personally I don't feel the need to get more complex with CCMs etc.  If you do want to go this route I would advise not going anywhere near the sRGB colour space.  It is small and will quite possibly limit colours.  Use a wider working colour space so no colours are limited.  Of course, finding the conversion matrices to go from the camera raw data to something like ProPhotoRGB is not simple.  You can find the conversions in the libraw libraries.  It is a two-step process: camera to XYZ, then XYZ to working colour space.

@wongataa: I presume you do not feed APP/Siril any bias files, right? That's why you need the manual bias subtraction. Otherwise the calibration process of the apps will create a master bias and subtract it.

Also, since you work with APP, this will apply the CCM - so your color should be quite good (minus the stretching which is not as controlled in APP).



#12 FrankieT

FrankieT

    Vostok 1

  • -----
  • Posts: 172
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 23 April 2025 - 06:15 PM

Is there a DSLR AP workflow that, when I input RAW files of terrestrial targets and omit the AP-specific steps (stacking, background subtraction, additional stretching), will produce output JPGs similar to what the camera generates? (But one that will allow me to insert the AP-specific steps if I want.)
 

Yes there is, and it's possible with the latest development version of Siril. The color management section of the documentation describes how to apply a CCM and ICC colour profiles like sRGB. I suggest trying to reproduce a simple reference image, like a colour checker, that you obtain with RawTherapee. The steps include:

 

1. Convert the raw dslr file to fits

2. Subtract the bias. For testing, simply subtract the synthetic bias value embedded in the exif data using pixel math or the usual calibration tab. In practice, you can also use a bias frame.

3. Debayer

4. Assign the ICC profile, e.g. sRGB with a linear TRC, and ensure that the preview mode is set to linear.

5. Apply white balance (use the color conversion matrix tool with the white balance RGB coefficients on the diagonal)

6. Apply the CCM using the color conversion matrix tool.

 

These steps should yield an image comparable to one obtained from RawTherapee with the same white balance coefficients, the camera standard input profile, and a neutral processing profile. If you don't have a suitable reference image, Mark uploaded a Colour Checker image taken with a Canon 600D that you could use. The D50 CCM for the 600D can be obtained from DxOMark, while the synthetic bias value is embedded in the raw exif data.

 

Once you can recreate a "simple" reference image, then you can apply the same process to your astro images.
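For what it's worth, steps 5 and 6 are both linear matrix operations, so they can be composed into (or cross-checked as) a single matrix; a quick NumPy sketch with placeholder numbers, not values for any real camera:

```python
import numpy as np

wb = np.array([2.1, 1.0, 1.5])           # placeholder daylight multipliers
CCM = np.array([[ 1.66, -0.56, -0.10],   # placeholder D50 CCM
                [-0.21,  1.42, -0.21],
                [ 0.05, -0.55,  1.50]])

# White balance first (a diagonal matrix), then the CCM:
# one combined matrix applied to each linear RGB pixel.
M = CCM @ np.diag(wb)
```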
 


  • timaras likes this

#13 wongataa

wongataa

    Explorer 1

  • -----
  • Posts: 75
  • Joined: 29 Jan 2023
  • Loc: UK

Posted 24 April 2025 - 05:09 AM

@wongataa: I presume you do not feed APP/Siril any bias files, right? That's why you need the manual bias subtraction. Otherwise the calibration process of the apps will create a master bias and subtract it.

Also, since you work with APP, this will apply the CCM - so your color should be quite good (minus the stretching which is not as controlled in APP).

I have created synthetic master bias frames for my cameras where every pixel is the bias value.  I did this because you can't just specify the bias value in APP.  You do need bias frames/values if you want to use flat frames.  If you don't, the flat correction doesn't work properly.



#14 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,412
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 24 April 2025 - 12:21 PM

Having been in this rabbit hole for three years now, the most significant problem I've found is that AP functions all expect their RGB input values to be positive real numbers on the interval [0,1]. AP does not do color matching, and the capture primaries are just indiscriminately dumped into the sRGB gamut. This guarantees that the data will be positive. Doing the color-matching coordinate system transformation from DSLR primaries to sRGB guarantees that much of the linear data will be negative. Terrestrial raw processors simply truncate the negative values to zero before applying the TRC. This creates the ugly zeros that we call noise…but it eliminates the square-root-of-negative-one problem you'll encounter in AP.
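In code, that terrestrial convention amounts to something like the sketch below (assuming linear data already in sRGB primaries; the encode mirrors the piecewise sRGB TRC shown earlier):

```python
import numpy as np

def to_display(img_linear_srgb):
    """Terrestrial-raw-processor convention: truncate out-of-gamut
    negatives to zero, then apply the sRGB TRC."""
    clipped = np.clip(img_linear_srgb, 0.0, 1.0)   # kills the negative reds
    return np.where(clipped <= 0.0031308,
                    12.92 * clipped,
                    1.055 * np.power(clipped, 1 / 2.4) - 0.055)
```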



#15 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,412
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 24 April 2025 - 01:03 PM

I should also point out that the fixed pedestal value added by the camera as the "bias" is different from the per-pixel bias value obtained by taking a bias (zero-length exposure) frame. The fixed pedestal value is contained in the EXIF header; it is what terrestrial raw processors use to derive the linear data. The pedestal is different for each of the 4 channels. Siril does not let you specify a fixed value per channel (just a single value for all four). After debayering, the two greens are averaged…so even with pixel math you're stuck with just a single value for both.
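For illustration, per-CFA-channel subtraction before debayering would look something like this outside Siril (a sketch assuming an RGGB mosaic and hypothetical per-channel levels):

```python
import numpy as np

def subtract_black_rggb(raw, levels):
    """Subtract a distinct black level per CFA position, before debayer.

    raw: HxW Bayer mosaic (RGGB pattern assumed); levels: (R, G1, G2, B),
    e.g. read per channel from the EXIF/MakerNotes.
    """
    out = raw.astype(np.int32)
    out[0::2, 0::2] -= levels[0]   # R  (even rows, even cols)
    out[0::2, 1::2] -= levels[1]   # G1 (even rows, odd cols)
    out[1::2, 0::2] -= levels[2]   # G2 (odd rows, even cols)
    out[1::2, 1::2] -= levels[3]   # B  (odd rows, odd cols)
    return np.clip(out, 0, None)
```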



#16 wongataa

wongataa

    Explorer 1

  • -----
  • Posts: 75
  • Joined: 29 Jan 2023
  • Loc: UK

Posted 24 April 2025 - 02:29 PM

With one of my Canon cameras, the black level (bias) in the EXIF is identical for all the channels.  With another, the R and B values are identical, and the two G values match each other but differ from the R/B value by 1, so essentially the same.


  • FrankieT likes this

#17 primeshooter

primeshooter

    Viking 1

  • -----
  • Posts: 688
  • Joined: 19 Mar 2021

Posted 24 April 2025 - 03:10 PM

 

If you are using a stock camera for night-sky imaging then it is a worthy goal to process the data to give good colour reproduction i.e. to match the colours as far as possible to the way the human eye would see the scene if the eye's colour receptors had sufficient sensitivity.  After all, this is how images are processed for daytime terrestrial photography.  Such processing involves extra steps which are automatically applied when the camera produces JPG output or when the raw file is opened in Photoshop, Lightroom, RawTherapee or the camera manufacturer's own software but these steps are ignored by the typical astro-processing workflow.

 

Here are some brief answers to some of the questions you posed.

  • Photometric colour calibration and spectrophotometric colour calibration are sophisticated ways of applying the white balance by scaling the colour channels.  But they don't apply the relevant colour correction matrix.
  • The colour correction matrix is a transformation from the camera sensor's colour primaries to the colour primaries of the working colour space (typically sRGB).  It must be applied to linear data, before or after stacking.
  • In general, GeneralizedHyperbolicStretch is not colour preserving.  But the implementation in PixInsight allows you to set the Mode to "Colour" in the Colour Options panel, which will scale the pixel's RGB channels by the same multiplier, thus preserving the pixel's colour.
  • The tone response curve of the colour space is an interesting problem.  The end goal is for the brightness of each pixel on the display to be proportional to the brightness of the corresponding point in the scene being imaged.  But the display chain for non-colour-managed images assumes that the image is sRGB, including the sRGB tone response curve (the variable gamma curve), and so the display chain "undoes" this transformation before displaying it on screen.  The net result is that linear data is not displayed correctly because the display chain is stretching the data, making it appear far too dark and contrasty.  The best way to avoid this is to use a linear working profile, or, if the processing software is not colour-managed, you can apply the TRC (tone response curve) to the data.  But if you apply a TRC that is variable gamma (like for sRGB) then subsequent colour-preserving stretches will actually distort colour, because brightening the pixel moves it to a part of the TRC where the gamma is different.  However, a colour space such as AdobeRGB has a constant gamma TRC, so subsequent colour-preserving stretches are OK.  But the CCM for the AdobeRGB colour space is different to the CCM for the sRGB colour space (since AdobeRGB and sRGB have different colour primaries), and working with AdobeRGB will only display correctly in proper colour-managed software.

 

Voice of reason here. Yes, the first paragraph.


  • timaras likes this

#18 timaras

timaras

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 25 April 2025 - 02:55 AM

I should also point out that the fixed pedestal value added by the camera as the "bias" is different from the per-pixel bias value obtained by taking a bias (zero-length exposure) frame. The fixed pedestal value is contained in the EXIF header; it is what terrestrial raw processors use to derive the linear data. The pedestal is different for each of the 4 channels. Siril does not let you specify a fixed value per channel (just a single value for all four). After debayering, the two greens are averaged…so even with pixel math you're stuck with just a single value for both.

@BQ, this is somewhat confusing to me. For example, let's say a 14-bit camera (16384 max value) has 2048 as a pedestal value (contained in the EXIF). Isn't it true that the pixels in a RAW file will have a bias of 2048 (meaning: the darkest pixels will have a value around 2048, plus/minus noise/uncertainty), and the RAW converter will know to subtract 2048 from each RAW pixel value? Sure, for a single bias frame the values won't all be exactly 2048, but when stacking multiple bias frames they should converge around the pedestal value?



#19 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,412
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 25 April 2025 - 05:26 AM

@BQ, this is somewhat confusing to me. For example, let's say a 14-bit camera (16384 max value) has 2048 as a pedestal value (contained in the EXIF). Isn't it true that the pixels in a RAW file will have a bias of 2048 (meaning: the darkest pixels will have a value around 2048, plus/minus noise/uncertainty), and the RAW converter will know to subtract 2048 from each RAW pixel value? Sure, for a single bias frame the values won't all be exactly 2048, but when stacking multiple bias frames they should converge around the pedestal value?

The terrestrial raw "converter" subtracts the pedestal value in the EXIF header. That was the basis of your question.

 

The bias calibration function in AP software subtracts the literal value in each bias pixel from the light or flat.

 

At a pixel level, the bias stack average is converged by definition because the camera adds the pedestal constant to the average of all the dark ("black") pixels—and the average of a stack of averages equals the average of the stack.

 

However, the min and max are not convergent—at least not in a stack of 50 bias frames out of the 600D:

 

[attached image: per-frame statistics for a stack of 50 Canon 600D bias frames]

 

This is why in AP there are two schools of thought on using bias frames vice a "synthetic" bias.

 

Cheers,

 

BQ



#20 wongataa

wongataa

    Explorer 1

  • -----
  • Posts: 75
  • Joined: 29 Jan 2023
  • Loc: UK

Posted 25 April 2025 - 05:52 AM


This is why in AP there are two schools of thought on using bias frames vice a "synthetic" bias.

 

Cheers,

 

BQ

It is pretty easy to process an image stack with both bias methods (synthetic and stacked bias frames) and see if you can spot any difference.  Personally I find the synthetic way works just fine, and it saves me ever having to take bias frames again.  The way I see it, it works for terrestrial photography, so why should it be any different for astrophotography?  Digital cameras work the same way whatever you take a photo of.
 

There is nothing wrong with either method as long as you are happy with the results.


  • FrankieT likes this

#21 FrankieT

FrankieT

    Vostok 1

  • -----
  • Posts: 172
  • Joined: 08 Jan 2019
  • Loc: Switzerland

Posted 25 April 2025 - 07:16 AM

@BQ, this is somewhat confusing to me. For example, let's say a 14-bit camera (16384 max value) has 2048 as a pedestal value (contained in the EXIF). Isn't it true that the pixels in a RAW file will have a bias of 2048 (meaning: the darkest pixels will have a value around 2048, plus/minus noise/uncertainty), and the RAW converter will know to subtract 2048 from each RAW pixel value? Sure, for a single bias frame the values won't all be exactly 2048, but when stacking multiple bias frames they should converge around the pedestal value?

Perhaps it helps to clarify terminology. In astrophotography, a "pedestal" and offset/bias can have different meanings but are sometimes used interchangeably, which can be confusing. Typically, a pedestal refers to a value added deliberately during image processing or calibration to keep pixel values positive and prevent clipping, while an "offset" is a constant value added to all pixels by the camera hardware. Since we are referring to the offset in this discussion, I'll avoid the term pedestal.

 

The camera offset, which is embedded in the exif data from a DSLR, represents the constant value added to all pixels. There is provision for an offset per colour channel in the exif data but they usually have the same magnitude or only differ by 1 ADU. So you are correct, pixels that are not exposed to light will have a bias value approximately equal to the camera offset; but, in practice, the true bias of each pixel will differ slightly from the camera offset due to small variations in the hardware/electronics. These differences are usually random-ish in the spatial domain so if you average all of the pixels in a single bias frame, then that value will converge to the camera offset. Nevertheless, these small bias variations can sometimes manifest as fixed pattern noise.

 

If your camera suffers from fixed pattern noise, then this can be mitigated by creating a master bias frame, which measures the bias value per pixel. Otherwise, you can simply subtract the camera offset embedded in the exif data.

 

The bias can be calibrated with Siril using various methods—a master bias frame, a single synthetic bias or even a synthetic bias per colour channel. In any case, the bias must be subtracted before debayering in the processing pipeline.



#22 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,412
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 25 April 2025 - 08:49 AM

Perhaps it helps to clarify terminology. In astrophotography, a "pedestal" and offset/bias can have different meanings but are sometimes used interchangeably, which can be confusing. Typically, a pedestal refers to a value added deliberately during image processing or calibration to keep pixel values positive and prevent clipping, while an "offset" is a constant value added to all pixels by the camera hardware.

Thanks for that distinction, Frankie! Per your clarification, I think I use a pedestal for various transforms with Colour-Science for Python. Some of the functions don't like zeroes or negative values, so I truncate to 1/65535 instead of 0. That's not a bias…but is that a pedestal?

 

BQ



#23 timaras

timaras

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 08 Apr 2017
  • Loc: London, UK

Posted 25 April 2025 - 12:03 PM

 

 

If your camera suffers from fixed pattern noise, then this can be mitigated by creating a master bias frame, which measures the bias value per pixel. Otherwise, you can simply subtract the camera offset embedded in the exif data.

 

 

OK thanks (and BQ) for the detail. So I understand that subtracting a master bias is more precise (it will get rid of these nasty pixels that are away from the offset), but just subtracting a spatially uniform offset may be good enough.



#24 timelapser

timelapser

    Mariner 2

  • -----
  • Posts: 240
  • Joined: 21 Oct 2022

Posted 25 April 2025 - 06:56 PM

Is there a DSLR AP workflow that, when I input RAW files of terrestrial targets and omit the AP-specific steps (stacking, background subtraction, additional stretching), will produce output JPGs similar to what the camera generates? (But one that will allow me to insert the AP-specific steps if I want.)

It may be worth stressing, as far as reproducing terrestrial JPEGs, that cameras will typically apply a tone curve in addition to the necessary display gamma curve.  This is mentioned in part 2 of the tutorial martz cites.  They do this because if there are clipped highlights (e.g. a bright sky) then without the extra tone curve the slope of the response (linear input to final output) changes abruptly at the clip point (usually pixel value 255).  So the extra tone curve smooths out the transition to clipped.  In addition it can reveal detail in otherwise clipped pixels, i.e. increase the overall (input) dynamic range captured in the final pic.  This of course is at the expense of losing strict linearity of the final displayed image to the original scene intensity.
 

But different cameras will presumably apply different tone curves, and individual cameras may have different options for that (e.g. "vivid", extra contrast settings).  Of course, for astrophotography, if you have no clipped pixels this isn't an issue and there's no need for an extra tone curve.
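As a toy illustration of such a shoulder (not any particular camera's curve, which is proprietary), here is a roll-off that stays linear at low levels and approaches the clip point smoothly, with no abrupt slope change:

```python
import numpy as np

def shoulder_curve(x, knee=0.7):
    """Toy highlight roll-off: identity below the knee, then a smooth
    asymptotic approach to 1.0 (slope is continuous at the knee)."""
    x = np.asarray(x, dtype=float)
    t = np.clip(x - knee, 0.0, None)
    span = 1.0 - knee
    return np.where(x < knee, x, knee + span * (1.0 - np.exp(-t / span)))
```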


Edited by timelapser, 25 April 2025 - 06:58 PM.

  • timaras likes this

#25 BQ Octantis

BQ Octantis

    Voyager 1

  • *****
  • Posts: 10,412
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 25 April 2025 - 07:21 PM

In addition it can reveal detail in otherwise clipped pixels, i.e. increase the overall (input) dynamic range captured in the final pic.  This of course is at the expense of losing strict linearity of the final displayed image to the original scene intensity.
 

But different cameras will presumably apply different tone curves, and individual cameras may have different options for that (e.g. "vivid", extra contrast settings).  Of course, for astrophotography, if you have no clipped pixels this isn't an issue and there's no need for an extra tone curve.

Instead, in AP you get the dreaded purple star cores from the differing dynamic ranges between the channels. The Siril developers recommended simply desaturating the cores, but highlight reconstruction is a RawTherapee function I wish existed in Siril (or that RawTherapee could apply to a TIFF and not just a raw file).


  • Brain&Force likes this

