
"True color" again

Astrophotography
439 replies to this topic

#1 loujost

loujost

    Mariner 2

  • -----
  • topic starter
  • Posts: 247
  • Joined: 02 Oct 2017

Posted 28 February 2023 - 09:29 PM

Many of us want to make photos that show realistic colors, not the false colors of the Hubble palette, etc., which are more like maps or abstract art. However, in most posts that mention this topic, there are people who insist that there is no such thing as "true" or "realistic" color. I can't understand this view. The situation is no different than for terrestrial objects, where there is broad agreement about their color. It seems to me that there is an obvious way to reach a consensus about the true color of an astronomical object: we could just look at it with a sufficiently large-aperture telescope, so that the image was bright enough to engage the observer's cone vision. Those are the "real" colors of the object.

 

There would still be a bit of ambiguity; we'd get slightly different colors if we were floating in space versus sitting under Earth's (or another planet's) dense atmosphere. That's fine, we can choose our "rendering intent". But once this is chosen, a consensus about colors would be reached, just as a consensus is reached for terrestrial objects. Consensus is especially easy to reach when the object in question emits its own light, as is the case with most astro objects.

 

So am I missing something important? "True" colors are the colors we would see with a sufficiently large telescope. Seems simple. Is this definition acceptable?

 

If so, then Bayer matrix sensors would be the best tool for obtaining realistic colors, since their RGB filters are designed to match our own RGB cone system.

 

It follows that astro RGB filters with sharp cutoffs and small overlaps will not reproduce accurate colors when there are multiple strong narrow emission bands, since those filters always present each narrow band as either R, G, or B, not distinguishing between celeste and indigo, for example. They would work well for black-body radiators like stars, though.
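Here's a toy numerical sketch of that binning effect. The boxcar curves for the sharp-cutoff set and the overlapping Gaussians standing in for eye-like Bayer responses are both made up for illustration, not any real filter's measured data:

import numpy as np

def boxcar_rgb(wl_nm):
    # Toy sharp-cutoff astro filter set: non-overlapping 100 nm boxcars.
    r = 1.0 if 600 <= wl_nm < 700 else 0.0
    g = 1.0 if 500 <= wl_nm < 600 else 0.0
    b = 1.0 if 400 <= wl_nm < 500 else 0.0
    return np.array([r, g, b])

def eyelike_rgb(wl_nm):
    # Toy overlapping Gaussians standing in for Bayer/cone sensitivities.
    peaks = np.array([600.0, 540.0, 450.0])  # hypothetical R, G, B peaks
    return np.exp(-0.5 * ((wl_nm - peaks) / 45.0) ** 2)

for wl in (445, 485):  # indigo-ish vs. celeste-ish narrow emission lines
    print(wl, boxcar_rgb(wl), eyelike_rgb(wl).round(3))

The boxcar set reports [0, 0, 1] for both lines, so after any scaling they render as the same blue. The overlapping set excites G far more at 485 nm than at 445 nm, so the two lines keep distinct hues, as they do for our cones.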

 

Some people on the forums talk about red light "contaminating" the blue light in a Bayer-filtered sensor, but this is how our eyes work, and we need a sensor which reacts this way in order to reproduce magentas. So it seems to me wrong to call a typical astro LRGB image "natural" or "realistic" (though it might be nearly natural if there are only stars and galaxies).

 

 


  • rainycityastro, Dean J., sharkmelley and 3 others like this

#2 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 28 February 2023 - 09:45 PM

If so, then Bayer matrix sensors would be the best tool for obtaining realistic colors, since their RGB filters are designed to match our own RGB cone system.

 

Stone the flamin' crows! Our cone system is BT.709?

 

(Hint: it's not…)

 

Heck, I'd trade my 200mm f/2.8L II USM for a Bayer matrix with primaries even close to BT.709!

 

https://www.dxomark....s#measuretabs-7

 

It wouldn't have taken me literally months to design a CCM to recover Planckian star colors from them on my ~sRGB LCD display.

 

What are the chromaticities of your Bayer primaries?


Edited by BQ Octantis, 28 February 2023 - 10:18 PM.

  • mikemarotta likes this

#3 JamesTX

JamesTX

    Viking 1

  • *****
  • Posts: 809
  • Joined: 27 Aug 2017
  • Loc: Texas

Posted 28 February 2023 - 10:40 PM

I can't understand this view. The situation is no different than for terrestrial objects, where there is broad agreement about their color.

Is there broad agreement?  Someone who is color blind isn't going to see the same "true colors" as someone who isn't color blind. This is why using the term "realistic" is a problem.

After working with color correction, curves, and saturation on OSC data... is it still "realistic"?

I image with both mono and OSC. Rather than calling images "natural" or "realistic", I refer to them as either "broadband" or "narrowband".


  • rainycityastro, n2068dd, mikemarotta and 1 other like this

#4 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 28 February 2023 - 11:17 PM

It's not that you can't color match, even with overlapping Bayer filter spectrums (like those in my venerable Canon 600D/T3i, which is designed to reproduce what the eye sees in reasonable lighting).

 

But the problem with making color accuracy claims is multi-faceted (even assuming "normal" color vision):

 

1. Standard AP workflows do not do color-matching. Color matching requires a CCM to properly transform the data from the color space of the filter primaries to the BT.709 primaries used for our sRGB displays. You're not going to do this by fiddling with the white balance of the random primaries you dumped into the BT.709 channels followed by an asinh/GHT transformation in an entirely non-color-managed process. (A minimal sketch of the missing CCM step follows after this list.)

 

2. Our BT.709 monitors can't reproduce the entire gamut of human vision. Nor can the sRGB format we use for our JPEGs/PNGs/TIFFs (which uses the same BT.709 primaries). Cyans in particular require negative values in BT.709 color space pretty quickly (hence the use of the Adobe RGB color space, which can reproduce them, to preserve them for print). And our displays cannot reproduce a red, green, or blue pixel with a negative value.

 

3. Our eyes are not as sensitive to Ha as AP cameras. It is the LPF-2 filter in a daylight camera that clips the spectrum to the human vision sensitivity, not the Bayer array. The reds from Ha in AP images from cameras without the LPF-2 are severely overrepresented (by 4-5x).

 

4. Every bit of our processing enhances the brightness and contrast of our targets way beyond what we could "naturally" produce with the eye. Color perception changes with brightness, so manipulating the brightness and contrast affects the "accuracy" of the perceived color saturation.

 

5. Even if we were to color match perfectly (i.e., exact xy chromaticity values for every pixel), there is no "natural" scenario in which we could ever see such colors from the objects we shoot, because we did not match xyY. In all natural systems, the system etendue is conserved because optical power is conserved—as opposed to its massive amplification in AP, which activates our cones. Even with a fast, passive optic (is this still "natural"?) with a massive exit pupil, the entrance pupil to our eye limits the flux (~Y) reaching the retina to only what you can see with the unaided eye (hence the 7x50 maximum ratio for binoculars for nighttime visual). So the light from the colorful DSOs we shoot can never be bright enough to activate our cones via "natural" means. The "natural" color of DSOs is whatever color your rods register as in your brain.
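To make point 1 concrete, here is a minimal sketch of the missing CCM step. The matrix values are placeholders (a real CCM must be fitted for the specific sensor and illuminant), and the image is assumed to be linear, pre-stretch data:

import numpy as np

# Placeholder camera-RGB -> linear BT.709 matrix; rows sum to 1 so that
# camera "white" stays white. A real CCM is fitted per sensor/illuminant.
CCM = np.array([[ 1.60, -0.45, -0.15],
                [-0.30,  1.55, -0.25],
                [ 0.05, -0.50,  1.45]])

def apply_ccm(img_linear, ccm=CCM):
    # Must run on linear data, before any asinh/GHT stretch.
    out = np.einsum('ij,hwj->hwi', ccm, img_linear)
    return np.clip(out, 0.0, None)  # negatives are out-of-gamut for BT.709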

 

Hope that helps…

 

BQ


Edited by BQ Octantis, 28 February 2023 - 11:38 PM.

  • psandelle, n2068dd, Astrola72 and 1 other like this

#5 columbidae

columbidae

    Messenger

  • -----
  • Posts: 463
  • Joined: 08 Sep 2022
  • Loc: Texas

Posted 28 February 2023 - 11:21 PM

Is there broad agreement?  Someone who is color blind isn't going to see the same "true colors" as someone who isn't color blind. This is why using the term "realistic" is a problem.

After working with color correction, curves, and saturation on OSC data... is it still "realistic"?

I image with both mono and OSC. Rather than calling images "natural" or "realistic", I refer to them as either "broadband" or "narrowband".

Color blind people already see the world with different color perception - a "true color" astrophoto would be no different than their usual day-to-day perception of terrestrial "true color".  Provided the colored details aren't all in a range they're insensitive to, they can still appreciate them, if perhaps less than someone with average to exceptional color vision.

 

From what I understand, the usual reasons to add false color are the following:

 

1. To make it easier to see and understand scientific details (like the Hubble palette) - making certain emission spectra different colors to more clearly show what material is where and how hot, including spectra that aren't in the visual range.

2. To make the photo more artistically pleasing. (If you want to add colors in a color-blind-friendly palette, I wouldn't mind.)

 

Otherwise, a spectrograph or similar would be the way to determine the actual true color(s) of things. Vision and perception are really complicated in comparison.



#6 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 7,740
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 01 March 2023 - 02:39 AM

It follows that astro RGB filters with sharp cutoffs and small overlaps will not reproduce accurate colors when there are multiple strong narrow emission bands, since those filters always present each narrow band as either R, G, or B, not distinguishing between celeste and indigo for example.

Exactly right.  The best example of this is to think about what would happen if you took an image of a spectrum or rainbow with sharp cut-off filters.



#7 thommy

thommy

    Lift Off

  • -----
  • Posts: 17
  • Joined: 08 Jun 2021
  • Loc: Denmark, Suburban

Posted 01 March 2023 - 03:32 AM

Why is it important to you? If you and others (including myself, although I mostly image in B/W these days) want to present images in as "true" colors as possible, you are free to do it. Those who like narrowband imaging know that the colors are not true to human eyes, but narrowband reveals details not clearly seen in broadband images. So why do we need to agree?


  • DaveB, kathyastro, SoDaKAstroNut and 4 others like this

#8 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 01 March 2023 - 07:04 AM

Exactly right.  The best example of this is to think about what would happen if you took an image of a spectrum or rainbow with sharp cut-off filters.

Hiya Mark!

 

I'm glad you chimed in! I was wondering this exact thing a few days ago, after I tried to make a Planckian CCM from an OP's L-Enhance star data. It didn't go well:

 

What are the spectral bandwidth requirements for a Bayer CFA or a display to produce "adequate" coverage of the gamut produced by those primaries?

 

I assume the Bayer Maker's Handbook and the HDTV Manufacturers' Manual cover this, but I haven't come across those yet. I certainly haven't seen any spec…but QLED TV makers seem proud to publish the 99.93% BT.709 coverage of their primaries. I've seen no similar marketing ploy from camera makers, but the published spectra I see for Bayer CFAs generally have heaps of overlap, with weird tails spilling into adjacent bands.

 

BQ


Edited by BQ Octantis, 01 March 2023 - 04:46 PM.

  • wiruna likes this

#9 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 7,740
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 01 March 2023 - 08:14 AM

QLED TV makers seem proud to publish the 99.93% BT.709 coverage of their primaries. I've seen no similar marketing ploy from camera makers

Coverage of a colour space is not a metric that makes sense for cameras since cameras faithfully record the full gamut of hues within all colour spaces.  The meaningful metric for a camera is the metamerism index.



#10 loujost

loujost

    Mariner 2

  • -----
  • topic starter
  • Posts: 247
  • Joined: 02 Oct 2017

Posted 01 March 2023 - 09:27 AM

Thanks for the responses.

 

Stone the flamin' crows! Our cone system is BT.709?

 

(Hint: it's not…)

 

My point is that our existing systems are widely agreed to be adequate to capture and reproduce our visual experience. Perfect? Of course not. But good enough for billions of users. A professionally color-managed and calibrated workflow is even better at this.

 

 

 

Is there broad agreement?  Someone who is color blind isn't going to see the same "true colors" as someone who isn't color blind. This is why using the term "realistic" is a problem.

After working with color correction, curves, and saturation on OSC data... is it still "realistic"?

I image with both mono and OSC. Rather than calling images "natural" or "realistic", I refer to them as either "broadband" or "narrowband".

Yes, there is broad agreement. Billions of users show that photography works well at capturing realistic colors. Even the most common kinds of color-blind people (those who are missing one kind of cone) will agree. "Realistic" is not a problematic term, though I know care must be taken.

 

 

It's not that you can't color match, even with overlapping Bayer filter spectrums (like those in my venerable Canon 600D/T3i, which is designed to reproduce what the eye sees in reasonable lighting).

 

But the problem with making color accuracy claims is multi-faceted (even assuming "normal" color vision):

 

1. Standard AP workflows do not do color-matching. Color matching requires a CCM to properly transform the data from the color space of the filter primaries to the BT.709 primaries used for our sRGB displays. You're not going to do this by fiddling with the white balance of the random primaries you dumped into the BT.709 channels followed by an asinh/GHT transformation in an entirely non-color-managed process.

 

2. Our BT.709 monitors can't reproduce the entire gamut of human vision. Nor can the sRGB format we use for our JPEGs/PNGs/TIFFs (which uses the same BT.709 primaries). Cyans in particular require negative values in BT.709 color space pretty quickly (hence the use of the Adobe RGB color space, which can reproduce them, to preserve them for print). And our displays cannot reproduce a red, green, or blue pixel with a negative value.

 

3. Our eyes are not as sensitive to Ha as AP cameras. It is the LPF-2 filter in a daylight camera that clips the spectrum to the human vision sensitivity, not the Bayer array. The reds from Ha in AP images from cameras without the LPF-2 are severely overrepresented (by 4-5x).

 

4. Every bit of our processing enhances the brightness and contrast of our targets way beyond what we could "naturally" produce with the eye. Color perception changes with brightness, so manipulating the brightness and contrast affects the "accuracy" of the perceived color saturation.

 

5. Even if we were to color match perfectly (i.e., exact xy chromaticity values for every pixel), there is no "natural" scenario in which we could ever see such colors from the objects we shoot, because we did not match xyY. In all natural systems, the system etendue is conserved because optical power is conserved—as opposed to its massive amplification in AP, which activates our cones. Even with a fast, passive optic (is this still "natural"?) with a massive exit pupil, the entrance pupil to our eye limits the flux (~Y) reaching the retina to only what you can see with the unaided eye (hence the 7x50 maximum ratio for binoculars for nighttime visual). So the light from the colorful DSOs we shoot can never be bright enough to activate our cones via "natural" means. The "natural" color of DSOs is whatever color your rods register as in your brain.

 

Hope that helps…

 

BQ

Thanks for this. #1 is not an argument but a statement that we are mostly not getting realistic colors. I agree.

 

#2: Problems with the output device are a separate issue, and gamuts are improving constantly. The better ones are already good enough even for demanding pro photographers/printers.

 

#3. Again this is not an argument but a statement that we are mostly not getting realistic colors. I agree. The issue of sensitivity does add some complications, and also gives us a legitimate degree of freedom in our representations of "natural" colors; we might have to decide whether we want to allow the eye to adjust while scanning different parts of the subject.

 

#4. That's right, of course, and gives us some slight ambiguity. But as brightness increases, I think that these perceived colors would be pretty stable, at least based on our experience with ordinary photography.

 

#5. I don't fully understand this point, and would like to hear more. This could be an important counter-argument against my definition of "natural" or "realistic" color.

 

Columbidae, I agree with everything you said. The goal of most astrophotography is to make a pretty photo or an informative one, not a "natural" one. Which is fine.

 

Thommy, we don't have to agree on what we want to do. What I would like to see is the recognition that there is a reasonable way to define "natural" or "realistic" colors, and this definition makes sense to the wider public and is worth pursuing by some of us. Whether that is of interest to you is your own business. But the general non-technical public really does think that DSOs look as they are portrayed in most AP photos. Some astrophotographers add to the confusion by claiming that their LRGB photos are "natural". I wish more astrophotographers would present what a person would actually see through a (big enough) telescope. I think this is what most non-technical people want to know.

 

Mark, thanks for coming in. I've read many of your other posts on this subject and they are always eminently sensible and balanced.


Edited by loujost, 01 March 2023 - 11:13 AM.


#11 arrowspace90

arrowspace90

    Apollo

  • *****
  • Posts: 1,438
  • Joined: 15 May 2009
  • Loc: United States

Posted 01 March 2023 - 01:50 PM

I personally do not prefer the colors of the Hubble Palette and I think it is vastly overused.  Or I suppose I am just burned out on it.

 

I don't find it realistic myself, but all that means is, I will not typically be using that technique.  I won't be trying to tell you not to.


  • Andrew_L and Loopster like this

#12 jonnybravo0311

jonnybravo0311

    Fly Me to the Moon

  • *****
  • Moderators
  • Posts: 5,020
  • Joined: 05 Nov 2020
  • Loc: NJ, US

Posted 01 March 2023 - 02:09 PM

I personally do not prefer the colors of the Hubble Palette and I think it is vastly overused.  Or I suppose I am just burned out on it.

 

I don't find it realistic myself, but all that means is, I will not typically be using that technique.  I won't be trying to tell you not to.

There's nothing realistic about the Hubble palette. It simply defines a strategy of mapping the longest to shortest wavelength light you captured to the R, G and B channels respectively. So, if you happened to capture S2, O3 and Hb for example, then the Hubble palette would put S2 into R, O3 into G and Hb into B. That's probably more "real" than when you capture S2, Ha and O3, because both S2 and Ha are wavelengths that correspond to red. Or, perhaps you've decided on capturing sulphur, hydrogen and nitrogen. All 3 of those are red, but the Hubble palette would put S into R, N into G and H into B. Nothing at all "real" about it :).
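In code, the mapping is just a channel assignment. A sketch, where the stacked narrowband master arrays are hypothetical:

import numpy as np

def hubble_palette(sii, ha, oiii):
    # Longest wavelength -> R, middle -> G, shortest -> B (the SHO mapping).
    # sii (672 nm), ha (656 nm), oiii (501 nm) are 2-D stacked masters.
    return np.stack([sii, ha, oiii], axis=-1)  # HxWx3 false-color image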


  • rjacks likes this

#13 loujost

loujost

    Mariner 2

  • -----
  • topic starter
  • Posts: 247
  • Joined: 02 Oct 2017

Posted 01 March 2023 - 02:12 PM

I personally do not prefer the colors of the Hubble Palette and I think it is vastly overused.  Or I suppose I am just burned out on it.

 

I don't find it realistic myself, but all that means is, I will not typically be using that technique.  I won't be trying to tell you not to.

Nobody on any of these threads about "realistic" color is trying to tell other people what to do.  My goal here was to point out that the concept of "realistic" color is not as ambiguous as some people suggest, and most AP photos (even LRGB photos) do not have realistic colors.

 

I do hope more people will work on producing realistic images for public consumption... but it is also fun and interesting to produce false-color images.

 

Truth in labeling matters, though. I think we should not label LRGB photos of DSOs as "natural color" or similar. I suspect most astrophotographers who do that are unaware of the limitations of standard sharp-cutoff filters, which cannot accurately reproduce the true color of a DSO that contains multiple emission lines.


Edited by loujost, 01 March 2023 - 02:13 PM.

  • mikemarotta likes this

#14 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 01 March 2023 - 06:21 PM

Thanks for this. #1 is not an argument but a statement that we are mostly not getting realistic colors. I agree.

 

#2: Problems with the output device are a separate issue, and gamuts are improving constantly. The better ones are already good enough even for demanding pro photographers/printers.

 

#3. Again this is not an argument but a statement that we are mostly not getting realistic colors. I agree. The issue of sensitivity does add some complications, and also gives us a legitimate degree of freedom in our representations of "natural" colors; we might have to decide whether we want to allow the eye to adjust while scanning different parts of the subject.

 

#4. That's right, of course, and gives us some slight ambiguity. But as brightness increases, I think that these perceived colors would be pretty stable, at least based on our experience with ordinary photography.

 

#5. I don't fully understand this point, and would like to hear more. This could be an important counter-argument against my definition of "natural" or "realistic" color.

 

No worries at all!
 

I've been on a deep spiritual journey that started with trying to get any color in Siril, passed through metamerism and CCM color matching, and ended with "what exactly is color?"

 

Items 1-4 are simply what I see as the more serious technical shortcomings of capturing and reproducing color in a technically accurate way. I omitted the choice of white point (CCMs are very often D50, while sRGB and our BT.709 displays are squarely D65 to maximize the use of the BT.709 primaries) because it is probably of lesser importance, but that also affects the warmth/coolness of the scene—and the star temperature at which color transitions from yellow to blue. You can certainly quibble over or minimize the importance of all of these matters. But without doing any color matching at all, the color accuracy argument just completely falls apart.

 

Probably of equal technical importance is color manipulation. Fiddling with the primaries directly to change brightness, contrast, saturation, and color balance is also rife with pitfalls because of the psychophysical aspects of color perception. I developed a means to implement more perceptually accurate chroma enhancement with an OKLab workflow in pixel math, but it's far from convenient—and there are valid technical arguments about the demerits of even a perceptual color space (and even about the need for a 21st-century overhaul of color manipulation). The best extant workable solution I found was RawTherapee's L*a*b* Adjustment color controls panel, which implements the necessary Munsell corrections for the shortcomings of the L*a*b* color model. Again, you can quibble over or ignore this point all you want. But don't blame me when someone points out that you ignored the Helmholtz–Kohlrausch effect.
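For the curious, here is a bare-bones sketch of the OKLab chroma idea, not my actual pixel-math workflow. It assumes linear sRGB input and uses Ottosson's published sRGB-to-OKLab matrices:

import numpy as np

# Ottosson's linear-sRGB -> LMS and cbrt(LMS) -> OKLab matrices.
M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
               [0.2119034982, 0.6806995451, 0.1073969566],
               [0.0883024619, 0.2817188376, 0.6299787005]])
M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
               [1.9779984951, -2.4285922050,  0.4505937099],
               [0.0259040371,  0.7827717662, -0.8086757660]])

def boost_chroma(rgb_linear, gain=1.3):
    lab = np.cbrt(rgb_linear @ M1.T) @ M2.T   # linear sRGB -> OKLab
    lab[..., 1:] *= gain                      # scale a/b (chroma), keep L
    lms = (lab @ np.linalg.inv(M2).T) ** 3    # OKLab -> LMS
    return lms @ np.linalg.inv(M1).T          # LMS -> linear sRGB

Because only a and b are scaled, lightness L is nominally preserved, which is exactly where the Helmholtz–Kohlrausch effect bites: perceived brightness still shifts with the added chroma.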

 

Item 5 is much more fundamental and entirely philosophical. Color perception is a psychophysical process that requires sufficient photon flux to stimulate the cones. But the photon flux off deep space objects is far too low to do so through non-electronic (i.e., natural) means, because of the étendue and pupil limitations I already discussed. Astrophotography non-naturally amplifies and reproduces the flux by several orders of magnitude to shift the DSOs from entirely scotopic (produced by rhodopsin phototransduction in colorless rods) to fully photopic (produced by photopsin phototransduction in the tristimulus cones). These are two very different visual processes. If you can label electronically transmuting the one to the other "natural", then perhaps we should be quibbling over the definition of that word.

 

BQ

 


  • psandelle and mikemarotta like this

#15 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 01 March 2023 - 06:40 PM

Coverage of a colour space is not a metric that makes sense for cameras since cameras faithfully record the full gamut of hues within all colour spaces.  The meaningful metric for a camera is the metamerism index.

 

Thanks for that, Mark.

 

I've only seen that value on the DXOMark pages for various DSLRs, but I hadn't researched it further. As it turns out, the DXOMark site has a great description of the concept (technically, the Sensitivity Metamerism Index):

 

"The sensitivity metamerism index (SMI) is defined in the ISO standard 17321 and describes the ability of a camera to reproduce accurate colors. Digital processing permits changing color rendering at will, but whether the camera can or cannot exactly and accurately reproduce the scene colors is intrinsic to the sensor response and independent of the raw converter.

 

The underlying physics is that a sensor can distinguish exactly the same colors as the average human eye, if and only if the spectral responses of the sensor can be obtained by a linear combination of the eye cone responses. These conditions are called Luther-Ives conditions, and in practice, these never occur. There are objects that a sensor sees as having certain colors, while the eye sees the same objects differently, and the reverse is also true.

 

SMI is an index quantifying this property, and is represented by a number lower than 100 (negative values are possible). A value equal to 100 is perfect color accuracy, and is only attained when Luther-Ives conditions hold (which, as previously stated, never happens in practice). A value of 50 is the difference in color between a daylight illuminant and an illuminant generated by fluorescent tubes, which is considered a moderate error.

More precisely, SMI is defined as

SMI = 100 − 5.5 × mean(ΔE*ab),

where mean(ΔE*ab) is the average CIELAB error observed on a set of various colors. In our experiments, we used the 18 colored patches of a GretagMacbeth color checker, as ISO 17321 recommends. The SMI varies depending on the illuminant.

 

In practice, the SMI for DSLRs ranges between 75 and 85, and is not very discriminating. It is different for low-end cameras (such as camera phones), which typically have a SMI of about 40. For this reason, we give this measurement as an indication but do not integrate it in DxO Mark." [Source, emphasis mine]
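In code, the index from that definition is a one-liner (the patch errors below are hypothetical):

import numpy as np

def smi(delta_e_ab):
    # 100 minus 5.5 times the mean CIELAB error over the test patches.
    return 100.0 - 5.5 * np.mean(delta_e_ab)

print(smi(np.full(18, 3.5)))  # 18 patches at dE ~3.5 -> 80.75, typical DSLR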

 

I have never seen this concept described—much less measured—in any AP context. But I could have easily missed it…

 

Cheers,

 

BQ


Edited by BQ Octantis, 01 March 2023 - 08:29 PM.

  • psandelle and jdupton like this

#16 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 7,740
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 02 March 2023 - 01:53 AM

Thanks for that, Mark.

For completeness, I should point out that if the colour filters have gaps between transmission curves then there will be some parts of the colour gamut they'll be unable to record. This caused an amusing effect for those doing LRGB imaging of Comet Neowise's unusual sodium tail using filter sets that deliberately had a gap at the sodium wavelength.  Only the L filter would record the tail, so it was impossible to assign any colour to that part of the tail.
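A toy check of that effect, with made-up boxcar LRGB curves that leave a gap around the 589 nm sodium doublet:

def lrgb_response(wl_nm):
    # Hypothetical filter set; the G/R gap at 580-600 nm is deliberate.
    L = 1.0 if 400 <= wl_nm <= 700 else 0.0
    R = 1.0 if 600 <= wl_nm <= 700 else 0.0
    G = 1.0 if 500 <= wl_nm <= 580 else 0.0
    B = 1.0 if 400 <= wl_nm <= 500 else 0.0
    return L, R, G, B

print(lrgb_response(589))  # (1.0, 0.0, 0.0, 0.0): luminance, but no colour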


  • Foc likes this

#17 dciobota

dciobota

    Soyuz

  • *****
  • Posts: 3,735
  • Joined: 07 Aug 2007
  • Loc: No longer on this site in protest of poor site moderation

Posted 02 March 2023 - 09:48 AM

I feel like a Cyndi Lauper song coming on. 😏

If you really want to see "true" color in astroimaging, take pics with an unmodified DSLR in daylight WB mode. That will be about as close as you can get to terrestrial image colors. But not even then, because depending on lens characteristics, daylight sky conditions in terrestrial imaging, or skyglow/light pollution in the night sky, etc., the color balance will vary. Therefore color balance requires adjustment in post-processing.

Out of the camera, astro images taken with an unmodified DSLR will lack a lot of color in the Ha spectrum (deep red) unless again you enhance it in post-processing.

So while you can do it, will any astronomer want to? Seems to not be the case. Just as in landscape photography, we want to showcase our subjects in some way, so enhancing colors in different spectral bands is a goal most strive for here. It's no different than terrestrial photography, although perhaps on a different scale.

Edited by dciobota, 02 March 2023 - 09:52 AM.

  • licho52 likes this

#18 loujost

loujost

    Mariner 2

  • -----
  • topic starter
  • Posts: 247
  • Joined: 02 Oct 2017

Posted 02 March 2023 - 02:30 PM


Item 5 is much more fundamental and entirely philosophical. Color perception is a psychophysical process that requires sufficient photon flux to stimulate the cones. But the photon flux off deep space objects is far too low to do so through non-electronic (i.e., natural) means, because of the étendue and pupil limitations I already discussed. Astrophotography non-naturally amplifies and reproduces the flux by several orders of magnitude to shift the DSOs from entirely scotopic (produced by rhodopsin phototransduction in colorless rods) to fully photopic (produced by photopsin phototransduction in the tristimulus cones). These are two very different visual processes. If you can label electronically transmuting the one to the other "natural", then perhaps we should be quibbling over the definition of that word.

 

BQ

I understood the complications of rod versus cone vision; that's why I said "We could just look at it with a sufficiently large-aperture telescope, so that the image was bright enough to engage the observer's cone vision." What I didn't understand was why that is problematic. It's the etendue and pupil limitations that I don't understand. Could you explain why a "sufficiently large telescope" is not possible?



#19 loujost

loujost

    Mariner 2

  • -----
  • topic starter
  • Posts: 247
  • Joined: 02 Oct 2017

Posted 02 March 2023 - 02:37 PM

Just as in landscape photography, we want to showcase our subjects in some way, so enhancing colors in different spectral bands is a goal most strive for here. It's no different than terrestrial photography, although perhaps on a different scale.

Apart from certain genres of infrared landscape photography, landscape photography does not dramatically change perceived colors. Adjustments in white balance and saturation are usually made to move the photograph closer to what the brain "saw". People don't usually decide to change green leaves to blue, for example. They can if they want to, but this is widely recognized as unnatural.



#20 sharkmelley

sharkmelley

    Cosmos

  • *****
  • Posts: 7,740
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 02 March 2023 - 04:14 PM

It's the etendue and pupil limitations that I don't understand. Could you explain why a "sufficiently large telescope" is not possible?

A "sufficiently large telescope" can never make a diffuse object appear brighter than it actually is.  Otherwise with a sufficiently large telescope, moonlight would be able to burn out your retina or light a fire:

https://what-if.xkcd.com/145/

 

Mark


  • sbharrat likes this

#21 dciobota

dciobota

    Soyuz

  • *****
  • Posts: 3,735
  • Joined: 07 Aug 2007
  • Loc: No longer on this site in protest of poor site moderation

Posted 02 March 2023 - 04:48 PM

Apart from certain genres of infrared landscape photography, landscape photography does not dramatically change perceived colors. Adjustments in white balance and saturation are usually made to move the photograph closer to what the brain "saw". People don't usually decide to change green leaves to blue, for example. They can if they want to, but this is widely recognized as unnatural.

Actually they do quite a bit, at least as far as I've seen.  Even if you view Photoshop tutorials, one of the key things is color balance.  Take for example a shot taken in overcast skies and one taken in the late evening sun.  Both change the color cast of the scene.  Grass and leaves get different hues and so forth.  Quite a few times artists will play with white balance to achieve subtle effects, skin tones, etc.  A subject may have the same material, but given the light, the color will indeed change.

Astronomers are more limited in that lighting is essentially the same.  But they do take the same kinds of liberties to accentuate features.  Dust can be brown or reddish vs the usual grayish tint.  As I've mentioned before, if you take images with a DSLR, the Ha areas will be muted and sometimes overwhelmed by other features like OIII (blue).  Perfect example is M42. So which is the "true" color?  It's been debated to death, and as many here have pointed out, there is no agreed-upon "standard" here or in terrestrial photography.  Self-proclaimed "color police" pop up from time to time in both fields, but they're usually ignored.

 

Now, scientific data is another thing.  But we're not talking about that here or in terrestrial photography.

 

My advice, enjoy producing and viewing what you like.  In both fields, people often show a preference, which is why art is so varied.  Accept and select, my motto.



#22 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 02 March 2023 - 05:08 PM

I understood the complications of rod versus cone vision; that's why I said "We could just look at it with a sufficiently large-aperture telescope, so that the image was bright enough to engage the observer's cone vision." What I didn't understand was why that is problematic. It's the etendue and pupil limitations that I don't understand. Could you explain why a "sufficiently large telescope" is not possible?

 

Conservation of étendue yields the Brightness Theorem:

 

"[N]o optical system can increase the brightness of the light emitted from a source to a higher value than the brightness of the surface of that source (where "brightness" is defined as the optical power emitted per unit solid angle per unit emitting or receiving area)." [Source]

 

The problem with the "sufficiently-sized aperture" is cramming the light column captured by the aperture into the eyeball through the pupil to reach the retina.

 

Per Sacek, the light cone from any objective faster than f/~250 is too large to fit into the entrance pupil of the eye. And that's not very fast.

 

So to get more of the light cone into the eye, we use an eyepiece that collects and crams all the flux from the aperture into an exit pupil beam from the eyepiece.

 

The optical configuration between the eyepiece and the eyeball is then afocal, where the eye forms an image from the beam emanating from the eyepiece.

 

But the faster the system output (by increasing the EP focal length), the larger the exit pupil (diameter of the beam) from the eyepiece. Per the Brightness Theorem, flux from the objective is spread across the exit pupil in a way that conserves étendue.

 

Once the exit pupil diameter from the eyepiece reaches the entrance pupil of the eye, it reaches the "natural" native speed of the dark-adapted eye, with a maximum pupil diameter of 7mm. With a native focal length of 24mm, the eye has a maximum focal ratio of 24/7, or ~f/3.4.

 

If you increase the optical system speed from there (by further increasing the EP focal length), the diameter of the exit pupil from the eyepiece exceeds the entrance pupil diameter of the eye. So any additional speed falls on the iris and does not reach the retina.

 

As before, that is why binoculars for AP have exit pupils between 5 and 7mm: anything larger yields flux that is wasted on the iris and does not reach the retina.

 

And the Brightness Theorem says that at best (i.e., with matched entrance and exit pupils), the brightness of the target on the retina with or without the optic is identical.

So there is no passive (i.e., through optics alone) night sky brightness "amplification" that can reach the retina.
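Putting numbers on it (a small sketch assuming a 7mm dark-adapted pupil):

def retinal_brightness_gain(aperture_mm, magnification, eye_pupil_mm=7.0):
    # Surface-brightness gain for an extended object, relative to the
    # naked eye. Conservation of etendue caps the gain at 1.
    exit_pupil = aperture_mm / magnification
    usable = min(exit_pupil, eye_pupil_mm)  # the rest lands on the iris
    return (usable / eye_pupil_mm) ** 2

print(retinal_brightness_gain(50, 7))     # 7x50 binocular: 1.0, break-even
print(retinal_brightness_gain(508, 500))  # 20" scope at 500x: ~0.02, dimmer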

 

Hope that helps…

 

BQ


Edited by BQ Octantis, 02 March 2023 - 07:05 PM.

  • CharLakeAstro likes this

#23 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,649
  • Joined: 12 Oct 2007
  • Loc: Melbourne, Australia

Posted 02 March 2023 - 05:15 PM

I understood the complications of rod versus cone vision; that's why I said "We could just look at it with a sufficiently large-aperture telescope, so that the image was bright enough to engage the observer's cone vision." What I didn't understand was why that is problematic.It's the etendue and pupil limitations that I don't understand. Could you explain why a "sufficiently large telescope" is not possible?

If you walk toward a white wall - does it get brighter?  If you take a rocket to Jupiter - will it get brighter as you approach?  No - it will just appear larger in the field of view.

 

A telescope does the same thing - it just brings you closer.  It can definitely make an extended object fainter than it would be close up - but it can't make it brighter.  The brightness on the retina is set to a maximum by the fastest f/ratio of the eyeball itself - which is around f/3.  Putting big lenses in front won't make that final optical system (your eyeball) any faster.  It's at best f/3 or so.  And all your retina is aware of is that final cone of light coming at it - which can only get so wide, limited by the eye lens and pupil.  If you could do surgery and install a much larger lens there you could indeed make things brighter.  But there is an ouch factor involved.

 

As for "true" colors and so forth - I agree that having the pixel filters mimic the eye as much as possible is a good thing in that regard, so OSC filters are "better" in that sense.  But mainly I avoid the word "true" here and instead focus on whether the colors are calibrated in some meaningful and interpretable way - given the filters in use.  If the processing is somewhat arbitrary then the nature of the filters doesn't help much.

 

Frank



#24 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 02 March 2023 - 05:39 PM

For completeness, I should point out that if the colour filters have gaps between transmission curves then there will be some parts of the colour gamut they'll be unable to record. This caused an amusing effect for those doing LRGB imaging of Comet Neowise's unusual sodium tail using filter sets that deliberately had a gap at the sodium wavelength.  Only the L filter would record the tail, so it was impossible to assign any colour to that part of the tail.

The gaps aren't at all surprising—the [0,0,0] RGB points would just be grayscale. But I was a little surprised I couldn't get a Planckian CCM from an L-Enhance filter on an OSC.

 

I think this discussion is generating a very nice set of required conditions for making any sort of claims to color accuracy. I assume "color" means "chromaticity":

 

  • What is the metamerism index of the sensor (or set of filters)? Were the Luther-Ives conditions met?
  • How was the CCM generated? Was a Macbeth chart used? Under what lighting conditions? (A least-squares sketch follows after this list.)
  • How were the chromaticities manipulated? Was a perceptual color model used for adjusting saturation? Were Munsell tone corrections used?
  • What was the white point of the image color space? The night sky is neither D65 nor D50.
  • What was the expected white point of the monitor?
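For the CCM question above, the standard approach is a least-squares fit over the chart patches. A sketch, with hypothetical measured and reference arrays:

import numpy as np

def fit_ccm(cam_rgb, ref_rgb):
    # Solve cam_rgb @ ccm ~= ref_rgb in the least-squares sense.
    # Both inputs are (N, 3) *linear* patch values under one illuminant,
    # e.g. the 18 chromatic Macbeth patches.
    ccm, *_ = np.linalg.lstsq(cam_rgb, ref_rgb, rcond=None)
    return ccm

# corrected_pixels = image_pixels @ fit_ccm(measured_patches, reference_patches)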

 

Obviously, there can never be any claim made for lightness unless we strictly use the gamma of the monitor. Even then, I think that would only be relevant to contrast…

 

BQ


Edited by BQ Octantis, 02 March 2023 - 07:08 PM.


#25 BQ Octantis

BQ Octantis

    Cosmos

  • *****
  • Posts: 9,447
  • Joined: 29 Apr 2017
  • Loc: Nova, USA

Posted 02 March 2023 - 05:50 PM

Apart from certain genres of infrared landscape photography, landscape photography does not dramatically change perceived colors. Adjustments in white balance and saturation are usually made to move the photograph closer to what the brain "saw". People don't usually decide to change green leaves to blue, for example. They can if they want to, but this is widely recognized as unnatural.

The scene content very much changes perceived colors. That is the basis for CIECAM02 as an improvement on CIELab. But the scene conditions are not defined for AP.

 

BQ



