The Hubble Palette--true color of DSO's?

47 replies to this topic

#1 dmilner

dmilner

    Explorer 1

  • -----
  • topic starter
  • Posts: 59
  • Joined: 22 Mar 2016
  • Loc: Searcy, Arkansas

Posted 21 November 2019 - 01:16 AM

Will someone explain to me why the Hubble Palette is the most used in astrophotography?  I get how it differentiates the narrowband wavelengths by assigning a color to each, but is the resulting color the true color of the object?  For example, the Horsehead Nebula.  You can go to Astrobin, or even search Google, and find many different colorized images.  You can find very contrasting images that, while obviously the Horsehead, use totally different color schemes.  The same goes for practically every DSO that people have photographed.  Is all narrowband astrophotography subjective?  Does monochrome imaging with color filters provide true colorization of the object?  Do OSC cameras produce the true color of the object?  To sum up my questions: what equipment and technique would I have to use to produce a true image of what a particular DSO would look like IF it were bright enough and close enough to be seen by the cones in my eyes without optical aid?  Or is there no true color, so that the production of the image is totally subjective and, like beauty, depends on the beholder?  Just curious...

 

Dennis


Edited by dmilner, 21 November 2019 - 01:19 AM.

  • retroman2 likes this

#2 Tapio

Tapio

    Vanguard

  • -----
  • Posts: 2222
  • Joined: 24 Sep 2006
  • Loc: Tampere, Finland

Posted 21 November 2019 - 01:21 AM

In short:
https://www.gxccd.co...id=453&lang=409

Longer version:

https://starizona.co...-color-imaging/


  • jdupton, sharkmelley, 17.5Dob and 1 other like this

#3 sharkmelley

sharkmelley

    Mercury-Atlas

  • *****
  • Posts: 2510
  • Joined: 19 Feb 2013

Posted 21 November 2019 - 02:47 AM

In general, OSC and DSLR cameras will reproduce colours more accurately than the sharp cut-off RGB external filters used on a mono camera.  This is because the RGB filters in the Bayer matrix of an OSC or DSLR are designed to reproduce the colours the human eye sees.  Having said that, there are definite differences between manufacturers.

 

Consider the OIII line in the Trapezium for instance:

https://clarkvision....ium.true.color/

Recently produced Canon cameras correctly reproduce this as teal, but Sony sensors (also used in Nikons) tend to make it quite blue.  This is because DSLR sensor colour responses are tuned to commonly found wide-spectrum colours, such as human skin, rather than to the single emission lines found in astronomy.

 

Mark


Edited by sharkmelley, 21 November 2019 - 02:59 AM.

  • jdupton, bmhjr, retroman2 and 1 other like this

#4 RJF-Astro

RJF-Astro

    Vostok 1

  • -----
  • Posts: 159
  • Joined: 13 Aug 2018
  • Loc: Zeist, Netherlands

Posted 21 November 2019 - 04:14 AM

Another long read is this ebook, in the chapter 'The Hubble photographs as aesthetic objects'.  It takes a more philosophical approach to this question, but imo the read is well worth the effort.


Edited by RJF-Astro, 21 November 2019 - 04:14 AM.

  • retroman2 likes this

#5 Alex McConahay

Alex McConahay

    Cosmos

  • *****
  • Posts: 8329
  • Joined: 11 Aug 2008
  • Loc: Moreno Valley, CA

Posted 21 November 2019 - 09:07 AM

>>>>>>> IF it were bright enough and close enough to be seen by the cones in my eyes without optical aid?

 

If you were close enough, it would be pretty much invisible. Look to the horizon on a typical day. You see things in the distance that are a bit hazy, but still there. You have a haze, but it has no real color, just perhaps a change in contrast. Now look straight up. You see none (or very little) of the haze, certainly no color in it. Go up a nearby mountain, and you may look down on the haze and see it as a generally grayish area. But other mountain peaks around you are in the clear (above the haze). Instead of a simple water vapor haze, make that smog, and you can even see a color in the haze. Not from inside the haze maybe, but from above.

 

Same thing with nebulosity. Were it not far away, and concentrated in an area, you would not see it. And the eye is not designed to bring out the color at all when observing visually.

 

Now, if you analyze the constituents of the nebulosity (the elements and perhaps molecules) and the various wavelengths they are giving off, you would have a characteristic color. The hydrogen would be emitting mostly red (H-alpha) plus some blue (H-beta). The oxygen would be a teal, etc. Again, not enough to see.

 

In practice, people making pretty pictures are free to give those emissions whatever color (and intensity) they feel makes the picture prettier. So, yes, that is arbitrary. However, most of us try to keep what was emitted as red as red most of the time. 

 

The Hubble team, however, assigns colors as necessary to make their points about composition of the target, and about aesthetics. 

 

One problem the hobbyist has is that SII and Ha both emit in the red. One of them must be mapped to a different color if we want to distinguish regions that contain more of one or the other.
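In channel-mapping terms, the SHO (Hubble) assignment is simply SII to red, Ha to green, OIII to blue. A minimal numpy sketch of that mapping (the linear normalization here is a stand-in for real stretching, and `sho_palette` is a hypothetical helper, not from any package):

```python
import numpy as np

def sho_palette(sii, ha, oiii):
    """Map three narrowband frames to RGB channels using the Hubble
    (SHO) palette: SII -> red, Ha -> green, OIII -> blue."""
    def normalize(img):
        # Simple linear rescale to [0, 1]; real workflows use nonlinear
        # stretches and curves instead.
        lo, hi = float(img.min()), float(img.max())
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

    # Stack the three frames as the R, G, B planes of one color image.
    return np.dstack([normalize(sii), normalize(ha), normalize(oiii)])
```

Because Ha is usually by far the strongest signal, a straight mapping like this tends to come out green until the hues are rebalanced in processing.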

 

Alex


  • sayitfast likes this

#6 SilverLitz

SilverLitz

    Mariner 2

  • -----
  • Posts: 294
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 21 November 2019 - 09:07 AM

I am new to this, but I think the reason for the widespread use of the SHO palette is that it allows the greatest range of outcomes.

 

With Ha generally being the strongest signal, SHO puts Ha in the middle of the color range, and an "S-curve" adjustment to the hue curve can spread this strong signal across the middle of the hue range.  This can greatly increase color diversity and give subtle continuity of colors.
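The "S-curve on the hue curve" idea can be sketched as a remap of the hue channel: hues near a chosen center (green, where the strong Ha signal lands in SHO) get spread apart while the extremes stay put. A toy version, with made-up parameters rather than anything from a specific tool:

```python
import numpy as np

def hue_s_curve(hue, center=1/3, gain=3.0):
    """Apply an S-shaped remap to a hue channel (values in [0, 1]).
    Hues near `center` are pushed apart (slope > 1 there), increasing
    color diversity where the dominant signal sits; hues at
    center +/- 0.5 are left in place. Parameters are illustrative."""
    d = hue - center
    # tanh gives the S shape; the divisor rescales so d = +/-0.5 maps
    # back to +/-0.5 around the center.
    return np.clip(center + 0.5 * np.tanh(gain * d) / np.tanh(gain * 0.5),
                   0.0, 1.0)
```

With `gain=3.0` a hue 0.05 above the center moves to roughly 0.08 above it, which is the "spreading" effect described above.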



#7 WebFoot

WebFoot

    Surveyor 1

  • *****
  • Posts: 1581
  • Joined: 02 Jun 2005
  • Loc: Redmond, WA, USA

Posted 21 November 2019 - 11:45 AM

As one who produces both true-color and false-color (Hubble palette) versions of emission nebulae, I can say that I do false color for a variety of reasons:

1. If there are significant oxygen emissions, I love the deep blue they produce in an SHO image.

2. Sometimes it's fun to compare what I get with my little ground-based scope with what Hubble got, as with M16 (the Pillars of Creation).

3. It is interesting, to me, to see the various emissions; as someone else pointed out, SII, like Ha, is red, and mapping them to different colors graphically shows them separately.

4. It stretches my processing skills (ha; see what I did there?  I crack me up sometimes) to process a data set more than one way.

5. It's interesting, to me, to compare the various ways of processing the data, and the final results.

For instance, this is my most recent image (NGC 7380, the Wizard Nebula), which I have presented in five different ways:

(i) NB data folded into BB data, for a "true-color" image with vivid colors and detail;

(ii) Hubble palette (I even have two different renditions of those data);

(iii) pure Ha, which I think is always beautiful on an emission nebula (plus grayscale OIII and SII images, to compare the emissions directly, although with different stretches);

(iv) HOO, the so-called bicolor, mapping Ha to the red channel and OIII to both the green and blue channels, for something like a true-color image, since OIII is a blue-green emission; and

(v) an LRGB image, without benefit of NB data, as a comparison to the first image.

http://www.de-regt.c...GC7380.RCOS.htm
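Of the renditions above, the HOO "bicolor" in (iv) is the simplest to write down as a channel mapping. A hedged numpy sketch (the helper name is mine, and the inputs are assumed to be already-stretched arrays):

```python
import numpy as np

def hoo_bicolor(ha, oiii):
    """HOO 'bicolor' mapping: Ha to the red channel, OIII to both the
    green and blue channels. Because OIII really is a blue-green (teal)
    emission, this lands closer to natural color than SHO.
    Inputs: stretched 2-D float arrays in [0, 1]."""
    return np.dstack([ha, oiii, oiii])
```

Since green and blue are both driven by the same OIII frame, OIII regions come out teal and Ha regions come out red, which is roughly what an RGB image of the same nebula shows.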

 

I do this for fun and my own enjoyment.  As do most imagers, I'm sure.  I do what pleases me, and I really, really like all five versions.  If this sort of work (and it takes a lot of work to do this, both in gathering data and in processing it) appeals to you, go for it; if it does not appeal to you, don't.

 

Mark


Edited by WebFoot, 21 November 2019 - 11:48 AM.

  • Swordfishy likes this

#8 dmilner

dmilner

    Explorer 1

  • -----
  • topic starter
  • Posts: 59
  • Joined: 22 Mar 2016
  • Loc: Searcy, Arkansas

Posted 21 November 2019 - 12:19 PM

Thanks, everyone, for your input into this discussion, whether factual or philosophical.  And thanks for the links to other material on this concept...I look forward to reading them.  So by and large it appears, from what was said, that the final images are mostly subjective: they try to capture the true nature of the object as closely as possible while at the same time producing an aesthetically pleasing reproduction.

If that is true, does that mean that learning post processing is really more important in DSO astrophotography than, say, using a particular piece of equipment, or even the technique you use to obtain the image (for example, whether the narrowband images are all the same length of time per channel or different lengths of time)?  Up to now I've believed that getting rid of noise in our images is practically the only reason for post processing, and that combining [stacking] those images equally to produce the most realistic image possible is how the process should be done.  But the above discussion (and thanks, Mark, for your fascinating comparisons of the different images) makes me think that different filtering techniques, as well as different ways of combining those filtered images, may produce images that are equally aesthetically pleasing while also providing information about the object that you wouldn't necessarily get from a "realistic" picture of it.

 

Dennis



#9 WebFoot

WebFoot

    Surveyor 1

  • *****
  • Posts: 1581
  • Joined: 02 Jun 2005
  • Loc: Redmond, WA, USA

Posted 21 November 2019 - 01:47 PM

Dennis, in my opinion, processing is, by a significant margin, the most important part of the entire process.  While the quality of the equipment and the skies under which one is imaging are very important, as is the skill with which one gathers data (and the amount of data one may choose to gather; I have 45 hours of data in that Wizard Nebula photo to which I linked earlier, and that doesn't include the frames I rejected for various reasons), optimal processing is difficult, and makes a huge difference in the final image.

 

Mark


Edited by WebFoot, 21 November 2019 - 02:48 PM.

  • George Simon and kathyastro like this

#10 kathyastro

kathyastro

    Viking 1

  • *****
  • Posts: 973
  • Joined: 23 Dec 2016
  • Loc: Nova Scotia

Posted 21 November 2019 - 02:37 PM

So if that is true, does that mean that learning post processing is really more important in DSO astrophotography than, say, using a particular piece of equipment, or even the technique you use to obtain the image (for example, whether the narrowband images are all the same length of time per channel or different lengths of time)?

Yes, in my opinion, that is true.  Collecting clean data is just the beginning of the process.  Turning it into an image that is esthetically pleasing, or that reveals certain physical attributes of the object, or that fulfills whatever other purpose you have in mind, is by far the most important part of the process.  Collecting data is technical.  Processing is art.


  • dmdouglass, George Simon and SnowWolf like this

#11 dmdouglass

dmdouglass

    Surveyor 1

  • *****
  • Posts: 1792
  • Joined: 23 Dec 2007
  • Loc: Tempe, AZ

Posted 21 November 2019 - 03:43 PM

Yes, in my opinion, that is true.  Collecting clean data is just the beginning of the process.  Turning it into an image that is esthetically pleasing, or that reveals certain physical attributes of the object, or that fulfills whatever other purpose you have in mind, is by far the most important part of the process.  Collecting data is technical.  Processing is art.

And with the "art" comes "artistic license"....

which is why I love my mono images, and stay with them.

More "true to life" with what we can (or might) see in the eyepiece.

Granted, there is some color out there that can be seen, and it is beautiful, but I for one will stay mono.


  • Galaxyhunter and kathyastro like this

#12 JamesTX

JamesTX

    Vostok 1

  • -----
  • Posts: 152
  • Joined: 27 Aug 2017
  • Loc: Texas

Posted 21 November 2019 - 03:58 PM

I have found this hobby to be a nice blend of technology, science and art. 


  • sharkmelley likes this

#13 AhBok

AhBok

    Vanguard

  • *****
  • Posts: 2385
  • Joined: 02 Dec 2010
  • Loc: Lakeland, TN

Posted 21 November 2019 - 04:14 PM

I’m a bit old fashioned so prefer more realistic colors. I can appreciate the brilliant hues narrowband imaging gives along with the advantage of being able to image under the moon, but . . . I much prefer my star fields to be mostly red with some blue, orange and white stars. I pretty much don’t care to see green stars or bright green anything. I like that some consider “colorizing” nebulae an art, but my preferences tend more towards the traditional. I know that light pollution is a modern reality, but my favorite images look more like what you would see with an unfiltered OSC under very dark skies.


Edited by AhBok, 21 November 2019 - 04:16 PM.


#14 brettkoz

brettkoz

    Vostok 1

  • -----
  • Posts: 131
  • Joined: 15 Dec 2015
  • Loc: Hesperia, Michigan

Posted 21 November 2019 - 05:53 PM

All narrowband imaging is false color, when you really think about it. Even the natural palette embellishes the color, since we're gathering light from a very small portion of the visible spectrum and representing it using the entirety of it.


  • Stelios likes this

#15 WebFoot

WebFoot

    Surveyor 1

  • *****
  • Posts: 1581
  • Joined: 02 Jun 2005
  • Loc: Redmond, WA, USA

Posted 21 November 2019 - 06:24 PM

All narrowband imaging is false color, when you really think about it. Even the natural palette embellishes the color, since we're gathering light from a very small portion of the visible spectrum and representing it using the entirety of it.

All color imaging of deep sky diffuse objects is "false color," in the sense that we have increased the actual color intensity by orders of magnitude.  When I call an image "true color," I mean that it gives a good approximation of the actual colors of the object, not that it gives a good approximation of what that object would look like if we were just a few light years away from it.  A reasonably-processed image that includes Ha and SII in red, and OIII in green and blue, will have colors that are similar, overall, to a pure RGB image; neither is an accurate representation of what that object would look like from nearby, but both are reasonably accurate representations of the colors that object is sending our way.


Edited by WebFoot, 21 November 2019 - 07:52 PM.


#16 JamesTX

JamesTX

    Vostok 1

  • -----
  • Posts: 152
  • Joined: 27 Aug 2017
  • Loc: Texas

Posted 21 November 2019 - 06:26 PM

I’m a bit old fashioned so prefer more realistic colors. I can appreciate the brilliant hues narrowband imaging gives along with the advantage of being able to image under the moon, but . . . I much prefer my star fields to be mostly red with some blue, orange and white stars. I pretty much don’t care to see green stars or bright green anything. I like that some consider “colorizing” nebulae an art, but my preferences tend more towards the traditional. I know that light pollution is a modern reality, but my favorite images look more like what you would see with an unfiltered OSC under very dark skies.

I'd argue that stretching and applying curves to an RGB image is very much part of the "art", since it's all done to taste.  More so if you are using masks or making any effort to create focus in your image.  Post processing is all subjective.



#17 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17315
  • Joined: 27 Oct 2014

Posted 21 November 2019 - 06:41 PM

Will someone explain to me why the Hubble Palette is the most used in astrophotography?  I get how it differentiates the narrowband wavelengths by assigning a color to each, but is the resulting color the true color of the object?  For example, the Horsehead Nebula.  You can go to Astrobin, or even search Google, and find many different colorized images.  You can find very contrasting images that, while obviously the Horsehead, use totally different color schemes.  The same goes for practically every DSO that people have photographed.  Is all narrowband astrophotography subjective?  Does monochrome imaging with color filters provide true colorization of the object?  Do OSC cameras produce the true color of the object?  To sum up my questions: what equipment and technique would I have to use to produce a true image of what a particular DSO would look like IF it were bright enough and close enough to be seen by the cones in my eyes without optical aid?  Or is there no true color, so that the production of the image is totally subjective and, like beauty, depends on the beholder?  Just curious...

 

Dennis

The Hubble Palette is pretty, and differentiates structure well.  Since there's no information about the vast majority of the visible spectrum in narrowband data, any assignment of narrowband data to RGB has issues, particularly since Ha and SII are both red.

 

The issue of what is "true" color is always controversial here.  I do not believe it's a useful concept.

 

I'm far from alone.  From the book "Lessons from the Masters", paraphrased.

 

"I used to carefully adjust color using G2v stars as a reference.  I was sure the color police were waiting to break down my door and confiscate my equipment.

 

These days I just do what I like."


Edited by bobzeq25, 21 November 2019 - 07:00 PM.

  • WebFoot and Astrola72 like this

#18 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17315
  • Joined: 27 Oct 2014

Posted 21 November 2019 - 06:51 PM

Thanks, everyone, for your input into this discussion, whether factual or philosophical.  And thanks for the links to other material on this concept...I look forward to reading them.  So by and large it appears, from what was said, that the final images are mostly subjective: they try to capture the true nature of the object as closely as possible while at the same time producing an aesthetically pleasing reproduction.

If that is true, does that mean that learning post processing is really more important in DSO astrophotography than, say, using a particular piece of equipment, or even the technique you use to obtain the image (for example, whether the narrowband images are all the same length of time per channel or different lengths of time)?  Up to now I've believed that getting rid of noise in our images is practically the only reason for post processing, and that combining [stacking] those images equally to produce the most realistic image possible is how the process should be done.  But the above discussion (and thanks, Mark, for your fascinating comparisons of the different images) makes me think that different filtering techniques, as well as different ways of combining those filtered images, may produce images that are equally aesthetically pleasing while also providing information about the object that you wouldn't necessarily get from a "realistic" picture of it.

 

Dennis

Postprocessing is more than half the game.  Many of us save our data (at least the stack(s)), and reprocess it after a couple of years, when we have gotten more skillful.

 

Check out my eight versions of this classic target, done over a couple of years.  There's no comparison between the quality of the first, and that of the last.

 

https://www.astrobin...list_page=2&nc=

 

This is why advanced imagers tend to prefer PixInsight.  The complexity (and the user interface) puts many off, but the complexity is _why_ it's the best.

 

If you choose to try it, I strongly recommend this book.  It's detailed enough to do some excellent work, but not comprehensive.  There's only so much you can do with PI in 300 pages.  Smiling, but not kidding.  I have hundreds of hours in learning and using it.

 

https://www.amazon.c...y/dp/3319976885

 

Key thing to know about processing: it's like playing chess.  You don't go for the best result at each step.  You build the image up thoughtfully, step by step, so that, in the end, you've achieved the best result.  If you change one step, you generally need to rework subsequent ones.

 

It takes me 10-50 hours to process an image these days.  And I have a really fast computer.  <smile>


Edited by bobzeq25, 21 November 2019 - 06:58 PM.

  • George Simon likes this

#19 sharkmelley

sharkmelley

    Mercury-Atlas

  • *****
  • Posts: 2510
  • Joined: 19 Feb 2013

Posted 21 November 2019 - 06:55 PM

Post processing is all subjective.

I disagree; it needn't be subjective.

 

With unmodified DSLR cameras it is possible to process your astronomical images using a workflow analogous to that for terrestrial images, preserving accurate colour reproduction throughout the processing sequence, into the chosen output colour space, and finally to the display device.  In any case, as accurate as the Bayer matrix colour filters will allow, given that they will never exactly fulfil the Luther-Ives criterion, i.e. they don't perfectly match human vision.  Plus you are limited by the gamut of your chosen colour space and the gamut of your display device.
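The colorimetric workflow described here hinges on one deterministic step: a 3x3 camera-to-sRGB matrix applied to linear, white-balanced data, followed by the sRGB transfer curve. A sketch with a made-up matrix (real coefficients come from the camera's colour profile and differ per model):

```python
import numpy as np

# Illustrative camera-to-linear-sRGB matrix; each row sums to 1 so
# neutral (white-balanced) grays stay neutral. Real values are
# model-specific and come from the camera profile.
CAM_TO_SRGB = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.4,  1.4],
])

def colorimetric_rgb(linear_rgb):
    """Apply the colorimetric matrix to linear camera RGB, then the
    standard sRGB transfer curve. Up to this point the process is
    objective; saturation boosts and other taste adjustments are what
    make a final image subjective."""
    lin = np.clip(linear_rgb @ CAM_TO_SRGB.T, 0.0, 1.0)
    # Piecewise sRGB opto-electronic transfer function
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * np.power(lin, 1 / 2.4) - 0.055)
```

A quick sanity check of the design: pure white `[1, 1, 1]` passes through unchanged, because the matrix rows sum to 1 and the transfer curve maps 1 to 1.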

 

Generally speaking, I choose not to do this because I prefer my colours more saturated; that's where it becomes subjective.  But it's certainly an interesting academic exercise.

 

The planet Jupiter is an interesting example.  Do you ever wonder why the typical image of Jupiter looks so much more contrasty and saturated than it does visually through your telescope?  There are very few true colour images of Jupiter.

 

Mark


Edited by sharkmelley, 21 November 2019 - 07:15 PM.

  • AhBok likes this

#20 WebFoot

WebFoot

    Surveyor 1

  • *****
  • Posts: 1581
  • Joined: 02 Jun 2005
  • Loc: Redmond, WA, USA

Posted 21 November 2019 - 07:11 PM

I disagree; it needn't be subjective.  With unmodified DSLR cameras it is possible to process your astronomical images using a workflow analogous to that for terrestrial images, preserving accurate colour reproduction throughout the processing sequence, into the chosen output colour space, and finally to the display device.  In any case, as accurate as the Bayer matrix colour filters will allow, given that they will never exactly fulfil the Luther-Ives criterion, i.e. they don't perfectly match human vision.  Plus you are limited by the gamut of your chosen colour space and the gamut of your display device.

 

Generally speaking, I choose not to do this because I prefer my colours more saturated; that's where it becomes subjective.  But it's certainly an interesting academic exercise.

 

Mark

Not to get into a religious argument, but of course it's subjective.  The very act of taking an exposure that's more than a snapshot makes it subjective.  Your DSLR colors may (or may not; every lens and every different chip will render a specific set of data differently) be more "true" than someone else's rendition of that same object, but your rendition bears little to no resemblance to what that object looks like from close in (which is almost wholly lacking in visible color).  No, M82 wouldn't look like an exploding firework if you were 20,000 light-years away.  Heck, take a look at the LMC and SMC through binoculars next time you're in the southern hemisphere; you won't see much, if any, color, and yet a photo through your DSLR would be colorful, because you used a long exposure.

So, yes, it is not subject to reasonable dispute that we're all very significantly altering the actual appearance of DSOs; even DSLR imagers.

Mark



#21 AhBok

AhBok

    Vanguard

  • *****
  • Posts: 2385
  • Joined: 02 Dec 2010
  • Loc: Lakeland, TN

Posted 21 November 2019 - 08:21 PM

There are different degrees of subjectivity. Most stars are red, not green. Nebulae rich in Ha are more red than green. Hey, I like narrowband images, I just prefer more traditional coloring as identified by spectroscopy. I think many would agree and many would disagree with me. I’m good with that!

#22 sharkmelley

sharkmelley

    Mercury-Atlas

  • *****
  • Posts: 2510
  • Joined: 19 Feb 2013

Posted 21 November 2019 - 09:52 PM

Not to get into a religious argument, but of course it's subjective.  The very act of taking an exposure that's more than a snapshot makes it subjective. 

We are objectively taking the numerical values recorded by the DSLR and calculating the colours associated with those values using colorimetric science.  Just like when taking the photo of my family standing beside a national monument.  It's a clearly defined process.

 

Mark


Edited by sharkmelley, 21 November 2019 - 09:55 PM.


#23 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 17315
  • Joined: 27 Oct 2014

Posted 21 November 2019 - 10:14 PM

We are objectively taking the numerical values recorded by the DSLR and calculating the colours associated with those values using colorimetric science.  Just like when taking the photo of my family standing beside a national monument.  It's a clearly defined process.

 

Mark

What's not so clear is how the spectral distribution of this very dim light should be handled when stretching the data (which inevitably distorts color), given the nature of our eyes and the nature of our display devices.  Not to mention adding luminance to RGB data.
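One way to see the distortion stretching causes: apply the same nonlinear stretch to each channel of a strongly red pixel and watch the red-to-green ratio shrink. A small numpy illustration, with asinh standing in for any nonlinear stretch:

```python
import numpy as np

def asinh_stretch(x, a=0.1):
    """A common nonlinear stretch: compresses bright values much more
    than faint ones, mapping [0, 1] to [0, 1]."""
    return np.arcsinh(x / a) / np.arcsinh(1.0 / a)

pixel = np.array([0.8, 0.2, 0.1])      # linear RGB: strongly red
stretched = asinh_stretch(pixel)
# The red:green ratio drops from 4.0 before the stretch to under 2
# after it, so the stretched pixel is visibly less saturated.
```

This is why a stretched image needs its color handled deliberately (saturation recovery, separate treatment of chrominance, etc.) rather than falling out of the data automatically.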

 

Bottom line: given the whole system of going from linear digital data to a final image, there is unavoidably some subjectivity involved in processing.  Yes, when processing LRGB data, you should not make blue data look green, etc.  But arguing that one imager's process is "more real" while another imager's reasonable process is "not real" is, in my view, simply unsupportable.  For some reason (likely the fact that our eyes would simply see the target as gray) people often claim that low-saturation images are more real.  I don't think that's a valid approach.

 

In my opinion, images should not be cartoons.  And, if you search M42 images on Google, you'll see plenty of cartoons.  But it's not useful to try to distinguish non-cartoons as more real or less real.  The system, taken as a whole, makes that task impossible.

 

I disagree that it's a "clearly defined process".


Edited by bobzeq25, 21 November 2019 - 10:15 PM.

  • WebFoot likes this

#24 dmilner

dmilner

    Explorer 1

  • -----
  • topic starter
  • Posts: 59
  • Joined: 22 Mar 2016
  • Loc: Searcy, Arkansas

Posted 21 November 2019 - 10:37 PM

>>>>>>> IF it were bright enough and close enough to be seen by the cones in my eyes without optical aid?

 

If you were close enough, it would be pretty much invisible.

 

Same thing with nebulosity. Were it not far away, and concentrated in an area, you would not see it. And the eye is not designed to bring out the color at all when observing visually.

 

Now, if you analyze the constituents of the nebulosity (the elements and perhaps molecules) and the various wavelengths they are giving off, you would have a characteristic color. The hydrogen would be emitting mostly red (H-alpha) plus some blue (H-beta). The oxygen would be a teal, etc. Again, not enough to see.

 

In practice, people making pretty pictures are free to give those emissions whatever color (and intensity) they feel makes the picture prettier. So, yes, that is arbitrary. However, most of us try to keep what was emitted as red as red most of the time. 

 

The Hubble team, however, assigns colors as necessary to make their points about composition of the target, and about aesthetics.

 

Alex

 

 

Not to get into a religious argument, but of course it's subjective.  The very act of taking an exposure that's more than a snapshot makes it subjective.  Your DSLR colors may (or may not; every lens and every different chip will render a specific set of data differently) be more "true" than someone else's rendition of that same object, but your rendition bears little to no resemblance to what that object looks like from close in (which is almost wholly lacking in visible color).  No, M82 wouldn't look like an exploding firework if you were 20,000 light-years away.  Heck, take a look at the LMC and SMC through binoculars next time you're in the southern hemisphere; you won't see much, if any, color, and yet a photo through your DSLR would be colorful, because you used a long exposure.

So, yes, it is not subject to reasonable dispute that we're all very significantly altering the actual appearance of DSOs; even DSLR imagers.

Mark

So visually, practically every DSO is essentially colorless, and it is only the knowledge of their constituent elements/molecules that allows color to be added through narrowband imaging; the process is therefore subjective as to what color and intensity is allotted to each filtered image.  Is that statement true, or am I still completely missing the concept?  IF that statement is true, how does pure RGB, or the Bayer matrix in DSLRs, produce a colored image, since the DSO is essentially colorless?

 

Dennis



#25 WebFoot

WebFoot

    Surveyor 1

  • *****
  • Posts: 1581
  • Joined: 02 Jun 2005
  • Loc: Redmond, WA, USA

Posted 21 November 2019 - 10:53 PM

We are objectively taking the numerical values recorded by the DSLR and calculating the colours associated with those values using colorimetric science.  Just like when taking the photo of my family standing beside a national monument.  It's a clearly defined process.

 

Mark

And?  The fact remains, the picture you come up with bears little resemblance to the object out there.  It's very, very subjective.  CCD images are subjective.  CMOS images, including DSLR images, are subjective.  They're all subjective.  If it's entirely done according to a script, that merely eliminates skill and esthetic taste as variables in the subjective process; your image still bears no more resemblance to the actual object of the photograph than does anyone else's.





