Beginning imagers often wonder whether the color in our astrophotos is "realistic". The answer is that it's not, for any number of reasons. A fundamental one is that our eyes don't see color at all well in dim settings. Getting closer to the target in a spaceship wouldn't fix that: the overall brightness may increase, but the target gets larger, and surface brightness does not go up as fast. Our images are _not_ like getting close in a spaceship.
This will be unavoidably long and technical, and it only scratches the surface. Do read the first 6 (short) paragraphs, down to "here we go". After that, when your eyes (those imperfect tools) start to glaze over, just skip down to the last 8 (even shorter) paragraphs, beginning with "welcome back".
Definitions. "Better" color and images are ones that look better to you and/or others. "Natural" color is color you and/or others find matches your mental concept of what astro color _should_ look like. That's a subjective thing; color in our astrophotography is subjective in all kinds of ways, some covered here, some not. "Clever" means a method that produces a decent result, although not a perfect one.
There are three main reasons our color can never be perfectly "realistic". The first is maybe the biggy: the filters we use to sort out color destroy a very large amount of color information. How much depends on the filters (we'll spend some time on that), but it's always most of it. The methods we use to try to restore it are inherently flawed; when you have so much missing information, trying to reconstruct the reality simply has to be an approximation. What's gone is gone. Restoring it, in some ways, resembles building a perpetual motion machine. Entropy is involved.
The second is that both our eyes and the display methods we use are flawed. And they get a lot of use in the third area.
Third, the processing tools we use to try (often successfully) to bridge the gaps caused by the first two things are always imprecise; they could not be otherwise. In adjusting the processing we use our subjective and variable eyes and our imprecise display devices.
Here we go. Filters. Best case. A mono camera and good RGB interference filters. A vast amount of information is utterly destroyed. Take a deep red signal, and one that is not so "red". Say, radiation at 690 nm and at 600 nm. Pass that radiation through a "red" filter and they both produce _exactly_ the same electrons. Tremendous information loss.
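If you like, here's a toy sketch of that information loss, with made-up numbers and an idealized boxcar "red" filter. Two quite different wavelengths come out of the filter as exactly the same electron count:

```
# Toy model: an idealized "red" filter passing 580-700 nm with flat
# transmission, and a detector that just counts electrons.
def electrons(wavelength_nm, photons, passband=(580.0, 700.0), qe=0.8):
    """Electrons recorded for a monochromatic signal through a boxcar filter."""
    lo, hi = passband
    return photons * qe if lo <= wavelength_nm <= hi else 0.0

print(electrons(690.0, 10_000))  # deep red:    8000.0 electrons
print(electrons(600.0, 10_000))  # orange-red:  8000.0 electrons -- identical
```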
Worst case. Narrowband filters, like the set Ha, O(III) and S(II). Almost the entire spectrum has been lost. Both Ha and S(II) are red, which causes problems. As a result, imagers often totally give up on any attempt at natural color. Instead they use schemes like the Hubble palette to make images that are both pretty and tell a decent story about structure. They appropriately label those "false color".
In between. Doing LRGB imaging. This is a clever trick that uses the imperfect nature of our eyes. We see detail far better in black and white than in color. By spending some time taking black and white subs, we can produce better images in less time. That doesn't come for free: adding L to RGB data inevitably dilutes color. We can process it to restore it decently, but never perfectly, because of the _very_ non-linear nature of our data. There's an interesting thread on the PI forum about that. A consensus (including Juan Conejeros, the guy behind PixInsight, who knows a great deal about color) is that, particularly in light polluted skies, LRGB produces better images faster, at the cost of less natural color. An interesting minor point is that he supports binning color (a technique that goes in and out of fashion), on the basis that once you accept the tradeoff, binning color doesn't make it much worse. So, once you've decided to do LRGB, you might as well save even more imaging time by binning color. I tend to agree.
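One way to see the dilution in miniature (invented pixel values, and the crudest possible luminance replacement; real LRGB combination tools are far more sophisticated): push a saturated pixel up to the brighter luminance the L data demands, and the dominant channel clips first, flattening the color ratios toward white.

```
# Toy illustration (made-up numbers) of one mechanism of color dilution.
def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec.709 luma weights

r, g, b = 0.8, 0.2, 0.2                   # a saturated red pixel
target = 0.9                              # brighter luminance from the L stack
scale = target / luminance(r, g, b)
r2, g2, b2 = (min(1.0, c * scale) for c in (r, g, b))

print(f"before: R/G = {r / g:.1f}")       # 4.0 -- strongly red
print(f"after:  R/G = {r2 / g2:.1f}")     # ~1.8 -- washed out toward white
```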
Using one shot color through a standard Bayer matrix deserves special attention with regard to destruction of information. The underlying fact is that the Bayer matrix was, and is, designed to sell more cameras, by being cheap and by producing pictures that people subjectively prefer. Do those things well, and you sell more cameras.
Reality, dearie, has nothing to do with it. <grin>
Two things that are no problem for terrestrial pictures are significantly bad for generating good color data (to the extent that any RGB filter can) in astrophotography. 50% of the pixels are devoted to green. Given the very low signal to noise ratio of our data, that's a problem. Tools to reduce excess green are _very_ commonly used in processing. That green is not real; it's an artifact emblematic of our imprecision. Electrons generated there are wasted.
Needless to say, the green reduction tools are also imprecise. PI has a number of parameters that can be adjusted in SCNR; my PI course from the incomparable Vicent Peris discussed why you might prefer one value or another.
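As an illustration, here's a hedged sketch of one common recipe, the "average neutral" idea behind tools like SCNR (my own function and parameter names, not PI's actual implementation): green is only ever pulled down toward the average of red and blue, and an amount parameter blends between no correction and full correction.

```
import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    """rgb: (H, W, 3) float array in [0, 1]; amount: 0 = off, 1 = full."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_capped = np.minimum(g, (r + b) / 2.0)     # never push green up
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_capped
    return out

pixel = np.array([[[0.30, 0.55, 0.25]]])        # a greenish background pixel
print(scnr_average_neutral(pixel, amount=0.8))  # green: 0.55 -> 0.33
```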
While any camera can produce some excess green, DSLRs are on the "worse" end. And the colored glass filter is _way_ sloppy in parsing color. Now, not only has information about various shades of red been lost, the R, G, and B passbands overlap to a degree that blurs even the separation between those. Going to mono plus RGB often surprises people when they find color processing is now easier.
Still in between, but tending toward narrowband: the very popular duo and triband filters recently released. They pass little information, but more than narrowband. Going back toward LRGB in terms of color data: broadband light pollution filters. A mixed bag. The more powerful they are in rejecting light pollution, the more color information they destroy. The CLS is a good example; posts about color issues with a CLS are common. The less aggressive IDAS is often preferred by imagers who use broadband LP filters, for that reason.
I just try to stay away from broadband LP filters altogether, because I don't like the color issues, and I think other alternatives for dealing with light pollution are superior. It's not an outlier position.
The process by which we separate OSC data from a Bayer matrix filter into RGB channels is called Debayering. I imagine the data is quite relieved to be Debayered <smile>. But you don't want to do that simply, keeping only each color's own pixels, which would reduce resolution: we'd have specific red pixels, say, with gaps between them. Instead there are a variety of interpolation schemes for assigning, say, a red value to a green pixel. The clever trick works well for resolution, and passably well for color (the eyes are helpful there), although it's imperfect. The best debayering technique is yet another debate.
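To make "interpolation scheme" concrete, here's a bare-bones bilinear debayer for an assumed RGGB pattern (a sketch only; real tools use much cleverer algorithms such as VNG or AHD). Each channel keeps its own measured pixels and fills the gaps with a weighted average of its neighbors:

```
import numpy as np
from scipy.ndimage import convolve

def debayer_bilinear(mosaic):
    """mosaic: 2-D float array carrying an RGGB Bayer pattern."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {"R": (rows % 2 == 0) & (cols % 2 == 0),   # which pixels actually
             "G": (rows % 2) != (cols % 2),            # measured each color
             "B": (rows % 2 == 1) & (cols % 2 == 1)}
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])             # bilinear weights
    out = np.zeros((h, w, 3))
    for i, name in enumerate("RGB"):
        sampled = np.where(masks[name], mosaic, 0.0)
        weight = convolve(masks[name].astype(float), kernel, mode="mirror")
        filled = convolve(sampled, kernel, mode="mirror") / weight
        out[..., i] = np.where(masks[name], mosaic, filled)  # keep real samples
    return out
```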
So, we have darn imperfect information about the color of the light actually emitted by the target. The above sounds pretty bad. How can we possibly cope?
Generally a clever trick is used. All the various colors can be represented to our eyes as some numerical combination of R, G, and B. The way that's done is known as a "color space". There are a number of those, and people hotly debate which are more "realistic". A "white reference" is defined (more about the variability there later). A calculation is made as to what factors to apply to the R, G, and B levels to make the white reference white. Then those factors are applied throughout the image. We assume that will make everything realistic.
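In code, the core of that calculation is tiny; the hard, subjective part is everything feeding into it. A hedged sketch (my own function names, linear data assumed):

```
import numpy as np

def color_calibrate(rgb, white_mask):
    """rgb: (H, W, 3) linear image; white_mask: (H, W) bool, the white reference."""
    ref = rgb[white_mask].mean(axis=0)   # mean R, G, B of the chosen reference
    factors = ref.max() / ref            # per-channel factors making it neutral
    return rgb * factors                 # ...then applied to the whole image
```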
That assumption is not terrible, but it's inherently flawed. Various things mess up the process. The factors will differ according to the specific circumstances in various parts of the image. How much they differ is a truly impossible calculation, so we can't really "fix" this.
Here's what PixInsight has to say about color calibration. (I trust people agree they know something about this stuff?)
"Our approach originates from the fact that —in our opinion— the concept of real color makes no sense in deep-sky astrophotography. Real color doesn't exist in the deep sky because, on one hand, the objects represented in a deep-sky image are far beyond the capabilities of the human vision system, and on the other hand, the physical nature, properties and conditions of the deep-sky objects are very different from those of the subjects that can be acquired under normal daylight conditions."
"The image(s) must be accurately calibrated <before color calibration>. In particular, illumination must be uniform for the whole corrected image and, if different images are used to define the background and/or white references, those images must also have uniform illumination throughout the entire field. This means that flat fielding must be correctly applied as part of the image calibration process, and any residual additive gradients must also be removed before attempting to perform a valid color calibration."
So, in order to do a "valid" color calibration you have to do perfect flats and perfect gradient reduction. Volunteers? <smile>
Side note. White balance in terrestrial work is a variation on the theme. Each white balance in the camera is a preprogrammed set of RGB ratios that is judged (approximately) to give proper color representation when the subject is lit by various forms of light, such as fluorescent tubes. The reason white balance is not terribly useful (a notable exception will be discussed later) is that there are better ways to color calibrate (i.e. set the ratios) in our work.
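For illustration only (the gain values below are invented; every camera maker bakes in its own), a preset really is just a canned set of per-channel multipliers:

```
# Invented, typical-looking gains: green is the reference channel, red and
# blue get boosted by different amounts for each assumed light source.
WB_PRESETS = {"daylight":    (2.0, 1.0, 1.5),
              "tungsten":    (1.4, 1.0, 2.4),
              "fluorescent": (1.8, 1.0, 2.0)}

def apply_white_balance(rgb, preset):
    return tuple(c * gain for c, gain in zip(rgb, WB_PRESETS[preset]))

print(apply_white_balance((0.30, 0.50, 0.25), "daylight"))  # (0.6, 0.5, 0.375)
```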
Choosing a white reference involves some subjectivity (more on that in a bit.) In processing we sometimes change the color space we use (or our processing program does it behind the scenes), and those transformations introduce uncertainty in color.
Again, all this sounds bad. The reason it's not that bad is that we can (and do) tweak the tools and their parameters in what we do. The tweaks are not mathematically based on science and reality; they are based on making the images look better and more natural to our imperfect, and individually variable, eyes, working with an imperfect monitor. So, the process works. Or at least it can work; skill (dare I say artistic skill? <smile>) counts for a lot.
The result is that you can make nice images with any of this equipment and any of these techniques.
A good example of tweaking the parameters is how we choose a white reference. These vary widely. PixInsight, a program intended to be as scientifically rigorous as possible, lists many of them in the well known PhotometricColorCalibration process. Which one to use is a subjective choice, often debated by the very best imagers.
All this is intended to make our images congenial to our output devices, and our eyes. Monitors vary wildly in their ability to reproduce color spaces. The buzzword is "gamut". Ink jet printers often use a CMYK color space, which is radically different from RGB. Making printed images look good involves tweaking the parameters in processing a whole lot.
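As a taste of how different, here's the naive textbook RGB-to-CMYK formula (real print pipelines use ICC profiles and much more care):

```
def rgb_to_cmyk(r, g, b):
    """Naive conversion; r, g, b in [0, 1] -> cyan, magenta, yellow, black."""
    k = 1.0 - max(r, g, b)               # black ink replaces the common part
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0        # pure black
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(0.8, 0.3, 0.2))        # red-orange: mostly magenta + yellow
```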
And it all makes a hash of "reality". (Not that we could ever claim to be "real" after destroying so much information). The more you learn about this (this post just skims the surface, there's a "Color in Astrophotography" book waiting to be written), the more you realize how futile it is to chase reality.
At some point people will understandably ask: don't professional astronomers use color to get information about objects? They do indeed; the field is called spectroscopy. I've been a spectroscopist (terrestrial), my PhD thesis is on the subject. It's informative to know what spectroscopists _don't_ do. They don't use filters that inevitably destroy large amounts of information. They use a spectrometer that parses color into quite small slices; often the narrower the bandpass of the spectrometer, the better. They hoard information instead of destroying it. They don't use their eyes, or even make pictures; they just use the numeric values of the data. They don't do the things that make the concept of "reality" in our images dubious.
Three sidebars about DSLRs (and OSC cameras in general) deserve special note. sharkmelley has discussed using a 3x3 matrix to better calibrate DSLR data. While I don't consider it "scientific", I do think it's one way to make DSLR images better and more natural. I don't know of a source for the matrices for astro cameras, but maybe the ones for DSLRs using the same chip would work. Jerry Lodriguss sometimes images with a broadband light pollution filter on top of a Bayer matrix filter. I shudder at that thought; to me it's like having a headache _and_ an upset stomach. <smile> But he has utilized a "custom white balance" procedure that is not hard to do, and can give quite good and natural images in that quite hostile environment. I used to look at it with dubious eyes; now (having more knowledge about color) I think he has a point.
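To show what the 3x3 matrix idea looks like in practice (the matrix below is invented for illustration; a real one is measured for a specific sensor), each output channel becomes a linear mix of all three camera channels, which can undo some of the overlap between the Bayer dyes:

```
import numpy as np

# Invented example matrix; rows sum to 1 so neutral gray stays neutral.
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [ 0.1, -0.5,  1.4]])

def apply_ccm(rgb):
    """rgb: (H, W, 3) linear camera data -> color-corrected (H, W, 3)."""
    return np.clip(rgb @ CCM.T, 0.0, None)   # 3x3 matrix multiply per pixel
```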
Thing is, I doubt there are 1% of people here who've even tried either technique.
What people _are_ doing is putting dual or tri band filters on top of Bayer matrix filters. These are becoming quite popular (although it should be noted that they only really work on emission nebulae). But they make a lot of sense. The filters have _very_ restrictive bandpasses. Not narrowband restrictive, but close. If you block off that much of the spectrum, the issues of Bayer matrix filters become a whole lot less important. Stacking filters is inefficient and requires more total imaging time, but the resulting images can be quite nice.
Welcome back. I'll repeat: the more you learn about this, the more you realize how futile it is to chase reality. Here are two very relevant quotes (paraphrased) from the superb "Lessons from the Masters: Current <2013> concepts in astronomical image processing".
"Color saturation is an invention of our images. So saturate however much you like."
"When I started out I color processed obsessively. I calibrated carefully to G2v stars <as a white balance>. I was nervous, certain that one day the Color Police would break down my door, and confiscate the equipment. These days I do what I like."
The phrase "pretty pictures" is often used in a derogatory sense. I don't see it that way at all. Color processing is always imprecise, why not make the results pretty? Or better. Or more natural. Whatever your priorities are.
It certainly is possible to make colors so unnatural that the resulting images could be called cartoons. I don't like cartoons. But I note that images from people who are successfully selling them on the Internet often trend in that direction. At the very least, saturation will be strong. Unless you differentiate yourself, how can you sell to the general market?
You generally want to make better images. It is very useful to study methods, to solicit advice. But know that you'll always get people who like their color better than yours. Take criticisms of your color processing (especially those citing "reality") with an attitude of empowerment. If you agree, and want to change what you do or did, fine. If you don't, fine. The Color Police can (and will) criticize your images, but they're not going to confiscate your equipment. <smile>
The bad news here is that our color is not realistic. The good news is, once you embrace that fact, you're free to make better or more natural images. Or (oh, the horror! <smile>) to make pretty pictures.
Advice to beginners. Do not tilt at the windmill of reality, or worry about it. Better or more natural or prettier are hard enough (which to emphasize, and how much, is a personal choice); don't waste time on the impossible.