
Do we need Luminance for imaging? Or only RGB?


#1 kfir Simon

    Imager extraordinaire

  • Posts: 882
  • Joined: 21 Sep 2010

Posted 13 February 2013 - 06:16 AM

Hello everyone.

From time to time I get the feeling that maybe LRGB imaging is not necessary and that RGB alone is sufficient - meaning no need for Luminance!

I know there are some imagers who image nebulae only in RGB, without Luminance, and get great results.
My traditional way was to image Lum in bin 1 and RGB in bin 2 to color the Luminance.
With this in mind I took this image of M42 in RGB only (9 minutes per channel) and took a separate 9-minute Luminance image to see if it really matters.

To my surprise, the Lum didn't add anything to the final result. On the contrary, it washed out the beautiful colors of the RGB and didn't add any detail.
Therefore I decided to keep only the RGB image.

Please see here:

http://www.pbase.com...image/148764474

Some thoughts:
Does RGB imaging (without Lum) work only on bright nebulae?
Since we need the color data anyway, isn't it better to image only in RGB and extract the Lum from it (since Lum is roughly R+G+B)?
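
For what it's worth, here is a minimal sketch of what I mean by extracting the Lum from the RGB (assuming the three masters are already registered, calibrated and loaded as numpy arrays; the equal channel weights are just a starting point, not a calibrated match to a real L filter):

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(1.0, 1.0, 1.0)):
    """Build a synthetic Lum from aligned, calibrated R, G, B masters.

    r, g, b : 2-D numpy arrays of the same shape (already registered).
    weights : per-channel weights; (1, 1, 1) is a plain sum, and they can be
              tweaked to roughly approximate a real L filter's bandpass.
    """
    wr, wg, wb = weights
    return (wr * r + wg * g + wb * b) / (wr + wg + wb)
```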

I would like to hear your opinions and/or experience on the subject!

All the best,

Kfir

#2 Nicola

    Vendor: Skymonsters.net.

  • Vendors
  • Posts: 2102
  • Joined: 23 May 2006
  • Loc: Milan, Italy

Posted 13 February 2013 - 06:25 AM

Kfir, your thoughts are correct. RGB is enough provided:

- you shoot the same total integration time in RGB that you would have spent on LRGB
- you have a fast f/ratio scope (and this seems to be your case)

People imaging at f/10 will probably stick with LRGB, binning 2x2 for color.

But it also depends on the object. While RGB will fully describe an emission nebula, it will lose some information on reflection nebulae, dust clouds and galaxies, since their total spectrum is a continuum.
BTW: With a fast scope such as yours I would not bin IMHO.

#3 Inverted

    Mariner 2

  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 13 February 2013 - 08:11 AM

I think the big benefit of LRGB is that you can do all the processing on just one layer, the L. You really don't need to spend much time at all processing the RGB if you do an L. If your results were the same, it's probably because you did spend time on the RGB, which, in my experience, just isn't necessary, or even correct (for the reasons you mention) when adding an L layer, as the L will carry all of the processed detail. The only thing you really need from the RGB is color; you don't need to spend time bringing out the detail in it. As you experienced, it will look funny, because when you stretch and sharpen the RGB layers to bring out detail, you change their range, noise, saturation, etc. in ways that aren't really compatible with the L layer.

Also, it is easier to see contrast and subtle detail in B/W, although this would apply to RGB if done separately in greyscale.

To me, though, the magic is the L layer; that is where everything happens, and it really helps the processing workflow if used correctly.

#4 freestar8n

    Soyuz

  • Posts: 3972
  • Joined: 12 Oct 2007

Posted 13 February 2013 - 08:40 AM

I think a key misunderstanding about LRGB is that it does not, in fact, somehow "increase SNR". Instead, it just lets you make a more visually pleasing image by spending more time on luminance than color - because human color perception is not sensitive to high spatial frequency.

So - unlike techniques grounded in noise models, you won't find LRGB used in professional work - because it only applies to aesthetic imaging, not quantitative scientific work where SNR is the primary concern.

So - the "theory" behind LRGB is that a high res L combined with quick, low res color should look great - but it never worked that well for me because it doesn't look as natural as a normal RGB - especially for something like a star cluster. And if people want to muck around with the stars and colors with heavy processing - they can do that with RGB anyway.

So my impression is that LRGB is less fashionable nowadays - especially with binned color, since binning may not offer much of a win with LRGB.

But what matters is how it looks after you process it the amount you want to process. Theoretically LRGB should look better than RGB with a given amount of imaging time - but I think it depends on personal taste and personal amounts of processing.

Frank

#5 Bob Gillette

    Mariner 2

  • Posts: 219
  • Joined: 29 Aug 2008
  • Loc: New Hampshire

Posted 13 February 2013 - 08:42 AM

Anyone imaging with an OSC camera is shooting RGB, unless you take the added step of separating the channels and creating an artificial Lum, which I suspect most of us don't.

I do both OSC and mono. On bright objects like M42, I find, as you did, no significant difference. But on fainter objects, I do see higher resolution and greater sensitivity with the mono Lum.

Try your experiment on a mag 9-11 galaxy and see if you perceive a difference with the same total integration time.

Bob


#6 Inverted

    Mariner 2

  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 13 February 2013 - 10:44 AM

Just to add one thing I think I was not clear about. When you enhance the contrast of the RGB layers and process them, you effectively add black and decrease the RGB signal in some areas; that is what contrast is, a change between intense and less intense signal. So the now-darker areas, unless they perfectly match the dark areas of the L, will effectively cancel out some color: the L may illuminate them, but there is no longer color there. And vice versa. So you are actually usually better off with fairly low contrast on the RGB layers.

I've found I get very satisfactory results doing little more than applying a mild Gaussian blur to the RGB, to remove some noise, and adjusting the intensity to merge well. Going much further than this with the RGB layers usually results in disappointment, in my experience.

But it is preferable to focus on the L anyway, as then you get a sense of how everything fits together in one layer. Processing the RGB separately, you don't really get that sense until you merge them. And once you merge them and start processing, you're basically just using an expensive OSC with extra steps.
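
To make the "low-contrast color, detail from the L" idea concrete, here is a rough sketch of a simple ratio-based LRGB combine. This is just one way to do it (real tools typically work in CIE L*a*b* or a similar space), and all the array names and the blur amount are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lrgb_combine(l, r, g, b, rgb_blur_sigma=1.5, eps=1e-6):
    """Simple LRGB combine: detail from L, color ratios from a smoothed RGB.

    All inputs are aligned, background-subtracted, linear 2-D arrays on the
    same scale. The mild blur on the color data suppresses chroma noise
    without fighting the L for detail, as described above.
    """
    r_s = gaussian_filter(r, rgb_blur_sigma)
    g_s = gaussian_filter(g, rgb_blur_sigma)
    b_s = gaussian_filter(b, rgb_blur_sigma)

    # luminance implied by the (smoothed) color data
    rgb_lum = (r_s + g_s + b_s) / 3.0

    # rescale each channel so its implied luminance matches the real L:
    # the color ratios are preserved, the detail comes from L
    scale = l / np.maximum(rgb_lum, eps)
    return r_s * scale, g_s * scale, b_s * scale
```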

#7 BlueGrass

    Surveyor 1

  • Posts: 1980
  • Joined: 25 Jul 2009
  • Loc: Wasatch Front, UT

Posted 13 February 2013 - 10:13 PM

... but shooting RGB with a mono camera vs. an OSC gives you much more control over how much of each channel to capture or add to the final image. An OSC always captures more green (if present) than red or blue, and that needs to be removed or adjusted in post-processing.
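
As a rough illustration of that green adjustment, a sketch that simply rescales R and B so the sky background comes out neutral (a crude stand-in for proper color calibration against a G2V star or a photometric reference; the inputs are assumed to be debayered, linear channel stacks as numpy arrays):

```python
import numpy as np

def neutralize_background(r, g, b):
    """Crude OSC color balance: scale R and B so the background matches G.

    Unbalanced OSC stacks usually come out green-heavy (the Bayer matrix
    samples green twice per 2x2 block and sensors tend to respond most
    strongly there), so matching the background medians is a quick fix.
    """
    med_r, med_g, med_b = np.median(r), np.median(g), np.median(b)
    return r * (med_g / med_r), g, b * (med_g / med_b)
```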

#8 bill w

    Voyager 1

  • Posts: 10655
  • Joined: 26 Mar 2005
  • Loc: southern california

Posted 13 February 2013 - 11:46 PM

Yes, in my experience.
At mid to long focal lengths, with poor transparency and light pollution, luminance makes a huge difference, especially for faint galaxies and reflection nebulae: 3 times the signal per unit time.
Less so for objects that are largely mono- or bichromatic emission nebulae.
You need to be careful with the LRGB combine that you have enough RGB; it takes some skill. I suggest the RGB sub-exposure length be 3x the luminance. You can then combine this data with the luminance for some improvement in background smoothness.
Your mileage may vary.

#9 SL63 AMG

    Viking 1

  • Posts: 864
  • Joined: 21 Dec 2009
  • Loc: Williamson, Arizona

Posted 14 February 2013 - 12:21 AM

BTW: With a fast scope such as yours I would not bin IMHO.


If the seeing is good, I don't bin my RGB subs; I shoot LRGB all binned 1x1. In fact, I am imaging M78 in RGB right now at f/3.6, binned 1x1. The subs look great.

I have a fast scope, clear dark skies and good seeing.

I never really thought about not using Luminance data. Adam Block teaches using deconvolution on almost all galaxy luminance masters. I wonder if deconvolution is still necessary when processing the luminance with PixInsight's HDR wavelets.

It's something to think about and discuss.

I look forward to hearing from the experts.

#10 alpal

    Soyuz

  • Posts: 3665
  • Joined: 15 Jun 2009
  • Loc: Melbourne Australia.

Posted 14 February 2013 - 12:29 AM

The Luminance has a higher signal to noise ratio than any particular colour of RGB.

Therefore, in the faint areas of the picture, luminance will enable us to see the low-S/N data which would otherwise be invisible in the noise.

The only test would be on a very faint object such as a galaxy, and in particular its faint arms.
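
A back-of-the-envelope way to see the SNR advantage, as a toy shot-noise calculation (it assumes the L filter passes roughly the combined R+G+B bandwidth and ignores sky glow and read noise):

```python
import numpy as np

photons_g = 10_000            # hypothetical photons through one colour filter
photons_l = 3 * photons_g     # rough assumption: L bandpass ~ R + G + B
snr_g = photons_g / np.sqrt(photons_g)
snr_l = photons_l / np.sqrt(photons_l)
print(snr_l / snr_g)          # ~1.73, i.e. roughly sqrt(3) better SNR per sub
```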

#11 pfile

    Gemini

  • Posts: 3168
  • Joined: 14 Jun 2009

Posted 14 February 2013 - 12:52 AM

I still do deconvolution on my L; it's the most "pure" way of sharpening the image. After DBE, the next step is deconvolution through a mask, to protect the low-SNR areas. Then it's off to nonlinear-land...
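
For anyone curious what that step looks like in code, a rough sketch of deconvolution through a mask. This is not PixInsight's implementation - just plain Richardson-Lucy with a Gaussian PSF, blended back through a crude brightness mask so the low-SNR background keeps the original data. It assumes a linear, background-subtracted L as a positive float numpy array:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(fwhm_px, size=25):
    """Gaussian PSF model with the given FWHM in pixels."""
    sigma = fwhm_px / 2.3548
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, n_iter=20):
    """Plain Richardson-Lucy deconvolution (no regularization)."""
    est = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est

def masked_deconvolution(l, fwhm_px=3.0, n_iter=20):
    """Deconvolve L, then blend the result back through a brightness mask
    so the faint, low-SNR areas keep the original (un-sharpened) pixels."""
    sharp = richardson_lucy(l, gaussian_psf(fwhm_px), n_iter)
    med = np.median(l)
    mask = np.clip((l - med) / (l.max() - med + 1e-12), 0.0, 1.0)
    return mask * sharp + (1.0 - mask) * l
```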

#12 Leonardo70

    Apollo

  • Posts: 1219
  • Joined: 16 Apr 2009
  • Loc: Turin - Italy

Posted 14 February 2013 - 05:22 AM

Hello Kfir, my opinion (with your fast scope in mind):

- R+G+B isn't exactly equal to L, due to the filter cut-offs
- L is necessary for faint objects and better SNR (I now always use it, after a year of imaging RGB only)
- Colors become washed out if you work separately on L and RGB; mix them together as soon as possible and work on the combined image - better to use an L-LRGB approach
- Don't use bin 2 for color (your scope is fast and doesn't need it)
- Use Sum instead of Mean for color stacking (I also use it for L); you'll gain more dynamic range when stretching it to match the luminance. This means using good rejection tools (a rough sketch is below).
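
On the rejection point, a minimal sketch of a sigma-clip combine (worth noting that a sum and a mean of the same subs differ only by a constant scale factor, so neither changes SNR by itself; the rejection is what really matters):

```python
import numpy as np

def sigma_clip_stack(frames, sigma=2.5, combine="sum"):
    """Stack registered frames with simple per-pixel sigma-clip rejection.

    frames  : list of aligned 2-D arrays (same exposure, same scale).
    sigma   : pixels further than this many standard deviations from the
              per-pixel median are rejected (hot pixels, satellites, ...).
    combine : "sum" or "mean"; they differ only by a constant factor.
    """
    cube = np.stack(frames).astype(np.float64)
    med = np.median(cube, axis=0)
    std = np.std(cube, axis=0)
    good = np.abs(cube - med) <= sigma * std
    n_good = np.maximum(good.sum(axis=0), 1)
    total = np.where(good, cube, 0.0).sum(axis=0)
    if combine == "sum":
        # rescale so pixels that lost frames to rejection stay on one scale
        return total * (len(frames) / n_good)
    return total / n_good
```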

All the best,
Leo

#13 freestar8n

    Soyuz

  • Posts: 3972
  • Joined: 12 Oct 2007

Posted 14 February 2013 - 07:30 AM

L is necessary for faint objects and better SNR



This is typically how LRGB is described, but the problem is that in color imaging, you just have R, G, B signals - and you want each signal accurately recorded and with low noise. If you multiply L by R to get a "better" R - you may get an R channel that looks better - but the signal has been corrupted by mixing in G and B data from the L, into R.

So L has more detail and so forth - but it doesn't do anything to improve the SNR of the color channels. One text that gets this right is Berry/Burnell's handbook where they say LRGB makes for a better looking image with a given total exposure time - by combining a good looking L channel with low res color. It's a perceptual thing - not an SNR thing.

If LRGB really did improve SNR, it would be used professionally to capture better data in a color channel. It's just used for making nicer *looking* images in a given time - and even then, some people like its results and others don't.

Frank

#14 Ken Crawford

    Mariner 2

  • Posts: 272
  • Joined: 02 Jun 2009
  • Loc: Camino, CA

Posted 14 February 2013 - 10:52 AM

The luminance channel is better suited to sharpening tools like deconvolution and others.

One way to improve the total signal, if you take unbinned RGB data, is to combine the RGB into a synthetic Luminance and then combine your deep Luminance and the RGB luminance together into a combined-signal Luminance. This method is used by Jay GaBany and myself for the professional Star Stream Team survey to enhance the contrast of the very faint streams, and the method we use has been explained in several AJ papers that we have produced on the subject.

That way you are not just using the RGB result to colorize the Luminance; you are using the RGB + L combined to produce a better result.

You can then use the RGB result to colorize the "Super L", and use the standard L for decon and apply it as a sharpening layer.
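
For anyone who wants to experiment with this, a bare-bones sketch of building such a combined luminance. This is just my reading of the idea, not the team's actual pipeline; the inverse-variance fallback weights and the variable names are assumptions:

```python
import numpy as np

def super_luminance(l, r, g, b, w_l=None, w_rgb=None):
    """Blend a real L master with the synthetic luminance implied by the RGB.

    Everything is assumed linear, registered and on the same flux scale.
    If no weights are given, fall back to inverse-variance weights estimated
    (crudely) over the whole frame; ideally measure a blank sky patch instead.
    """
    syn_l = (r + g + b) / 3.0
    if w_l is None or w_rgb is None:
        w_l, w_rgb = 1.0 / np.var(l), 1.0 / np.var(syn_l)
    return (w_l * l + w_rgb * syn_l) / (w_l + w_rgb)
```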

#15 Inverted

    Mariner 2

  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 14 February 2013 - 11:09 AM

So L has more detail and so forth - but it doesn't do anything to improve the SNR of the color channels.


It seems it is improving "luminance SNR" though, especially if used in a way such as Ken described - however, perhaps skewing the color rendering or "color SNR". It sort of depends what "signal" we're interested in, I guess. From an EE perspective (I'm not an EE, so I could be wrong), I think signal is usually considered as both intensity and frequency/wavelength, so in that view perhaps it would be fair to say it improves sampling of the intensity at the expense of wavelength sampling?

Edit: I changed "frequency" to "intensity" above

#16 freestar8n

    Soyuz

  • Posts: 3972
  • Joined: 12 Oct 2007

Posted 14 February 2013 - 11:20 AM

If you are taking L and also taking R, G, B, then sure, you can improve the SNR of your L by summing the RGB into the L - assuming L is really equal to the sum of R, G, B. But the result is a higher-SNR L - not a higher-SNR R, G, B. If you just wanted a high-SNR L, you would have been better off not exposing R, G, B at all and instead doing it all as L.

In the AJ 2010 paper - which I assume is representative of what you are referring to - as far as I can tell the RGB info is only used to make a reference color image to enhance the aesthetics of the figures, but the analysis is based strictly on L. If you have a paper that describes the SNR improvement of the R, G, B channel information by combining with L, I'd be happy to take a look at it.

Frank

#17 Ken Crawford

    Mariner 2

  • Posts: 272
  • Joined: 02 Jun 2009
  • Loc: Camino, CA

Posted 14 February 2013 - 11:40 AM

The RGB sum + L was used to produce higher contrast in the streams, but the Lum only was used for the professional measurements and analysis. You are correct that you could keep going longer in L only for the "best" results, but the idea was to use the RGB signal, since we have it, to improve contrast in the streams - which we do for display only. This way the RGB signal was not just used to tint the Lum channel. In other words, we have the data, so we might as well use it!

The Max-Planck guys measured my L-only channel on NGC 4216 down to 29.3 mag/sq arcsec, and the combined RGB + L gave a higher S/N ratio on the streams.

#18 freestar8n

    Soyuz

  • Posts: 3972
  • Joined: 12 Oct 2007

Posted 14 February 2013 - 11:47 AM

This way the RGB signal was not just used to tint the Lum channel. In other words, we have the data, so we might as well use it!



Sure - that makes perfect sense. But people think that LRGB is a way actually to improve the color accuracy of an image - by somehow increasing the SNR of each R,G,B channel. I claim this is incorrect, and cite Berry/Burnell as a source that gets it right. LRGB relies on a perceptual trick to make a color image look better - by spending more imaging time on luminance than color. The result just looks better - but the inherent color accuracy or "SNR" has not been improved.

So people shouldn't feel there is an inherent SNR win by doing LRGB vs. RGB. But there is a chance the end result would be perceived as higher quality and look better - or not.

Frank

#19 Ken Crawford

    Mariner 2

  • Posts: 272
  • Joined: 02 Jun 2009
  • Loc: Camino, CA

Posted 14 February 2013 - 11:50 AM

Frank,

See page 5 of this paper that describes the RGB combined method.

http://www.cosmotogr...ngc5907_def.pdf

I joined the team after this paper, and that is where I learned the method. Once it was described, we did not have to keep talking about it. But as you can see, the combined data was used and we still use it now for contrast enhancement.

We just take the data and carefully process it to specs; we don't do the analysis - the PI and other pros do.

Regards,

#20 Ken Crawford

    Mariner 2

  • Posts: 272
  • Joined: 02 Jun 2009
  • Loc: Camino, CA

Posted 14 February 2013 - 11:53 AM

This way the RGB signal was not just used to tint the Lum channel. In other words, we have the data, so we might as well use it!



Sure - that makes perfect sense. But people think that LRGB is a way actually to improve the color accuracy of an image - by somehow increasing the SNR of each R,G,B channel. I claim this is incorrect, and cite Berry/Burnell as a source that gets it right. LRGB relies on a perceptual trick to make a color image look better - by spending more imaging time on luminance than color. The result just looks better - but the inherent color accuracy or "SNR" has not been improved.

So people shouldn't feel there is an inherent SNR win by doing LRGB vs. RGB. But there is a chance the end result would be perceived as higher quality and look better - or not.

Frank


Agreed!! It does not help the color channels at all. I have found, however, that with today's processing tools you will never have a problem pushing color as long as you have enough of it :)

Great stuff Frank!

#21 Inverted

    Mariner 2

  • Posts: 210
  • Joined: 19 Jan 2013
  • Loc: LP Land

Posted 14 February 2013 - 12:20 PM

First, by the way, I modified my post above a bit; I realized I really meant to say "intensity" when I said "frequency".

Anyway, getting back to SNR, since I find this metric fascinating: I still haven't been convinced that SNR is really the best metric to describe image quality as we talk about it. For one, as mentioned in the exposure-length thread, it ends up being a biased estimate; if skyglow goes up, for example, it can actually improve on paper.

But also, even pretending that "observed SNR" did measure "true SNR" perfectly, the whole idea of talking about SNR as a metric seems to fail somewhat even under perfect conditions. This is because when we talk about "image quality" we are talking about our perception of quality, not just means and standard deviations in the data. When we're talking about luminance, the means and standard deviations do seem to make more sense, because of the way our eyes perceive greyscale, and the metrics used to describe luminance take the weighting of our eyes into account.

When we start talking about color, though, this changes significantly, because we are no longer considering that weighting. From a scientific perspective it seems that we do want to know the means and standard deviations of the colors. However, our eyes can be 20x or more sensitive to, say, green than to red or blue. So when we start talking about color, I think there is a bigger discrepancy between the numbers and our perception. The luminance takes our perception into account, so we can use it to boost our perception of the data and make a "better" image, although, as mentioned, from a pure numbers perspective it probably isn't lower variance, i.e. higher SNR.

So, when we start talking about color data, it seems we should almost use a luminance-weighted estimate of the SNR. Does that make sense to others? And does anyone already use such an estimate? (A rough sketch of what I mean is at the end of this post.)

Edit: also, by the way, how do most programs measure SNR when using color data? For example, do they only look at the variance of pixel-to-pixel intensity and ignore color? I think so: in MaxIm DL, when I put the little cursor over part of the image, it doesn't give me separate L, R, G and B SNRs. So when you're talking about improvements to SNR etc., to lay ears such as mine it may help to know the specifics of how it is being measured. If we are just measuring luminance SNR, then it seems it would improve with extra, separate L data. And when we're discussing SNR, to what are we referring: just the measured value, or the theoretical, underlying value?
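
Here is the kind of thing I mean - one possible (ad hoc, definitely not a standard metric) luminance-weighted SNR, combining per-channel SNRs with Rec. 709 luma weights. The box coordinates are (row0, row1, col0, col1) tuples you pick yourself for an object patch and a blank-sky patch:

```python
import numpy as np

# Rec. 709 luma weights -- roughly how strongly the eye weights each channel
LUMA_W = np.array([0.2126, 0.7152, 0.0722])

def channel_snr(channel, obj_box, sky_box):
    """Crude per-channel SNR: mean object signal over the sky noise."""
    y0, y1, x0, x1 = obj_box
    sy0, sy1, sx0, sx1 = sky_box
    sky = channel[sy0:sy1, sx0:sx1]
    signal = channel[y0:y1, x0:x1].mean() - sky.mean()
    return signal / sky.std()

def luminance_weighted_snr(r, g, b, obj_box, sky_box):
    """Per-channel SNRs plus one 'perceptual' number weighted by luma."""
    snrs = np.array([channel_snr(c, obj_box, sky_box) for c in (r, g, b)])
    return float(LUMA_W @ snrs), snrs
```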

#22 freestar8n

    Soyuz

  • Posts: 3972
  • Joined: 12 Oct 2007

Posted 14 February 2013 - 12:53 PM

Thanks Ken - That all looks good to me. I think it's great you were able to incorporate the color channels to improve the depth for the scientific goals of the project - and at the same time get nice color images of the galaxies to provide context for the star streams.

As for LRGB - I'm a believer that it *should* work to make a nicer looking result with more detail and better colors - but I think a key problem for me is slight spherochromatism in my c11 with reducer that makes it hard for the star sizes in different colors to match exactly so the end result looks natural. I assume I could mess with it in processing - but my main message is for people to give it a try, but it may not work as automagically as hoped - and there is no *inherent* improvement in the actual color SNR.

Thanks,
Frank

#23 Ken Crawford

    Mariner 2

  • Posts: 272
  • Joined: 02 Jun 2009
  • Loc: Camino, CA

Posted 14 February 2013 - 01:24 PM

Thanks Ken - That all looks good to me. I think it's great you were able to incorporate the color channels to improve the depth for the scientific goals of the project - and at the same time get nice color images of the galaxies to provide context for the star streams.

As for LRGB - I'm a believer that it *should* work to make a nicer looking result with more detail and better colors - but I think a key problem for me is slight spherochromatism in my c11 with reducer that makes it hard for the star sizes in different colors to match exactly so the end result looks natural. I assume I could mess with it in processing - but my main message is for people to give it a try, but it may not work as automagically as hoped - and there is no *inherent* improvement in the actual color SNR.

Thanks,
Frank


I even get some star-size variances from seeing differences from night to night with my RC. I normally image several nights on a target. What I do is a very mild positive-constraint decon on the two largest-FWHM color channels to push them down to the smallest of the color channels. I am talking very mild, 10-20 iterations with a proper PSF. I then find there is less color fringing in the RGB master.

I also stretch around a star mask to keep the color from blowing out the Lum margins.

This is really good stuff Frank; your comments give me inspiration for this hobby, as I have not been as active as I would like to be on the imaging side. I learn from these types of interactions and gain more fire in the belly :)

Kindest Regards,

#24 alpal

    Soyuz

  • Posts: 3665
  • Joined: 15 Jun 2009
  • Loc: Melbourne Australia.

Posted 14 February 2013 - 09:49 PM

The Luminance has a higher signal to noise ratio than any particular colour of RGB.

Therefore, in the faint areas of the picture, luminance will enable us to see the low-S/N data which would otherwise be invisible in the noise.

The only test would be on a very faint object such as a galaxy, and in particular its faint arms.


I tested the above theory on some actual single-frame RAW data of the galaxy NGC 253.

I used a single Luminance frame and a Green frame.
Both frames were 3-minute exposures under light-polluted skies with no ALP filter.
I cropped each of them out of the full-size pic, pasted them onto a new pic, and stretched both of them equally as one pic.
I then stretched the Green version (on the right-hand side) further, so that its background was equalised with the Luminance frame (I chose 40 with the eyedropper in PS Curves).
The area outlined with a box is expanded in the next pic to view it right down at pixel level.

Attached Files: [side-by-side crops of the Luminance (left) and Green (right) frames, with the comparison area outlined]

#25 alpal

    Soyuz

  • Posts: 3665
  • Joined: 15 Jun 2009
  • Loc: Melbourne Australia.

Posted 14 February 2013 - 09:51 PM

Next is the expanded pixel version of just the box.
Notice the faint arms are hardly visible in the Green version on the right hand side.
The faint arms are lost inside the noise.

Attached Files: [pixel-level crop of the boxed area, Luminance (left) vs Green (right)]






