LRGB versus RGB - a fascinating debate at the highest level

113 replies to this topic

#1 bobzeq25

bobzeq25

    Hubble

  • *****
  • topic starter
  • Posts: 19,671
  • Joined: 27 Oct 2014

Posted 27 April 2017 - 10:06 AM

An innocent post from someone fairly new, in the CCD forum, about whether L was necessary led me to look more closely at a disagreement among the best imagers and best thinkers about imaging (one I was somewhat aware of).  I'll paraphrase the main positions; Jon can correct me if I've got his wrong.

 

Note that this assumes a mono camera, and either 3 filters or 4.  No narrowband or light pollution filters, either.

 

LRGB is (maybe with exceptions) fundamentally a better way of gathering data.  It is (almost?) always superior to RGB.  Jon Rista

 

RGB is fundamentally a better way of gathering data.  While LRGB can get you a good image in less time, it's a shortcut and there's a ceiling for quality.  If you're willing to be less efficient with imaging time, RGB can break through that ceiling.  It's the method to use when ultimate quality is the goal.  Juan Conejero (author and developer of PI), almost certainly others.

 

freestar8n appears to just like RGB better, period.

 

Note that bringing time efficiency into it blurs the debate.  For those of us in the less rarefied world, it can trump everything.  So please do not consider it in this thread; start your own.  The same applies to adding narrowband filters to the mix, or to using a light pollution filter instead of a simple UV-IR block for L.  Keep it simple and fundamental; that's complicated enough.

 

Is LRGB fundamentally better for quality, or is RGB fundamentally better for quality, assuming you're willing to be less efficient with time?

 

Opinions?

 

Here's the original thread, but my posts were just not this crisp, I floundered some.

 

https://www.cloudyni...nd-rgb-filters/


Edited by bobzeq25, 27 April 2017 - 10:35 AM.

  • ks__observer likes this

#2 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,406
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 27 April 2017 - 10:24 AM

First, when you say RGB, I'm assuming you're talking about one-shot RGB, not 3-filter mono RGB - is that correct?

 

Well, I don't know how much my opinion is worth since I don't do LRGB. :)  But I would think LRGB would be superior even disregarding efficiency.  With LRGB you're using all the pixels, rather than the subset you get due to the Bayer matrix on an OSC, so you'll get better resolution, assuming you're not way oversampled anyway.  And the LRGB filters have sharp cutoffs rather than the wide, overlapping spectra you see in a Bayer matrix, so I would think the colors would be more accurate.  Finally, the L channel gives you a lot of high-resolution data quickly, to boost the sharpness of the final image.

 

I think there's a lot you can do with an OSC camera, and they can produce images that are not far behind LRGB images, but it seems like you'll always have an edge with LRGB.  

 

Also, the quality of your images depends on a lot of things, not just your camera type.  If your overall quality is limited by other factors, such as the quality of your skies, your guiding, or your optical equipment, the gains you get from going to LRGB may not be very evident.  But if everything else is up to snuff, I think LRGB will make a noticeable difference.

 

-Dan



#3 bobzeq25

bobzeq25

    Hubble

  • *****
  • topic starter
  • Posts: 19,671
  • Joined: 27 Oct 2014

Posted 27 April 2017 - 10:29 AM

Sorry, I absolutely meant mono camera, and 3 RGB filters, or mono plus RGB filters plus L (generally a UV-IR filter).  I've made the clarification.

 

You're 100% right about mono plus filters versus OSC.

 

Thanks for bringing this up, it was needed.


Edited by bobzeq25, 27 April 2017 - 10:37 AM.


#4 buckeyestargazer

buckeyestargazer

    Vendor - Buckeyestargazer.net

  • *****
  • Vendors
  • Posts: 5,112
  • Joined: 12 Jan 2008
  • Loc: IN, USA

Posted 27 April 2017 - 10:50 AM

This could get interesting.

For my part, I have experimented with RGB but came to the conclusion that LRGB is the right way to go for me, and probably for many of us.  The killer for me was light pollution causing different gradients across the different filters.  I found it much more difficult to deal with the varying gradients in straight RGB than to deal with those same gradients after adding a LUM to the data.  With some level of light pollution, the LUM seems to smooth out the background better, and it's easier to deal with a mono gradient than with a color gradient.

 

I almost always feel that my LUM image looks fantastic but my RGB image looks like crap.  And I have tried getting a comparable amount of data for each of the RGB filters as for the LUM (say 3 hrs per filter, so that the combined RGB should have the same signal as the LUM).  Despite the same amount of exposure time, the RGB looks like junk while the LUM looks great.

 

For that reason, RGB may be preferable under dark skies, where gradients are less of an issue.


  • astronate likes this

#5 Jerry Lodriguss

Jerry Lodriguss

    Vendor

  • *****
  • Vendors
  • Posts: 7,387
  • Joined: 19 Jul 2008
  • Loc: Voorhees, NJ

Posted 27 April 2017 - 11:26 AM

 

Is LRGB fundamentally better for quality, or is RGB fundamentally better for quality, assuming you're willing to be less efficient with time?

Hi Bob,

 

What metric do you want to use to quantify "fundamentally better for quality"?

 

You have to be specific on a question like this.

 

Jerry



#6 jgraham

jgraham

    ISS

  • *****
  • Posts: 20,822
  • Joined: 02 Dec 2004
  • Loc: Miami Valley Astronomical Society

Posted 27 April 2017 - 11:30 AM

I always felt that I got better results with LRGB than RGB, but 'better' is a matter of small increments and it often comes down to processing skills and technique. I've been flogging away at modern digital imaging for 14 years now (and film for 35 years before that) and I still learn something new almost every time I take and process a new set of images.

 

What a wonderful hobby! There is so much to explore and to learn.



#7 TareqPhoto

TareqPhoto

    Fly Me to the Moon

  • -----
  • Posts: 5,282
  • Joined: 20 Feb 2017
  • Loc: Ajman - UAE

Posted 27 April 2017 - 11:35 AM

The filters topic is getting popular and generating some serious discussions; it has become "to filter or not to filter", then "which filter", and "is this a good filter".

 

I have to work hard with my DSLR regardless of what I may lose by not having a mono camera. I've seen so many nice shots done with a DSLR, but as I said before, the main reason I started AP was seeing photos done with mono cameras, and I want to be there. Using a DSLR will only be for learning, not for getting what I'm after; so if what I'm after will be done with something else, why can't I get that and learn with it?

 

I am still debating whether to go with LRGB filters or NB filters. I can get both eventually, one by one, but I don't know which to start with. So if I'm using a DSLR, does that mean I should pass on RGB for now and start with NB? And isn't L that UV/IR-cut filter?



#8 Bigdan

Bigdan

    Viking 1

  • *****
  • Posts: 889
  • Joined: 17 Dec 2014
  • Loc: Panacea, FL

Posted 27 April 2017 - 12:03 PM

I do LRGB and I have PI.  When the Spaniard says that LRGB is a shortcut...I'd like some clarification as to exactly what he's saying.  Is he saying that to get the detail the L channel provides, without actually acquiring and processing L subs, it takes more time to reach that level of detail using RGB alone?  Are we talking about many more RGB subs, and additional processing time, to match the same level of detail?

 

If that's what he's saying...my question would be, "Why?"  Why do that?  It sounds like he's saying luminance is a shortcut that will yield inferior images.  Time is our most valuable asset when imaging.

 

Edit:  OK, yes, he's saying RGB alone yields higher quality, but takes more time.  My time is too valuable for that.  Luminance is a valuable tool.


Edited by Bigdan, 27 April 2017 - 12:07 PM.


#9 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,406
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 27 April 2017 - 12:21 PM

Sorry, I absolutely meant mono camera, and 3 RGB filters, or mono plus RGB filters plus L (generally a UV-IR filter).  I've made the clarification.

 

You're 100% right about mono plus filters versus OSC.

 

Thanks for bringing this up, it was needed.

Ah, thanks for the clarification.  In that case, I have no opinion whatsoever! :grin:

 

-Dan



#10 gunny01

gunny01

    Vanguard

  • *****
  • Posts: 2,219
  • Joined: 02 Jun 2014

Posted 27 April 2017 - 12:28 PM

I think that the lum is valuable in many circumstances.  I've seen additional detail with lum that, for whatever reason, seems to outdo my RGB combo.  On the negative side, the lum master usually suffers from larger stars (even after deconvolution), and an Ha master can go a long way to improve the situation.  I'll often use Ha for the stars in the lum image if the Ha data is available.  Layers in PS go a long way for this purpose: the additional detail in the lum image can be brushed in, or blended at a layer percentage, to really bring out detail.  I can't do this in PI.  Some may disagree with this opinion, but it has worked well for me.

 

You can open another can of worms by getting into the controversy of 2x2 vs. 1x1 binning of the RGB channels.  I've totally dropped binning the color channels and it seems to work well. ...Gunny


  • scopenitout likes this

#11 bobzeq25

bobzeq25

    Hubble

  • *****
  • topic starter
  • Posts: 19,671
  • Joined: 27 Oct 2014

Posted 27 April 2017 - 12:42 PM

 

 

Is LRGB fundamentally better for quality, or is RGB fundamentally better for quality, assuming you're willing to be less efficient with time?

Hi Bob,

 

What metric do you want to use to quantify "fundamentally better for quality"?

 

You have to be specific on a question like this.

 

Jerry

 

Excellent question.  Not sure I have a good answer.

 

The most obvious would be a more accurate representation of the object, in both color and detail.  But more accurate in some technical sense, or more accurate to our perceptions?  It's clear, reading the PI forum, that PI sometimes does things to better match our perceptions. It's also clear that our perceptions can alter the technical presentation of the data.

 

Getting specific here is not easy.


Edited by bobzeq25, 27 April 2017 - 01:04 PM.


#12 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,253
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 April 2017 - 12:46 PM

Just to add some visual comparisons of RGB as synthetic L vs a real L, since I go off of what I can measure and calculate. :p This is data provided by Ken, which I thought gave a wonderful demonstration of the difference between the two options. FTR, the Real L is 900 seconds of integration, and the Synthetic L is 2700 seconds of integration:

 

Comparison 1: Independent STF stretches (in PI, this normalizes the noise profile in the background)

3Q5ZeuB.gif

 

Comparison 2: LinearFit with identical STF stretches (in PI, this uses the same black, white and midpoint for both)

XosjNTd.gif

 

Comparison 3: Manual stretch to normalize brightness and mean background levels

ktkQnyU.gif

 

Remember, this is 15 minutes of L and 45 minutes of RGB. The 15 minutes of L clearly surpasses the 45 minutes of RGB in terms of SNR here, in all three comparison modes. Additionally, when properly normalized, the real L channel exhibits a clear advantage in standard deviation over the synthetic L:

 

real
σK = 1.171e-004, N = 6920889 (89.30%), J = 4

synthetic
σK = 6.619e-004, N = 7004605 (90.38%), J = 4

 

I go by the data, especially when the data starts to align with the math (which it definitely did in this case). I find the L filter is a very powerful tool for acquiring as much light (signal) as possible in the shortest time possible.
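For readers who want to check numbers like the σK / N / J figures above on their own data: PixInsight's clipped-sigma statistic can be approximated with an iterative k-sigma clip. The sketch below is a plain NumPy approximation with hypothetical helper names, not PI's exact implementation:

```python
import numpy as np

def clipped_sigma(img, k=4.0, max_iters=10):
    """Iterative k-sigma clipping: repeatedly discard pixels more than
    k standard deviations from the mean, then report the standard
    deviation, the surviving pixel count, and the iterations used
    (in the spirit of PI's sigma-K, N, and J figures)."""
    data = np.asarray(img, dtype=np.float64).ravel()
    total = data.size
    for j in range(1, max_iters + 1):
        m, s = data.mean(), data.std()
        keep = np.abs(data - m) < k * s
        if keep.all():
            break
        data = data[keep]
    return data.std(), data.size, total, j

# Toy usage on pure Gaussian background noise, with the std set to the
# real-L figure quoted above:
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.171e-4, (1024, 1024))
sigma, n_kept, n_total, iters = clipped_sigma(frame)
```

Because a 4-sigma clip discards only a tiny fraction of a Gaussian background, the reported sigma should land very close to the true noise level.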

 

One key exception, which I think Ken brought up, might affect your IQ when using an L filter: if your scope is not well corrected, the main issue being blue light scattering too much, then stars shot with an L filter could bloat compared to the RGB. However, I would point out that this was an issue Ken thought he had...but if you look at my normalized stretch in Comparison 3 above, you should find there is no difference in star bloat at all. ;) (Which tells me that Ken has a well corrected scope! ;P)

 

For the original post:

 

https://www.cloudyni...data/?p=7827163


  • mikefulb, psandelle and entilza like this

#13 bobzeq25

bobzeq25

    Hubble

  • *****
  • topic starter
  • Posts: 19,671
  • Joined: 27 Oct 2014

Posted 27 April 2017 - 12:54 PM

I do LRGB and I have PI.  When the Spaniard says that LRGB is a shortcut...I'd like some clarification as to exactly what he's saying.  Is he saying that to get the detail the L channel provides, without actually acquiring and processing L subs, it takes more time to reach that level of detail using RGB alone?  Are we talking about many more RGB subs, and additional processing time, to match the same level of detail?

 

If that's what he's saying...my question would be, "Why?"  Why do that?  It sounds like he's saying luminance is a shortcut that will yield inferior images.  Time is our most valuable asset when imaging.

 

Edit:  OK, yes, he's saying RGB alone yields higher quality, but takes more time.  My time is too valuable for that.  Luminance is a valuable tool.

 

 

I think that the lum is valuable in many circumstances.  I've seen additional detail with lum that, for whatever reason, seems to outdo my RGB combo.  On the negative side, the lum master usually suffers from larger stars (even after deconvolution), and an Ha master can go a long way to improve the situation.  I'll often use Ha for the stars in the lum image if the Ha data is available.  Layers in PS go a long way for this purpose: the additional detail in the lum image can be brushed in, or blended at a layer percentage, to really bring out detail.  I can't do this in PI.  Some may disagree with this opinion, but it has worked well for me.

 

You can open another can of worms by getting into the controversy of 2x2 vs. 1x1 binning of the RGB channels.  I've totally dropped binning the color channels and it seems to work well. ...Gunny

This makes Juan's position abundantly clear, which was his intent.

 

"As for the LRGB vs RGB thing, just to state my opinion clear:

 

- LRGB: Good for saving time. This is true as long as the RGB is shot binned; when shooting unbinned L and RGB, the savings are marginal IMO.

 

- LRGB: Bad for quality. Assuming unbinned data, an independent L does not provide more resolution. On the contrary, it may provide less resolution, since it was acquired through a much wider bandpass filter.

 

- LRGB: Problems to achieve a good match between luminance and chrominance.

 

- LRGB: More limitations when working with linear data. LRGB combinations are usually performed in the CIE L*a*b* and CIE L*c*h* spaces, which are nonlinear. It is true, though, that a linear LRGB combination is doable in PixInsight, working in the CIE XYZ space.

 

- RGB: Perfect match between luminance and chrominance, by nature. No worries about luminance structures without chrominance support, and vice-versa."

 

https://pixinsight.c...msg9297#msg9297    Post #32
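Juan's last two points can be made concrete. Below is a minimal sketch of a *linear* LRGB combination done in CIE XYZ, as he describes: the chromaticity of the RGB data is preserved while its luminance (Y) is replaced by the acquired L channel. The function name is hypothetical and this is not PixInsight's actual LRGBCombination, just the underlying idea with standard sRGB/D65 matrices:

```python
import numpy as np

# Linear sRGB <-> CIE XYZ (D65) conversion matrices.
RGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                    [0.2126729, 0.7151522, 0.0721750],
                    [0.0193339, 0.1191920, 0.9503041]])
XYZ2RGB = np.linalg.inv(RGB2XYZ)

def lrgb_combine_linear(rgb, lum, eps=1e-12):
    """Replace the luminance (Y) of linear RGB data with an acquired
    L channel while preserving chromaticity. `rgb` is HxWx3, `lum`
    is HxW; both linear. A sketch, not PI's implementation."""
    xyz = rgb @ RGB2XYZ.T
    y = xyz[..., 1]
    # Scaling X, Y, Z by the same per-pixel factor leaves the
    # chromaticity (x, y) unchanged and sets the new Y equal to `lum`.
    scale = lum / np.maximum(y, eps)
    return (xyz * scale[..., None]) @ XYZ2RGB.T

# Toy usage on random linear data:
rng = np.random.default_rng(0)
rgb = rng.uniform(0.1, 1.0, (8, 8, 3))
lum = rng.uniform(0.1, 1.0, (8, 8))
out = lrgb_combine_linear(rgb, lum)
```

Doing this in a nonlinear space such as CIE L*a*b* works the same way in outline (swap the L* plane), but the combination then no longer commutes with linear operations like deconvolution, which is Juan's point about limitations on working with linear data.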

 

The whole thread is one of many on the PI forum on this topic worthy of a read.  It's how the disagreement surfaced for me.

 

I think his position on binning RGB is this: if you are going to save time with LRGB, you should save the maximum amount of time by binning the RGB. Since saving time is the reason for LRGB, maximize it.

 

I'd pay good money to see Jon and Juan debate this in person.  <smile>  Preferably a series of debates at intervals, so that they can respond to each other, bring images to the table, etc.

 

I don't believe anyone can accuse either of not having given this much thought.


Edited by bobzeq25, 27 April 2017 - 01:08 PM.

  • PirateMike likes this

#14 xiando

xiando

    Fly Me to the Moon

  • *****
  • Posts: 6,503
  • Joined: 27 May 2015
  • Loc: Cloudy NEOhio

Posted 27 April 2017 - 12:56 PM

Presuming a mono camera, and that we're not concerned with light outside the visible spectrum: a luminance capture grabs all the wavelengths within the transmission band, with minimal impact on "true" intensity across the luminance filter's passband, and it does so at roughly unity gain. From the filter spectra I've looked at, a whole lot of RGB "line" filters have a far more nonlinear response than a lum filter (specifically on the slopes), and provide more attenuation than a lum filter even within their passbands. So I'd judge the lum's accuracy across the wavelength region of interest to be better than that of a synthetic luminance created from the RGB combination.

 

LRGB vs RGB...LRGB for the win.

 

Even with my OSC camera, I find a synthetic luminance layer gives me an easier path through processing than going without, so it's not much of a leap to think a true luminance capture would exceed a synthetic luminance layer built from a mono-derived RGB trio.



#15 sharkmelley

sharkmelley

    Gemini

  • *****
  • Posts: 3,019
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 27 April 2017 - 01:40 PM

It's a very interesting question and the answer might depend on which metric of quality you are using. I haven't read the PI thread, so these thoughts are my own.

 

For a given amount of imaging time, LRGB acquisition will invariably collect more photons, which one would assume must be a good thing for keeping overall noise down. Certainly, if the data is processed optimally, there must be lower noise in the L channel.  But LRGB acquisition will also invariably collect fewer colour-differentiated photons, i.e. there will be fewer photons that distinguish colour, and this must lead to increased chroma noise.  So the question becomes: does the lower-noise L channel outweigh the noisier chroma?  It's possibly a question of human perception and I don't pretend to know the answer.
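Mark's photon-count argument can be sanity-checked numerically. The Monte Carlo sketch below (toy rates and noise values, all hypothetical) sends the same photon stream either into one L exposure or into three equal, gapless RGB exposures that are summed into a synthetic L. Absent read noise the two SNRs are identical; each extra readout tips the balance slightly toward the real L:

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 2.0            # e-/s reaching the sensor through the L filter (toy value)
t_total = 900.0       # 15 min of L, or 3 x 5 min of R, G, B
read_noise = 7.0      # e- RMS per readout (toy value)
n = 1_000_000         # simulated background pixels

# Real L: the full photon stream, one readout per pixel.
L = rng.poisson(rate * t_total, n) + rng.normal(0.0, read_noise, n)

# Synthetic L: the same stream split into thirds (idealized gapless
# RGB filters), each third read out separately, then summed.
syn = sum(rng.poisson(rate * t_total / 3.0, n).astype(float)
          + rng.normal(0.0, read_noise, n)
          for _ in range(3))

snr_real = L.mean() / L.std()
snr_syn = syn.mean() / syn.std()
# Shot noise is identical in both; the synthetic L pays 3x the
# read-noise variance, so snr_real comes out slightly ahead.
```

With these numbers the gap is small (read-noise variance of 49 vs 147 e-² on 1800 e- of signal), which supports Mark's point: if read noise is negligible and the filters have no gaps, the two should be effectively identical.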

 

Separately, Jon's example of Ken's data makes no sense to me.  15 min of L should have captured the same number of photons as 45 min of RGB. So the real and synthetic L image should look identical.  Unless, of course, read noise became an important factor in the acquisition of the separate R,G & B channels.  In our debating I think we ought to assume that the data has been acquired in such a manner that read noise is not a factor.

 

Mark

 

Disclaimer:  I'm exclusively a DSLR imager so I have no practical experience of LRGB at all.


Edited by sharkmelley, 27 April 2017 - 01:44 PM.


#16 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,253
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 April 2017 - 02:05 PM

Separately, Jon's example of Ken's data makes no sense to me.  15 min of L should have captured the same number of photons as 45 min of RGB. So the real and synthetic L image should look identical.  Unless, of course, read noise became an important factor in the acquisition of the separate R,G & B channels.  In our debating I think we ought to assume that the data has been acquired in such a manner that read noise is not a factor.

I believe the discrepancy here is that the RGB filters have LP gaps. While the lack of LP gaps in the L will certainly let in more LP, it will also let in more object signal (especially from broadband objects like galaxies). So while the total exposure times are technically the same from a full-spectrum standpoint, those gaps in the RGB filters are likely hurting them here. Since most LRGB filter sets include gaps these days, I think this is the norm rather than the exception. In particular, the gap in the AstroDon E-series G2 LRGB filters is quite large, the bandpass of the R filter tapers slightly, and on top of that the L filter extends a little deeper into the reds than the red filter itself. So there is definitely going to be more total light per unit time with the L than with the RGB. Beyond that, there may be nuances to the RGB filters that we don't know about and can't properly account for in an analysis.


Edited by Jon Rista, 27 April 2017 - 02:06 PM.


#17 Peter in Reno

Peter in Reno

    Voyager 1

  • *****
  • Posts: 10,781
  • Joined: 15 Jul 2008
  • Loc: Reno, NV

Posted 27 April 2017 - 02:08 PM

One key exception, which I think Ken brought up, might affect your IQ when using an L filter: if your scope is not well corrected, the main issue being blue light scattering too much, then stars shot with an L filter could bloat compared to the RGB. However, I would point out that this was an issue Ken thought he had...but if you look at my normalized stretch in Comparison 3 above, you should find there is no difference in star bloat at all. ;) (Which tells me that Ken has a well corrected scope! ;P)

Compound scopes (using mirrors) are not affected by chromatic aberration. Among refractors, Tak APOs probably have the best color correction, especially at the low end of the blue spectrum.

 

I have often suggested to others to try pseudo-Lum, and I usually get negative feedback because some people can't tell the difference, but I can in my images. My main reason for bringing this up is that people often don't have real Lum, and adding pseudo-Lum to RGB is better than nothing.

 

In my case, for a while I never imaged with a Lum filter for one main reason. My TEC 140 was not correcting the blue spectrum below 438nm, which resulted in big blue star bloat, and my Sony ICX-694 is very blue-violet sensitive, which can make some big, bright blue stars look out of focus. I was imaging without a Field Flattener (FF) or Reducer, because the TEC 140 has a pretty large flat field for my smallish Sony ICX-694 CCD.

 

Recently I discovered the TEC FF actually helps correct the blue spectrum, and I am getting very good results using Astrodon E-Series RGB filters, especially the Blue filter, but I have not yet tried the Luminance, thanks to many months of bad weather. I look forward to trying the following total exposure times:

 

Lum: 180 minutes

R: 60 minutes

G: 60 minutes

B: 60 minutes

 

This is a total of 360 minutes, or 6 hours, unbinned. In the past I have imaged with only the RGB filters, using 120 minutes per filter, which is also a total of 360 minutes, also unbinned. I use pseudo-Lum as well.

 

If your camera has low blue sensitivity, like the KAF-8300, then chromatic aberration in your APO scope may not be apparent.

 

Peter


  • Dereksc likes this

#18 Alex McConahay

Alex McConahay

    Cosmos

  • *****
  • Posts: 8,927
  • Joined: 11 Aug 2008
  • Loc: Moreno Valley, CA

Posted 27 April 2017 - 02:13 PM

Just my opinion, but this thread really should be where it started, back in CCD imaging. I do not see how it helps Beginning and Intermediate Imagers. As Bob pointed out in starting it, the big guys have not decided on an answer. All this discussion can do for beginning and intermediate imagers is bring up lots of things to worry about that do not really improve their imaging.

 

Alex



#19 sharkmelley

sharkmelley

    Gemini

  • *****
  • Posts: 3,019
  • Joined: 19 Feb 2013
  • Loc: UK

Posted 27 April 2017 - 02:15 PM

 

Separately, Jon's example of Ken's data makes no sense to me.  15 min of L should have captured the same number of photons as 45 min of RGB. So the real and synthetic L image should look identical.  Unless, of course, read noise became an important factor in the acquisition of the separate R,G & B channels.  In our debating I think we ought to assume that the data has been acquired in such a manner that read noise is not a factor.

I believe the discrepancy here is that the RGB filters have LP gaps. While the lack of LP gaps in the L will certainly let in more LP, it will also let in more object signal (especially from broadband objects like galaxies). So while the total exposure times are technically the same from a full-spectrum standpoint, those gaps in the RGB filters are likely hurting them here. Since most LRGB filter sets include gaps these days, I think this is the norm rather than the exception. In particular, the gap in the AstroDon E-series G2 LRGB filters is quite large, the bandpass of the R filter tapers slightly, and on top of that the L filter extends a little deeper into the reds than the red filter itself. So there is definitely going to be more total light per unit time with the L than with the RGB. Beyond that, there may be nuances to the RGB filters that we don't know about and can't properly account for in an analysis.

 

Thanks for the explanation.  It adds yet another interesting dimension to the problem. ;)

 

Mark


Edited by sharkmelley, 27 April 2017 - 02:16 PM.


#20 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,253
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 April 2017 - 02:32 PM

 

 

Separately, Jon's example of Ken's data makes no sense to me.  15 min of L should have captured the same number of photons as 45 min of RGB. So the real and synthetic L image should look identical.  Unless, of course, read noise became an important factor in the acquisition of the separate R,G & B channels.  In our debating I think we ought to assume that the data has been acquired in such a manner that read noise is not a factor.

I believe the discrepancy here is that the RGB filters have LP gaps. While the lack of LP gaps in the L will certainly let in more LP, it will also let in more object signal (especially from broadband objects like galaxies). So while the total exposure times are technically the same from a full-spectrum standpoint, those gaps in the RGB filters are likely hurting them here. Since most LRGB filter sets include gaps these days, I think this is the norm rather than the exception. In particular, the gap in the AstroDon E-series G2 LRGB filters is quite large, the bandpass of the R filter tapers slightly, and on top of that the L filter extends a little deeper into the reds than the red filter itself. So there is definitely going to be more total light per unit time with the L than with the RGB. Beyond that, there may be nuances to the RGB filters that we don't know about and can't properly account for in an analysis.

 

Thanks for the explanation.  It adds yet another interesting dimension to the problem. ;)

 

Mark

 

Yeah, very true.

 

The thing that stands out most to me, though, assuming there were no gaps and the total bandpass was identical between the L and the RGB, is that it took 15 minutes to get the signal with L and 45 minutes with RGB. That's a huge difference in time.

 

One thing I have not tested sufficiently: what is "sufficient" RGB integration? Frank brought up an interesting point in the original CCD forum thread about color accuracy. I've always assumed that as long as the relative differences between the color channels were maintained, using an L (of any form) shouldn't hurt...but I've never actually verified it one way or the other.


  • PirateMike likes this

#21 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,253
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 April 2017 - 02:34 PM

 

One key exception, which I think Ken brought up, might affect your IQ when using an L filter: if your scope is not well corrected, the main issue being blue light scattering too much, then stars shot with an L filter could bloat compared to the RGB. However, I would point out that this was an issue Ken thought he had...but if you look at my normalized stretch in Comparison 3 above, you should find there is no difference in star bloat at all. ;) (Which tells me that Ken has a well corrected scope! ;P)

Compound scopes (using mirrors) are not affected by chromatic aberration. Among refractors, Tak APOs probably have the best color correction, especially at the low end of the blue spectrum.

Just a side note here...but I've found that Canon L-series supertelephoto lenses are extremely well corrected as well. I don't think they will deliver stars quite as nice as a Tak, but from a correction standpoint, they are phenomenal. This is one of the reasons I've stuck with my Canon lens for so long. I'd love a Tak FSQ, or a TEC140...maybe someday. It sounds like both are pretty well corrected lenses.



#22 Jerry Lodriguss

Jerry Lodriguss

    Vendor

  • *****
  • Vendors
  • Posts: 7,387
  • Joined: 19 Jul 2008
  • Loc: Voorhees, NJ

Posted 27 April 2017 - 02:45 PM

 

 

 

Is LRGB fundamentally better for quality, or is RGB fundamentally better for quality, assuming you're willing to be less efficient with time?

Hi Bob,

 

What metric do you want to use to quantify "fundamentally better for quality"?

 

You have to be specific on a question like this.

 

Jerry

 

Excellent question.  Not sure I have a good answer.

 

The most obvious would be a more accurate representation of the object, in both color and detail.  But more accurate in some technical sense, or more accurate to our perceptions?  It's clear, reading the PI forum, that PI sometimes does things to better match our perceptions. It's also clear that our perceptions can alter the technical presentation of the data.

 

Getting specific here is not easy.

 

If you can't specify how you are going to quantify it, then how could you possibly decide?  Personal opinions?

 

If you can't quantify it, you're not going to get a real answer to this question.

 

Jerry


  • PirateMike likes this

#23 bmhjr

bmhjr

    Apollo

  • ****-
  • Posts: 1,369
  • Joined: 02 Oct 2015
  • Loc: Colorado

Posted 27 April 2017 - 02:50 PM

My ignorance aside, what is the difference between RGB + synthetic L and OSC + synthetic L?  If synthetic L really has a benefit, shouldn't it be standard procedure with OSC as well?



#24 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,253
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 27 April 2017 - 03:52 PM

My ignorance aside, what is the difference between RGB + synthetic L and OSC + synthetic L?  If synthetic L really has a benefit, shouldn't it be standard procedure with OSC as well?

I think this is the real question...many of us, including myself, question whether a synthetic L derived from the RGB/OSC data provides any intrinsic benefit. I think there may be extrinsic benefits...making it easier for the imager to process the L and enhance details. But from a pure data/SNR standpoint, it's the same information...you aren't adding anything new, so there is no way it could improve SNR versus just using the RGB/OSC data itself.

 

Many people, including myself, find that processing an L channel (real or synthetic) for detail is easier than trying to see and improve those same details in the color data alone...perhaps the color information itself gets in the way of our perception of the details; when all you have is light, dark, and stark contrast in the L channel, details are easier to see and enhance. But I never found any real improvement in SNR when using a synthetic L with OSC data.
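Jon's point, that a synthetic L is just a re-expression of data you already have, can be shown with a toy simulation (all values hypothetical). The synthetic L beats any single channel on per-pixel SNR, but only because it *is* the channel average; it carries nothing the RGB stack didn't already contain:

```python
import numpy as np

rng = np.random.default_rng(1)
sky = 300.0                      # mean background level per channel (e-, toy)
n = 500_000                      # simulated background pixels
r, g, b = (rng.poisson(sky, n).astype(float) for _ in range(3))

syn_L = (r + g + b) / 3.0        # "synthetic luminance"...
avg = (r + g + b) / 3.0          # ...which is literally the channel average

snr_single = r.mean() / r.std()          # ~ sqrt(300) for Poisson noise
snr_syn = syn_L.mean() / syn_L.std()     # ~ sqrt(900): sqrt(3)x better than any
                                         # one channel, identical to the stack
```

So a synthetic L can help perceptually, as a convenient surface to sharpen and stretch, but it cannot raise the SNR ceiling the way a real L exposure (new photons) can.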


  • mikefulb, bmhjr and the Elf like this

#25 Peter in Reno

Peter in Reno

    Voyager 1

  • *****
  • Posts: 10,781
  • Joined: 15 Jul 2008
  • Loc: Reno, NV

Posted 27 April 2017 - 03:56 PM

If synthetic Lum helps enhance details, then why is SNR so important? This is for people who don't (yet) have real Lum data.

 

Peter



