A fresh take on synthetic RGB from narrowband

24 replies to this topic

#1 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 20 May 2019 - 09:57 AM

(Edited to reflect points in the discussion)

 

Short version: I derived a physically-based formula to synthesize RGB "true color" images from narrowband Hydrogen-alpha (Ha), OIII, and SII filters:

 

R = 0.5*Ha + 0.5*SII
G = 0.5*OIII + 0.094*Ha
B = 0.295*OIII + 0.1*Ha

 

 

And, I like how it looks. Here's "Frank's Special Blend" on 4 hours of the Eagle Nebula (M16), side by side with a more typical HOO->RGB mapping. Both were processed identically apart from the PixelMath equation used in PixInsight:

 

FranksBlendHOO-Take3.jpg

 

 

Here's my full image on Astrobin.

 

Long version:

 

So, here's how I arrived at these numbers. Please check my assumptions and math; I'm still pretty new to astro-imaging and I am definitely not an astrophysicist, so if someone else already arrived at these same numbers or if I'm doing something stupid, just tell me.

 

Looking at spectra of emission nebulae yields a couple of insights:

 

  • Hydrogen-beta emissions are generally about 20% of Hydrogen-alpha emissions. I think it makes sense that this ratio remains fairly constant; I mean, it's the same element just excited a couple of different ways. But these are very different wavelengths to our eyes. That's nothing new; I've seen others use this as justification for blending some Hydrogen-alpha into your blue channel (credit to Richard Wright for enlightening me on the physical basis of this, though.)
  • Similarly, OIII emissions are actually (mostly) on two emission lines, with the secondary one being about 39% of the primary in strength.

 

So, we can synthesize Hydrogen-beta emissions if we have Hydrogen-alpha, and we can also get a couple of different colors from OIII using a similar thought process.

 

The wavelengths we have to work with are:

 

Hydrogen-alpha: 656 nm

OIII: 501 nm (primary) and 496 nm (secondary) - although these numbers are very close, they're quite distinct colors to our eyes.

SII: 672 nm

Hydrogen-beta: 486 nm

 

Now, we can convert those wavelengths to RGB values.

 

Hydrogen-alpha and SII are both straight up red to our eyes, or an RGB color of (1.0, 0.0, 0.0)

The primary OIII line converts to the RGB values (0, 1.0, 0.53), and the secondary is (0, 1.0, 0.75).

Hydrogen-beta converts to (0, 0.94, 1.0)

 

So we end up with:

 

R = Ha + SII  (both Ha and SII emissions are pure red to our eyes)

G = 0.72*OIII + 0.28*OIII + 0.2*0.94*Ha  (this is the green contribution from the primary and secondary lines of OIII, weighted by their relative strength as they are both captured by an OIII filter, and the contribution of Hydrogen-beta, which has a G of 0.94 and further is 20% of Hydrogen-alpha)

B = 0.72*0.53*OIII + 0.28*0.75*OIII + 0.2*Ha (again, this is the blue values from OIII, weighted by their contribution to the total OIII signal, and adding in the synthesized Hydrogen-beta data as 20% of Hydrogen-alpha.)

 

Simplified, this is the same as:

 

R = Ha + SII
G = OIII + 0.188*Ha
B = 0.592*OIII + 0.2*Ha
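As a sanity check, the coefficient arithmetic above can be reproduced in a few lines of Python (a sketch; the perceptual RGB values and line-strength ratios are the ones assumed in this post, not measurements):

```python
# Re-derive the unscaled blend coefficients from the post's assumptions.
OIII_PRIMARY = (0.0, 1.0, 0.53)    # 501 nm line as perceptual RGB
OIII_SECONDARY = (0.0, 1.0, 0.75)  # 496 nm line as perceptual RGB
HBETA = (0.0, 0.94, 1.0)           # 486 nm line as perceptual RGB
HB_TO_HA = 0.2                     # assumed Hb/Ha emission ratio
SEC_TO_PRI = 0.39                  # 496 nm line strength relative to 501 nm

# An OIII filter captures both lines, so weight each by its share of
# the total OIII signal:
w_pri = 1.0 / (1.0 + SEC_TO_PRI)          # ~0.72
w_sec = SEC_TO_PRI / (1.0 + SEC_TO_PRI)   # ~0.28

g_from_oiii = w_pri * OIII_PRIMARY[1] + w_sec * OIII_SECONDARY[1]
b_from_oiii = w_pri * OIII_PRIMARY[2] + w_sec * OIII_SECONDARY[2]
g_from_ha = HB_TO_HA * HBETA[1]    # synthetic Hb riding along with Ha
b_from_ha = HB_TO_HA * HBETA[2]

print(round(g_from_oiii, 3), round(b_from_oiii, 3),
      round(g_from_ha, 3), round(b_from_ha, 3))
```

This reproduces the simplified coefficients: G = OIII + 0.188*Ha and B = 0.592*OIII + 0.2*Ha.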

 

But, if some stars are near saturation in the linear images you're combining, this could result in them getting clipped. We need to scale everything down by 50% to prevent that:

 

R = 0.5*Ha + 0.5*SII
G = 0.5*OIII + 0.094*Ha
B = 0.295*OIII + 0.1*Ha
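In PixInsight these are three PixelMath expressions; for experimenting outside PI, the same blend can be sketched in Python/NumPy (the image arrays below are toy values, not real data):

```python
import numpy as np

def franks_blend(ha, oiii, sii):
    """Apply the post's final (half-scaled) coefficients to linear
    narrowband frames normalized to [0, 1]; returns an RGB stack."""
    r = 0.5 * ha + 0.5 * sii
    g = 0.5 * oiii + 0.094 * ha
    b = 0.295 * oiii + 0.1 * ha
    # The 0.5 scale should already keep values in range, but clip
    # defensively in case the inputs aren't normalized:
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Toy values: a fully saturated star pixel and a faint nebula pixel.
ha = np.array([1.0, 0.6])
oiii = np.array([1.0, 0.3])
sii = np.array([1.0, 0.2])
rgb = franks_blend(ha, oiii, sii)
print(rgb)
```

Even a pixel saturated in all three filters stays inside [0, 1], which is the point of the 50% scaling.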

 

In theory this only works on emission nebulae, and furthermore on emission nebulae that are excited mostly in Hydrogen, Oxygen, and Sulfur. Some nebulae have strong Nitrogen components too, and I didn't factor that in.

 

The stars with this look tight and still have some color to them, which is good. But I think I'll take some real RGB data just for the stars on this image later on - I don't think you can really capture most stars' true color with NB filters.

 

So, how real is it? Well, we can compare it to a RGB image from the ESO's La Silla Observatory:

 

eso0926a.jpg

 

Image credit: ESO

 

So, reality seems to lie somewhere in between my formula and HOO. But maybe it's just the impact of light pollution on my raw data... I took my image at the start of this post from a "red zone" on the night before a full moon! That probably did not-so-great things to the quality of my OIII data. It will be interesting to see how it works under better conditions.

 

However, I think it's pretty darn exciting that a guy in a suburban driveway with a 7-inch telescope can produce an image that holds its own against a 2.2-meter telescope in the Atacama desert using narrowband techniques.

 

Hope this proves useful to others, or launches you on your own exploration for the perfect color mapping.

 

-Frank Kane

Boldly Going


Edited by BoldlyGoing, 21 May 2019 - 07:11 AM.


#2 OldManSky

OldManSky

    Fly Me to the Moon

  • *****
  • Posts: 6,616
  • Joined: 03 Jan 2019

Posted 20 May 2019 - 10:43 AM

Interesting result, Frank, and I thank you for exploring the possibilities. :)



#3 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 20 May 2019 - 10:55 AM

Interesting stuff!

 

You definitely brought out the OIII more, you can see the cloud of it extending past just the core in your "Frankified" version, which I believe is indeed real. I myself usually blend some Ha into blue to account for Hb.

 

I am curious, though...why you went beyond a maximum combined level of 1.0 in the green channel. You have 1.3x OIII and 0.188x Ha. Since those are added, your values can peak at 1.488, almost 50% above the maximum level of 1.0 in PI's floating point range. I think this has two consequences.

 

1) You can tell that it saturated more stars in the Frank version, and the stars it did not saturate are more intense.

2) It added a green cast to everything.

 

I have a similar question about simply adding Ha and SII together without some kind of scaling to normalize them within the 0-1 range, as that will have a similar result of pushing values beyond the 1.0 maximum float level in PI for stars. 
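Jon's point can be illustrated numerically, using the unscaled green coefficients he quotes (this is a sketch of the clipping behavior, not PixInsight's actual internals):

```python
# With the unscaled coefficients, a star near full scale in both Ha and
# OIII pushes green well past PixInsight's [0, 1] float range:
oiii, ha = 1.0, 1.0
g_unscaled = 1.3 * oiii + 0.188 * ha   # peaks at 1.488
g_clipped = min(g_unscaled, 1.0)       # clipping back to range discards
                                       # the relative channel balance
print(g_unscaled, g_clipped)
```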


Edited by Jon Rista, 20 May 2019 - 10:56 AM.


#4 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 36,196
  • Joined: 27 Oct 2014

Posted 20 May 2019 - 11:52 AM

I agree that something in between your two would be good; the first is a bit salmon pink for my taste, and the second does not show the reflection component as well.

 

I prefer either to the professional image.  Among other things, there's the artifacts around bright stars.

 

I defy anyone to say which is more "realistic", since, even if we could travel to a point with that perspective, there's no way the nebula appears to our eyes as any of those three.  Mostly shades of gray, with perhaps a red cast, would be my guess (and it's just that).

 

I agree with Jon that it's a bit too green.  I wonder what it would look like, if, at the end, you simply dialed the green down a bit.



#5 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 20 May 2019 - 11:59 AM

I agree that something in between your two would be good; the first is a bit salmon pink for my taste, and the second does not show the reflection component as well.

There are no reflection nebulae shown in NB images. The blue channel is OIII emissions. The reflection component would only be picked up by RGB imaging. The OIII component is definitely GAS though, a cloud of oxygen gas that is diffuse, not nearly as defined as the stronger and more structured hydrogen and sulfur. 
 
 
 

I defy anyone to say which is more "realistic", since, even if we could travel to a point with that perspective, there's no way the nebula appears to our eyes as any of those three.  Mostly shades of gray, with perhaps a red cast, would be my guess (and it's just that).

 
And I will absolutely say the ESO broad spectrum image is more realistic. This whole notion that because the human eye couldn't see it this way means it doesn't exist is straight up cow manure! This stuff is not difficult. It is light, emitted by stars and gasses, at wavelengths we can detect, measure, and analyze. It doesn't matter if the weak puny human eye couldn't see this. A digital detector IS a detector, and it SEES just the same...it just sees better. Now, how much bias has been introduced into the ESO image, and how that may have skewed colors from the actual wavelengths being emitted, I cannot say...however, the ESO broad spectrum image is definitely picking up information that is not capable of being picked up with narrow band imaging...such as the aforementioned reflections off the gas and dust. 
 
So, defy all you want...but this stuff boils down to physics and the nature of light. It doesn't matter how each individual perceives it, or that the human eye is too weak to capture enough photons to see this as depicted here. That does not change the fact that light IS being emitted, across the entire visible spectrum, from narrow band emission gasses as well as stars as well as reflections off of gas and dust, not to mention the absorption and reemission factors of the dust itself! The wavelength range of the visible spectrum is well defined, the names we have given to each range of wavelengths are well defined, our detectors and filters are capable of separating various wavelengths, and the processes necessary to reconstruct a full color image from that information are well understood. The ESO image is more accurate and more realistic. It contains far more information than a narrow band image can.

Now, the OP's image is still excellent, and nothing here invalidates it; but the fact remains that the ESO image represents a much broader range of information.

Edited by Jon Rista, 20 May 2019 - 12:07 PM.


#6 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 20 May 2019 - 12:01 PM

Interesting stuff!

 

You definitely brought out the OIII more, you can see the cloud of it extending past just the core in your "Frankified" version, which I believe is indeed real. I myself usually blend some Ha into blue to account for Hb.

 

I am curious, though...why you went beyond a maximum combined level of 1.0 in the green channel. You have 1.3x OIII and 0.188x Ha. Since those are added, your values can peak at 1.488, almost 50% above the maximum level of 1.0 in PI's floating point range. I think this has two consequences.

 

1) You can tell that it saturated more stars in the Frank version, and the stars it did not saturate are more intense.

2) It added a green cast to everything.

 

I have a similar question about simply adding Ha and SII together without some kind of scaling to normalize them within the 0-1 range, as that will have a similar result of pushing values beyond the 1.0 maximum float level in PI for stars. 

Yeah, I just went where the math led me, but didn't think about stars getting saturated even in linear space. The right thing to do would be to scale every coefficient by half to protect against that:

 

R = 0.5*Ha + 0.5*S2

G = 0.65*OIII + 0.094*Ha

B = 0.3775*OIII + 0.1*Ha

 

Star colors in narrowband are kind of a dodgy thing to begin with, but no point in losing data if we don't have to.



#7 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 20 May 2019 - 12:04 PM

I agree with Jon that it's a bit too green.  I wonder what it would look like, if, at the end, you simply dialed the green down a bit.

I think that's just an artifact of the extreme light pollution the OIII was gathered under. Normally I would tweak that a bit, but I didn't want to introduce anything subjective for the purpose of this post.

I would be very curious to see how this math works out for people who have better data to work with.

 

[Edit: much of the extra green turned out to be a math error, that's since been corrected above.]


Edited by BoldlyGoing, 21 May 2019 - 07:03 AM.


#8 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 20 May 2019 - 12:11 PM

Yeah, I just went where the math led me, but didn't think about stars getting saturated even in linear space. The right thing to do would be to scale every coefficient by half to protect against that:
 
R = 0.5*Ha + 0.5*S2
G = 0.65*OIII + 0.094*Ha
B = 0.3775*OIII + 0.1*Ha
 
Star colors in narrowband are kind of a dodgy thing to begin with, but no point in losing data if we don't have to.


So, here you have also scaled down the blue channel contributions, and that won't actually change the *relative* distributions of signal in blue and green vs. red, so the green cast will likely remain here. I definitely think that OIII contribution to blue needs to be higher than ~40%...much higher. I think the update to the red channel is better. If you want to get more accurate, you may want to scale Ha and SII relative to the differences in the quantum efficiency of your detector (Q.E. at the SII line is usually lower than at the Ha line, sometimes by enough of a margin that compensating for it can help SII structure.)
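Jon's QE-compensation suggestion could be sketched as follows; note that the QE numbers here are invented placeholders, not real sensor figures, so you would substitute values from your camera's published QE curve:

```python
# Sketch of compensating SII for lower quantum efficiency vs the Ha line.
# Both QE values are hypothetical placeholders for illustration only:
QE_HA = 0.50    # assumed QE at 656 nm
QE_SII = 0.45   # assumed QE at 672 nm (often somewhat lower than at Ha)

sii_scale = QE_HA / QE_SII   # boost SII so equal fluxes match Ha response
sii_raw = 0.40               # example SII pixel value
sii_corrected = sii_raw * sii_scale
print(round(sii_scale, 4), round(sii_corrected, 4))
```

The corrected SII frame would then feed into the red-channel blend in place of the raw one.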


Edited by Jon Rista, 20 May 2019 - 12:12 PM.


#9 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,796
  • Joined: 12 Oct 2007

Posted 20 May 2019 - 01:17 PM

[image attachment]

The method described here is how I have been doing narrowband color for several years - but without sii. Here is an example and you can see the description in the text.

The key difference is that I allow some scaling of each filter because they aren't calibrated in the first place. But the ratios for each line are fixed and based on perceptual rgb ratios.

I avoid calling it “true” color and instead it is “perceptual” and deterministic.

So the ha channel itself is mapped to rgb based on response to the ha wavelength - along with rgb for a synthetic Hb line at I think 30%. And the oiii channel is mapped to perceptual rgb also. Then I combine them all linearly but with scaling applied to oiii to provide a somewhat balanced output. For some nebulae the result would otherwise be too green or purple. But I scale the rgb for oiii and not simply the g or b.

That means the colors aren’t just arbitrary and can be described by a single aesthetic choice - which is the scaling done on the oiii channel. The rest is all data driven and based on perceptual rgb values.

I don’t think it’s a good assumption that the response to the ha channel is the same as oiii without doing some kind of calibration. But the overall look will be more consistent and less arbitrary.

The main thing is that for me the colors are more deterministic and data driven. And for bicolor they can be specified by a single number, which is the scaling done on the oiii rgb relative to the ha/hb combined rgb.

Other Frank

Edited by freestar8n, 20 May 2019 - 01:27 PM.


#10 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 20 May 2019 - 01:38 PM

[image attachment]

The method described here is how I have been doing narrowband color for several years - but without sii. Here is an example and you can see the description in the text.

The key difference is that I allow some scaling of each filter because they aren't calibrated in the first place. But the ratios for each line are fixed and based on perceptual rgb ratios.

I avoid calling it “true” color and instead it is “perceptual” and deterministic.

So the ha channel itself is mapped to rgb based on response to the ha wavelength - along with rgb for a synthetic Hb line at I think 30%. And the oiii channel is mapped to perceptual rgb also. Then I combine them all linearly but with scaling applied to oiii to provide a somewhat balanced output. For some nebulae the result would otherwise be too green or purple. But I scale the rgb for oiii and not simply the g or b.

That means the colors aren’t just arbitrary and can be described by a single aesthetic choice - which is the scaling done on the oiii channel. The rest is all data driven and based on perceptual rgb values.

I don’t think it’s a good assumption that the response to the ha channel is the same as oiii without doing some kind of calibration. But the overall look will be more consistent and less arbitrary.

The main thing is that for me the colors are more deterministic and data driven. And for bicolor they can be specified by a single number, which is the scaling done on the oiii rgb relative to the ha/hb combined rgb.

Other Frank

Thanks - you're right, there's a lot going on between the light emitted from an object and what ultimately ends up in your initial set of integrated images for each channel. In my image it's pretty clear that OIII is stronger than it should be, and I shouldn't feel dirty by scaling the raw inputs as needed. But I too like starting from something with some sort of deterministic physical basis.

Franks of the world unite :)


Edited by BoldlyGoing, 20 May 2019 - 01:40 PM.


#11 H-Alfa

H-Alfa

    Viking 1

  • -----
  • Posts: 842
  • Joined: 21 Sep 2006

Posted 20 May 2019 - 02:45 PM

Interesting approach... I especially like the star colors.
I will try it on my next attempt at narrowband. Thanks for sharing! :)

Sent from my ANE-LX1 via Tapatalk

#12 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,796
  • Joined: 12 Oct 2007

Posted 20 May 2019 - 05:59 PM

Thanks - you're right, there's a lot going on between the light emitted from an object and what ultimately ends up in your initial set of integrated images for each channel. In my image it's pretty clear that OIII is stronger than it should be, and I shouldn't feel dirty by scaling the raw inputs as needed. But I too like starting from something with some sort of deterministic physical basis.

Franks of the world unite :)

For me there's no need to feel dirty because ultimately my goal is to let the object reveal itself - and minimize the aesthetic choices made for it to be seen.   And express those choices in as few parameters as possible.

 

If you compare your colors to mine - they are very similar - because they are tied directly to the data and to a perceptual model.

 

In my case the values I use are:

 

HaR = 1.0

 

HBG = 239.0/255.0

HBB = 1.0

 

HBRatio  = 0.3

 

OiiiG = 1.0

OiiiB = 135.0/255.0

 

What this says is that Ha is perceived as pure red, while HBeta is perceived as 0.94 Green and 1.0 Blue.

 

And Oiii is perceived as 1.0 Green and 0.53 Blue.

 

Then I assume the ratio of HB to Ha is 0.3 (though it can vary between and within nebulae)

 

And I give some scaling for Oiii relative to Ha to allow both lines to be seen.  If Oiii completely dominates Ha then it won't be visible at all.  So instead I scale it - and then state the scaling.  For the example above it is unscaled - but for NGC2020 I needed to scale Oiii by a factor of 0.3:

 

[image attachment]

 

The overall color scheme is the same because it is set by the eye's response to Ha, HB and Oiii - but if you look at it knowing the factor of Oiii, you can say - ok the object on the left has much stronger Oiii emission than on the right - but overall the scene is dominated by Oiii.

 

The only aesthetic choice in terms of color is a single, stated number: 0.3.

 

Once the colors are set by a linear combination - I then stretch it to reveal the faint detail - and then I am done.

 

This is not for everyone and there is nothing wrong with doing arbitrary manipulation of the r g b for some pleasing look if that's what you want.  But it's not the only way to do narrowband - and I like the consistency and interpretability of this approach.  And it's great to see someone else do narrow band and end up with a very similar look - by doing a similar data driven and model based approach.

 

Sometimes people say they like my colors and others don't.  But for me it isn't really a choice I made - it just comes out of the data.

 

There is a separate question of what the HB/Ha ratio is - and I did a study of that here with the tarantula nebula:

 

https://www.cloudyni...mg-oag-cge-pro/

 

In that case I used both H-alpha and H-beta filters - and measured the signal ratio.  It varies quite a bit within the tarantula nebula.  So you know that 0.3 or 0.2 won't be exactly right.  But unless you use an H-Beta filter, all you can do is guess.  So go ahead and guess - and state the value you used.  Which is what you did with 0.2 and I did with 0.3.  Either way - it's data driven and model based - which I personally really appreciate - so you can see the object not the imager.  And you can compare one object to another in a quantitative way.

 

I haven't done this with full tricolor, including Sii, but what you did makes sense and looks good.  It just won't have as much impact because it is pure red - but you can see how parts of a nebula vary from deep red to purple.

 

Oh - and if your Ha filter isn't 3nm then you will be blending in Nii - which will skew the amount of HB being blended in.  Again there is nothing you can do about it - so just say what you did and why.  In my case I use 3nm so I know it is just the Ha signal.

Frank



#13 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 21 May 2019 - 07:02 AM

This discussion made me realize that adding together the two spikes from OIII isn't the right thing to do, since an OIII filter picks up the energy from both of them. I modified my formulae to treat them as a weighted average instead. This had the effect of getting rid of a lot of that extra green cast that people picked up on! This is really exciting - as we get closer to how the real physics work, the image gets visibly better.

 

I updated the original post with the new results and a new side-by-side image. The Astrobin link has also been updated to the new version.


Edited by BoldlyGoing, 21 May 2019 - 07:11 AM.


#14 scopenitout

scopenitout

    Surveyor 1

  • -----
  • Posts: 1,720
  • Joined: 24 Aug 2013

Posted 21 May 2019 - 09:37 AM

I'm going to try Boldly's formula on the next chance I get.

But... the ESO (RGB) shot has seriously enviable color. Too bad the brighter stars have some vicious halos. Are they using the ASI1600 and cheapo filters ;-)

#15 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 21 May 2019 - 10:47 AM

 

Oh - and if your Ha filter isn't 3nm then you will be blending in Nii - which will skew the amount of HB being blended in.  Again there is nothing you can do about it - so just say what you did and why.  In my case I use 3nm so I know it is just the Ha signal.

 

Yeah, that effect could end up being huge on some objects. In the ring nebula / M57 for example, there's more red contributed from NII than Ha, and depending on the bandwidth of your filter you could get some or none of that data.

 

Another interesting find is this other image I stumbled across of the Eagle done in RGB: http://cs.astronomy....lae/492246.aspx

 

Pretty darn close to the synthetic result. Makes me wonder if the ESO's image actually could be replicated if it were taken in narrowband using the same optics and conditions.



#16 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 36,196
  • Joined: 27 Oct 2014

Posted 21 May 2019 - 11:22 AM

Yeah, that effect could end up being huge on some objects. In the ring nebula / M57 for example, there's more red contributed from NII than Ha, and depending on the bandwidth of your filter you could get some or none of that data.

 

Another interesting find is this other image I stumbled across of the Eagle done in RGB: http://cs.astronomy....lae/492246.aspx

 

Pretty darn close to the synthetic result. Makes me wonder if the ESO's image actually could be replicated if it were taken in narrowband using the same optics and conditions.

I don't think so, not precisely.  A great deal of the spectrum is wiped out in narrowband.  You could manipulate what's left of the data to resemble it; that's how people image stuff with a CLS. 

 

Astronomical objects emit radiation in complicated ways.   Mapping that to our (often very different between individuals) eyes and brains is a slippery business.

 

You're doing well at it.

 

Re that Eagle.  A bit salmon pink for my taste, particularly the outer reaches where you would expect very red Ha to dominate.


Edited by bobzeq25, 21 May 2019 - 11:27 AM.


#17 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 21 May 2019 - 11:29 AM

I suspect the salmon hue ultimately comes from LP and my lack of skill in background extraction. I hope to see the results others get with this approach, with better data and technique.



#18 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,796
  • Joined: 12 Oct 2007

Posted 21 May 2019 - 05:06 PM

This discussion made me realize that adding together the two spikes from OIII isn't the right thing to do, since an OIII filter picks up the energy from both of them. I modified my formulae to treat them as a weighted average instead. This had the effect of getting rid of a lot of that extra green cast that people picked up on! This is really exciting - as we get closer to how the real physics work, the image gets visibly better.

 

I updated the original post with the new results and a new side-by-side image. The Astrobin link has also been updated to the new version.

If you don't like the amount of green you can always scale the Oiii - because you haven't calibrated it in the first place.  The QE is different and the system transmission is different - among other things.  I think it would be hard to calibrate the response to the narrowband filters without a calibrated reference - but stars may do ok as references if you know the spectra are well behaved.

 

If you don't scale you are just assuming the response of Ha and Oiii is the same.  Instead you can say - I don't know what it is exactly - but I used this scaling and it looks right.

 

Frank



#19 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 22 May 2019 - 03:50 PM

This discussion made me realize that adding together the two spikes from OIII isn't the right thing to do, since an OIII filter picks up the energy from both of them. I modified my formulae to treat them as a weighted average instead. This had the effect of getting rid of a lot of that extra green cast that people picked up on! This is really exciting - as we get closer to how the real physics work, the image gets visibly better.

 

I updated the original post with the new results and a new side-by-side image. The Astrobin link has also been updated to the new version.

New version does appear to be an improvement. No longer seeing any green cast.



#20 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,796
  • Joined: 12 Oct 2007

Posted 22 May 2019 - 05:59 PM

I suspect the salmon hue ultimately comes from LP and my lack of skill in background extraction. I hope to see the results others get with this approach, with better data and technique.

If the values shown in your astrobin page are up to date - they appear to be in line with what I have been using for some years - except for 0.2 vs. 0.3 for the HB multiplier.

 

Yours are:

 

R = 0.5*Ha + 0.5*SII
G = 0.55*OIII + 0.094*Ha
B = 0.295*OIII + 0.1*Ha

 

and expressed the same way, mine are (with Sii added as pure red):

 

R = 0.5*Ha + 0.5*SII
G = 0.5*OIII + 0.14*Ha
B = 0.26*OIII + 0.15*Ha

 

This is the linear sum of *perceived* contributions assuming HB is 0.3 Ha - and assuming all lines are received equally.  Which probably isn't the case since response to Oiii is probably better than Ha or Sii.  But it's a good guess.
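Expressed as a script, freestar8n's stated parameters do reduce to those per-channel coefficients (the 0.5 is the same global anti-clipping scale used earlier in the thread):

```python
# Reduce freestar8n's per-line parameters to per-channel coefficients.
HA_R = 1.0
HB_G = 239.0 / 255.0     # ~0.937, Hb perceived green
HB_B = 1.0               # Hb perceived blue
HB_RATIO = 0.3           # assumed Hb/Ha emission ratio
OIII_G = 1.0             # OIII perceived green
OIII_B = 135.0 / 255.0   # ~0.529, OIII perceived blue
SCALE = 0.5              # global anti-clipping factor

r_ha = SCALE * HA_R                 # -> 0.5*Ha in red
g_ha = SCALE * HB_RATIO * HB_G      # synthetic Hb contribution to green
b_ha = SCALE * HB_RATIO * HB_B      # synthetic Hb contribution to blue
g_oiii = SCALE * OIII_G             # -> 0.5*OIII in green
b_oiii = SCALE * OIII_B             # -> ~0.26*OIII in blue

print(round(g_ha, 2), round(b_ha, 2), round(b_oiii, 2))
```

Rounded to two places this matches the G = 0.5*OIII + 0.14*Ha and B = 0.26*OIII + 0.15*Ha expressions above.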

 

And there is no reason for any of this to match an OSC or RGB image since this only accounts for a small number of specific lines - and ignores Nii, Ar - and of course reflection nebulosity.  It is an attempt to represent the perceived appearance of the specific subset of emission lines - under certain assumptions.

 

Frank


Edited by freestar8n, 22 May 2019 - 06:01 PM.


#21 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 23 May 2019 - 11:47 AM

If the values shown in your astrobin page are up to date - they appear to be in line with what I have been using for some years - except for 0.2 vs. 0.3 for the HB multiplier.

 

It's awesome that we independently arrived at similar results. And agreed we can't hope to 100% replicate a real RGB image this way, but I still find the results exciting.

 

Based on your observations I've added a symbol in PixelMath to allow variation of the O3 ratio to make it a little easier to tweak. Thanks again for your insights.



#22 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 23 May 2019 - 12:03 PM

It's awesome that we independently arrived at similar results. And agreed we can't hope to 100% replicate a real RGB image this way, but I still find the results exciting.

 

Based on your observations I've added a symbol in PixelMath to allow variation of the O3 ratio to make it a little easier to tweak. Thanks again for your insights.

I agree, I find it exciting to blend in a manner that is true to the real-world emission levels of the various different bands involved. There is a certain aspect of personal discovery involved, where you learn just how hydrogen and sulfur and oxygen (and maybe even nitrogen, if you can use a 3nm NII filter) compare relative to each other in a realistic sense. 

 

When it comes to Hb and OIII, something you could also try to account for, if you can find any useful information about it (or perhaps just with some educated guesswork) is the extinction factor that interstellar dust between earth and the object may have on the bluer end of the spectrum...and scale your Hb and OIII contributions accordingly. I usually blend around 5-20% for Hb myself...and exactly how much depends on two factors: whether the color I get at 20% even remotely appears realistic (if it results in too strong a purple color, then I usually scale it back), and how much dust appears to exist between the object and Earth. In Cygnus, for example, there is a lot of dust around there, but how much may scatter blue light also depends on where in Cygnus you are imaging, so it can be interesting fiddling around with Ha scaling in the blue channel until you find a result that feels like it is modeling reality appropriately. 

 

One thing I have been experimenting with for a while now is a linear alignment to align the channels, rather than a linear fit. I feel a linear fit may force some changes to the aligned channels that are not necessarily appropriate, due to the way it will redistribute values based on the linear fit between the target and reference channel. A linear alignment simply shifts every pixel by the same constant. I use the following in PixelMath to do the alignment:

 

RGB/K: $T + (median(<ref_bg_crop>) - median($T_bg_crop))

 

This assumes the images are registered to each other. It also assumes you have identified an area of "true background" signal (basically the darkest region common to all of the channels you are aligning) using a preview replicated to each channel. You then extract those previews into images and use them as references in the PixelMath formula. This aligns the images on the median of that "true background" region, and does so without redistributing the signals in any way (which can break the true natural differences between the signals.) 
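Jon's PixelMath expression can be re-expressed in Python/NumPy to see exactly what it does (the arrays here are toy values standing in for registered channels and their background previews):

```python
import numpy as np

def align_background(target, ref_bg, target_bg):
    """Shift every pixel of `target` by one constant so its background
    median matches the reference's background median (a NumPy sketch of
    the PixelMath expression above)."""
    return target + (np.median(ref_bg) - np.median(target_bg))

# Toy registered channels sharing a dark "true background" region
# (the first two pixels); the target sits 0.05 above the reference.
ref = np.array([0.10, 0.11, 0.50, 0.90])
target = np.array([0.15, 0.16, 0.55, 0.95])
bg = slice(0, 2)
aligned = align_background(target, ref[bg], target[bg])
print(aligned)
```

Because every pixel moves by the same constant, the relative structure within the channel is untouched; only the background pedestal shifts.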



#23 BoldlyGoing

BoldlyGoing

    Explorer 1

  • -----
  • topic starter
  • Posts: 56
  • Joined: 20 Jan 2019

Posted 23 May 2019 - 01:08 PM

One thing I have been experimenting with for a while now is a linear alignment to align the channels, rather than a linear fit. I feel a linear fit may force changes to the aligned channels that are not necessarily appropriate, because of the way it redistributes values based on the fit between the target and reference channel. A linear alignment simply shifts every pixel by the same constant.

I've avoided linear fit for the same reason. For the image above, I just used background extraction followed by background neutralization as an attempt to get the right balance. Your approach sounds a lot more principled though. I'll have to internalize that; I'm still fairly new to PixInsight. Looking at your Astrobin you're clearly doing something right! Your narrowband Rosette is stunning.



#24 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 26,034
  • Joined: 10 Jan 2014

Posted 23 May 2019 - 01:36 PM

I've avoided linear fit for the same reason. For the image above, I just used background extraction followed by background neutralization as an attempt to get the right balance. Your approach sounds a lot more principled though. I'll have to internalize that; I'm still fairly new to PixInsight. Looking at your Astrobin you're clearly doing something right! Your narrowband Rosette is stunning.

Yeah, have a crack at linear alignment. There are probably ways to do it even better; it is still largely an experimental thing (it actually came out of my sensor-analysis procedures, where I was trying to get all the various sensors I was analyzing properly normalized, and that formula was part of how I did it). I started out just using the mean or median of the whole image, but that sometimes does not work well, so I started using crops from previews around the same dark area of each aligned channel. It may not really be best to align on the darkest area like that... or it may be that you just need to find an area of each image that you know (educated guess or otherwise) should be the same relative intensity in every channel, and use that area for alignment.

 

It may also be that using multiple crops of various dark areas of the image, combined together as alignment references, would be better. Anyway, stuff to explore. I keep meaning to write up a PI script to simplify the process, too, but I haven't had time to do it.
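Continuing the NumPy sketch from earlier in the thread, the multi-crop idea might look like the following. This is my own guess at one reasonable way to combine several background regions (pooling the pixels before taking the median), not a description of any existing PI script:

```python
import numpy as np

def pooled_offset(target, reference, bg_regions):
    # Pool pixels from several candidate background crops before taking
    # the median, so no single region dominates the estimate. Each
    # region is a (row-slice, col-slice) pair valid for both images.
    ref_px = np.concatenate([reference[r].ravel() for r in bg_regions])
    tgt_px = np.concatenate([target[r].ravel() for r in bg_regions])
    return np.median(ref_px) - np.median(tgt_px)

# Toy example: true background offset between the channels is 0.03
rng = np.random.default_rng(1)
ref = 0.05 + 0.001 * rng.standard_normal((100, 100))
tgt = 0.02 + 0.001 * rng.standard_normal((100, 100))
regions = [(slice(0, 15), slice(0, 15)),
           (slice(80, 100), slice(80, 100))]
tgt_aligned = tgt + pooled_offset(tgt, ref, regions)
```

Pooling the pixels (rather than averaging per-crop medians) weights each region by its area, which may or may not be what you want; averaging the per-crop medians instead would weight each region equally.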

 

And thanks re: the Rosette.


Edited by Jon Rista, 23 May 2019 - 01:36 PM.


#25 freestar8n

freestar8n

    MetaGuide

  • *****
  • Freeware Developers
  • Posts: 13,796
  • Joined: 12 Oct 2007

Posted 23 May 2019 - 04:52 PM

I don't do a linear fit because I assume the channels are equally responsive. And I assume that because, without calibration, I don't know the difference in response.

 

If you have stars with known spectra that have been confirmed to be useful for narrowband calibration - then that would tell you the scaling needed.  In that case you just need to align the zero baseline of the channels.

 

But I don't see any reason to do a linear fit of one channel to another.  They are separate signals doing their own things.  And any time you make an aesthetic choice - that somewhat defeats the purpose of using a model in the first place.

 

The main thing I aim for is a data-driven image with as few parameter choices as possible.  The zero baseline comes from the data, the colors of each emission line are based on perceptual response - and the scaling of the lines is either 1.0 or some other number based on something else.  But the whole image can be described by a few numbers.

 

Then the whole thing is stretched to reveal faint background detail - but that is standard in nature photography and not inherently prone to artifact as long as it is done globally.

 

Frank



