CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Pixinsight: What does LocalNormalization do?

12 replies to this topic

#1 astrovienna

astrovienna

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2622
  • Joined: 04 Dec 2006
  • Loc: The NoVa White Zone

Posted 22 June 2018 - 07:57 AM

I seem to be the last one who doesn't understand it . . . Recently I used it with good results to blend in subframes that were rotated 45 degrees off the rest of the subs.  But now I'm using it on a globular cluster, and I find that it's leaving haze between the small stars on the periphery of the cluster. Without LNorm, that space is nice and black.  I've tried 128 and 256 for the settings.  128 left dark halos around the stars, and 256 solved that.  BTW, I also discovered that the integrated master with LNorm had a very high background - around 0.07 - vs 0.004 without LNorm.  I'm not sure why that would be.  Thanks for any help.

 

Kevin



#2 ChrisWhite

ChrisWhite

    Skylab

  • *****
  • Posts: 4009
  • Joined: 28 Feb 2015
  • Loc: Colchester, VT

Posted 22 June 2018 - 08:07 AM

Check this out:  https://www.cloudyni...on-illustrated/


  • bobzeq25 likes this

#3 pfile

pfile

    Aurora

  • -----
  • Posts: 4801
  • Joined: 14 Jun 2009

Posted 22 June 2018 - 10:54 AM

not to dump on dr mike again, but his initial gif is nothing 'special'. normalization always has to happen behind the scenes in ImageIntegration or else you could not properly do pixel rejection. you would get the same result loading everything into Blink and asking for a per-image STF, which is akin to what II is doing behind the scenes (but without the stretch).
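To make that concrete, here is a rough numpy sketch of the idea - not PixInsight's actual code, just an illustration of fitting each sub to a reference with a single global scale and offset, so that the per-pixel statistics used for rejection become meaningful:

```python
import numpy as np

def global_normalize(img, ref):
    """Fit img -> ref with a single scale and offset (least squares):
    one linear fit for the whole frame, i.e. the 'global' normalization
    that integration has to do internally before pixel rejection."""
    a, b = np.polyfit(img.ravel(), ref.ravel(), 1)
    return a * img + b

# toy stack: the same scene seen through different sky levels/transparency
rng = np.random.default_rng(0)
scene = rng.uniform(0.1, 0.9, size=(64, 64))
subs = [0.8 * scene + 0.05, 1.0 * scene, 1.2 * scene + 0.10]

normed = [global_normalize(s, subs[1]) for s in subs]

# per-pixel spread across the stack collapses once frames are normalized,
# so a sigma-based rejection can now find genuine outliers
spread_before = np.std(np.stack(subs), axis=0).mean()
spread_after = np.std(np.stack(normed), axis=0).mean()
```

Without that step, the frame-to-frame offsets would swamp any real outliers you are trying to reject.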

 

that being said, what localnormalization tries to do is to normalize the image in smaller chunks vs. just applying a single linear fit to the entire image. in theory this should help remove gradients and maybe leftover bad flattening of dust spots and things, as long as the LN reference is pristine.
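A minimal sketch of that "in chunks" idea, using a simple tile-median background estimate (PixInsight's real LN algorithm is more sophisticated - this only shows the principle of matching local levels to a reference):

```python
import numpy as np

def local_background(img, tile=16):
    """Crude local background: per-tile medians, upsampled to full size.
    The tile size plays roughly the role of LN's Scale parameter."""
    h, w = img.shape
    med = np.median(img.reshape(h // tile, tile, w // tile, tile), axis=(1, 3))
    return np.repeat(np.repeat(med, tile, axis=0), tile, axis=1)

def local_normalize(img, ref, tile=16):
    """Shift img so its local background matches the reference's,
    chunk by chunk, instead of with one global offset."""
    return img + (local_background(ref, tile) - local_background(img, tile))

rng = np.random.default_rng(1)
ref = rng.normal(0.1, 0.01, size=(64, 64))        # flat-sky reference
sub = ref + np.linspace(0.0, 0.2, 64)[None, :]    # same sky plus a gradient

fixed = local_normalize(sub, ref)
err_before = np.abs(sub - ref).mean()
err_after = np.abs(fixed - ref).mean()    # gradient largely removed
```

A single global linear fit could not remove that gradient, because the offset varies across the frame - that is exactly the case LN is meant for.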

 

i feel like LN is somewhat of an advanced tool but because a bunch of tutorials just "blindly" put it in the flow, people have started using it and find that it is very difficult to tune properly. for my part i've found that although it can flatten out the background in galaxy images, that for whatever reason it is tweaking the target pixels too much and making good color balancing impossible. i havent had the patience to keep tweaking it to figure out what's wrong, since i can accomplish the background flattening fine with DBE.

 

rob


  • Jeff2011, Jon Rista, bmhjr and 2 others like this

#4 drmikevt

drmikevt

    Viking 1

  • *****
  • Posts: 814
  • Joined: 22 Jun 2015
  • Loc: Burlington, VT

Posted 22 June 2018 - 11:34 AM

It's okay, Rob.  You can dump on me - as long as it's in the spirit of trying to put out correct information.  Your perspective is always helpful.  Certainly, this process should not be a default workflow step.

 

I understand that "you would get the same result loading everything into Blink and asking for a per-image STF" but the point was that it was not doing that - that it was the same STF applied to all images in both gifs.  Doesn't that mean that the images were successfully normalized to each other?  It may not have been too special, but at least it worked...?  For me, it did seem to improve the final SNR of the master.  

 

And, while we are discussing things, "normalization always has to happen behind the scenes in ImageIntegration or else you could not properly do pixel rejection" - yes, but isn't this for pixel rejection calculations only?  II is not actually changing the images to make them more normalized before stacking, is it?  If so, it would seem to be in conflict with the weighting algorithm (why weight the images if they are all going to be normalized?).  It is certainly possible that I do not understand things fully enough.

 

I have no choice but to image on less than perfect nights, resulting in wide ranges in background levels and gradients.  For me, this process has proven helpful in most cases.  I understand that many people find that it introduces artifacts.  Maybe their data is just not bad enough.



#5 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22668
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 22 June 2018 - 11:34 AM

I am with Rob here. Local Normalization aims to normalize the images through local evaluation, rather than global evaluation. It can eliminate certain differences between the subs if it is done right, producing a more consistent integration that has less severe (or no) gradients.

 

That said, using it well is not as easy as it is often made out to be. More often than not, when LN is applied according to the simplistic explanations circulating these days, the results are worse than if LN had not been used at all. It can cause mottling of the data, or the addition, rather than the elimination, of medium- to large-scale artifacts. Take care when using it, and adjust the settings to the scale of your image to get the best results. This may take some trial and error, which can take a while to do right, since multiple pre-processing tools are involved and the full dataset needs to be run each time.


  • pfile likes this

#6 astrovienna

astrovienna

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2622
  • Joined: 04 Dec 2006
  • Loc: The NoVa White Zone

Posted 22 June 2018 - 11:44 AM

Thanks folks. Your description helps explain the result on the Abell 21 image, which was shot across many different nights, each with its own gradient.  It sounds like the general idea of LNorm is to try to match the background (and gradient) of a single sub, so that the integrated master has only a simple gradient to be removed rather than a complex sum of many different gradients.  I had read that thread, and the forum discussion that it linked to, but hadn't fully understood it.

 

I have no choice but to image on less than perfect nights, resulting in wide ranges in background levels and gradients.  For me, this process has proven helpful in most cases.  I understand that many people find that it introduces artifacts.  Maybe their data is just not bad enough.

Oh, shooting LRGB through Bortle 8 skies makes my data plenty bad enough, Mike!  :) 

 

I appreciate the guidance that LNorm may not be a standard step in my workflow.  It was starting to sound like everyone was using it that way, and I couldn't understand what I was missing.  I'll compare results going forward.

 

Kevin



#7 pfile

pfile

    Aurora

  • -----
  • Posts: 4801
  • Joined: 14 Jun 2009

Posted 22 June 2018 - 12:30 PM

It's okay, Rob.  You can dump on me - as long as it's in the spirit of trying to put out correct information.  Your perspective is always helpful.  Certainly, this process should not be a default workflow step.

 

I understand that "you would get the same result loading everything into Blink and asking for a per-image STF" but the point was that it was not doing that - that it was the same STF applied to all images in both gifs.  Doesn't that mean that the images were successfully normalized to each other?  It may not have been too special, but at least it worked...?  For me, it did seem to improve the final SNR of the master.  

 

And, while we are discussing things, "normalization always has to happen behind the scenes in ImageIntegration or else you could not properly do pixel rejection" - yes, but isn't this for pixel rejection calculations only?  II is not actually changing the images to make them more normalized before stacking, is it?  If so, it would seem to be in conflict with the weighting algorithm (why weight the images if they are all going to be normalized?).  It is certainly possible that I do not understand things fully enough.

 

I have no choice but to image on less than perfect nights, resulting in wide ranges in background levels and gradients.  For me, this process has proven helpful in most cases.  I understand that many people find that it introduces artifacts.  Maybe their data is just not bad enough.

 

yes, the images were successfully normalized to one another. the STF/blink thing was just an expedient example. probably using ImageContainer with LinearFit would be more like the LN process in the sense that it will write out normalized images to disk which then in theory would work OK with the same STF, or very nearly so. i think the point is that we've never had to manually normalize because II just does it internally. i think since LN is complex enough, Juan decided to make it its own process and pass the results to II thru sidecar files rather than putting a bunch of new controls in II itself. but in spirit it's part of the integration task.

 

i just read the source for ImageIntegration and while it's a little hard to follow (C++ can be a bit obtuse), i think it's the normalized pixels that are stacked (after rejection). i suppose that makes sense, since you've already gone thru the trouble of normalizing them.
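For anyone curious what "reject, then stack the normalized pixels" looks like in miniature, here is a toy numpy version - not the actual ImageIntegration code, just a simple MAD-based kappa-sigma clip followed by an average of the survivors:

```python
import numpy as np

def sigma_clip_stack(subs, kappa=3.0):
    """Reject pixels more than kappa robust-sigmas from the per-pixel
    median, then average the surviving (already-normalized) pixels."""
    stack = np.stack(subs)
    med = np.median(stack, axis=0)
    # robust sigma from the median absolute deviation (MAD)
    sig = 1.4826 * np.median(np.abs(stack - med), axis=0)
    mask = np.abs(stack - med) <= kappa * np.maximum(sig, 1e-12)
    return np.sum(stack * mask, axis=0) / np.maximum(mask.sum(axis=0), 1)

rng = np.random.default_rng(4)
subs = [rng.normal(0.1, 0.01, (32, 32)) for _ in range(8)]
subs[0][10, 10] = 10.0    # a cosmic-ray hit in one sub
result = sigma_clip_stack(subs)
```

The hit at (10, 10) is clipped out, and that pixel of the result is just the mean of the seven clean values.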

 

as far as weighting is concerned, by default if you are weighting by SNR for instance, normalization is not going to change the SNR of a frame. so it still makes sense to do weighting whether or not you normalize.
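That is easy to check numerically: normalization is an affine map a*x + b, and since the background-subtracted signal and the noise are both multiplied by the same factor a, their ratio is untouched. A quick toy demonstration (made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(2)
signal, sky, noise_sigma = 0.30, 0.05, 0.02
frame = sky + signal + rng.normal(0, noise_sigma, size=100_000)

def snr(x, background):
    """Background-subtracted signal over noise."""
    return (x.mean() - background) / x.std()

# an arbitrary affine normalization: scale by a, shift by b
a, b = 1.7, 0.04
normed = a * frame + b

snr_before = snr(frame, sky)
snr_after = snr(normed, a * sky + b)   # identical to snr_before
```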

 

i also have terrible skies/gradients (and apparently internal reflections in my refractor off of the flattener) and recently decided to try LN using a stacked and carefully DBE'd frame as the reference. while the background of the LN'd integration looked great, i could tell that the subject (M101) had been changed by the LN routine - the lanes of the galaxy were darker (more contrast) than they were in the non-LN image. and then when i did all 3 channels and combined them (with unique references in LN of course), the galaxy color was all weird even after PCC. doing LinearFit to one channel's LN and non-LN frames and then dividing the two, i could see that there were a lot of differences in the galaxy itself. at that point i got tired of messing with LN settings and decided to just go the traditional route. maybe there was a set of LN settings that would have done the right thing, i don't know.

 

things might be different in a "busier" image? all i know is that on these widefield galaxy images, LN has been very hard for me to perfect.

 

rob



#8 drmikevt

drmikevt

    Viking 1

  • *****
  • Posts: 814
  • Joined: 22 Jun 2015
  • Loc: Burlington, VT

Posted 22 June 2018 - 02:22 PM

Rob

 

Thanks - that is all very interesting.  Thinking about it as an extension of the algorithms in II is helpful.  And, I had thought of the comparison to LinearFit, but hopefully LN is also addressing some gradients. 

 

It's also interesting - I have only used this on NB images so far.  Maybe the intricate detail of galaxies is something it just can't deal with.



#9 DaveB

DaveB

    Apollo

  • -----
  • Posts: 1240
  • Joined: 21 Nov 2007
  • Loc: New England

Posted 22 June 2018 - 02:52 PM

I seem to be the last one who doesn't understand it . . . Recently I used it with good results to blend in subframes that were rotated 45 degrees off the rest of the subs.  But now I'm using it on a globular cluster, and I find that it's leaving haze between the small stars on the periphery of the cluster. Without LNorm, that space is nice and black.  I've tried 128 and 256 for the settings.  128 left dark halos around the stars, and 256 solved that.  BTW, I also discovered that the integrated master with LNorm had a very high background - around 0.07 - vs 0.004 without LNorm.  I'm not sure why that would be.  Thanks for any help.

 

Kevin

From my experience, I'm not surprised that LNorm is creating a foggy haze between the stars in a globular. I rarely use LNorm now because I usually wasn't seeing any benefit and sometimes would see problems. I found that if the sky background around a bright star wasn't dark (due to light pollution, the moon, a nebula, etc.), there would be a slight brightening of the background around that star, and LNorm would accentuate that brightening into a foggy halo. In a globular, the background is bright due to dim stars between the brighter ones. Increasing the Scale number increases the radius around any given pixel that is used to normalize that pixel. If the value is too small, it amplifies local variations; as you increase the Scale, that localized effect is dampened.
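The Scale effect is easy to demonstrate with a toy example. This sketch uses per-tile means as a crude stand-in for LN's local background estimate (not the real algorithm): a bright star contaminates the estimate far more at a small scale than at a large one, and that contamination is what becomes a halo once the correction is applied:

```python
import numpy as np

def tile_background(img, tile):
    """Per-tile mean, upsampled - a crude stand-in for the neighborhood
    size set by LN's Scale parameter."""
    h, w = img.shape
    m = np.mean(img.reshape(h // tile, tile, w // tile, tile), axis=(1, 3))
    return np.repeat(np.repeat(m, tile, axis=0), tile, axis=1)

img = np.full((64, 64), 0.05)    # flat 0.05 sky
img[28:36, 28:36] = 0.90         # a bright "star" - signal, not background

# how far above the true sky (0.05) the background estimate is pulled
leak_small = tile_background(img, tile=8).max() - 0.05    # small Scale
leak_large = tile_background(img, tile=32).max() - 0.05   # large Scale
```

With the small tile the star dominates its neighborhood and drags the estimate way up; the larger tile averages over much more true sky, which is the dampening effect described above.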

 

See https://www.cloudyni...them/?p=8410642 for an example.


  • Wwilmoth69 likes this

#10 Wwilmoth69

Wwilmoth69

    Ranger 4

  • -----
  • Posts: 319
  • Joined: 06 Jun 2017

Posted 15 March 2019 - 07:34 AM

From my experience, I'm not surprised that LNorm is creating a foggy haze between the stars in a globular. I rarely use LNorm now because I usually wasn't seeing any benefit and sometimes would see problems. I found that if the sky background around a bright star wasn't dark (due to light pollution, the moon, a nebula, etc.), there would be a slight brightening of the background around that star, and LNorm would accentuate that brightening into a foggy halo. In a globular, the background is bright due to dim stars between the brighter ones. Increasing the Scale number increases the radius around any given pixel that is used to normalize that pixel. If the value is too small, it amplifies local variations; as you increase the Scale, that localized effect is dampened.

See https://www.cloudyni...them/?p=8410642 for an example.



I'm just starting to use LN and drizzle integration, and I get the same fog around bright stars and clusters.

I thought I was picking up dust clouds at first, haha.

I love the drizzle, but I think my problem lies in my LN settings.

I'm not sure where to start changing the settings in LN. Has anyone figured out better-than-default settings for it?

Or can you drizzle without having to do LN? I would like to see if there's a difference, but it says missing LN data, so I assume DrizzleIntegration needs LN to work - but I'm a newb, so I don't know.

#11 Jeff2011

Jeff2011

    Gemini

  • *****
  • Posts: 3473
  • Joined: 01 Jan 2013
  • Loc: Sugar Land, TX

Posted 15 March 2019 - 07:52 AM

I have found LN useful if I image from my light polluted backyard which has hugely varying gradients over the course of an imaging session and over multiple nights.   From a dark site, I have found that LN does not help much and does more harm than good.



#12 WadeH237

WadeH237

    Skylab

  • *****
  • Posts: 4007
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 15 March 2019 - 11:35 PM

From my experience, I'm not surprised that LNorm is creating a foggy haze between the stars in a globular. I rarely use LNorm now because I usually wasn't seeing any benefit and sometimes would see problems. I found that if the sky background around a bright star wasn't dark (due to light pollution, the moon, a nebula, etc.), there would be a slight brightening of the background around that star, and LNorm would accentuate that brightening into a foggy halo. In a globular, the background is bright due to dim stars between the brighter ones. Increasing the Scale number increases the radius around any given pixel that is used to normalize that pixel. If the value is too small, it amplifies local variations; as you increase the Scale, that localized effect is dampened.

 

See https://www.cloudyni...them/?p=8410642 for an example.

LocalNormalization is only as good as the reference frame that you use with it.

 

In my case, I specifically use it when I have subs where high, thin clouds have moved through, resulting in gradients or other patches of brightness.  My method is to go through the stack of subs and find a set with none of those effects - essentially properly illuminated fields.  I integrate that subset to get a single frame without excessive gradients, run DBE on it to flatten the background, and then use that frame as the reference for LocalNormalization.
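The frame-selection part of that workflow can be sketched in a few lines of numpy. The helper below is hypothetical (a real run would pick the cloud-free subs by eye and flatten the integration with DBE before using it as the LN reference); it just uses a low median background as a crude proxy for "no clouds":

```python
import numpy as np

def build_ln_reference(subs, keep_fraction=0.5):
    """Keep the subs with the lowest median background (a crude stand-in
    for visually picking cloud-free frames) and average them into a
    candidate LN reference. Hypothetical helper, for illustration only."""
    med = np.array([np.median(s) for s in subs])
    keep = np.argsort(med)[: max(1, int(len(subs) * keep_fraction))]
    return np.mean([subs[i] for i in keep], axis=0)

rng = np.random.default_rng(3)
clean = [rng.normal(0.05, 0.01, (32, 32)) for _ in range(4)]
cloudy = [rng.normal(0.25, 0.05, (32, 32)) for _ in range(2)]  # hazy, bright subs

ref = build_ln_reference(clean + cloudy, keep_fraction=0.6)  # cloudy subs dropped
```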

 

I don't see any of the effects that you are describing, and I am very happy with the way it lets me use subs that I might otherwise throw away due to illumination problems.



#13 roofkid

roofkid

    Viking 1

  • -----
  • Posts: 584
  • Joined: 13 Jan 2016

Posted 16 March 2019 - 03:40 AM

I have the same opinion as Jon here. I have experimented a lot with it but never got good results - only worse ones. That's why I only use "global" normalization for now, and my integration results have been great.



