
What really is the best reference frame for Local Normalization


#1 AstroPics

AstroPics

    Ranger 4

  • *****
  • topic starter
  • Posts: 326
  • Joined: 11 Jan 2017
  • Loc: Atlanta, GA

Posted 09 October 2017 - 01:54 PM

I have been simply selecting the subframe with the best SNR as the reference for Local Normalization in PixInsight. After some additional reading, I am starting to question that. If I understand its actual purpose correctly, local normalization is intended to reduce gradients and balance exposures across subframes.

 

Case in point: in my last imaging session, my L frames had very strong light pollution at the beginning of the series, because I had started far from the meridian. As the night progressed, my final L frame had the weakest contribution from light pollution (it was taken very close to the meridian). BUT, according to SubframeSelector, my first subframe had the best SNR.

 

So, Local Normalization is intended to 'balance' subframes. It seems I would actually want to use the last L subframe in my example above, then? Am I correct in assuming that all the subframes will be normalized to match the exposure and light gradient of the reference frame? How do people select the optimal Local Normalization reference subframe?

 

Also, has anyone played with the normalization scale? Does reducing this value make it work better at dealing with smaller gradient structures? My interest here is not so much on light pollution gradients but blobby background noise artifacts, potentially due to lack of dithering.

 

Lastly, there are two normalization options in ImageIntegration: one for pixel rejection and one for combination. I have been using local normalization for both, but I wonder whether there are any pros/cons to using it for pixel rejection.



#2 Stelios

Stelios

    Cosmos

  • *****
  • Moderators
  • Posts: 8,318
  • Joined: 04 Oct 2003
  • Loc: West Hills, CA

Posted 09 October 2017 - 03:52 PM

I've always wondered about how SNR is calculated. I'm having a devil of a time combining frames from across sessions. I shot Ha on two separate moonless, cloudless nights, and SNR on the 2nd night was 20-30% higher. No earthly reason. 

 

At other times I've had a long sequence of frames where ostensibly the SNR drops--to the point where blink won't even show the nebula (Helix). But if you open the subframe *without* blink, then it shows up beautifully and is very clean. The SNR weighting would mean that the frame is underutilized.

 

I'm using the defaults as WK's book recommends, and hope I can gain some understanding so I can operate in non-zombie mode.



#3 AstroPics

AstroPics

    Ranger 4

  • *****
  • topic starter
  • Posts: 326
  • Joined: 11 Jan 2017
  • Loc: Atlanta, GA

Posted 10 October 2017 - 04:23 PM

A little puzzled about SNR as well. For the noise calculation at least (in the NoiseEvaluation script), the noise figure can be converted to electrons, for comparison with the camera's read noise, by taking the sigma(K) value and multiplying it by the camera's gain (e-/ADU) times the maximum ADU value.

 

For example, in a Ha master image, my sigma(K) value was 1.088e-4 with my ASI1600MM. I was running at G139, which equates to a gain of 1 e-/ADU. At 12 bits, the max ADU is 4096. So with those values, my noise is 0.4456 e-. The full well at G139 is around 4k e-, so the noise contribution is pretty minor.
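
To make that arithmetic concrete, here is a minimal sketch in plain JavaScript, using the hypothetical values quoted above (so purely illustrative):

    // Convert PI's normalized noise estimate to electrons.
    // Values below are the ones quoted above for an ASI1600MM at G139.
    var sigmaK = 1.088e-4; // sigma(K) from NoiseEvaluation, normalized [0,1]
    var gain   = 1.0;      // e-/ADU at gain setting 139 (unity gain)
    var maxADU = 4096;     // 12-bit ADC
    var noiseElectrons = sigmaK * maxADU * gain;
    console.log(noiseElectrons.toFixed(4) + ' e-'); // -> 0.4456 e-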

 

It would probably be more insightful to evaluate noise on a per sub basis to ensure that the read noise is being exceeded by the recommended 10x factor.

 

All that being said, I still don't understand some of the non-intuitive SNR calculations. I wonder if PI somehow ignores strong light pollution during noise evaluation, i.e., it treats strong light pollution as valid signal rather than noise (and excludes those pixels from the noise calculation).



#4 DaveB

DaveB

    Apollo

  • *****
  • Posts: 1,392
  • Joined: 21 Nov 2007
  • Loc: Maryland

Posted 10 October 2017 - 04:32 PM

Did you ask this on the PI forum? You might get insight from one of the developers there.



#5 AstroPics

AstroPics

    Ranger 4

  • *****
  • topic starter
  • Posts: 326
  • Joined: 11 Jan 2017
  • Loc: Atlanta, GA

Posted 10 October 2017 - 04:37 PM

That's a good idea. I'll post a question over there and see what they say.



#6 FiremanDan

FiremanDan

    Aurora

  • *****
  • Posts: 4,814
  • Joined: 11 Apr 2014
  • Loc: Virginia

Posted 10 October 2017 - 05:46 PM

I saw it suggested someplace that you could use a stacked image as a reference. I added 2 hours of Ha on a project and used the previous 4-hour stack as the reference. It seemed to work well. I didn't need to run DBE on it first; in fact, I think I might even have used the DBE'd 4-hour stack as the reference.



#7 jlmanatee

jlmanatee

    Mariner 2

  • *****
  • Posts: 202
  • Joined: 03 Oct 2012
  • Loc: SE Minnesota

Posted 10 October 2017 - 06:09 PM

I use the evaluation criteria recommended by Kayron Mercieca (Light Vortex Astronomy)  in the Subframe Selector script to select the best light frames.  Then I use the frame that had the best calculated "weight" as the reference.  The "weight" is a combination of FWHM, eccentricity and SNR, calculated and plotted for each frame.  


Edited by jlmanatee, 10 October 2017 - 06:11 PM.

  • okiedrifter, WesC, FiremanDan and 1 other like this

#8 NorthField

NorthField

    Viking 1

  • *****
  • Posts: 987
  • Joined: 01 Jun 2017
  • Loc: SW Missouri

Posted 10 October 2017 - 06:18 PM

^ That's what I do too. I run a quick ABE on it first just because it feels right, but Kayron says that doesn't make any difference...

#9 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,085
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 10 October 2017 - 06:27 PM

I've always wondered about how SNR is calculated. I'm having a devil of a time combining frames from across sessions. I shot Ha on two separate moonless, cloudless nights, and SNR on the 2nd night was 20-30% higher. No earthly reason.


If you guys are curious about how PI measures noise, this is how:
 
https://pixinsight.c...description_002
 
In SubframeSelector, the SNRWeight value is MeanDeviation^2/Noise^2. MeanDeviation is the MAD (Mean Absolute Deviation) from the median, and Noise here is calculated using the above approach.
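
As a rough sketch of that formula in plain JavaScript (illustrative only, not PI's actual implementation; the mean-absolute-deviation estimator here is a naive stand-in, and 'noise' would come from the MRS algorithm described in the link):

    // SNRWeight = MeanDeviation^2 / Noise^2, per the description above.
    function meanAbsDevFromMedian(values) {
      var s = values.slice().sort(function (a, b) { return a - b; });
      var median = s[Math.floor(s.length / 2)];
      var sum = 0;
      for (var i = 0; i < values.length; ++i)
        sum += Math.abs(values[i] - median);
      return sum / values.length;
    }
    function snrWeight(pixels, noise) {
      var d = meanAbsDevFromMedian(pixels);
      return (d * d) / (noise * noise);
    }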

At other times I've had a long sequence of frames where ostensibly the SNR drops--to the point where blink won't even show the nebula (Helix). But if you open the subframe *without* blink, then it shows up beautifully and is very clean. The SNR weighting would mean that the frame is underutilized.
 
I'm using the defaults as WK's book recommends, and hope I can gain some understanding so I can operate in non-zombie mode.


Well, what you see with an STF is NOT representative of the true nature of the data. Normalization, which is performed by ImageIntegration to make the data in all the frames compatible, is essential to getting the best SNR. Weighting subs is a part of that process, and if a sub actually has poor quality data, it SHOULD be weighted down so that it doesn't hurt the SNR of your final integration. Don't let what you can observe visually with an STF fool you. STF is designed to make all subs look similar, as it attempts to make the background noise LOOK the same regardless of the data. But it lies! Sadly, I think STF has led to many misconceptions about the true nature of people's data. It is very useful for what it was intended to do, but it does not normalize, and as such it greatly skews data comparisons if you are not aware of how it works or the true nature of your data.
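
To illustrate why an auto STF makes very different subs look alike: the auto-stretch effectively chooses a midtones balance per frame so that each frame's background lands at roughly the same screen brightness. A minimal sketch (the midtones transfer function is the standard one; the auto-stretch heuristic described in the comments is simplified):

    // Midtones transfer function: maps x in [0,1] given midtones balance m,
    // with MTF(m) = 0.5. An auto-STF effectively solves for the m that puts
    // each frame's background median at a fixed target brightness, which is
    // why frames with very different backgrounds end up looking similar.
    function mtf(m, x) {
      if (x <= 0) return 0;
      if (x >= 1) return 1;
      return ((m - 1) * x) / (((2 * m - 1) * x) - m);
    }
    // e.g. frames with background medians of 0.01 and 0.05 can both be
    // displayed near the same level by choosing a different m per frame.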

Edited by Jon Rista, 10 October 2017 - 06:32 PM.


#10 AstroPics

AstroPics

    Ranger 4

  • *****
  • topic starter
  • Posts: 326
  • Joined: 11 Jan 2017
  • Loc: Atlanta, GA

Posted 11 October 2017 - 09:54 AM

I saw it suggested someplace that you could use a stacked image as a reference. I added 2 hours of Ha on a project and used the previous 4-hour stack as the reference. It seemed to work well. I didn't need to run DBE on it first; in fact, I think I might even have used the DBE'd 4-hour stack as the reference.

I had tried something similar: I went through the entire image integration process, performed a DBE, and then went back and performed normalization using that result, followed by a new image integration. I don't think I saw any major improvement in SNR in the final master image (vs. simply picking a single reference subframe for normalization initially). But, as you point out, there was no need to perform a DBE after normalization. My only issue with this approach is that it adds considerable time to the calibration and integration process.

 

 

I've always wondered about how SNR is calculated. I'm having a devil of a time combining frames from across sessions. I shot Ha on two separate moonless, cloudless nights, and SNR on the 2nd night was 20-30% higher. No earthly reason.


If you guys are curious about how PI measures noise, this is how:
 
https://pixinsight.c...description_002
 
In SubframeSelector, the SNRWeight value is MeanDeviation^2/Noise^2. MeanDeviation is the MAD (Mean Absolute Deviation) from the median, and Noise here is calculated using the above approach.

Jon, the link you provided disproves my theory about light gradients impacting the SNR calculation. From the link:

 

"The ... weighting function is robust and efficient, and works well even when the images include relatively strong gradients."

 

So, I'm still puzzled why I would get a better SNR weighting when imaging farther from the meridian. Seems counter-intuitive somehow.



#11 BenKolt

BenKolt

    Vanguard

  • *****
  • Posts: 2,091
  • Joined: 13 Mar 2013

Posted 11 October 2017 - 12:16 PM

I have been simply selecting the subframe with the best SNR as the reference for Local Normalization in PixInsight. After some additional reading, I am starting to question that. If I understand its actual purpose correctly, local normalization is intended to reduce gradients and balance exposures across subframes.

 

Case in point: in my last imaging session, my L frames had very strong light pollution at the beginning of the series, because I had started far from the meridian. As the night progressed, my final L frame had the weakest contribution from light pollution (it was taken very close to the meridian). BUT, according to SubframeSelector, my first subframe had the best SNR.

 

So, Local Normalization is intended to 'balance' subframes. It seems I would actually want to use the last L subframe in my example above, then? Am I correct in assuming that all the subframes will be normalized to match the exposure and light gradient of the reference frame? How do people select the optimal Local Normalization reference subframe?

 

Also, has anyone played with the normalization scale? Does reducing this value make it work better at dealing with smaller gradient structures? My interest here is not so much on light pollution gradients but blobby background noise artifacts, potentially due to lack of dithering.

 

Lastly, there are two normalization options in ImageIntegration: one for pixel rejection and one for combination. I have been using local normalization for both, but I wonder whether there are any pros/cons to using it for pixel rejection.

 

AstroPics:

 

You're asking a lot of the same questions that I have lately whilst I've been exploring LocalNormalization, ImageIntegration, DrizzleIntegration and all the parameters contained therein. At this time I've found my best results usually come from weighting the subframes using SubframeSelector and a weighting expression similar to the one jlmanatee and NorthField mentioned from the Light Vortex Astronomy tutorial, although I oscillate between weighting SNRWeight and FWHM more heavily depending upon the image. The best-weighted subframe then becomes my reference for LocalNormalization after I have registered the frames with StarAlignment.
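
For reference, expressions of that kind generally min-max normalize each metric across the run and weight FWHM and eccentricity inversely (smaller is better). A sketch of that shape in plain JavaScript; the coefficients here are made up for illustration, not Kayron's actual numbers:

    // Each metric is scaled to [0,1] across the set of subframes; FWHM and
    // eccentricity count inversely, SNRWeight counts directly. The constant
    // pedestal keeps any single frame from getting a zero weight.
    function norm(x, min, max) { return (x - min) / (max - min); }
    function subframeWeight(m, lo, hi) {
      return 10 * (1 - norm(m.fwhm, lo.fwhm, hi.fwhm))
           + 10 * (1 - norm(m.ecc,  lo.ecc,  hi.ecc))
           + 20 * norm(m.snrWeight, lo.snrWeight, hi.snrWeight)
           + 50;
    }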

 

After this I've had varied results depending on the steps that I take. I integrate the registered, locally normalized files both with the subframe weighting FITS header keyword and without it; in the latter case I weight according to ImageIntegration's own noise evaluation. I'm not sure that an overall noise sigma evaluation of the integrated result tells the whole story. At times I end up with an integrated image in which the background noise is smoothed the way I like it, so that gradient removal is a straightforward and relatively easy step. Other times I find that the background regions around brighter objects (like bright stars, galaxies, etc.) are either suppressed below or inflated above the average background. This all seems to depend upon my choice of local normalization reference. As yet I have not been able to correlate this well, but I'm in the middle of studying it. Having the local background around bright objects suppressed may not necessarily be a bad thing if I can effectively perform background extraction, but in the latter (inflated) case it becomes more complicated.

 

If I can come up with more definitive test results, I'll post them on this forum, but it's been a long and tedious process.

 

 

I've always wondered about how SNR is calculated. I'm having a devil of a time combining frames from across sessions. I shot Ha on two separate moonless, cloudless nights, and SNR on the 2nd night was 20-30% higher. No earthly reason. 

 

At other times I've had a long sequence of frames where ostensibly the SNR drops--to the point where blink won't even show the nebula (Helix). But if you open the subframe *without* blink, then it shows up beautifully and is very clean. The SNR weighting would mean that the frame is underutilized.

 

I'm using the defaults as WK's book recommends, and hope I can gain some understanding so I can operate in non-zombie mode.

Stelios

 

I've been learning to use Blink a little more effectively lately. There are two buttons at your disposal: one performs an auto STF on each individual frame, while the other performs an auto STF on the currently selected frame and then applies the same stretch to all the others. I have found that these actually behave like toggle switches, in that hitting the first button again seems to reset the stretch on each image (or something else), and I often have to hit it once more. You may want to play around with Blink, pressing those buttons more than once, to come to your own conclusions about what they are doing.

 

Lastly, I agree with Jon (this is the wisest course of action, since we all know he really knows his stuff!) that the STF can be both an enormously useful tool and a devious demon leading us astray!

 

Best Regards,

Ben



#12 Stelios

Stelios

    Cosmos

  • *****
  • Moderators
  • Posts: 8,318
  • Joined: 04 Oct 2003
  • Loc: West Hills, CA

Posted 11 October 2017 - 02:01 PM

 

I've always wondered about how SNR is calculated. I'm having a devil of a time combining frames from across sessions. I shot Ha on two separate moonless, cloudless nights, and SNR on the 2nd night was 20-30% higher. No earthly reason.


If you guys are curious about how PI measures noise, this is how:
 
https://pixinsight.c...description_002
 
In SubframeSelector, the SNRWeight value is MeanDeviation^2/Noise^2. MeanDeviation is the MAD (Mean Absolute Deviation) from the median, and Noise here is calculated using the above approach.

At other times I've had a long sequence of frames where ostensibly the SNR drops--to the point where blink won't even show the nebula (Helix). But if you open the subframe *without* blink, then it shows up beautifully and is very clean. The SNR weighting would mean that the frame is underutilized.
 
I'm using the defaults as WK's book recommends, and hope I can gain some understanding so I can operate in non-zombie mode.


Well, what you see with an STF is NOT representative of the true nature of the data. Normalization, which is performed by ImageIntegration to make the data in all the frames compatible, is essential to getting the best SNR. Weighting subs is a part of that process, and if a sub actually has poor quality data, it SHOULD be weighted down so that it doesn't hurt the SNR of your final integration. Don't let what you can observe visually with an STF fool you. STF is designed to make all subs look similar, as it attempts to make the background noise LOOK the same regardless of the data. But it lies! Sadly, I think STF has led to many misconceptions about the true nature of people's data. It is very useful for what it was intended to do, but it does not normalize, and as such it greatly skews data comparisons if you are not aware of how it works or the true nature of your data.

 

There's a mysterious line: a = img.noiseMRS( n ); But where's the code for the noiseMRS method? How is *that* determined? Apparently 'a' is an array, but of what?

 

When I have some time, I need to dig deeper into that, and read the whole thing. The code is easy to follow, but takes more time than I have now.



#13 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,085
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 11 October 2017 - 03:17 PM

 

 

I've always wondered about how SNR is calculated. I'm having a devil of a time combining frames from across sessions. I shot Ha on two separate moonless, cloudless nights, and SNR on the 2nd night was 20-30% higher. No earthly reason.


If you guys are curious about how PI measures noise, this is how:
 
https://pixinsight.c...description_002
 
In SubframeSelector, the SNRWeight value is MeanDeviation^2/Noise^2. MeanDeviation is the MAD (Mean Absolute Deviation) from the median, and Noise here is calculated using the above approach.

At other times I've had a long sequence of frames where ostensibly the SNR drops--to the point where blink won't even show the nebula (Helix). But if you open the subframe *without* blink, then it shows up beautifully and is very clean. The SNR weighting would mean that the frame is underutilized.
 
I'm using the defaults as WK's book recommends, and hope I can gain some understanding so I can operate in non-zombie mode.


Well, what you see with an STF is NOT representative of the true nature of the data. Normalization, which is performed by ImageIntegration to make the data in all the frames compatible, is essential to getting the best SNR. Weighting subs is a part of that process, and if a sub actually has poor quality data, it SHOULD be weighted down so that it doesn't hurt the SNR of your final integration. Don't let what you can observe visually with an STF fool you. STF is designed to make all subs look similar, as it attempts to make the background noise LOOK the same regardless of the data. But it lies! Sadly, I think STF has led to many misconceptions about the true nature of people's data. It is very useful for what it was intended to do, but it does not normalize, and as such it greatly skews data comparisons if you are not aware of how it works or the true nature of your data.

 

There's a mysterious line: a = img.noiseMRS( n ); But where's the code for the noiseMRS method? How is *that* determined? Apparently 'a' is an array, but of what?

 

When I have some time, I need to dig deeper into that, and read the whole thing. The code is easy to follow, but takes more time than I have now.

 

The MRS noise evaluation formula is described in the link I shared above. That IS MRS noise; MRS just means multi-resolution support, as it's a wavelet-based noise evaluation algorithm. Anyway, check out the link; everything you are looking for should be in there. The ImageIntegration documentation actually contains more of the gritty details of PI's innards than just about any other article they have, I think. It's extensive, detailed, and pretty low level.
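
As for what the array holds: if memory serves, the NoiseEvaluation routine in the PI documentation uses noiseMRS roughly like this (paraphrased from memory, so treat the exact return layout as an assumption and verify against the linked source):

    function NoiseEvaluation( img )
    {
       // Try the MRS estimator with 4 wavelet layers, lowering the layer
       // count until the estimate is supported by >= 1% of the pixels.
       var a, n = 4, m = 0.01*img.selectedRect.area;
       for ( ;; )
       {
          a = img.noiseMRS( n );   // appears to return [ sigma, pixelCount ]
          if ( a[1] >= m )
             break;
          if ( --n == 1 )
          {
             // No convergence: fall back to the k-sigma estimator.
             a = img.noiseKSigma();
             break;
          }
       }
       this.sigma  = a[0];  // estimated std. dev. of the Gaussian noise
       this.count  = a[1];  // number of pixels in the noise set
       this.layers = n;     // wavelet layers actually used
    }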



#14 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 24,085
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 11 October 2017 - 03:22 PM

Jon, the link you provided disproves my theory about light gradients impacting the SNR calculation. From the link:
 
"The ... weighting function is robust and efficient, and works well even when the images include relatively strong gradients."
 
So, I'm still puzzled why I would get a better SNR weighting when imaging farther from the meridian. Seems counter-intuitive somehow.

This is a pretty new feature of PI. I honestly don't know enough about it yet to really answer. The guy who really knows is the guy who wrote it, which is probably Juan over at the PI forums. If you have questions, he is really the best person to ask. Another guy who I'm sure understands the process is David Ault, who floats around the forums every so often. He would be another good person to ask.

From what I do understand, picking a proper reference frame is important. In my own experience, the default settings may not generally provide the best results, and if you really want to maximize the potential that LocalNormalization has to offer, you gotta fiddle with it. I just haven't really dived in too deep yet, as it'll be time-consuming once I do. That said, I think the simplest thing you can do is compare the ImageIntegration results with LN against the built-in normalization features that ImageIntegration has always had, and see which does better. If you consistently get better results with plain ImageIntegration, I wouldn't bother with LocalNormalization. At least, not yet. Give it some time, and people will start coming up with best practices and usage tips for this new tool.

#15 mikefulb

mikefulb

    Surveyor 1

  • *****
  • Posts: 1,750
  • Joined: 17 Apr 2006

Posted 11 October 2017 - 03:33 PM

I played with it some myself, and in the little time I looked at it I wasn't convinced it would help at all with my images, which have very little gradient. It seems most suited for cases where large gradients produce varying background noise levels across a frame. It is unfortunate they chose not to devote time to good documentation; nothing is worse than good code that goes unused for lack of docs. With time I'm sure the community will carry the load, as it always has.



#16 pfile

pfile

    Fly Me to the Moon

  • -----
  • Posts: 5,426
  • Joined: 14 Jun 2009

Posted 11 October 2017 - 05:24 PM

juan says he is working on a writeup for LocalNormalization. it will appear on the PI forum when he's finished...

 

in the end this is not anything really "new" - there seems to be some confusion about this - as jon says frames must always be normalized to each other, or (for instance) rejection would never work. ImageIntegration has always done frame normalization (as does any tool which can integrate astronomical images.)

 

what's new here is that LN can consider the image at scales smaller than "the whole image". traditional normalization is something like a linear fit across the entire frame. so if there is a gradient in your target frame, the normalization will be less than perfect, since you're kind of normalizing to the 'average' value of the pixels in the whole image. if configured properly, LN can essentially remove LP gradients from target images. this is why it's really important to choose the cleanest, most LP-free image in your stack as a reference. if you introduce artifacts into the reference (for instance with a poor DBE), you'll end up putting those artifacts into all your subs by using LN.
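
a minimal sketch of that distinction in plain JavaScript (illustrative only, not PI's actual algorithm): global normalization applies one offset and one scale per frame, while a local scheme applies a position-dependent correction built from smoothed background maps:

    // Global: one location (median) and one scale estimate per frame, so a
    // gradient in the target frame survives normalization.
    function globalNormalize(pixel, tgtMedian, tgtScale, refMedian, refScale) {
      return (pixel - tgtMedian) * (refScale / tgtScale) + refMedian;
    }
    // Local: tgtBkg/refBkg are large-scale background estimates evaluated at
    // this pixel's position (e.g. from heavily smoothed images), so residual
    // gradients in the target are matched to the reference instead of being
    // averaged over the whole frame.
    function localNormalize(pixel, tgtBkg, refBkg, scale) {
      return (pixel - tgtBkg) * scale + refBkg;
    }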

 

as an aside, the first image in the list of images in ImageIntegration is the "reference" frame against which all SNR calculations and (traditional) normalization are performed. so it's always a good idea to have a high quality frame as the first frame in II, regardless of whether or not you used LN.

 

anyhow, hopefully the scales will fall from our collective eyes once juan finishes the tutorial.

 

rob


  • H-Alfa and Jon Rista like this

#17 AstroPics

AstroPics

    Ranger 4

  • *****
  • topic starter
  • Posts: 326
  • Joined: 11 Jan 2017
  • Loc: Atlanta, GA

Posted 13 October 2017 - 11:55 AM

 

I have been simply selecting the subframe with the best SNR as the reference for Local Normalization in PixInsight. After some additional reading, I am starting to question that. If I understand its actual purpose correctly, local normalization is intended to reduce gradients and balance exposures across subframes.

 

Case in point: in my last imaging session, my L frames had very strong light pollution at the beginning of the series, because I had started far from the meridian. As the night progressed, my final L frame had the weakest contribution from light pollution (it was taken very close to the meridian). BUT, according to SubframeSelector, my first subframe had the best SNR.

 

So, Local Normalization is intended to 'balance' subframes. It seems I would actually want to use the last L subframe in my example above, then? Am I correct in assuming that all the subframes will be normalized to match the exposure and light gradient of the reference frame? How do people select the optimal Local Normalization reference subframe?

 

Also, has anyone played with the normalization scale? Does reducing this value make it work better at dealing with smaller gradient structures? My interest here is not so much on light pollution gradients but blobby background noise artifacts, potentially due to lack of dithering.

 

Lastly, there are two normalization options in ImageIntegration: one for pixel rejection and one for combination. I have been using local normalization for both, but I wonder whether there are any pros/cons to using it for pixel rejection.

 

AstroPics:

 

You're asking a lot of the same questions that I have lately whilst I've been exploring LocalNormalization, ImageIntegration, DrizzleIntegration and all the parameters contained therein. At this time I've found my best results usually come from weighting the subframes using SubframeSelector and a weighting expression similar to the one jlmanatee and NorthField mentioned from the Light Vortex Astronomy tutorial, although I oscillate between weighting SNRWeight and FWHM more heavily depending upon the image. The best-weighted subframe then becomes my reference for LocalNormalization after I have registered the frames with StarAlignment.

 

After this I've had varied results depending on the steps that I take. I integrate the registered, locally normalized files both with the subframe weighting FITS header keyword and without it; in the latter case I weight according to ImageIntegration's own noise evaluation. I'm not sure that an overall noise sigma evaluation of the integrated result tells the whole story. At times I end up with an integrated image in which the background noise is smoothed the way I like it, so that gradient removal is a straightforward and relatively easy step. Other times I find that the background regions around brighter objects (like bright stars, galaxies, etc.) are either suppressed below or inflated above the average background. This all seems to depend upon my choice of local normalization reference. As yet I have not been able to correlate this well, but I'm in the middle of studying it. Having the local background around bright objects suppressed may not necessarily be a bad thing if I can effectively perform background extraction, but in the latter (inflated) case it becomes more complicated.

 

If I can come up with more definitive test results, I'll post them on this forum, but it's been a long and tedious process.

 

Ben

 

Ben,

 

I've pretty much decided to forgo LocalNormalization until I understand it better. I agree with your comment that using a weighting expression in SubframeSelector tends to give good results. I haven't seen the Light Vortex tutorial on it, but have been referring to Keller's 'Inside PixInsight' for an approach and using the spreadsheet he references in the book. It would definitely be good to understand what is considered an optimal weighting expression. I really do like my stars tight and round, so I put some emphasis on eccentricity and FWHM.

 

I'd be interested in what you find out about LocalNormalization. I had recently taken an image of the Deer Lick group and found LocalNormalization seemed to create an exceedingly dark area around the galaxies. As you mentioned, this could just be a poor choice of reference frame, but I really don't have a clear idea of what is optimal.

 

 

Jon, the link you provided disproves my theory about light gradients impacting the SNR calculation. From the link:
 
"The ... weighting function is robust and efficient, and works well even when the images include relatively strong gradients."
 
So, I'm still puzzled why I would get a better SNR weighting when imaging farther from the meridian. Seems counter-intuitive somehow.

This is a pretty new feature of PI. I honestly don't know enough about it yet to really answer. The guy who really knows is the guy who wrote it, which is probably Juan over at the PI forums. If you have questions, he is really the best person to ask. Another guy who I'm sure understands the process is David Ault, who floats around the forums every so often. He would be another good person to ask.

From what I do understand, picking a proper reference frame is important. In my own experience, the default settings may not generally provide the best results, and if you really want to maximize the potential that LocalNormalization has to offer, you gotta fiddle with it. I just haven't really dived in too deep yet, as it'll be time-consuming once I do. That said, I think the simplest thing you can do is compare the ImageIntegration results with LN against the built-in normalization features that ImageIntegration has always had, and see which does better. If you consistently get better results with plain ImageIntegration, I wouldn't bother with LocalNormalization. At least, not yet. Give it some time, and people will start coming up with best practices and usage tips for this new tool.

 

I have to agree. It is a long process to iterate through LocalNormalization, which tends to dissuade me from experimenting heavily with it. I think I may just table it and use SubframeSelector and ImageIntegration for now, until the community or the PI team provides some best practices.

 

juan says he is working on a writeup for LocalNormalization. it will appear on the PI forum when he's finished...

 

in the end this is not anything really "new" - there seems to be some confusion about this - as jon says frames must always be normalized to each other, or (for instance) rejection would never work. ImageIntegration has always done frame normalization (as does any tool which can integrate astronomical images.)

 

what's new here is that LN can consider the image at scales smaller than "the whole image". traditional normalization is something like a linear fit across the entire frame. so if there is a gradient in your target frame, the normalization will be less than perfect, since you're kind of normalizing to the 'average' value of the pixels in the whole image. if configured properly, LN can essentially remove LP gradients from target images. this is why it's really important to choose the cleanest, most LP-free image in your stack as a reference. if you introduce artifacts into the reference (for instance with a poor DBE), you'll end up putting those artifacts into all your subs by using LN.

 

as an aside, the first image in the list of images in ImageIntegration is the "reference" frame against which all SNR calculations and (traditional) normalization are performed. so it's always a good idea to have a high quality frame as the first frame in II, regardless of whether or not you used LN.

 

anyhow, hopefully the scales will fall from our collective eyes once juan finishes the tutorial.

 

rob

Definitely looking forward to some proper documentation on LocalNormalization! Your comment (and Ben's as well) seems to align with some of my original thinking: the best reference frame may in fact be the one with the least light pollution (and the best-SNR frame may in fact be irrelevant).



