CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Pi MLN tool

19 replies to this topic

#1 calypsob

calypsob

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2917
  • Joined: 20 Apr 2013
  • Loc: Roanoke, Virginia

Posted 13 August 2017 - 11:04 AM

I am interested in the new MLN tool for Pi.  I am curious if anyone has used it successfully and, if so, do you mind sharing your workflow and how you decided what settings to use with your data?  I have a ton of unprocessed data and I would like to experiment with this a bit; however, at the moment I am pretty unfamiliar with the available parameters in the tool.  It is even more confusing when you load the files into ImageIntegration because there are so many options.  Right now trial and error is my only approach, but I am not sure how to tell if I am using the right settings in the tool before I even get to image integration.


Edited by calypsob, 13 August 2017 - 11:05 AM.


#2 entilza

entilza

    Gemini

  • *****
  • Posts: 3111
  • Joined: 06 Oct 2014
  • Loc: Oakville, ON, Canada

Posted 13 August 2017 - 11:11 AM

Check out last week's Astro Imaging Channel, where David Ault did a nice job covering two of the new tools.

https://www.testing....gingchannel.com

You may need to look at the previous episodes if you see this message too late.
  • calypsob likes this

#3 rockstarbill

rockstarbill

    Mercury-Atlas

  • *****
  • Posts: 2664
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 13 August 2017 - 11:25 AM

I am interested in the new MLN tool for Pi.  I am curious if anyone has used it successfully and, if so, do you mind sharing your workflow and how you decided what settings to use with your data?  I have a ton of unprocessed data and I would like to experiment with this a bit; however, at the moment I am pretty unfamiliar with the available parameters in the tool.  It is even more confusing when you load the files into ImageIntegration because there are so many options.  Right now trial and error is my only approach, but I am not sure how to tell if I am using the right settings in the tool before I even get to image integration.

MLN? Do you mean Local Normalization? 



#4 calypsob

calypsob

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2917
  • Joined: 20 Apr 2013
  • Loc: Roanoke, Virginia

Posted 13 August 2017 - 11:34 AM

 

I am interested in the new MLN tool for Pi.  I am curious if anyone has used it successfully and, if so, do you mind sharing your workflow and how you decided what settings to use with your data?  I have a ton of unprocessed data and I would like to experiment with this a bit; however, at the moment I am pretty unfamiliar with the available parameters in the tool.  It is even more confusing when you load the files into ImageIntegration because there are so many options.  Right now trial and error is my only approach, but I am not sure how to tell if I am using the right settings in the tool before I even get to image integration.

MLN? Do you mean Local Normalization? 

 

Apparently the final version was named MLN instead of Frame Adaptation.

 

 

mln.JPG


Edited by calypsob, 13 August 2017 - 11:35 AM.


#5 rockstarbill

rockstarbill

    Mercury-Atlas

  • *****
  • Posts: 2664
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 13 August 2017 - 11:37 AM

That tool has been around. The new one is called Local Normalization. I have been using it the same way David mentioned in his presentation on YouTube, although I do not DBE the reference image at all, as some other folks have mentioned.



#6 calypsob

calypsob

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2917
  • Joined: 20 Apr 2013
  • Loc: Roanoke, Virginia

Posted 13 August 2017 - 11:37 AM

Check out last week's Astro Imaging Channel, where David Ault did a nice job covering two of the new tools.

https://www.testing....gingchannel.com

You may need to look at the previous episodes if you see this message too late.

Awesome. Much appreciated!



#7 calypsob

calypsob

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2917
  • Joined: 20 Apr 2013
  • Loc: Roanoke, Virginia

Posted 13 August 2017 - 11:42 AM

That tool has been around. The new one is called Local Normalization. I have been using it the same way David mentioned in his presentation on YouTube, although I do not DBE the reference image at all, as some other folks have mentioned.

Bill, see the last post of this thread: https://pixinsight.c...p?topic=11063.0  From what I am gathering there, Local Normalization is Frame Adaptation accessed via MLN. Though now that you mention it, I see Local Normalization as a new function as well, so now I'm really :confused:.  I guess I will run with the tutorials for now!



#8 rockstarbill

rockstarbill

    Mercury-Atlas

  • *****
  • Posts: 2664
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 13 August 2017 - 11:48 AM

 

That tool has been around. The new one is called Local Normalization. I have been using it the same way David mentioned in his presentation on YouTube, although I do not DBE the reference image at all, as some other folks have mentioned.

Bill, see the last post of this thread: https://pixinsight.c...p?topic=11063.0  From what I am gathering there, Local Normalization is Frame Adaptation accessed via MLN. Though now that you mention it, I see Local Normalization as a new function as well, so now I'm really :confused:.  I guess I will run with the tutorials for now!

 

Near the end of that thread, you will read that it was named Local Normalization instead. Basically, you process your data as you normally would; then, after you register your data, you run the new tool, feed it your registered frames and a reference frame, and it will output _n XML files. Then, when you run integration, you provide those files and select Local Normalization in the ImageIntegration tool. There are two places to set this:

Attached Thumbnails

  • IntWNorml.JPG

  • calypsob likes this

#9 calypsob

calypsob

    Mercury-Atlas

  • *****
  • topic starter
  • Posts: 2917
  • Joined: 20 Apr 2013
  • Loc: Roanoke, Virginia

Posted 13 August 2017 - 01:22 PM

 

 

That tool has been around. The new one is called Local Normalization. I have been using it the same way David mentioned in his presentation on YouTube, although I do not DBE the reference image at all, as some other folks have mentioned.

Bill, see the last post of this thread: https://pixinsight.c...p?topic=11063.0  From what I am gathering there, Local Normalization is Frame Adaptation accessed via MLN. Though now that you mention it, I see Local Normalization as a new function as well, so now I'm really :confused:.  I guess I will run with the tutorials for now!

 

Near the end of that thread, you will read that it was named Local Normalization instead. Basically, you process your data as you normally would; then, after you register your data, you run the new tool, feed it your registered frames and a reference frame, and it will output _n XML files. Then, when you run integration, you provide those files and select Local Normalization in the ImageIntegration tool. There are two places to set this:

 

excellent, thank you ! 



#10 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3411
  • Joined: 25 Sep 2010
  • Loc: Georgetown, TX

Posted 13 August 2017 - 01:30 PM

That tool has been around. The new one is called Local Normalization. I have been using it the same way David mentioned in his presentation on YouTube, although I do not DBE the reference image at all, as some other folks have mentioned.

It isn't actually important whether you run DBE on the reference frame or not.  The goal of LN is to normalize all intensities in a given frame to the reference image on a per-pixel basis.  This means that if you had two different images with different gradients, it should make the target image's gradient appear just like the reference frame's.  In terms of the effectiveness of rejection and appropriate weighting, it makes no difference whether the reference frame had a gradient or not.
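Conceptually, what David describes can be sketched in a few lines of numpy: estimate each frame's large-scale background, then swap the target's gradient for the reference's. This is only an illustration of the per-pixel idea, not PixInsight's actual LocalNormalization algorithm; the box-filter background estimate and the `scale` parameter are stand-ins of my own.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(target, reference, scale=128):
    """Toy per-pixel normalization: replace the target frame's
    large-scale gradient with the reference frame's, so both
    images end up with the same background structure."""
    bg_target = uniform_filter(target, size=scale)       # smoothed -> gradient estimate
    bg_reference = uniform_filter(reference, size=scale)
    return target - bg_target + bg_reference
```

After a step like this, two frames that originally carried very different gradients can be compared, rejected, and weighted against each other sensibly, which matches the behavior described above.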

 

I've tried it with quite a bit of data doing three tests:

  • ABE/DBE on a single reference subframe
  • Using the best subframe as reference for LN w/o ABE/DBE and doing ABE/DBE after ImageIntegration
  • Doing a two pass stack, the first pass to generate a reference frame that is rotated and cropped the way I want and has ABE and/or DBE applied then using that as a reference for StarAlignment/LN for the second pass

I've seen some VERY minor improvements from the last option.  The first two seem to be almost identical.

 

In either case, I've found that when using LocalNormalization, ABE and DBE have become easier, requiring fewer passes and samples, as the gradients in the final stack or reference frame are less complex.

 

Regards,

David


  • Jon Rista likes this

#11 rockstarbill

rockstarbill

    Mercury-Atlas

  • *****
  • Posts: 2664
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 13 August 2017 - 01:34 PM

 

That tool has been around. The new one is called Local Normalization. I have been using it the same way David mentioned in his presentation on YouTube, although I do not DBE the reference image at all, as some other folks have mentioned.

It isn't actually important whether you run DBE on the reference frame or not.  The goal of LN is to normalize all intensities in a given frame to the reference image on a per-pixel basis.  This means that if you had two different images with different gradients, it should make the target image's gradient appear just like the reference frame's.  In terms of the effectiveness of rejection and appropriate weighting, it makes no difference whether the reference frame had a gradient or not.

 

I've tried it with quite a bit of data doing three tests:

  • ABE/DBE on a single reference subframe
  • Using the best subframe as reference for LN w/o ABE/DBE and doing ABE/DBE after ImageIntegration
  • Doing a two pass stack, the first pass to generate a reference frame that is rotated and cropped the way I want and has ABE and/or DBE applied then using that as a reference for StarAlignment/LN for the second pass

I've seen some VERY minor improvements from the last option.  The first two seem to be almost identical.

 

In either case, I've found that when using LocalNormalization, ABE and DBE have become easier, requiring fewer passes and samples, as the gradients in the final stack or reference frame are less complex.

 

Regards,

David

 

Hi David,

 

I have also found that running DBE/ABE has been a much simpler and cleaner process after using Local Normalization. I usually only have to do one pass on an image and it seems to clean it up nicely. 



#12 Newfie Stargazer

Newfie Stargazer

    Vostok 1

  • *****
  • Posts: 111
  • Joined: 12 Oct 2016

Posted 13 August 2017 - 02:21 PM

David,

 

Watched your video on the new PI tools.  I am new to PixInsight (a couple of weeks now) and have also viewed your other video on processing; it was very helpful indeed.  Just a quick question, if I may, regarding DBE: could (or maybe "should" would be a better word) it be run more than once on the same image?

 

Also, regarding running DBE on your reference frame prior to performing LN: after doing so, I understand that you use the DBE image as the reference in the LN tool window, but do you also include the DBE image in the list of target images, or just the registered image that you ran DBE on, or both?

 

Jim



#13 entilza

entilza

    Gemini

  • *****
  • Posts: 3111
  • Joined: 06 Oct 2014
  • Loc: Oakville, ON, Canada

Posted 13 August 2017 - 03:40 PM

David, have you tried it with a previously fully integrated image?

I.e., integrate the old way,

then use DBE on that as the reference frame for the local normalization?

#14 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3411
  • Joined: 25 Sep 2010
  • Loc: Georgetown, TX

Posted 13 August 2017 - 08:47 PM

David,
 
Watched your video on the new PI tools.  I am new to PixInsight (a couple of weeks now) and have also viewed your other video on processing; it was very helpful indeed.  Just a quick question, if I may, regarding DBE: could (or maybe "should" would be a better word) it be run more than once on the same image?
 
Also, regarding running DBE on your reference frame prior to performing LN: after doing so, I understand that you use the DBE image as the reference in the LN tool window, but do you also include the DBE image in the list of target images, or just the registered image that you ran DBE on, or both?
 
Jim

I'm glad you found the videos useful, Jim.  You can certainly run DBE or ABE multiple times on an image.  I've worked on some images that have taken 2 or 3 passes of ABE and then another 2 or 3 of DBE to get cleaned up.
 

David, have you tried it with a previously fully integrated image?

I.e., integrate the old way,

then use DBE on that as the reference frame for the local normalization?

I have done that, Martin.  Specifically, for a couple of tests, I used a stack from a previous processing effort, intercepted after ABE/DBE but before deconvolution, stretching, etc., as a reference for a new StarAlignment/LocalNormalization/ImageIntegration pass.  The only thing special I did was linear fit the reference frame to one of the subframes, since the fit can be off after some of the weighting going on in ImageIntegration.  This just made the subframes' intensities match up a bit better.  I have no idea if this helped or not, as I haven't done an experiment without it; I just felt it might be better to have the reference frame at least roughly similar to the subframe intensities.  The results were similar to the two-pass approach I mentioned above.  That being said, if you have an image where complex gradients are a real problem and you aren't 100% confident of the evenness of the background in that image, I would not use it as an LN reference.
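The linear fit David mentions amounts to a least-squares `a*x + b` mapping of one frame onto another. A minimal numpy sketch (a stand-in for PixInsight's LinearFit process, not its implementation):

```python
import numpy as np

def linear_fit(image, reference):
    """Fit reference ~= a * image + b by least squares, then apply
    the fit so the image's intensity range matches the reference."""
    a, b = np.polyfit(image.ravel(), reference.ravel(), 1)
    return a * image + b
```

Applied to the stacked reference before LN, this just pulls its intensities into the same rough range as the individual subframes, which is all the step is meant to do.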

 

Regards,
David


  • entilza likes this

#15 BenKolt

BenKolt

    Viking 1

  • -----
  • Posts: 892
  • Joined: 13 Mar 2013

Posted 15 August 2017 - 01:08 PM

All:

 

I was just introduced to LocalNormalization through another thread by rockstarbill and was directed here.  I have watched your video as well, David, and found it to be of great use.  I've started testing LN on some of my recent images and have a few questions.

 

  • Prior to using LN, I normally followed the route of using SubframeSelector and calculating a weight number for each frame that was then passed along to ImageIntegration after registration via a header keyword.  This was the way introduced in one of the Light Vortex Astronomy tutorials.  It appears to me that LN is a separate procedure from all that and I cannot mix the two procedures.  Is that correct?  Any thoughts on how one may be better than the other?
     
  • I also tend to usually drizzle my integration.  I create drizzle files through the registration step, which are then updated through the ImageIntegration step, and then I apply DrizzleIntegration.  I assume that I can still do all this and that the drizzle files information contains the results of the LN.  Is that correct?
     
  • Finally, what are the thoughts on mixing LN with drizzle integration?  Are these separate but compatible steps, or should they be considered overlapping enough that one should do one or the other, but not both?  My thinking is that there's nothing fundamentally wrong with applying both, but I'm curious what others think or have discovered.

Thanks in advance for your attention to these questions.  I look forward to hearing what others are doing with this new tool and what the results look like.

 

Best Regards,

Ben



#16 AstroPics

AstroPics

    Vostok 1

  • -----
  • Posts: 190
  • Joined: 11 Jan 2017
  • Loc: Atlanta, GA

Posted 15 August 2017 - 03:11 PM

All:

 

I was just introduced to LocalNormalization through another thread by rockstarbill and was directed here.  I have watched your video as well, David, and found it to be of great use.  I've started testing LN on some of my recent images and have a few questions.

 

  • Prior to using LN, I normally followed the route of using SubframeSelector and calculating a weight number for each frame that was then passed along to ImageIntegration after registration via a header keyword.  This was the way introduced in one of the Light Vortex Astronomy tutorials.  It appears to me that LN is a separate procedure from all that and I cannot mix the two procedures.  Is that correct?  Any thoughts on how one may be better than the other?
     
  • I also tend to usually drizzle my integration.  I create drizzle files through the registration step, which are then updated through the ImageIntegration step, and then I apply DrizzleIntegration.  I assume that I can still do all this and that the drizzle files information contains the results of the LN.  Is that correct?
     
  • Finally, what are the thoughts on mixing LN with drizzle integration?  Are these separate but compatible steps, or should they be considered overlapping enough that one should do one or the other, but not both?  My thinking is that there's nothing fundamentally wrong with applying both, but I'm curious what others think or have discovered.

Thanks in advance for your attention to these questions.  I look forward to hearing what others are doing with this new tool and what the results look like.

 

Best Regards,

Ben

When you perform a drizzle integration, you need to add your local normalization files (there is a new button to add the local normalization files in addition to the standard drizzle files). I have tried it, and it works well if you have a good candidate image for drizzling.



#17 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3411
  • Joined: 25 Sep 2010
  • Loc: Georgetown, TX

Posted 15 August 2017 - 05:15 PM

Hi Ben,

As AstroPics already stated, the normalization files can be used as input to the updated DrizzleIntegration tool.  They are designed to work together, so they are definitely compatible.

Your question about SubframeSelector is more complex.  This is because SFS uses the raw data, which has not been normalized.  Until the new LN tool was introduced, I was doing my own normalization pass on the data prior to SFS, but this was a whole-image normalization, not per pixel (using my dnaLinearFit script).  The goal was to normalize the data so it was more comparable, which is really what LN is doing for ImageIntegration and DrizzleIntegration.  This doesn't make the SubframeSelector weighting any worse than it was before, but it certainly mitigates its usefulness when compared to the per-pixel capabilities that ImageIntegration now has thanks to LocalNormalization.

The ideal situation is that the SFS tool gets updated to handle the normalization files so that you get the absolute best comparison of data and can tune the weightings to what you want with all the capabilities of per-pixel normalization.
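For context, the frame weights that SFS writes (and that LN now refines per pixel) ultimately drive a weighted average in ImageIntegration. Stripped of rejection and normalization, the core operation is just a weighted mean; a schematic sketch, not PI's implementation:

```python
import numpy as np

def weighted_integration(frames, weights):
    """Weighted mean stack: each registered frame contributes in
    proportion to its weight (pixel rejection omitted for brevity)."""
    frames = np.asarray(frames, dtype=float)    # shape (n_frames, h, w)
    weights = np.asarray(weights, dtype=float)  # shape (n_frames,)
    return np.tensordot(weights, frames, axes=1) / weights.sum()
```

This is why the choice between SFS weights and LN matters: both end up deciding how much each frame (or each pixel of each frame) counts in this average.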

Regards,
David


  • BenKolt and rockstarbill like this

#18 BenKolt

BenKolt

    Viking 1

  • -----
  • Posts: 892
  • Joined: 13 Mar 2013

Posted 16 August 2017 - 01:10 PM

Hi Ben,

As AstroPics already stated the normalization files can be used as input to updated DrizzleIntegration tool.  They are designed to work together so they are definitely compatible.

Your question about SubframeSelector is more complex.  This is because SFS uses the raw data, which has not been normalized.  Until the new LN tool was introduced, I was doing my own normalization pass on the data prior to SFS, but this was a whole-image normalization, not per pixel (using my dnaLinearFit script).  The goal was to normalize the data so it was more comparable, which is really what LN is doing for ImageIntegration and DrizzleIntegration.  This doesn't make the SubframeSelector weighting any worse than it was before, but it certainly mitigates its usefulness when compared to the per-pixel capabilities that ImageIntegration now has thanks to LocalNormalization.

The ideal situation is that the SFS tool gets updated to handle the normalization files so that you get the absolute best comparison of data and can tune the weightings to what you want with all the capabilities of per-pixel normalization.

Regards,
David

 

David,

 

I messed around with some of my images last night and I see what you are saying.  I answered my questions in part: one can in principle mix SS and LN together, although if SS is done first, the weight figure is invalidated by LN.  Since LN is done pixel by pixel, it must be done following registration.  I suppose one could then follow this with SS and calculate those weights, but I'd be concerned about the results, particularly if there are black clipped edges along some of the frames following registration.  That implies the need to crop all the subframes.  So, I suppose this is a possible, albeit difficult and highly questionable, procedure:

 

  1. Calibrate
  2. SubframeSelector Pass 1 for the purpose of removing outliers, but don't save output frames
  3. Registration Pass 1, but don't create drizzle files
  4. Crop all registered frames
  5. LocalNormalization
  6. SubframeSelector Pass 2 - now save the weighting as keyword
  7. Registration Pass 2 - now save the drizzle files - should not shift the files this time?
  8. ImageIntegration - I don't know this for sure, but registered files, normalized files and drizzled files should all be aligned
  9. DrizzleIntegration

 

Well, now that I've written it down, it looks horrible, and I'm not going to do it.  And I'm not sure that the second pass of SubframeSelector would gain you much anyway at this point.

 

My plans for now are not to mix the SS and LN procedures in the first place unless the developers offer some modifications.

 

Best Regards,

Ben


  • David Ault likes this

#19 David Ault

David Ault

    Gemini

  • -----
  • Posts: 3411
  • Joined: 25 Sep 2010
  • Loc: Georgetown, TX

Posted 17 August 2017 - 09:10 AM

Actually, what you wrote down is quite similar to the process I had been using prior to LN.

  1. ImageCalibration
  2. Blink for first pass image tossing and selecting reference frame
  3. OPTIONAL: ABE in 2 passes.  The 1st pass has a function degree of one, the 2nd pass a function degree of two, tested on a few subframes, usually using highly restrictive sample selection parameters.  The 1st pass is subtractive and the 2nd pass division, both with normalization on.
  4. OPTIONAL: dnaBatchLinearFit using reference frame picked from Blink pass above.  I monitor to see if data is being clipped and may pick another reference frame or modify the reference frame to prevent that.  I do this if there is a lot of variance in the intensities and SNR of the data.
  5. StarAlignment
  6. ImageIntegration
  7. Crop + Rotate for framing
  8. StarAlignment - pass 2 using the cropped and rotated frame as a reference
  9. SubframeSelector - On registered data for setting weights (I tend to weight more heavily towards FWHM to focus on the best resolution) and further image tossing (I rarely toss additional frames at this point).
  10. ImageIntegration - pass 2.  If I have done ABE and linear fitting, I disable normalization in ImageIntegration and typically use one of the standard sigma rejections without any scaling (since the data is already normalized).

I will say that it is rare when I felt the need to go through that full process and it was usually when dealing with data from multiple sites and/or different equipment setups.  You can also see that I had been attempting to normalize data on a per-pixel basis with the combination of ABE and dnaBatchLinearFit when I felt it would be beneficial to do so.

 

The one advantage this has over the new process is that SFS is still useful for weighting more heavily towards FWHM, eccentricity, or whatever you want (knowing that you are trading that for SNR).  Of course, LN's per-pixel fitting is likely more accurate than this process, and there will be no clipping of data at any point in the pipeline (since there is no image modification, just the writing of normalization files).
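As an illustration of "weighting more heavily towards FWHM", a frame weight can blend a sharpness term with an SNR term. The formula below is a toy of my own, not SubframeSelector's expression syntax or any official weighting:

```python
import numpy as np

def fwhm_biased_weight(fwhm, snr, fwhm_bias=0.7):
    """Toy frame weight trading SNR for resolution: smaller FWHM
    (sharper stars) raises the weight; fwhm_bias sets the trade-off."""
    sharpness = 1.0 / np.asarray(fwhm, dtype=float)  # sharper -> larger
    sharpness = sharpness / sharpness.max()
    snr = np.asarray(snr, dtype=float)
    snr = snr / snr.max()
    return fwhm_bias * sharpness + (1.0 - fwhm_bias) * snr
```

With `fwhm_bias` near 1 the sharpest frames dominate the stack; near 0 the weighting reduces to pure SNR, which is the trade-off described above.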

 

Regards,

David



#20 scopenitout

scopenitout

    Apollo

  • -----
  • Posts: 1360
  • Joined: 24 Aug 2013
  • Loc: Mt. Belzoni

Posted 17 August 2017 - 11:10 AM

Useful summary.
Thank you, David.

