I have been simply selecting the subframe with the best SNR as the reference for Local Normalization in PixInsight. After some additional reading, I am starting to question that. If I understand the actual purpose of local normalization, it is intended to simplify gradients and balance exposures.
Case in point: in my last imaging session, my L filter had very strong light pollution at the beginning of the series because I had started far from the meridian. As the night progressed, the gradient weakened, and my final L frame (taken very close to the meridian) had the weakest contribution from light pollution. But according to SubframeSelector, my first subframe had the best SNR.
So, if Local Normalization is intended to 'balance' subframes, it seems I would actually want to use the last L subframe from my example above. Am I correct in assuming that all the subframes will be normalized to match the exposure and light gradient of the reference frame? How do people select the optimal Local Normalization reference frame?
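To make the question concrete, here is a rough sketch of how one might rank subframes by gradient strength rather than by SNR. This is not PixInsight's algorithm and the plane-fit score is my own crude proxy (real frames would need stars masked out first); it just illustrates the idea of preferring the flattest frame as the reference:

```python
import numpy as np

def gradient_strength(img):
    """Fit a plane a*x + b*y + c to the frame and return the slope
    magnitude sqrt(a^2 + b^2) as a crude gradient score.
    (Toy frames here have no stars, so no masking is needed.)"""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return float(np.hypot(coeffs[0], coeffs[1]))

# Toy subframes: same sky signal, different light-pollution gradients.
rng = np.random.default_rng(0)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
frames = {
    "first_L":  100 + 0.8 * xx + rng.normal(0, 1, (h, w)),   # strong gradient
    "middle_L": 100 + 0.3 * xx + rng.normal(0, 1, (h, w)),
    "last_L":   100 + 0.05 * xx + rng.normal(0, 1, (h, w)),  # near meridian
}
best = min(frames, key=lambda k: gradient_strength(frames[k]))
print(best)  # -> last_L, the flattest frame
```

By this criterion the last L frame wins, even though SubframeSelector scored the first frame highest on SNR.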
Also, has anyone played with the normalization scale? Does reducing this value make it work better at dealing with smaller gradient structures? My interest here is not so much in light pollution gradients but in blobby background noise artifacts, potentially due to a lack of dithering.
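My intuition about the scale parameter, sketched with a simple box-filter background model (again, not what Local Normalization actually does internally, just an illustration of scale dependence): a background model built at a smaller scale tracks, and can therefore correct, smaller structures like an isolated blob, while a large-scale model passes right over them.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy frame: flat 100 ADU background plus one small blob artifact (sigma ~8 px).
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
img = 100.0 + 10.0 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 8.0 ** 2))

for scale in (64, 16):
    bg = uniform_filter(img, size=scale)     # background model at this scale
    residual = np.abs(img - bg).max()        # how much of the blob survives
    print(f"scale {scale}: max residual {residual:.2f}")
```

The residual left after subtracting the small-scale model is much lower than with the large-scale one, which is why I suspect a smaller normalization scale would help with blobby artifacts. Whether that introduces other side effects is exactly what I am asking about.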
Lastly, there are two normalization options in ImageIntegration: one for pixel rejection and one for combination. I have been using Local Normalization for both, but are there any pros or cons to using it for pixel rejection specifically?