It's okay, Rob. You can dump on me - as long as it's in the spirit of trying to put out correct information. Your perspective is always helpful. Certainly, this process should not be a default workflow step.
I understand that "you would get the same result loading everything into Blink and asking for a per-image STF", but the point was that it was not doing that - the same STF was applied to all images in both GIFs. Doesn't that mean that the images were successfully normalized to each other? It may not have been too special, but at least it worked...? For me, it did seem to improve the final SNR of the master.
And, while we are discussing things, "normalization always has to happen behind the scenes in ImageIntegration or else you could not properly do pixel rejection" - yes, but isn't this for pixel rejection calculations only? II is not actually changing the images to make them more normalized before stacking, is it? If so, it would seem to be in conflict with the weighting algorithm (why weight the images if they are all going to be normalized?). It is certainly possible that I do not understand things fully enough.
I have no choice but to image on less than perfect nights, resulting in wide ranges in background levels and gradients. For me, this process has proven helpful in most cases. I understand that many people find that it introduces artifacts. Maybe their data is just not bad enough.
yes, the images were successfully normalized to one another. the STF/blink thing was just an expedient example. probably using ImageContainer with LinearFit would be more like the LN process in the sense that it will write out normalized images to disk which then in theory would work OK with the same STF, or very nearly so. i think the point is that we've never had to manually normalize because II just does it internally. i think since LN is complex enough, Juan decided to make it its own process and pass the results to II thru sidecar files rather than putting a bunch of new controls in II itself. but in spirit it's part of the integration task.
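just to make the idea concrete, a LinearFit-style normalization is basically fitting a per-frame scale and offset against the reference and applying it. something like this toy numpy sketch (not PixInsight's actual code - the function name and arrays are made up for illustration):

```python
import numpy as np

def linear_fit_to_reference(frame, reference):
    """Fit reference ~ a*frame + b in a least-squares sense and apply
    the resulting per-frame scale and offset. Conceptual sketch only,
    not PixInsight's implementation."""
    a, b = np.polyfit(frame.ravel(), reference.ravel(), 1)  # slope, intercept
    return a * frame + b

# toy example: same scene, different transparency and sky level
rng = np.random.default_rng(0)
scene = rng.random((100, 100))
reference = scene + 0.10            # reference frame
frame = 0.8 * scene + 0.25          # dimmer frame with brighter sky

normalized = linear_fit_to_reference(frame, reference)
print(np.allclose(normalized, reference, atol=1e-6))  # True for this toy case
```

which is why the normalized files written to disk all look about right under the one STF.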
i just read the source for ImageIntegration and while it's a little hard to follow (C++ can be a bit obtuse), i think it's the normalized pixels that are stacked (after rejection). i suppose that makes sense, since you've already gone thru the trouble of normalizing them.
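in rough pseudocode terms my reading of the flow is something like the sketch below - normalize each frame, run rejection on the normalized pixel stacks, then average what survives. again, just a conceptual numpy sketch assuming the per-frame scales/offsets were already computed somewhere, not the actual II source:

```python
import numpy as np

def integrate(frames, scales, offsets, kappa=3.0):
    """Conceptual normalize -> reject -> stack.
    frames: list of 2-D arrays; scales/offsets: per-frame normalization
    factors determined elsewhere (e.g. by II internally or by LN)."""
    stack = np.stack([a * f + b for f, a, b in zip(frames, scales, offsets)])

    # simple kappa-sigma rejection done on the *normalized* pixel stacks
    med = np.median(stack, axis=0)
    sig = np.std(stack, axis=0)
    keep = np.abs(stack - med) <= kappa * sig

    # average of the surviving normalized pixels
    return np.sum(np.where(keep, stack, 0.0), axis=0) / np.maximum(keep.sum(axis=0), 1)
```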
as far as weighting is concerned, by default if you are weighting by SNR for instance, normalization is not going to change the SNR of a frame. so it still makes sense to do weighting whether you normalize or not.
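here's a quick made-up numerical check of why a linear rescale leaves the SNR alone (the signal/background/noise numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
signal, background, noise_sigma = 0.40, 0.10, 0.05
frame = background + signal + rng.normal(0.0, noise_sigma, 100_000)

def snr(data, bkg):
    # SNR measured as (mean level above background) / noise
    return (data.mean() - bkg) / data.std()

a, b = 1.7, 0.03                       # arbitrary normalization scale/offset
normalized = a * frame + b             # background shifts to a*bkg + b

print(round(snr(frame, background), 2))
print(round(snr(normalized, a * background + b), 2))  # same value
```

scale multiplies the signal and the noise by the same factor, and the offset just shifts the background, so the ratio doesn't move.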
i also have terrible skies/gradients (and apparently internal reflections in my refractor off of the flattener) and recently decided to try LN using a stacked and carefully DBE'd frame as the reference. while the background of the LN'd integration looked great, i could tell that the subject (M101) had been changed by the LN routine - the lanes of the galaxy were darker (more contrast) than in the non-LN image.

and then when i did all 3 channels and combined them (with unique references in LN of course), the galaxy color was all weird even after PCC. doing LinearFit on one channel's LN and non-LN frames and then dividing the two, i could see a lot of differences in the galaxy itself. at that point i got tired of messing with LN settings and decided to just go the traditional route. maybe there was a set of LN settings that would have done the right thing, i don't know.
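for reference, the check was essentially this (a rough numpy sketch, not the exact PixInsight steps - the function name is made up): LinearFit the non-LN integration against the LN one, then divide, so anything far from ~1.0 in the ratio is a real difference and not just a scale/offset mismatch.

```python
import numpy as np

def compare(ln_image, non_ln_image):
    """Fit the non-LN image to the LN one (global scale + offset),
    then divide. A flat ratio near 1.0 means the two differ only by a
    linear stretch; structure in the ratio means LN changed the object."""
    a, b = np.polyfit(non_ln_image.ravel(), ln_image.ravel(), 1)
    fitted = a * non_ln_image + b
    return ln_image / np.maximum(fitted, 1e-6)   # avoid divide-by-zero
```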
things might be different in a "busier" image? all i know is that on these widefield galaxy images, LN has been very hard for me to perfect.