Hi,
I need to work on improving my image processing skills, and I have often wondered what other PixInsight users see once they get an integrated image fresh from the ImageIntegration process. I tend not to expose deeply enough, and I end up stretching my images until they break, then backing off a little. That is probably not the best way to approach things.
As a case in point, I am working with some OSC data gathered the other night. The target is a 13th-magnitude galaxy and friends. I have 3 hours of data taken under skies measuring 17.75 mpsas (magnitudes per square arcsecond) -- the result of shooting in town with lots of light pollution and a roughly first-quarter Moon about 35° from the target. Once I calibrate, debayer, assign weights, align, and integrate the captured frames, I am left wondering if others see similar messes. I plan to gather 12 to 15+ hours total on this object, but the same set of questions arises for every target I image.
Specifically: what do your own "typical images" look like coming right out of ImageIntegration?
- How "flat" are your images?
Do you see any residual vignetting or complex gradients when you really stretch them hard in STF?
How many ADU separate the lightest and darkest areas of background signal? (This is easiest to judge on galaxy images, I think. My recent image shows a difference of 19-20 ADU in background intensity on a base of 159 ADU; the sketch after this list shows roughly how I sample these numbers.) Some of that was a linear light-pollution gradient and some of it looked like residual vignetting.
- How faint are the structures you want to make clearly visible in the finished image?
Again, how many ADU separate the faint portions of your linear image from the background? (My recent in-process galaxy shot shows only 3 ADU between the background and the arms of the galaxy right out of ImageIntegration. For reference, my camera's gain works out to roughly 0.1 e-/ADU at this setting, so 3 ADU is much less than one electron of signal -- about 0.3 e-, actually.)
- Do you routinely need to make more than two passes through DBE to completely flatten your images?
When do you consider the image "flat" enough to move on to the rest of your processing? My galaxy image background varies by about 1.46 ADU after flattening. It took two passes through DBE and one touch-up pass with ABE to get there. My median background at this point is about 157 ADU.
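For anyone who wants to compare numbers the same way, here is roughly how I sample these values outside PixInsight. It is only a minimal Python sketch: the file name is a placeholder, it assumes a single-channel FITS still scaled in ADU, and the 0.1 e-/ADU gain is just my setup's value.

    # Quick background-spread check on an integrated frame:
    # tile the image, take the median of each tile, and report
    # the spread between the brightest and darkest tiles.
    import numpy as np
    from astropy.io import fits

    GAIN_E_PER_ADU = 0.1   # approximate gain of my camera at this setting
    PATCH = 64             # tile size in pixels

    # "integration.fits" is a placeholder; assumes a single-channel image.
    data = fits.getdata("integration.fits").astype(np.float64)

    h, w = data.shape
    # Tile medians suppress stars fairly well, but a large galaxy should
    # be masked or its tiles skipped for an honest background figure.
    medians = [
        np.median(data[y:y + PATCH, x:x + PATCH])
        for y in range(0, h - PATCH + 1, PATCH)
        for x in range(0, w - PATCH + 1, PATCH)
    ]

    spread_adu = max(medians) - min(medians)
    print(f"median background:   {np.median(medians):.1f} ADU")
    print(f"tile-to-tile spread: {spread_adu:.2f} ADU "
          f"(~{spread_adu * GAIN_E_PER_ADU:.2f} e-)")

PixInsight's Statistics process on a handful of small background previews gets at the same numbers with less fuss; I just find a script handy for putting everything in one place.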
So, while there may be no single "typical" set of values, what do your results look like as you finish the integration stage, before you start the rest of your linear processing?
This is what I see with only three hours of data. I hope it will bloom into something worthwhile after multiple imaging sessions. Still, I never seem to be satisfied, and I always try to get more from the data than it wants to yield. Does everyone feel this way?
John