This is a bit of a long analysis; my apologies if it exceeds your patience...
I've been out imaging Jupiter three times over the last three weeks, each time doing 10-20 captures, and I've had remarkably stable results each night.
I've played with image scale (from ~0.13"/px to 0.25"/px), improved my focusing and scope stability, tried biasing focus just a tiny touch one way or the other, optimized frame rates in FireCapture, started to fine-tune my ADC use, and played with a zillion options for enhancement in stacking and sharpening.
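For anyone wanting to sanity-check those image scales: the usual formula is scale ("/px) = 206.265 × pixel size (µm) ÷ focal length (mm). Here's a quick sketch in Python, where the 2.9 µm pixel and ~3x barlow numbers are purely illustrative assumptions, not necessarily my gear:

```python
def image_scale(pixel_size_um, focal_length_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical setup: 2.9 um pixels on a 12" f/5 (1500 mm) Dob
# with a ~3x barlow giving ~4500 mm effective focal length.
print(f"{image_scale(2.9, 4500):.2f} arcsec/px")  # lands near the 0.13"/px end
```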
But the results come back very, very similar, as if they're capped at the maximum quality image I can get in an evening. I can sometimes get a bad capture (poor focus, haze or cloud coming over, seeing just tanking for a couple of minutes, etc.), but 80% of my captures, after processing, come out almost exactly the same. So that makes me wonder:
1) Either I'm seeing-limited, and the seeing happened to be pretty much the same all three evenings and generally very consistent within an evening (minus a cloud or two).
2) Or my 12" Dob has a pretty hard limit (e.g., collimation or another optical issue). For instance, I've been using a collimation cap and only some minimal star tests, so I know it's only basically collimated and could be better.
3) Or my technique for simulating poor seeing is... um...
So I did an experiment. If my scope isn't the issue, then artificially degrading the seeing in my best captures should show a clear difference immediately. If my scope IS the limit, then I shouldn't lose much image quality, because there are tons of frames that are just scope-limited.
So I did it in three ways with a good 3-minute, ~16,000-frame capture of Jupiter at 0.13"/px:
1) I used PIPP to keep only every other, third, fourth, or even fifth frame, then stacked 2700. The theory is that frames are tossed out uniformly, so at 1/3 I only have 1/3 as many of the best frames and am clearly forced to stack a lot more mediocre ones.
2) I looked at Autostakkert's analysis and then used PIPP to extract the segment of the video with the worst seeing. Almost all of those frames were rated by Autostakkert from 5% to 60%, averaging about 40%.
3) I had PIPP sort the frames by quality and kept only about 3200 of the worst ones.
For each test, I stacked 2700 of these frames using the same options and alignment points.
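For anyone who wants to play along, the three selection schemes are easy to sketch in Python. The per-frame quality scores below are simulated random numbers, not from my capture, so this only illustrates the selection mechanics, not real seeing behavior:

```python
import random

random.seed(1)
# Simulated per-frame quality scores (0-100-ish) for a ~16,000-frame capture.
q = [random.gauss(50, 15) for _ in range(16000)]

# 1) Decimation: keep every third frame, then stack the best 2700 of those.
decimated = q[::3]
stack1 = sorted(decimated, reverse=True)[:2700]

# 2) Worst contiguous segment: find the 3200-frame run with the lowest
#    total quality (checked every 200 frames), then stack its best 2700.
window, step = 3200, 200
starts = range(0, len(q) - window + 1, step)
worst_start = min(starts, key=lambda i: sum(q[i:i + window]))
stack2 = sorted(q[worst_start:worst_start + window], reverse=True)[:2700]

# 3) Globally worst frames: keep the bottom 3200 overall, stack its best 2700.
worst_3200 = sorted(q)[:3200]
stack3 = worst_3200[-2700:]

for name, s in [("every 3rd", stack1), ("worst segment", stack2), ("worst sorted", stack3)]:
    print(f"{name:>13}: mean quality of 2700 stacked = {sum(s) / len(s):.1f}")
```

Note that with uncorrelated random scores, the "worst segment" barely dips below average; in a real capture, seeing is correlated over seconds to minutes, which is what makes that test meaningful.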
I sharpened in Registax with the same routine/parameters, though on a couple I needed to adjust just a tad to get an equivalent balance of sharpness and noise. In general, I tried to get the best out of each, striving for a reasonable amount of punch to see what popped out. I didn't take forever making them equivalent; I was just looking for the big picture.
Looking at the videos themselves, I could see differences; they weren't dramatic, but I still expected a clear difference in the result.
Here are the results:
The original file of 16,000 frames with the best 2700 stacked:
Taking every third frame, so only 1/3 the number of frames to choose the best from:
With only 1/5 the number of frames, the quality finally started to roll off (not pictured, but similar to the next two).
Now using the worst 3200-frame continuous sequence of frames as reported by Autostakkert's analysis:
And finally, the worst frames as reported by PIPP (with 80% under 50% and an average of about 35%):
So, not a huge difference. What does your judgment and experience tell you: do I have some pretty hard limit in my imaging system?
Other than just grabbing a selection of alignment tools and whacking at it, is there a way to diagnose the cause, if there is a cause other than seeing? I feel like an auto mechanic who's about to start tuning things and swapping out parts without even knowing whether the car is the issue.
For the record, I have a FarPoint 2" Cheshire on order, so I'm starting to build up my collimation tool set and will work on my procedure. As a blind mechanic, I'm guessing this is one potential culprit... and in any case, even if it isn't the limiter now, I'm told it will quickly become one.
Edited by smiller, 25 November 2021 - 09:02 PM.