There is an easy experiment to do that will put the "oh but digital processing can cover for optical defects" myth to bed:
Pick a night of good-to-excellent seeing, take an AVI of a planet. Deliberately miscollimate the scope till the image in the eyepiece is fuzzier. Take another AVI. Process both to the max and compare.
Any takers? Especially in the southern hemisphere, where the ecliptic (and the planets with it) is currently high in the sky.
Been there, done that ... but not deliberately!
As per my earlier post, I learned the hard way that even the "slightly defocused star" method was not accurate enough to extract the best performance from my SCT. With the help of the fantastic members on this forum, I learned about the high-power, in-focus star test to precisely dial in collimation, and I have been using that ever since.
Disclaimer: these images are taken with a 6" SCT, not a 9.25" ... so don't take them as an indication of what the 9.25" can do.
Also not actually on the same night, but I live near the equator so I have the advantage of having the planets high in the sky all year 'round. Seeing is usually decent (3/5+) ... transparency is more often the issue, with so much moisture in the air.
With the 'scope slightly miscollimated (notice in particular what appears to be "ghosting" of the planet limb, almost like image shudder):
After dialling in collimation using a high-power star test:
What is this supposed to prove?
Wouldn't it make more sense to process the miscollimated AVI to the max and compare that to a less heavily processed collimated AVI?
I would agree with this statement: with good collimation and good seeing, less processing is needed in the first place. But there's only so far you can push any data set - processing can't recover inherently poor data or lack of detail, and over-processing becomes pretty obvious.
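To make the "processing can't recover lost detail" point concrete, here is a toy numerical sketch (NumPy only, not any real stacking or wavelet software). It models miscollimation as extra Gaussian blur on a sharp planet limb and compares the contrast each blur leaves at a mid-range spatial frequency. The sigma values are made-up illustrations, not measurements from any real scope:

```python
import numpy as np

# Toy 1-D "planet limb": a sharp edge sampled on 256 points.
x = np.arange(256)
signal = (x > 128).astype(float)

def gaussian_blur(sig, sigma):
    """Blur by multiplying the spectrum with a Gaussian MTF exp(-2*pi^2*sigma^2*f^2)."""
    freqs = np.fft.rfftfreq(sig.size)  # cycles per sample, 0 .. 0.5
    mtf = np.exp(-2.0 * (np.pi * sigma * freqs) ** 2)
    return np.fft.irfft(np.fft.rfft(sig) * mtf, n=sig.size)

slight = gaussian_blur(signal, 1.0)  # well-collimated: mild blur
heavy = gaussian_blur(signal, 4.0)   # miscollimated: heavy blur (illustrative)

# Contrast transfer at half the Nyquist frequency (f = 0.25 cycles/sample):
f = 0.25
c_slight = np.exp(-2.0 * (np.pi * 1.0 * f) ** 2)
c_heavy = np.exp(-2.0 * (np.pi * 4.0 * f) ** 2)
print(c_slight)  # mild blur keeps a usable fraction of the contrast
print(c_heavy)   # heavy blur leaves essentially zero contrast here
```

The point: sharpening (deconvolution, wavelets) works by amplifying attenuated frequencies, but once the optics have pushed a frequency's contrast below the camera's noise floor, amplification just boosts noise. Good collimation keeps those frequencies above the floor in the first place.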
Another factor of course is getting focus spot-on when imaging ... took some practice for me!
And finally, for those trying to compare/equate "viewing" to "imaging" in the first place: I fully agree with prior posts that it's actually hard to get an image to compare to what my eyes can see through the eyepiece.
Important to remember that our eyes (and, more importantly, our brains) can do things that no telescope (or camera) can do:
- Rapidly adjusting focus and providing a much larger apparent depth of field.
- Dealing with high dynamic ranges of light intensity.
- etc. etc.
Not to mention our brain's ability to "fill in the gaps" based on what it has learned to expect, as evidenced by countless optical illusions (such as this one).
So at the end of the day, you could ask: is the image captured by an electronic sensor (even after processing) in fact more faithful/dependable than what you see with your eyes?