Has anyone noticed that there doesn't seem to be much, if any, difference whether the object (galaxy or nebula) is imaged for 30 minutes or 60?
I have, on several occasions, noticed that as the imaging progresses it doesn't seem to be showing any further detail.
I suspect that each 10-second image is being averaged, which would account for it.
However, further imaging with FITS and dark frames would no doubt give a much more detailed result.
Or am I completely wrong?
I will try M33 tonight at 30 minutes and then 60, and post results tomorrow, if it keeps clear.
I no longer look much at the JPEG images coming from Stellina. As their stacking process is cumulative, it indeed puts much more weight on the first images than on the last. It might very well be that after some time you don't see much of a difference. Note that even if there were one, it could probably not be seen on the screen of a tablet or a smartphone.
A good illustration of that is that if something happens in the first few images (vibration, satellite trail), you'll see it on the real-time image and it will only slowly disappear as more subs are added. However, if the same thing happens later, you'll never see it.
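To make that concrete, here is a small sketch assuming the live stack is a simple running mean (an assumption on my part; Vaonis hasn't published the actual algorithm, and all the numbers below are made up for illustration). A bright trail in sub 3 jumps the live image by 500/3 ≈ 167 ADU and decays only as 1/i; the same trail in sub 55 moves it by 500/55 ≈ 9 ADU, which you'd never notice on a phone screen, even though both contribute identically to the final mean:

```python
def running_mean_stack(subs):
    """Incremental (live) stack: after sub i, the stack is the mean of subs 1..i."""
    stack = 0.0
    history = []
    for i, sub in enumerate(subs, start=1):
        stack += (sub - stack) / i  # each new sub enters with weight 1/i
        history.append(stack)
    return history

n = 60
clean = [100.0] * n                       # hypothetical flat 100 ADU signal, no noise
early = clean.copy(); early[2] += 500.0   # satellite trail in sub 3
late = clean.copy();  late[54] += 500.0   # same trail in sub 55

h_early = running_mean_stack(early)
h_late = running_mean_stack(late)

print(round(h_early[2] - 100.0, 1))       # jump right after sub 3: 500/3 ≈ 166.7
print(round(h_late[54] - 100.0, 1))       # jump right after sub 55: 500/55 ≈ 9.1
print(h_early[-1] == h_late[-1])          # final stacks are identical: True
```

So in a plain running mean the *final* weights are actually equal; it's the live view that is dominated by whatever happened early on.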
If you stack the FITS yourself (in which case every sub is given the same weight), I can guarantee you that 60 minutes is better than 30, and 120 better than 60. I have gone as far as 4 or 5 hours. Note that you must roughly double the exposure time in order to perceive a difference.
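The "double the time to see a difference" rule follows from SNR growing only as the square root of the number of subs. A quick simulation of an equal-weight mean stack (signal and noise levels here are invented for illustration, not Stellina measurements) shows each doubling buying only a factor of √2 ≈ 1.41 in SNR:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 10.0   # hypothetical per-sub object flux (ADU)
noise = 50.0    # hypothetical per-sub background noise (sky-limited case)

def stacked_snr(n_subs, n_pixels=10_000):
    """Measured SNR of an equal-weight mean stack of n_subs noisy subs."""
    subs = signal + rng.normal(0.0, noise, size=(n_subs, n_pixels))
    stacked = subs.mean(axis=0)        # every sub gets the same weight
    return signal / stacked.std()      # expected: (signal/noise) * sqrt(n_subs)

for n in (180, 360, 720):              # 30, 60, 120 minutes of 10 s subs
    print(n, round(stacked_snr(n), 2)) # each doubling gains only ~1.41x
```

That sub-linear payoff is also why 4 or 5 hours keeps helping, just more and more slowly.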
Edited by CptNautilus, 13 November 2021 - 05:34 PM.