I've found you can extract more usable data by derotating the video stream in WinJupos and letting it split the data into three separate color channels.
While I don't have a side-by-side comparison, derotating a 3-minute video of Jupiter at 4000mm focal length on a 3.75-micron-pixel sensor does improve the sharpness of features. The three color channels it produces can be processed independently in AutoStakkert and combined in Photoshop. This approach also seems to reduce noise slightly, allowing for slightly more aggressive wavelet or deconvolution settings.
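Some back-of-the-envelope numbers show why derotation matters even over just 3 minutes. These figures are my own assumptions, not from the capture above: Jupiter's ~9.925 h rotation period, ~71,492 km equatorial radius, and a ~44" apparent diameter near opposition.

```python
import math

# Rough estimate of rotational smear over a 3-minute Jupiter capture.
# Assumed values (mine): rotation period ~9.925 h, equatorial radius
# ~71,492 km, apparent diameter ~44 arcsec near opposition.
rotation_deg = 360.0 / (9.925 * 60) * 3                    # degrees rotated in 3 min
shift_km = 71492 * math.sin(math.radians(rotation_deg))    # feature motion at disc center
shift_arcsec = shift_km * (44.0 / (2 * 71492))             # convert to apparent angle
plate_scale = 206.265 * 3.75 / 4000                        # arcsec/pixel at 4000mm, 3.75um
smear_px = shift_arcsec / plate_scale
```

At roughly 0.19"/pixel, that works out to a few pixels of smear at disc center over 3 minutes, which is exactly the blur derotation removes.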
I've also found a quirk with AutoStakkert where varying the size of the alignment points produces a slightly different noise pattern. Depending on the scale of your image, you can process the same video stream 6-7 times with different-sized alignment points. The level of detail will be the same, but the grain will vary. Varying grain is good, because you can go into Photoshop and use Median Stack Mode to average out that grain, smoothing the image.
So my workflow for Jupiter is typically this:
* 3 x 180s videos
* Align and crop tightly in PIPP to speed up further processing
* Derotate each in WinJupos, creating separate RGB color channels for each stream (9 streams total)
* Process each stream a total of 6-7 times in AutoStakkert, varying the size of the APs. This produces anywhere from 54 to 63 stills.
* For each color channel from each stream, put all 6-7 stills into layers of a single image and use Median Stack Mode to average out the noise. Now back down to 9 stills.
* For each set of colors from a given stream, use Photoshop to recombine into RGB. Now down to 3 stills (one for each stream).
* Go back into WinJupos and combine/derotate the three stills to further reduce noise.
* Do some wavelet sharpening and deconvolution in AstraImage (which I've found works MUCH better than the wavelets in Registax for my data).
* Some final tweaks to color, noise reduction, and unsharp masking in Photoshop.
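For anyone scripting the bookkeeping, the still counts in the steps above (63 → 9 → 3) can be sketched with numpy arrays standing in for the AutoStakkert output; the 7-run count and frame size here are assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 32
videos, channels, ap_runs = 3, 3, 7  # 3 captures x RGB x 7 AP-size runs

# Stand-in stills, one per (video, channel, AP run): 63 frames total
stills = rng.random((videos, channels, ap_runs, H, W))
assert videos * channels * ap_runs == 63

# Median-combine across AP runs: 63 stills down to 9 per-channel stills
per_channel = np.median(stills, axis=2)       # shape (3, 3, H, W)

# Recombine R, G, B into color images: 9 stills down to 3 (one per capture)
rgb_stills = np.moveaxis(per_channel, 1, -1)  # shape (3, H, W, 3)
```

The final WinJupos combine of those 3 color stills is the only step this sketch leaves to the real tool.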
It's a LOT of processing, and probably unnecessary if you live under better air, but it's the only way I can extract even remotely decent data from the short exposures and high gain I need to compensate for my bad seeing.
Example of this process:
That was pulled out of maybe Pickering 5-6 seeing using an 8" LX90 and ASI224MC.
Without the RGB split from video derotation in WinJupos, and without the multiple AutoStakkert passes to randomize the noise per stream, the noise in this image would be substantially worse.
Edited by CrazyPanda, 17 May 2019 - 10:46 PM.