I would love to! I was just thinking of starting a "Lunar Imaging Tips and Tricks" thread, in fact. But I'll put the stuff related to these photos here.
So the first photo, taken during the daytime, is interesting. As I said: bad seeing, difficult to focus, broad daylight (~1pm), so a bright blue sky. During capture I reduced my ROI down to 352x352, put the square on the illuminated limb, and used auto white balance, but this resulted in way too much red and way too little blue in the resulting image. It would have required a lot of fiddling to color correct, so I decided to go monochrome and only process the green channel. Here is a trick to remove the blue sky and preserve the color depth: stack to FITS format and then open the image in GIMP. I originally did this because it's a convenient way to isolate and export the individual channels (for channel-by-channel deconvolution when you have excellent seeing that supports it), and I was doing this to export the green channel. But then I thought about it and realized that in GIMP the FITS file retains its full 32-bits-per-channel bit depth. So I used the Curves adjustment to subtract out the sky from the bottom end, set the white point to be appropriate for downstream sharpening (around 75%), and then exported to a 16-bit gray PNG. Voila, plenty of bit depth for downstream processing. If you don't do it this way, you run a very strong risk of posterization when you try to bring the sky down to black.
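If it helps, here is roughly the same trick sketched in Python with NumPy instead of GIMP (the sky level and white point values below are made up for illustration; `img` stands in for the 32-bit float green channel from the stacked FITS):

```python
import numpy as np

# Fake stand-in for the stacked 32-bit float green channel (sky ~0.12)
rng = np.random.default_rng(0)
img = 0.12 + 0.6 * rng.random((64, 64)).astype(np.float32)

sky_level = 0.12      # black point: measured from an empty patch of sky
white_point = 0.75    # ~75% white point, leaving headroom for sharpening

# Curves-style linear stretch done in full float precision first...
stretched = np.clip((img - sky_level) / (white_point - sky_level), 0.0, 1.0)

# ...and only then quantized, so the 16-bit output uses its whole range
# and you don't posterize when bringing the sky down to black.
out16 = (stretched * 65535).astype(np.uint16)
```

The key point is the order of operations: subtract and stretch while still in float, quantize last.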
After that, all of my processing other than final conversion to jpg for posting on CN is done in Astra Image. I no longer use wavelets in Registax and haven't looked back; way too many parameters, way too complex and abstruse, not based on the physics of what happens to the image between the target and the sensor.
The very first thing I do before anything else is deconvolution. I use the Simple Deconvolution tool, Lucy-Richardson, Gaussian kernel, iterations 10, strength 5. These settings essentially never change. Don't turn on Suppress Halos, don't turn on Aggressive. I then adjust the radius and check the preview. I want to find the maximum radius I can get away with that doesn't look over-sharpened. Over-sharpened is that blown out low quality video look where you have clipped the whites and blacks, particularly at sharp edges. One of the key locations is at the edge of the illuminated limb. If you can see ringing at the edge, particularly clipped black just outside the limb, that is too much.
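For the curious, Lucy-Richardson with a Gaussian kernel is simple enough to sketch in a few lines. This is just the textbook iteration, not Astra Image's actual implementation, with `sigma` playing the role of the radius I adjust:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lucy_richardson_gaussian(image, sigma, iterations=10):
    """Textbook Lucy-Richardson deconvolution with a Gaussian PSF (a sketch)."""
    estimate = np.full_like(image, image.mean())  # start from a flat guess
    for _ in range(iterations):
        blurred = gaussian_filter(estimate, sigma)
        ratio = image / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        # A Gaussian is symmetric, so the flipped PSF is the same filter.
        estimate = estimate * gaussian_filter(ratio, sigma)
    return estimate
```

Pushing `sigma` (or the iteration count) too high is exactly what produces the clipped ringing at the limb described above.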
A little digression about Gaussian deconvolution. If you have perfect seeing and your scope is well collimated, to first order the Airy disk diffraction pattern is approximately a Gaussian distribution. The equivalent Gaussian radius of the Airy disk in pixels is approximately given by:
0.42 x (wavelength in nm) x (focal ratio) / (pixel size in microns) / 1000
If you are at critical sampling (where the focal ratio is 5x the pixel size in microns), this simplifies to:
2.1 x (wavelength in nm) / 1000
So for the green channel, where I generally take the average wavelength to be 540 nm, this works out to 1.13 pixels. If I were critically sampled and had excellent seeing, I would deconvolve channel by channel with the appropriate radii, and I am convinced it makes a visible difference in color photographs, particularly where the saturation is turned up. For what it's worth, I do not yet have an ADC (Atmospheric Dispersion Corrector). I have had some success in reducing atmospheric dispersion in post, but I haven't really "perfected" (wink wink, nudge nudge) this yet.
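For convenience, the two formulas above translate directly to Python:

```python
def airy_gaussian_radius_px(wavelength_nm, focal_ratio, pixel_um):
    """Equivalent Gaussian radius of the Airy disk, in pixels."""
    return 0.42 * wavelength_nm * focal_ratio / pixel_um / 1000

def critical_radius_px(wavelength_nm):
    """The same radius at critical sampling (focal ratio = 5 x pixel size in microns)."""
    return 2.1 * wavelength_nm / 1000

print(round(critical_radius_px(540), 2))  # 1.13 pixels for the green channel
```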
However, you rarely have excellent seeing, so usually the relevant Gaussian radius is not that of the Airy disk, but rather the seeing-limited radius, which will be larger. A little digression within the digression. When you stack a lower number of frames, the seeing-limited radius is actually smaller than when you stack more frames. This is because the effect of atmospheric turbulence over short time frames is more like bulk distortion than it is Gaussian blurring. Autostakkert finds the sharpest percentage of Alignment Points and averages them, and also slightly skews and stretches them to better align them to each other and to the reference frame. Theoretically the sharpest image would be formed by literally stacking 1 frame, with each Alignment Point using the sharpest frame for that point. In practice this doesn't work, because it's very difficult to align individual (unaveraged) APs because of noise. That's what averaging does; it reduces noise. The noise-to-signal ratio falls with the square root of the number of frames stacked. So you typically require some minimum number of frames to average to avoid stacking artifacts, unless you are very undersampled. In any event, presuming you stack enough frames to avoid stacking artifacts due to noise, you will find that fewer frames stacked results in a sharper but noisier image, while more frames stacked results in a blurrier but less noisy image. When I say blurrier, I mean that literally; to first order the averaging process is literally Gaussian convolution, and that's why Gaussian de-convolution is the most physically realistic option and gives the most realistic results, in my opinion.
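The square-root behavior is easy to demonstrate with fake frames (pure Gaussian noise here, which real seeing is not, but the scaling is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
signal, per_frame_noise = 100.0, 5.0
for n in (10, 100, 1000):
    # n fake frames of the same flat "scene" plus independent per-frame noise
    frames = signal + rng.normal(0.0, per_frame_noise, size=(n, 10000))
    stack = frames.mean(axis=0)
    # residual noise in the stack shrinks like per_frame_noise / sqrt(n)
    print(n, round(float(stack.std()), 2))
```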
Anyway, I suppose you don't need to know any of this. The end result is that you need to just play with the Gaussian deconvolution radius until you find the value (or values) that you find pleasing. I typically find the maximum radius above which the result starts to look over-sharpened. I then decide if the resulting image supports the capture resolution. If not, I will resize by 50%. I don't generally futz around with downsizing by 90% or 70% or whatever. I want a sharp looking result, and if full resolution doesn't support it, I will go straight to 50%. And if that doesn't support it (either from bad seeing or because my focus was out) I will go to 25%. I always preview deconvolution but then only apply at what I judge will be the final image size. Remember, if you resize by 50%, you need to cut your kernel radius by 50% as well. Sometimes you can get a *little* more aggressive with the radius at the reduced size, but not much.
After deconvolution, I will typically use a Convolution Filter for the final sharpening to make the details pop. I use the Mild or Strong sharpening filters, but I adjust the coefficients because if you have done your job well during deconvolution, the default coefficients will be way too high. I bring down all of the coefficients until the preview looks as sharp as possible but not over-sharpened. Remember, the way these sharpening convolution filters work is that they increase the weight of the central pixel and use negative weight for the surrounding pixels, to increase the pixel to pixel contrast. So the sum of all the coefficients should be 1, regardless of what your individual values are. When I next have a night with perfect seeing, I am going to attempt to calculate a set of weights for the actual Airy disk, diffraction rings and all, and see if I can improve on Gaussian deconvolution, with the goal of virtually eliminating ringing in the resulting image.
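As a concrete (hypothetical) example of such a kernel, with the coefficients summing to 1:

```python
import numpy as np
from scipy.ndimage import convolve

a = 0.5  # sharpening strength; a made-up value, not Astra Image's actual defaults
kernel = np.array([[-a, -a, -a],
                   [-a, 1 + 8 * a, -a],
                   [-a, -a, -a]])
print(kernel.sum())  # 1.0: the weights sum to 1 regardless of a

# A flat image passes through unchanged; only pixel-to-pixel contrast is boosted.
flat = np.full((5, 5), 0.5)
print(np.allclose(convolve(flat, kernel, mode="nearest"), flat))  # True
```

Lowering `a` is the equivalent of bringing all the coefficients down after a good deconvolution pass.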
Next I will use Denoise. I'm not particularly happy with the Astra Image Denoise tool, to be frank. It's nice that it separates luma and chroma noise, but the algorithm can sometimes produce artifacts that look worse than the noise did in the first place. So I will typically under-denoise, preferring some grain to weird-looking artifacts and loss of detail. Besides, I kind of like the look of some grain; gives it a film kind of a vibe.
The very last thing I do is color correction and setting levels. The first thing I deal with is visible black clipping. I will raise the black point the minimum amount required to eliminate the visible black clipping, and then (this is important) I will move the mid-tones marker back down to where it was to keep the overall levels, then Apply Permanently. I will bring the white point down so that the smallest highlights are brilliant white but not clipping. These images are a little dark for my taste now that I look at them; the smallest highlights are not bright white. That was an oversight. In any event, the last thing I do is adjust the gamma, either via gamma directly or Curves, to brighten up the whole image without blowing out the highlights.
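The gamma step is just a power curve on normalized pixel values; as a quick sketch (with a made-up exponent), an exponent below 1 lifts the midtones while pinning black and white in place:

```python
import numpy as np

img = np.linspace(0.0, 1.0, 5)   # fake normalized pixel values
gamma = 0.7                      # hypothetical value; exponent < 1 brightens midtones
out = img ** gamma

print(out[0], out[-1])           # endpoints stay 0.0 and 1.0, so no blown highlights
print(out[2] > img[2])           # True: the 0.5 midtone is lifted
```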
I'm sure Tom Glenn's or aeroman's or others' processes would blow mine out of the water, but I try to keep the budget, in terms of software cost and processing time, to a minimum and follow the KISS principle.
That was long-winded.
Edited by Borodog, 17 April 2021 - 04:13 PM.