Beautiful pics, and thank you for sharing your expertise with us.
You have mentioned several post-processing steps like sharpening, deconvolution, Gaussian blur and denoise. Can you talk about why and how you use these techniques and what software you use to realize them?
Thanks for your comments. Below, I will expand on a few of my thoughts regarding image processing, with specific attention to the techniques you asked about. This will be a general discussion of the principles, but when I have time later I will add a few screenshots showing what I do to an image, to make it more practical. Many of these terms have mathematical definitions, and although you don't actually need to know much about the details in order to use the tools effectively on your images, I find that it is quite interesting and informative to know a bit about the background of the techniques. The Wikipedia pages for these terms do a pretty good job of summarizing, and in fact you can skip most of the mathematical details and just try to get an overview (usually reading the first few sentences is sufficient for a basic understanding). Some terms that relate to image processing, as well as many other areas of data analysis, include the following (all are clickable links):
Convolution is a mathematical operation in which one function modifies another function to arrive at a third. Why is this relevant to imaging? Because when light passes through Earth's atmosphere, it undergoes a convolution before it reaches your camera sensor, so the data we collect with our cameras has been distorted (convolved) by the atmosphere. If we had some way of knowing what mathematical operation distorted the data, we could undo it. This is deconvolution. In really good seeing conditions, the atmospheric distortion can be approximated with a Gaussian function, a mathematical function often used to describe data that follows a normal distribution (the familiar bell curve). If you apply the appropriate deconvolution to data that has been convolved with a Gaussian function, you can recover the original signal (before atmospheric distortion). The important concept here, though, is that the distortion to the data must be capable of being accurately modeled mathematically, and this is only possible when the seeing conditions are good, because in those cases the convolution is approximately Gaussian. In bad seeing, the convolution is pure chaos and cannot be accurately modeled. In a nutshell, this is the theory behind deconvolution-based sharpening. As an aside, adaptive optics platforms (which are used by professional observatories) attempt to cancel the atmospheric convolution before the light even reaches the camera sensor by using deformable mirrors, but these also only work in good seeing conditions, for the same reason: the distortions can only be accurately modeled under good conditions.
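To make the convolve-then-deconvolve idea concrete, here is a minimal 1-D sketch in Python with NumPy (my own illustration, not anything the imaging programs expose): a sharp "scene" is blurred with a Gaussian point-spread function, standing in for the atmosphere, and then restored with a simple Richardson-Lucy iteration, the same algorithm I mention using in AstraImage. All the function names and parameter values here are just for demonstration.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Discrete 1-D Gaussian kernel, normalized so it sums to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def convolve(signal, kernel):
    """Same-size convolution -- the atmospheric blurring step."""
    return np.convolve(signal, kernel, mode="same")

def richardson_lucy(blurred, kernel, iterations):
    """Iteratively estimate the original signal, given the blur kernel (PSF)."""
    estimate = np.full_like(blurred, blurred.mean())
    kernel_flipped = kernel[::-1]
    for _ in range(iterations):
        reblurred = convolve(estimate, kernel)
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= convolve(ratio, kernel_flipped)
    return estimate

# A sharp "scene": two point-like features (think craters or stars).
scene = np.zeros(64)
scene[20], scene[40] = 1.0, 0.6

psf = gaussian_kernel(radius=8, sigma=2.0)   # a model of good seeing
blurred = convolve(scene, psf)               # what the camera records
restored = richardson_lucy(blurred, psf, iterations=200)
```

The key point matches the paragraph above: this only works because we told the algorithm exactly which Gaussian did the blurring. Feed it the wrong PSF (as happens implicitly in bad seeing, where no Gaussian fits), and the "restored" result is garbage.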
Recently, with my Moon images I have been using AstraImage for deconvolution. Wavelet sharpening is another approach to the same problem. Wavelets are another way of mathematically modeling distortions to a set of data, and programs like Registax (or any other that uses wavelets) will sharpen an image; they are frequently used in the imaging community to try to undo the effects of atmospheric seeing. My experience is that deconvolution and wavelets can yield very similar results, but everyone has preferences on which program to use and what modifications to make. It takes some time to figure out your individual workflow. With lunar images, the modifications are generally very simple. As seen in my previous post above, my image of Clavius looks pretty good even with only a single frame. If you don't have good seeing conditions, you will not be able to gain much from deconvolution. I have been using Lucy-Richardson deconvolution in AstraImage; wavelet sharpening in Registax works very well too, and the only reason I'm not using it on my lunar images now is that it is much slower with my larger files from the ASI183 and it has annoyed me! Most deconvolution programs will require you to enter two variables, one corresponding to the overall strength of the operation, and the other defining the mathematical function. The latter is usually referred to as a "point size" or "blur kernel size" and typically specifies the radius (in pixels) over which the deconvolution equation is applied. It is fun to play around with these variables on your images, and you can quickly see how each affects the outcome.
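To illustrate what those two sliders do, here is a hedged sketch in NumPy. This is not AstraImage's or Registax's actual algorithm; I'm using a simple unsharp mask as a stand-in, because it has the same two controls: a `strength` parameter (overall aggressiveness) and a `radius` parameter (the size, in pixels, of the blur being modeled).

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Blur by convolving with a normalized Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(signal, k, mode="same")

def sharpen(signal, strength, radius):
    """Unsharp mask: re-amplify the detail removed by a blur of the given radius.

    strength -- overall strength of the operation
    radius   -- size in pixels of the blur model (the 'kernel size' slider)
    """
    low_pass = gaussian_blur_1d(signal, sigma=radius)
    detail = signal - low_pass
    return signal + strength * detail

# A soft edge, like a crater rim smeared out by seeing.
step = np.where(np.arange(64) < 32, 0.2, 0.8).astype(float)
edge = gaussian_blur_1d(step, sigma=3.0)

mild = sharpen(edge, strength=0.8, radius=2.0)
harsh = sharpen(edge, strength=5.0, radius=2.0)  # overshoots: over-sharpening
```

Playing with `strength` and `radius` here mirrors what happens with the two settings in a real deconvolution dialog: the mild setting steepens the edge, while the aggressive one produces the exaggerated halos discussed in the next paragraph.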
Gaussian blur is actually a form of convolution, in which you take your image and apply a Gaussian function to it to create a blur. Why would you want to apply a convolution to your data after you went to the trouble of sharpening it through deconvolution? The answer is that, often, any form of sharpening (whether deconvolution, wavelets, or other) leads to an image that doesn't look natural. When first starting out in imaging, it is very common to over-sharpen your images and not realize you are doing so. With the Moon, you can look at images taken by other experienced imagers (or images from orbiting spacecraft) and compare them to your own. Classic signs of over-sharpening include the enlargement of small details beyond their actual size, or an unnatural-looking contrast in the final image. An analogy from planetary imaging is the planet Saturn, where classic signs of over-sharpening include a hugely expanded Cassini Division that appears much larger and darker than it really is, as well as spurious ring divisions that don't actually exist. A similar thing happens with small rilles and craters on the Moon if over-processed: edges and contrast get exaggerated. If the amount of over-sharpening is minor, then adding a slight Gaussian blur can smooth out the image and make it look more natural, although if it has been severely over-sharpened, a Gaussian blur will not correct this and you have to go back to the beginning and redo the deconvolution in a less aggressive manner.
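Here is a small NumPy sketch of that rescue step (again my own illustration, not Photoshop's implementation): an edge with the bright and dark halos characteristic of over-sharpening gets a slight Gaussian blur, which pulls the overshoot back toward natural values.

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Convolution with a normalized Gaussian kernel -- i.e. a Gaussian blur."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(signal, k, mode="same")

# An over-sharpened edge: note the overshoot ("halos") on both sides.
oversharp = np.where(np.arange(64) < 32, 0.3, 0.7).astype(float)
oversharp[30:32] = 0.1   # dark halo on the dim side
oversharp[32:34] = 0.9   # bright halo on the bright side

softened = gaussian_blur_1d(oversharp, sigma=1.0)  # the slight rescue blur
```

The blur reduces both the spurious bright peak and the spurious dark dip, which is exactly why a light Gaussian blur can save a mildly over-sharpened image; with severe halos, though, no sigma both removes them and keeps the edge, which matches the advice above to go back and redo the deconvolution instead.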
Denoising in general simply refers to any mechanism for reducing noise in an image. Gaussian blur is one form, but there are others. The reason I like Gaussian blur is that it produces a very aesthetically pleasing blur. Other forms of denoising can also be useful, but you have to be careful, because they can sometimes lead to a very unnatural-looking result. All denoising algorithms try to determine what is real signal versus noise and smooth out only the noise, but often the result does not look natural, and you would have been better off keeping the original noise.
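As a minimal demonstration of Gaussian blur acting as a denoiser (a NumPy sketch under my own assumptions, with synthetic data rather than a real lunar frame), here a smooth "true" signal gets random sensor noise added, and a modest blur brings the result measurably closer to the truth:

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Convolution with a normalized Gaussian kernel."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(signal, k, mode="same")

rng = np.random.default_rng(42)
clean = 0.5 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, 256))  # smooth "real" signal
noisy = clean + rng.normal(0.0, 0.05, size=clean.size)      # add sensor noise

denoised = gaussian_blur_1d(noisy, sigma=2.0)

# Root-mean-square error against the clean signal (interior only, to avoid edge effects).
def rmse(a):
    return float(np.sqrt(np.mean((a[16:-16] - clean[16:-16]) ** 2)))
```

The trade-off described above is visible in the parameters: raise `sigma` and the noise keeps dropping, but real fine detail starts to smear as well, which is the point at which any denoiser begins to look unnatural.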
I use AstraImage or Registax for the sharpening of my lunar images, and then I use Photoshop for the final editing of tonal variation and for applying any functions such as Gaussian blur. When I have more time later this weekend, I will add another post here to expand on those details a bit. Tonal variation is very important on the Moon; common mistakes result in an image with significant regions clipped to pure white, or conversely, an image that is overall extremely dark with unnatural contrast. As long as the original data was not overexposed, these problems are relatively easy to fix.
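For completeness, here is a toy version of that tonal correction, a simple linear levels stretch in NumPy. This is only a sketch of the idea behind Photoshop's Levels sliders, not my actual Photoshop workflow, and the frame here is random synthetic data standing in for a dark, low-contrast lunar capture:

```python
import numpy as np

def clipped_fraction(image, white=1.0):
    """Fraction of pixels clipped to pure white -- should stay at zero."""
    return float(np.mean(image >= white))

def levels_stretch(image, black_point, white_point):
    """Linear levels adjustment: map [black_point, white_point] to [0, 1]."""
    out = (image - black_point) / (white_point - black_point)
    return np.clip(out, 0.0, 1.0)

# A dark, low-contrast "lunar frame": all values well below full scale.
rng = np.random.default_rng(0)
frame = rng.uniform(0.05, 0.40, size=(64, 64))

stretched = levels_stretch(frame, black_point=0.05, white_point=0.40)
```

This is also why the last sentence above matters: the stretch can spread unclipped data across the full tonal range, but if the original capture was overexposed, the clipped highlights carry no information and nothing can bring them back.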