First (lunar) light with ASI183MC


#1 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 16 April 2021 - 10:37 PM

I got the camera a while ago but I've been waiting for some other kit to come in, namely an f/6.3 SCT focal reducer. I finally received that yesterday, so today was the day.

 

This was shot at about 1 PM with the Moon near zenith. Even so, the seeing was poor.

 

1985 C8 + Celestron f/6.3 focal reducer/corrector, ASI183MC. 2-panel mosaic stitched in ICE, post-processed in Astra Image. Each panel was the best 50% of 2,000 frames, stacked in Autostakkert. Green channel.

 

Feedback/constructive criticism is very welcome and appreciated.

 

gallery_346195_16100_241918.jpg


#2 matt_astro_tx

    Ranger 4

  • Posts: 329
  • Joined: 19 Jan 2021
  • Loc: Dallas Area, Texas

Posted 16 April 2021 - 10:52 PM

That’s killer

#3 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 16 April 2021 - 11:00 PM

Thanks, Matt!

 

Here's a higher-resolution version of the left of the two panels. Unfortunately, the right-hand panel does not support this resolution. Again, any feedback or criticism is appreciated.

 

gallery_346195_16100_153694.jpg


#4 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 17 April 2021 - 12:57 AM

And lastly, 8 hours later. This is why I bought this camera and the focal reducer/corrector. It eats poor seeing for breakfast and spits out beautiful images. Exactly what I hoped for.

 

Again, any feedback is welcomed. I'm always trying to improve.

 

gallery_346195_16100_188781.jpg


Edited by Borodog, 17 April 2021 - 12:58 AM.

#5 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 17 April 2021 - 10:49 AM

This is a full-scale crop at capture resolution of the second image, taken at about 9 PM. I forgot to mention that not only was the seeing poor, but this was actually shot through cloud thick enough that the Moon had a visible glow, which unfortunately obliterated the beautiful Earthshine I was hoping to image. The poor seeing robs fine detail, the cloud glow robs contrast, and I could only use about 300 frames for the stack. You can see how all of this lets the full-resolution image down, but given all of that, I'm still quite impressed with the result. I can't wait for a clear night with seeing around 1" or less. This was easily > 3".

 

gallery_346195_16100_177664.jpg



#6 jeffry7

    Ranger 4

  • Posts: 314
  • Joined: 07 Dec 2017

Posted 17 April 2021 - 11:00 AM

Hi Borodog,

 

Would you mind telling us your process?


#7 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 17 April 2021 - 12:36 PM

I would love to! I was just thinking of starting a "Lunar Imaging Tips and Tricks" thread, in fact. But I'll put the stuff related to these photos here.

 

So the first photo, taken during the daytime, is interesting. As I said: bad seeing, difficult to focus, broad daylight (~1 PM), so a bright blue sky. During capture I reduced my ROI down to 352x352, put the square on the illuminated limb, and used auto white balance, but this resulted in way too much red and way too little blue in the resulting image. It would have required a lot of fiddling to color correct, so I decided to go monochrome and process only the green channel.

Here is a trick to remove the blue sky and preserve the color depth: stack to FITS format and then open the image in GIMP. I originally did this because it's a convenient way to isolate and export the individual channels (for channel-by-channel deconvolution when you have excellent seeing that supports it), and I was doing this to export the green channel. But then I thought about it and realized that in GIMP the FITS file retains its full 32 bits per channel of bit depth. So I used the Curves adjustment to subtract out the sky from the bottom end, set the white point to be appropriate for downstream sharpening (around 75%), and then exported to a 16-bit grayscale PNG. Voila, plenty of bit depth for downstream processing. If you don't do it this way, you run a very strong risk of posterization when you try to bring the sky down to black.
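The posterization risk is easy to demonstrate with a quick simulation (a sketch of the idea only; the numbers are illustrative, not from the actual capture):

```python
import numpy as np

# Simulate a dim, low-contrast sky gradient occupying only the bottom
# ~10% of the tonal range, stored at two different bit depths.
true_signal = np.linspace(0.0, 0.1, 1000)

img8 = np.round(true_signal * 255) / 255        # quantized to 8 bits
img16 = np.round(true_signal * 65535) / 65535   # quantized to 16 bits

# Stretch the bottom 10% of the range to full scale, roughly what a
# Curves adjustment does when you subtract the sky and reset the white point.
stretch = lambda img: np.clip(img * 10, 0, 1)

# The 8-bit version has only ~27 distinct levels left after the stretch
# (posterization); the 16-bit version retains essentially all of them.
levels8 = len(np.unique(stretch(img8)))
levels16 = len(np.unique(stretch(img16)))
print(levels8, levels16)
```

The same arithmetic explains why stretching a 32-bit FITS in GIMP before exporting to 16-bit grayscale preserves smooth tonality.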

 

After that, all of my processing other than final conversion to jpg for posting on CN is done in Astra Image. I no longer use wavelets in Registax and haven't looked back; way too many parameters, way too complex and abstruse, not based on the physics of what happens to the image between the target and the sensor.

 

The very first thing I do, before anything else, is deconvolution. I use the Simple Deconvolution tool: Lucy-Richardson, Gaussian kernel, 10 iterations, strength 5. These settings essentially never change. Don't turn on Suppress Halos; don't turn on Aggressive. I then adjust the radius and check the preview. I want to find the maximum radius I can get away with that doesn't look over-sharpened. Over-sharpened is that blown-out, low-quality video look where you have clipped the whites and blacks, particularly at sharp edges. One of the key locations is the edge of the illuminated limb. If you can see ringing at the edge, particularly clipped black just outside the limb, that is too much.

 

A little digression about Gaussian deconvolution. If you have perfect seeing and your scope is well collimated, to first order the Airy disk diffraction pattern is approximately a Gaussian distribution. The equivalent Gaussian radius of the Airy disk in pixels is approximately given by:

 

0.42 x (wavelength in nm) x (focal ratio) / (pixel size in microns) / 1000

 

If you are at critical sampling this simplifies to:

 

2.1 x (wavelength in nm) / 1000

 

So for the green channel, where I generally take the average wavelength to be 540 nm, this works out to be 1.13 pixels. If I were critically sampled and had excellent seeing, I would deconvolve channel by channel with the appropriate radii and I am convinced it makes a visible difference in color photographs, particularly where the saturation is turned up. For what it's worth I do not yet have an ADC (Atmospheric Dispersion Corrector). I have had some success in reducing atmospheric dispersion in post, but I haven't really "perfected" (wink wink, nudge nudge) this yet.
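As a sanity check, the two formulas above can be coded directly (function names are mine; the example system values are invented):

```python
def gaussian_radius_px(wavelength_nm, focal_ratio, pixel_um):
    """Equivalent Gaussian radius of the Airy disk, in pixels."""
    return 0.42 * wavelength_nm * focal_ratio / pixel_um / 1000

def gaussian_radius_critical(wavelength_nm):
    """Simplified form at critical sampling (focal ratio = 5 x pixel size in microns)."""
    return 2.1 * wavelength_nm / 1000

# Green channel at 540 nm, critically sampled: ~1.13 px, as quoted above.
print(round(gaussian_radius_critical(540), 2))        # 1.13

# The full form reduces to the simplified one when focal_ratio = 5 * pixel_um,
# e.g. a hypothetical f/12 system on 2.4 micron pixels:
print(round(gaussian_radius_px(540, 12.0, 2.4), 2))   # 1.13
```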

 

However, you rarely have excellent seeing, so usually the relevant Gaussian radius is not that of the Airy disk but the seeing-limited radius, which will be larger.

A little digression within the digression. When you stack a lower number of frames, the seeing-limited radius is actually smaller than when you stack more frames. This is because the effect of atmospheric turbulence over short time frames is more like bulk distortion than Gaussian blurring. Autostakkert finds the sharpest percentage of frames at each Alignment Point and averages them, and also slightly skews and stretches them to better align to each other and the reference frame. Theoretically the sharpest image would be formed by literally stacking 1 frame, with each Alignment Point using the sharpest frame for that point. In practice this doesn't work, because it's very difficult to align individual (unaveraged) APs due to noise. That's what averaging does; it reduces noise. The noise-to-signal ratio falls with the square root of the number of frames stacked. So you typically require some minimum number of frames to average to avoid stacking artifacts, unless you are very undersampled. In any event, presuming you stack enough frames to avoid stacking artifacts due to noise, you will find that fewer frames stacked results in a sharper but noisier image, while more frames stacked results in a blurrier but less noisy image. When I say blurrier, I mean that literally; to first order the averaging process is literally Gaussian convolution, and that's why Gaussian deconvolution is the most physically realistic option and gives the most realistic results, in my opinion.
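The square-root behavior is easy to verify with a toy stack (synthetic Gaussian noise standing in for real frames; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 100.0        # constant "true" pixel value
noise_sigma = 10.0    # per-frame noise (illustrative)

def stack_noise(n_frames, n_pixels=100_000):
    """Residual noise (std dev) left after averaging n_frames noisy frames."""
    frames = signal + rng.normal(0, noise_sigma, size=(n_frames, n_pixels))
    return frames.mean(axis=0).std()

# Residual noise falls roughly as 1/sqrt(N):
n1, n100 = stack_noise(1), stack_noise(100)
ratio = n1 / n100
print(round(ratio, 1))   # ~10, i.e. sqrt(100)
```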

 

Anyway, I suppose you don't need to know any of this. The end result is that you just need to play with the Gaussian deconvolution radius until you find the value (or values) you find pleasing. I typically find the maximum radius above which the result starts to look over-sharpened. I then decide whether the resulting image supports the capture resolution. If not, I will resize by 50%. I don't generally futz around with downsizing by 90% or 70% or whatever; I want a sharp-looking result, and if full resolution doesn't support it, I will go straight to 50%. And if that doesn't support it (either from bad seeing or because my focus was out), I will go to 25%. I always preview deconvolution but then only apply it at what I judge will be the final image size. Remember, if you resize by 50%, you need to cut your kernel radius by 50% as well. Sometimes you can get a *little* more aggressive with the radius at the reduced size, but not much.

 

After deconvolution, I will typically use a Convolution Filter for the final sharpening to make the details pop. I use the Mild or Strong sharpening filters, but I adjust the coefficients, because if you have done your job well during deconvolution, the default coefficients will be way too high. I bring down all of the coefficients until the preview looks as sharp as possible but not over-sharpened. Remember, the way these sharpening convolution filters work is that they increase the weight of the central pixel and use negative weights for the surrounding pixels, to increase the pixel-to-pixel contrast. So the sum of all the coefficients should be 1, regardless of what your individual values are. When I next have a night with perfect seeing, I am going to attempt to calculate a set of weights for the actual Airy disk, diffraction rings and all, and see if I can improve on Gaussian deconvolution, with the goal of virtually eliminating ringing in the resulting image.
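A minimal sketch of such a kernel (the particular coefficients are illustrative, not Astra Image's defaults):

```python
import numpy as np

# A mild 3x3 sharpening kernel: positive centre, negative surround,
# with all nine coefficients summing to 1 so brightness is preserved.
s = 0.1                           # surround weight magnitude (illustrative)
kernel = np.full((3, 3), -s)
kernel[1, 1] = 1 + 8 * s          # centre picks up the slack

assert abs(kernel.sum() - 1.0) < 1e-12   # brightness-preserving

# On a flat region it changes nothing; only pixel-to-pixel contrast is boosted:
flat = np.full((3, 3), 50.0)
print(round((kernel * flat).sum(), 6))   # 50.0 -- flat areas untouched
```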

 

Next I will use Denoise. I'm not particularly happy with the Astra Image Denoise tool, to be frank. It's nice that it separates luma and chroma noise, but the algorithm can sometimes produce artifacts that look worse than the noise did in the first place. So I will typically under-denoise, preferring some grain to weird-looking artifacts and loss of detail. Besides, I kind of like the look of some grain; it gives things a film kind of vibe.

 

The very last thing I do is color correction and setting levels. The first thing I deal with is visible black clipping. I raise the black point the minimum amount required to eliminate it, and then (this is important) I move the mid-tones marker back down to where it was to keep the overall levels, then Apply Permanently. I bring the white point down so that the smallest highlights are brilliant white but not clipping. These images are a little dark for my taste now that I look at them; the smallest highlights are not bright white. That was an oversight. In any event, the last thing I do is adjust the gamma, either directly or via Curves, to brighten the whole image without blowing out the highlights.
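The levels-plus-gamma step can be sketched as a function on a normalized [0, 1] pixel value (parameter names and defaults are mine; Astra Image's sliders will differ):

```python
# Raise the black point, lower the white point, then apply gamma.
def levels(v, black=0.02, white=0.95, gamma=1.4):
    v = min(max((v - black) / (white - black), 0.0), 1.0)  # clip + rescale
    return v ** (1.0 / gamma)   # gamma > 1 brightens the mid-tones

print(round(levels(0.0), 3))    # 0.0 -- sky stays black
print(round(levels(0.5), 3))    # mid-tones lifted above 0.5
print(round(levels(0.95), 3))   # 1.0 -- smallest highlights reach white
```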

 

I'm sure Tom Glenn's or aeroman's or others' processes would blow mine out of the water, but I try to keep the budget, in terms of software cost and processing time, to a minimum and follow the KISS principle.

 

That was long-winded.


Edited by Borodog, 17 April 2021 - 04:13 PM.

#8 jeffry7

    Ranger 4

  • Posts: 314
  • Joined: 07 Dec 2017

Posted 17 April 2021 - 02:26 PM

Hi Borodog,

 

Thank you for the detailed explanation. I had not thought about simply using the green channel for daytime shots. Interesting.

 

You are using several forms of sharpening in the middle of the workflow. This is contrary to advice I have read to save sharpening for the end. Is there a reason you do it this way?

(I can see a point for this. The sharpening steps affect the black point and the highlights, and waiting until after sharpening to fix the curve the way you want it makes sense.)

 

Wouldn't it be better to denoise prior to stacking? If the denoise is any good, then you could potentially stack fewer frames and avoid stacking-induced blur.


#9 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 17 April 2021 - 03:09 PM

A) I wouldn't know how to denoise prior to stacking (maybe PIPP, but I don't use it), but

B) No, you definitely would not want to do that. Stacking naturally increases the signal to noise ratio as the signal rises linearly with the number of frames but the noise only rises as the square root thereof. You only want to remove the noise that's left after stacking. If you try to remove it along the way you will inevitably be removing signal as well. Better to just let the stacking do its job.

 

Now, it is arguable that perhaps you want to denoise after stacking but before deconvolution. The problem with this is that the residual noise is very hard to see in the image prior to deconvolution, hence it's difficult to judge how much denoising to apply. And since denoising can also create artifacts, these then tend to get amplified by deconvolution. I suppose I should experiment more with this. I would be happy to hear others' thoughts on this.

 

You are correct; I always sharpen first. The only exception is when the unsharpened histogram is so close to clipping that sharpening would cause it to clip. Then I will compress the histogram a bit to leave room for deconvolution without clipping. I always try to get the raw image as sharp as I can before I go fiddling with color correction, levels, gamma, etc. For linear operations I don't think the order matters too much, but for non-linear operations like gamma correction, it definitely seems wrong to do them before deconvolution, and I think you could possibly get strange results if you do.
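That non-commutativity is easy to demonstrate with a toy 1-D example (any blur kernel and gamma value will show the same effect):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, size=100)   # synthetic 1-D "image"

def blur(x):              # a linear operation: simple 3-tap box average
    return np.convolve(x, np.ones(3) / 3, mode="same")

def gamma(x, g=2.2):      # a non-linear operation
    return x ** (1 / g)

# Swapping the order of a linear and a non-linear step changes the result:
a = gamma(blur(img))
b = blur(gamma(img))
print(np.abs(a - b).max() > 1e-3)   # True -- order matters
```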

 

I would be happy to hear other theories, though. I could always be wrong.


Edited by Borodog, 18 April 2021 - 08:31 AM.


#10 Tom Glenn

    Gemini

  • Posts: 3,361
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 17 April 2021 - 03:09 PM

Mike, I'm glad you got the new camera up and running, and your images look like a great start, with very pleasing processing.  Daytime imaging of the crescent Moon generally has poor seeing which offsets the gain in altitude.  Currently, at Moon set, the angle of the ecliptic plane is very steep to the horizon, which means that the waxing crescent Moon is higher in the sky after sunset than at any other time of the year, which makes imaging after dark much easier.  Daylight sky brightness is additive to the signal from the Moon.  In addition to hiding the shadow detail, this additive property actually reduces the overall exposure you can use.  I noticed this last night, as I started imaging before it was fully dark.  If you keep your camera settings the same, you will notice your histogram start to drop as the sky gets darker, despite the fact that the Moon appears brighter.  This is counterintuitive, but is happening because the Moon is maintaining its same luminance value, and you are subtracting the sky brightness as it gets darker.  This not only reveals the shadows better, but also allows you to increase the exposure if desired, to collect even more light.  
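Tom's point about the additive sky brightness can be put in numbers with a toy model (all values invented for illustration):

```python
# The sky signal adds to the Moon's signal, so it eats into the sensor's
# full well and caps the usable exposure.
full_well = 1.0                   # normalized saturation level
moon_rate = 0.8                   # Moon signal per unit exposure
sky_day, sky_night = 0.5, 0.01    # sky signal per unit exposure

def max_exposure(sky_rate):
    """Longest exposure before Moon + sky signal clips."""
    return full_well / (moon_rate + sky_rate)

# A darker sky leaves room for a longer exposure at the same settings:
print(round(max_exposure(sky_day), 3))    # ~0.769
print(round(max_exposure(sky_night), 3))  # ~1.235
```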


#11 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 17 April 2021 - 03:13 PM

Thanks Tom. I'd definitely like to hear your opinions on order of operations in terms of sharpening (and what your approaches are), denoising (and what tools you use), and color, level, and gamma corrections.



#12 Tom Glenn

    Gemini

  • Posts: 3,361
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 17 April 2021 - 09:15 PM


Hi Mike.  The answers really go beyond what I can put in a post without it turning into a huge body of text, which would be very time-consuming for both me and the reader.  The summary of my methods was conveyed in my ~2-hour presentation on YouTube, which I linked to here on CN.

 

https://www.cloudyni...tion-for-novac/

 

Of course this won't cover everything you're interested in.  I'm in favor of keeping things simple.  I do deconvolution first, on the unstretched image, using AstraImage, and then adjust the tonality in Photoshop afterwards.  I read what you wrote above, and I'm not an advocate of isolating any of the color channels from a color camera.  The "channels" are artificial anyway, because the colors were inferred from adjacent pixels.  Autostakkert does use a good method of debayering, which takes advantage of the natural dithering across frames, but I still see no need to isolate a channel.  If you find yourself wanting to do this frequently, then it probably would have been wise to purchase the monochrome version of this camera and take advantage of true independent channels across the full sensor.  Your method does work, but an alternative is to take the color image and then convert to B/W, using a program that lets you select the weight of each color channel in determining the luminance values, because as you probably know, converting from color to B/W is not as simple as many believe.  Here again though, if you find yourself wanting to convert to grayscale often, perhaps the monochrome camera would be a better fit.
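The channel-weighted conversion Tom describes can be sketched like this (the Rec. 709 luma weights are shown as one common choice; a channel-mixer tool lets you pick your own):

```python
# Weighted color-to-grayscale conversion.
def to_gray(r, g, b, wr=0.2126, wg=0.7152, wb=0.0722):
    return wr * r + wg * g + wb * b

# Luma weights and naive equal weights give different results
# for the same pixel, which is why the choice of weights matters:
pixel = (200, 120, 40)
print(round(to_gray(*pixel), 1))                           # 131.2
print(round(to_gray(*pixel, wr=1/3, wg=1/3, wb=1/3), 1))   # 120.0
```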

 

I sometimes use a very low amount of "denoise" in Photoshop, but I find my tolerance for denoise is very low, because it creates an artificial plastic look, whereas some "grain" looks OK to me in many cases.  

 

As for how to set the tone curve, that's up to individual preferences, although I think that a reference selection drawn around the well illuminated regions of the Moon should have average pixel values of at least 100-120, placing the average lunar surface at middle gray.  Then, some highlight regions will be brighter, and of course, some shadows much dimmer.  Personally I would process the image you are showing here with a much stronger gamma curve, but your version is very aesthetic.  


#13 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 18 April 2021 - 08:28 AM

Tom, as I said, the only reason I elected to use the green channel only was that it was a blue sky shot but the color balance was crazy out of whack. I've since discovered through some research that this was probably at least partially because I did not use a UV/IR cut filter. The second shot, taken some 8 hours later, is actually in color.


Edited by Borodog, 18 April 2021 - 07:51 PM.


#14 Borodog

    Surveyor 1

  • topic starter
  • Posts: 1,768
  • Joined: 26 Oct 2020

Posted 18 April 2021 - 08:53 AM

Tom, you're right; it looks much better with higher gamma.

 

gallery_346195_16100_360695.jpg


#15 Tom Glenn

    Gemini

  • Posts: 3,361
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 18 April 2021 - 06:38 PM

Mike, I prefer the higher gamma version as well, simply because it matches up better with what we see visually.  But both renditions are nice, and often you may find the need to process images differently depending on what features you are trying to draw specific attention to. 

 

 

 but the color balance was crazy out of whack. I've since discovered through some research that this was probably at least partially because I did not use a UV/IR cut filter. 

If you do not use a UV/IR cut filter, the color balance will be incorrect, and it will be impossible to correct in processing, because the data captured is actually "wrong" with respect to the colors.  UV is somewhat irrelevant here, but the sensor is still very sensitive to IR wavelengths above 700nm, with much of this (but not all) being added incorrectly to the red channel.  This renders true color balancing impossible.  

