My First attempts at Imaging the Moon

5 replies to this topic

#1 copper280z


    Lift Off

  • -----
  • topic starter
  • Posts: 4
  • Joined: 01 Mar 2019

Posted 09 November 2019 - 08:10 AM

A few months ago I decided to start on a very budget-minded approach to a first telescope and maybe some imaging. I first built a pretty standard Dobsonian with some used parts from the classifieds here and some new inexpensive stuff from other sources. I ended up with a 6" primary mirror and a 1200mm focal length.


I knew tracking was an issue, but then I learned about lucky imaging and decided I'd give it a shot. This past week I finally got a camera that will come to focus: an IMX174-based one-shot-color industrial camera from eBay. I also wrote a Python script to do the image alignment and stacking, because I thought it was fun. It mostly uses OpenCV and handles the alignment totally without user input.


Monday was my first attempt. It was very hazy, windy, and generally not conducive to good imaging, but I had shiny new toys, so I tried anyhow.

500 frames

0 gain

27ms exposure

Top 5% frames selected



Thursday was much clearer, so I took many shots with the moon starting in different positions in the FOV, intending to build a mosaic at some point. This is only one of the shots, stacked and processed on its own.

500 frames

0 gain

10ms exposure

Top 20% of frames selected

Slightly altered stacking algorithm, with slightly different deconvolution parameters in StarTools.



I didn't bother with any calibration frames for either image.


I'm pretty pleased with the second one, but I see some images out there with far, far more detail, and I'm wondering how they do it. Most of them use larger scopes of some catadioptric layout, as well as tracking. I don't think tracking is it, because of the very short exposures in lucky imaging. Is it aberrations that aren't well corrected in the Newtonian layout? I'm still climbing the learning curve, so maybe it's just editing and capture skill. Every image I do at this point ends up better than the previous one.


Anyhow, I'm curious what the community thinks of these and if there's any advice on finding some more detail.

Edited by copper280z, 09 November 2019 - 08:17 AM.


#2 Wouter D'hoye


    Surveyor 1

  • *****
  • Posts: 1706
  • Joined: 27 Jun 2003
  • Loc: Belgium

Posted 09 November 2019 - 10:11 AM



Not bad results... especially if these are your first ones.


To get more detail, the first thing you need to do is use sufficient magnification / image scale. You use a 150mm scope and an IMX174 chip. This means that the ideal sampling will be at around f/20, or even a little more when seeing permits. However, this will make tracking considerably harder, as the moon will drift across the field quite rapidly. If you have a 2x Barlow you could experiment with that. That will get you to f/16.
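As a back-of-the-envelope check (using the common rule of thumb that the target focal ratio is about twice the pixel size in microns divided by the wavelength in microns; the IMX174's pixels are 5.86 µm):

```python
# Nyquist-style rule of thumb for critical sampling:
# target f-ratio ~ 2 * pixel size (um) / wavelength (um)
def target_f_ratio(pixel_um, wavelength_um=0.55):
    return 2 * pixel_um / wavelength_um

print(round(target_f_ratio(5.86), 1))  # ~21.3 at 550 nm, i.e. roughly f/20
print(1200 / 150 * 2)                  # native f/8 scope + 2x Barlow = 16.0
```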


Next, get good focus. And I'd say that is probably the hardest part of imaging the moon. Accurate focus is critical to get the best results. Focus on subtle details that pop up at moments of good seeing. Take your time focusing! And for sure do not try focusing on the lunar rim.


Try to image when the moon is high in the sky. The higher in the sky the thinner the layer of turbulent air one has to look through. Chances for good seeing will be better.


Take plenty of frames. I usually take between 2000 and 5000 frames.


Record SER files. These have no compression and contain the actual raw data. Try to keep exposure times short, and do not be afraid to use relatively high gain. Use a red filter, as longer wavelengths are less affected by bad seeing. With good seeing use an orange or blue filter.


Use AutoStakkert to process the SER files. In AutoStakkert I usually use about 10 to 20% of the captured frames.


Use RegiStax for wavelet sharpening, or other dedicated software to perform either wavelet sharpening or deconvolution sharpening.


Further process in photoshop/gimp etc... (unsharp mask, high pass filtering, noise reduction, levels, curves,...)


Those are, in a nutshell, the basic pointers I can give you without going into too much detail.


And most of all.. enjoy yourself!


Clear skies.



#3 RedLionNJ



  • *****
  • Posts: 3739
  • Joined: 29 Dec 2009
  • Loc: Red Lion, NJ, USA

Posted 09 November 2019 - 11:48 AM

As Wouter said, you're at a little less than f/8 right now. With the IMX174, you ought to be trying for nearer f/30. This is best achieved by adding the minimal number of pieces of glass possible, i.e. a single Barlow or PowerMate.


Since you're using a Dobsonian, that's going to make the moon's surface move across your sensor significantly faster - a challenge, but not an overwhelming one.
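To put a rough number on that drift (a back-of-the-envelope sketch, assuming the IMX174's 5.86 µm pixels and an approximately sidereal rate of ~15 arcsec/s; the Moon's actual rate differs slightly):

```python
# Image scale in arcsec/pixel is 206.265 * pixel_size_um / focal_length_mm;
# dividing the drift rate by it gives pixels per second across the sensor.
def drift_px_per_s(pixel_um, focal_mm, rate_arcsec_s=15.0):
    scale = 206.265 * pixel_um / focal_mm  # arcsec per pixel
    return rate_arcsec_s / scale

print(round(drift_px_per_s(5.86, 1200), 1))      # ~14.9 px/s at prime focus (f/8)
print(round(drift_px_per_s(5.86, 150 * 30), 1))  # ~55.8 px/s at f/30
```

At f/30 the surface crosses the IMX174's ~1936-pixel width in roughly half a minute, so short exposures and short captures per panel are the way to go.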


Also as mentioned, focus & collimation are incredibly important in high-resolution imaging. That just takes time to master, although there are tools to make both easier.


The final aspect which you may not yet have taken into account is the optimal way to stack OSC data - Emil put a lot of time and effort into making AutoStakkert handle it with no resolution loss due to classic debayering. This may be a time it behooves you to use a tool somebody else has already developed (at no cost to you). While your effort to develop your own software for such purposes is beyond admirable, a ton of work (and testing on tens of thousands of images, if not more) has already been carried out by others, resulting in the likes of AutoStakkert and RegiStax.


When you get really good seeing conditions and everything else is optimal, AutoStakkert and Registax are all you need for calibration, alignment, stacking and processing.


If you're filling the sensor with data, you may also want to consider flat calibration.



Clear, steady skies :)



#4 copper280z


    Lift Off

  • -----
  • topic starter
  • Posts: 4
  • Joined: 01 Mar 2019

Posted 09 November 2019 - 12:24 PM

I have used AS!3 on some of these, but I must have done something wrong, because the unedited stack didn't seem to be as sharp as what I got out of my script. I'll revisit that and see if there's any improvement to be had.


I think the comments on focusing are very on point. I tried zooming in and watching individual features, but with the Dob mount and the image scale, any time I touched the focuser I lost the image entirely and had to wait for it to stabilize. Looking at some of the data, I may have played with the focuser in between some of the shots, as some appear to be sharper than others.


I also tried a 2x Barlow, but with the sensor and focal length it would require stitching several stacks together, which I only just figured out this morning with this image. This is two prime-focus stacks stitched together and edited in StarTools. I think the image that makes up the bottom half of the composite may have been in sharper focus.




#5 copper280z


    Lift Off

  • -----
  • topic starter
  • Posts: 4
  • Joined: 01 Mar 2019

Posted 10 November 2019 - 12:29 PM

I started processing an image with RegiStax, and I think that may have been a big missing link in my workflow. The StarTools wavelet sharpen module is NOT comparable. Images to follow when I get some time.

#6 copper280z


    Lift Off

  • -----
  • topic starter
  • Posts: 4
  • Joined: 01 Mar 2019

Posted 11 November 2019 - 06:21 PM

Here's an image stacked with AS!3, drizzled 3x, sharpened in RegiStax, then deconvolved/played with in StarTools, then some final, very minor tweaks in darktable. RegiStax appears to be a big help, but I definitely see some artifacts from using it. I set it to the largest processing area, but the seams are still apparent. I applied a 1.5px blur to the final image before downsampling it to 1920xWhatever to try to hide them, and it helped some.




This image is the same basic workflow, with no drizzle, but at 2400mm FL. I don't really see that the barlow helped me at all in this case.




Both of these images feel a little overcooked in the editing, to me, and still seem to be lacking in fine detail. Thinking it through, I think I may have made a few rather critical mistakes.


1. The scope was in my living room, it was somewhere between 25 and 40°F outside, and I only gave it ~30 min to cool before starting. That was probably not enough; I think it contributed to thermal currents in the tube, which I didn't notice when watching the video at first.


2. I was shooting in RGB8 mode. I misunderstood what the modes meant and didn't realize there was a RAW16 mode, which outputs the Bayered data at full bit depth; RGB8 interpolates the data on the camera's FPGA and sends that. RGB8 and RAW16 have the same bitrate, so no framerate loss, luckily. I was shooting with compression off, which is good.


3. I should spend more time on focusing. I may try to add a stepper motor to my focuser; it seems like a pretty easy job.

