Getting Started in Solar System Imaging
by Paul Jones
While this is long, it is not intended to be an exhaustive guide but, rather, an introduction to solar system – primarily planetary – imaging. DSO imaging and solar system imaging bear some similarity during final processing, but in the equipment used and in image acquisition the two fields are quite different. So, if you’ve never imaged anything, or have been shooting DSOs for a while and are interested in solar system imaging, this introduction will, hopefully, get you off and running.
Imaging objects in our solar system (the Moon, planets, planetary satellites, and, of course, the Sun; never look directly at the Sun through an unfiltered scope, and likewise never point your camera at the Sun through an unfiltered scope) is a rewarding endeavor that differs significantly from deep-sky object (DSO) imaging. The images produced using modern (circa 2013) equipment and techniques are far more detailed than visual observation and far, far more detailed than ground-based images from the 20th century. Solar system image quality using amateur equipment has increased dramatically, year on year, since the beginning of the millennium.
Additionally, light pollution has little effect on solar system imaging and the demands of tracking (and, therefore, the mount) are not as high as in long-exposure DSO imaging. This does not mean, however, that it is a trivial exercise.
If you are interested in entering this field, the text below should suffice to point you in the right direction and give you a general description of what the modern techniques are and how to get started. Once you’ve started, you’ll find a number of online resources that fill in details and give helpful tips.
Briefly, the technique is to record a video (either .avi or .ser files) of an object and then use software to rank the individual frames by quality, align them and then stack X% of them to produce a final image. Generally speaking, the more frames you record, the better, so long as individual frame quality is good enough.
The idea is that you will record thousands of frames so that some will be taken in moments when the atmosphere settles down to give good seeing. Experienced planetary observers know that even on a night of average to below-average seeing, occasionally, perhaps only for half a second, the seeing stabilizes and a clear image results. The technique described here takes advantage of that by recording those moments of clarity in such a way that several of them can be added together to produce a final image. Thus, the single most important factor determining the quality of images made with this technique is the seeing. The phrase “seeing is king” is completely true. An image made with modest equipment on a night of excellent seeing will far outshine an image made with expensive, state-of-the-art gear on a night of mediocre seeing. If a night has horrible seeing, try some low-power observing instead. If your location very rarely has good seeing, you might reconsider entering this field.
Finally, in many places below, an opinion is given. This is always the writer’s opinion and there may be other ways of accomplishing the same task – this is especially true about products and image processing. Again, this guide is meant to answer basic questions and get you started. Go online and read the vast amounts of information there for more details and opinions.
Seeing you can control – SEEING IS KING
The single most important determinant of the ultimate quality of an image recorded at long effective focal length is the seeing at time of video acquisition. One part of seeing is well up in the atmosphere and beyond the control of the imager. The other is closer to the ground and somewhat in the imager’s control. You want to select an observing site that provides stable air, a scope that is thermally equilibrated and you want to image your target when it is well above the horizon.
- Do not set up on concrete or asphalt; these materials absorb heat during the day and re-radiate it at night. Recall times you’ve seen “heat waves” emanating from roads. This re-radiation will have dramatic effects on the image reaching your chip. Setting up in a grassy field near water is ideal. Sites that are in the lee of a mountain range are poor due to turbulence coming off the mountains. Sites in neighborhoods or cities are often more turbulent than rural locations due to heat sources such as houses, businesses, roadways, etc. (This is actually good advice for any type of imaging/observing).
- Make sure your scope is in thermal equilibrium; radiation from your primary mirror, tube or corrector can cause fluctuating images. Most Newtonians today come equipped with fans to cool the primary mirror: use them. Protect the scope from excessive heating during the day (e.g. don’t set a black SCT out at midday for imaging later that night). Store the scope in such a way that it isn’t far from equilibrium when you take it out in the evening. If at all possible, set up the scope hours in advance and use some form of active cooling.
- Avoid heat sources (such as you and your computer exhaust) getting too close to your optical tube while observing/imaging.
Again, achieving thermal equilibrium is a deep topic for which there is abundant information online and beyond the scope of this guide. Improving collimation (see below) and thermal equilibrium are the two “easiest” ways to greatly improve your solar system images. It costs nothing (or relatively little) to get this right. It only requires practice and patience.
When selecting an imaging target, select one that is at least 30 degrees above the horizon (and, preferably, higher). Atmospheric turbulence increases dramatically as an object nears the horizon, so avoiding low-altitude objects will greatly improve your images. This means that for the period 2013-2018 or so, Saturn is going to be a challenge from the northern hemisphere and well suited for those south of the equator. In contrast, Jupiter in 2014-15 is a great target for northerners and poorer for southerners. Of course, we have little control over this, so if the planet never gets very high, try to image when it culminates.
One real advantage to imaging site location for solar system objects is that light pollution can be mostly ignored. Don’t put your imaging target behind a streetlight. Otherwise, most targets for the method described below are bright enough not to worry about city lights.
Selection of Scope
Essentially, any telescope can be used for solar system imaging. Ultimately, an effective focal length on the order of 2500-12000mm will be desired with focal ratios of between 15-30 (some objects (and imagers) can take slower focal ratios). Therefore, the most important criteria (this will sound familiar) are quality and aperture. Large SCTs are very common amongst “elite” imagers due to the ease with which they obtain a long focal length but Newtonians are coming on strong. Refractors are rarely large enough to compete in the “elite” imaging world but if you have a good one and are just getting started, a refractor will do fine. To get started, you are probably better off using your largest scope that can track. If you get really serious about solar system imaging, you can always upgrade the scope later. So, go ahead and use your existing scope to get started.
Whatever telescope you choose, collimate it. Solar system imaging is carried out at long effective focal lengths (EFLs) – high magnification. Collimation is, thus, critical to producing a high quality image. If you have mostly been a low to medium power observer of larger objects, you may not yet have learned to collimate your scope as tightly as necessary. The act of collimation will vary by scope and, in any case, is beyond the scope (no pun intended) of this guide. However, it cannot be emphasized enough that collimation is a critical part of producing a high quality solar system image. Thus, you will want to ensure you can accurately collimate your scope and frequently check that it holds. On a night of good seeing, use your imaging setup to check collimation on a star. This will ensure that the entire imaging train is collimated.
In short, collimation can be a pain, but it is absolutely vital.
Selection of Mount
At the long focal lengths required for solar system imaging, tracking is essential. However, individual exposures are very, very short (on the order of milliseconds) so the mount can be alt-az and more heavily loaded than in DSO imaging. During the acquisition of the video, the field may/will rotate slightly using an alt-az mount. However, the stacking software can accommodate some field rotation. In general, an accurately polar aligned equatorial mount is “better” in that there will be no field rotation to potentially mar alignment and stacking of individual frames. With that said, if you own an alt-az mounted scope, solar system imaging is probably the easiest entry to imaging. Also, if you own an alt-az and are just looking to get started, there is no need to buy an expensive new mount.
Selection of Computer
You will need a computer to record the video files. A typical 8-bit monochrome 640x480 video recorded at 60 frames per second (fps) requires a bit rate of about 148 Mbit/sec. 5000 frames of such a video produce a file approximately 1.5 GB in size. Therefore, you’re going to need a computer that can handle those data rates and a hard drive (or drives) that can store the raw files. A USB 2.0 camera (about the slowest you can get in 2013) is sufficient for the frame rate and chip size given in the example. However, USB 3.0, GigE or Firewire allow faster rates.
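That data-rate arithmetic is easy to check yourself. A quick sketch (plain Python, using the 640x480, 8-bit, 60 fps example above):

```python
# Data rate and uncompressed file size for an 8-bit mono 640x480 video at 60 fps.
width, height, bit_depth, fps = 640, 480, 8, 60

bits_per_second = width * height * bit_depth * fps
mbit_per_second = bits_per_second / 1e6          # ~147.5 Mbit/s ("about 148")

frames = 5000
file_bytes = width * height * (bit_depth // 8) * frames
file_gb = file_bytes / 1e9                       # ~1.5 GB uncompressed

print(f"{mbit_per_second:.1f} Mbit/s, {file_gb:.2f} GB for {frames} frames")
```

Double the frame rate or move to a larger chip and the numbers scale linearly, which is why ROI capture (below) matters so much.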
Most imagers end up with a lot of external data storage, and individual video files will take up a lot of space. As I write in mid-2013, after a little under three years of imaging, I have recorded approximately 4.3 TB of planetary data. I haven’t saved all of it, but enough that data storage is a significant concern.
You’re also going to need a processor that can handle the large file sizes during image processing.
Most laptop computers sold in 2013 are up to the task. If you have an older computer that only has USB 1 capability, you’ll need a new one. That computer probably wouldn’t have been able to handle the large data files, anyway.
Much, if not all, of the freeware used in solar system imaging runs on PC and can be tricky to get going on a Mac. If you have a choice here, choose PC. If not, explore the many online forums for advice on getting the software to work on your Mac. Linux can also be used, and is.
Selection of Camera
One needs a camera that is capable of recording video files of high quality (raw files). The most common file extensions in solar system imaging are .avi and .ser files. If you record in some other format, you’ll have to convert the file to one of these extensions.
In contrast to DSO imaging, the objects you’ll be targeting are very small so that a large chip is unnecessary. A large chip may even be a detriment if it slows the rate at which you capture individual frames. For planetary imaging, a 640x480 chip is more than sufficient. In lunar and solar imaging, a larger chip is desirable so that you can capture more of the surface. Even there, you’ll want a fast frame rate and want to watch that your scope produces a high quality image across the chip face.
(A note about H-alpha imaging. If you want to use your H-alpha solar scope to image, you need a monochrome camera. You can use a color camera but only a third of the Bayer mask that makes it a color sensor will be sensitive to the H-alpha light – this lowers resolution. There are also cases where the solar scope has a “sweet spot” that isn’t very large. If the sweet spot is smaller than your chip, your image will be uneven.)
So, you want high sensitivity and fast frame rates. The real revolution in modern solar system imaging was the recognition that ordinary webcams can do all that is needed fairly well. If you want to get started really cheaply, find a webcam you have lying around, remove the lens and affix a 1.25” barrel such that the chip is centered in the barrel (when I first did this in 2010, I used a Microsoft Lifecam and epoxy). You’ll probably be surprised how effective this is. In my case, I caught the bug and have been buying more and more capable cameras, but for many, a simple $40 webcam may be sufficient.
Again, in general, the more frames you collect the better. A good planetary camera (as of 2013) will have these features:
- High frame rate (at least capable of 60 frames per second (fps) but up to 100 is better)
- Fully adjustable frame rate (you want to be able to select the fastest frame rate your exposure allows)
- Region of Interest capability (ROI, this feature lets you select just a small part of the chip to record. For instance, if you image Mars, which is very small, you might select an image size of 400x320 rather than 640x480. This produces a video file of smaller size and allows faster frame rates.)
- Either have no UV/IR filter, a removable filter or a very high quality filter (very often, the filter that comes with the camera is either of low quality or cuts off too much of the UV and IR (much in the same way that the filter in stock DSLRs cuts out the H-alpha line). Fortunately, the filters in planetary cameras are easier to remove, if you need to). A UV/IR filter is necessary for color imaging. If you have a monochrome camera, you’ll want no filter incorporated in the camera, relying instead on external filters to select certain bands.
Selection of monochrome (mono) or color camera
You have the choice of using a monochrome or color camera. Monochrome cameras produce a black and white (greyscale) image while color cameras produce a color image. Straightforward. However, the color camera utilizes a Bayer matrix laid over the chip (that cannot be removed). The matrix is a mix of filters sensitive to red, green and blue light. Each pixel in a color camera responds to only one of these three primary colors and, thus, overall resolution is decreased. There is also now more data, so file sizes increase and frame rates decrease. During processing, one must also “debayer” the data to get the color information – don’t worry, the software does this for you.
Therefore, a monochrome camera has increased resolution and frame rates. Of course, the final image is greyscale and you probably want a color image to show your friends (we all do). A color image can be produced by recording three separate images, one made with a red filter in front of the chip, another with a green filter and the last with blue. One then uses software to combine the three into a color (or RGB) image. It is also possible to add a fourth channel that is commonly, but not always, shot with a UV/IR cut filter, which is called a luminosity (or L) channel. In this case, the detail in the object is provided by the L and the color from the RGB channels. One may also use another image for the fourth (detail) channel such as an infrared (IR) image or the R channel. (One does this because IR and R light is usually less affected by atmospheric turbulence (seeing) than shorter wavelength green and blue light – very often one will record R, G and B images and find a beautifully detailed R image, an okay G image and a mushy, ugly B image.)
So, your choice is mono or color: do you mostly want to produce a color image? Is seeing at your location often mediocre, so that it will be hard to capture three separate channels all in good seeing? If so, go with the color cam. However, a mono cam will offer the possibility of IR and UV imaging, selecting only IR or R when the seeing isn’t as good and give slightly better resolution when the seeing is really good. If money is no object, or an object you treat with scorn, buy one of each. Otherwise, the mono camera will be the most versatile while the color camera will be the easier with which to produce a color image.
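At its core, combining mono R, G and B stacks into a color image is just layering three greyscale arrays into one three-channel array. A minimal numpy sketch (the `combine_rgb` helper is hypothetical, not part of any program named here; real tools also align the channels and, with WinJupos, compensate for planetary rotation between them):

```python
import numpy as np

def combine_rgb(r, g, b):
    """Stack three aligned 2-D greyscale arrays into an H x W x 3 color image."""
    rgb = np.dstack([r, g, b]).astype(np.float64)
    # Normalize each channel to 0..1 so no filter dominates the result;
    # in practice you would white-balance against a known reference instead.
    rgb /= rgb.max(axis=(0, 1), keepdims=True)
    return rgb

# Toy example with synthetic 4x4 channels standing in for real stacks.
rng = np.random.default_rng(1)
r, g, b = (rng.random((4, 4)) for _ in range(3))
color = combine_rgb(r, g, b)
print(color.shape)  # (4, 4, 3)
```

For an LRGB result you would sharpen a fourth (L, IR or R) channel for detail and use the RGB layers only for color, as described above.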
As of mid-2013, the leading makers of solar system imagers are: Point Grey, ZWO Optical, and The Imaging Source. Respectively, the Flea3 (GigE and Firewire), ASI120 (mono and color) and DMK/DBK21-618 are very good planetary imagers. The ASI120 has a larger chip (and ROI capability) and so is also a very good lunar and solar camera (however, the ASI models seem a little more susceptible to Newton’s Rings, though this is a concern with all cameras). Point Grey and TIS make larger-chipped cameras that are very good solar and lunar imagers. The Point Grey series beats out TIS in that it allows fully adjustable frame rates and has ROI capability, which TIS does not. However, imaging cameras from any of these sources are very good and have been used to produce excellent images of solar system objects.
Please note that this is not intended to advocate any particular camera. There are other cameras available and one should explore the options. Perhaps the best way to choose is to take a look at the images that get posted online and decide which you’d like to emulate. Then check what equipment they used to produce the image. Replicating the equipment won’t replicate the seeing, skill or luck but it will eliminate one variable. One note about choosing a camera: operating a “higher end” camera is not inherently more challenging than an “entry level” camera. They aren’t mechanically more complex and all cameras have, more or less, the same set of adjustable settings.
A Note on DSLRs: Many DSLRs, particularly Canon variants, have video modes. If your DSLR has a crop movie mode that can achieve high frame rates and deliver a raw video file, it can be used for solar system imaging. If it can only record video using the full frame and with significant compression, it is less useful. For a number of reasons, DSLRs are not as popular with dedicated solar system imagers. However, if you already own one, it is a lot cheaper to start with that than the dedicated planetary and solar system cameras listed above. Go ahead and try it using the other tips here as well. Very fine solar system images have been made using DSLRs in crop movie mode. As always, many more details are available online.
You will need to use some piece of software to control the camera. In all likelihood, the camera you buy will come with software. This might be a perfectly fine program to use. Two other programs bear mentioning: SharpCap and FireCapture.
Both programs are freeware, so you can play with them to decide which you prefer. In July 2013, FireCapture is probably the most commonly used capture program and with good reason in the writer’s opinion. SharpCap is a bit simpler in layout and seems to work right out of the box with a slightly wider variety of cameras. The writer’s general impression is that most people start with SharpCap and soon migrate to FireCapture. Other freeware is also available.
Whatever software you choose, it should allow you to control frame rate, exposure, gain, ROI, video length, extension type and give you a running histogram (more on this later). Ideally, if you have a filter wheel or computer controlled focuser, the capture software will be able to communicate with these. There are plenty of freeware options so don’t feel you need to pay big money for camera control software. But be sure to donate to the good ones.
Your camera requires the ability to fit a 1.25” barrel so that it can slide into your telescope’s focuser. With dedicated solar system imagers such as the ones listed above, this will probably be included. If not, C-mount 1.25” adapters are widely available online. There is no need for a 2” barrel as the chip is much, much smaller than even 1.25”. However, if you have some means of using a 2” barrel on your camera, it will work fine.
In general, the fewer pieces of glass you have between the primary and the chip, the better. Therefore, never use a diagonal for imaging. Avoid using multiple barlows. The best train appears to be: primary > barlow > filter > chip. The barlow can be eliminated if lower power is desired. A filter wheel can go ahead of or behind the barlow for different EFLs. Affix the various pieces such that they are stable. You ultimately will want to produce an optical system that works at around f/20-30 (I tend toward the lower number). Experiment with the various sequences of barlow, wheel, filter, etc. and various barlows until you achieve an image size and focal ratio you like.
You can find online formulae to calculate an “optimal” focal ratio for a given camera pixel size (roughly 5*pixel size in microns). Ultimately, you will probably want to do this but, getting started, it adds more complexity. You shouldn’t be aiming for perfect out of the chute but, rather, a good image that you can work to improve. As in all astrophotography, imaging at shorter focal length makes everything easier. In the beginning, do not try to fill your chip with a planet. Start with a reasonable focal ratio, perhaps f/15 or so, and get your routine down. Then you can slowly add focal length (and, thus, image size).
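The rule of thumb just mentioned reduces to a one-liner. A sketch (the factor of 5 is the rough green-light figure from the text, not a law; treat these helpers as illustration only):

```python
# Rule of thumb from the text: "optimal" focal ratio ~ 5 x pixel size in microns.
def suggested_f_ratio(pixel_size_um, factor=5.0):
    return factor * pixel_size_um

def barlow_needed(native_f_ratio, target_f_ratio):
    """How much amplification a barlow must supply to reach the target ratio."""
    return target_f_ratio / native_f_ratio

# Example: an f/10 SCT with a 4 micron pixel camera.
target = suggested_f_ratio(4.0)   # f/20
print(f"target f/{target:.0f}, barlow needed: {barlow_needed(10, target):.1f}x")
```

The same arithmetic shows why starting at f/15 is kinder to a beginner: a shorter train, a brighter image, and a smaller, easier-to-find target.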
A note on filter wheels: Many/most imagers use a filter wheel and filters. For solar system imaging, a 1.25” wheel is sufficient. If you own a 2” wheel for DSO imaging, it can be used but is overkill, as the chip will only be ¼ of an inch or so. The wheel is used in solar system imaging exactly as it is in DSO imaging except that it is almost always necessary to record sequences of R, G, B rather than a set of R, then a set of G, then a set of B as in DSO imaging due to planetary rotation. If you ultimately want to put an R, G and a B together to make a color image, all three need to be recorded within a few minutes of each other (see below). But setup of the wheel is the same as in DSO imaging.
The camera chip at long EFL will have a very narrow field of view. The planet is a small target. Thus, it can be challenging just to get the planet on the chip. If you have a goto scope and have done a very, very accurate alignment (and get a little lucky) you might be able to simply goto the object. If not, using a guidescope (or finder) that is very accurately aligned with your primary scope can be useful in putting the target on the chip. Another system is to center the planet at moderate to high power with a reticle eyepiece, then insert the imaging gear.
If you first center with an eyepiece, do not use a diagonal – you shouldn’t image with a diagonal (this can’t be repeated enough) – because the focus position for a camera without a diagonal versus an eyepiece with a diagonal will be very, very different and will most likely render the target invisible once you switch. If a diagonal must be used, practice focusing the camera after target acquisition and note the direction (in or out) that the focuser must be turned.
Also, obviously the moon is a much, much easier target to acquire. For this reason, close-up lunar photography is an excellent way to practice planetary imaging. As a bonus, along the way you also get beautiful lunar images. You’ll be amazed at the detail that is visible and most likely become much more enamored of our satellite.
A common worry for the beginning imager is how to achieve focus. Indeed, it is a critical part of acquiring a good image. However, focusing at high magnification on a video screen is a long way from focusing at the eyepiece or at lower power in DSO imaging. Very often, DSO imagers try to bring their focus technique to solar system imaging or advise others to do so. These techniques, such as Bahtinov masks and minimizing a stellar FWHM, do not translate well, however. A Bahtinov mask must only be used with a point source – which we do not have in solar system imaging (occasionally a planetary satellite can be treated as a point source but then they are very dim and, thus, poor targets for a Bahtinov mask). The same holds true for minimizing the FWHM.
One could focus on a star and then move to the planet. However, there are two problems with this. The first is another common worry of the beginning imager: finding the object (see above). The second, more important problem, with focusing on a star and moving to the planet is having movement in the imaging train. If it is perfectly locked down, focusing on a star then moving to planet will work. However, if there is any slop, or movement, in any of the pieces of the imaging train, then the focus will still be off. Additionally, if you are imaging with multiple filters, it is unlikely that they are all exactly parfocal and, thus, you’d have to return to the star and back between each channel.
So the most commonly employed method is the simplest: focus on the planet by eye. Many imagers lower the gamma (in the camera control software) for this step. With the eye about a meter from the screen, focus as best you can. Reset the gamma if necessary and image. An electronic focuser is a wonderful treat (some say absolute necessity) here. Take your time with focus and don’t chase the seeing. If the seeing is very steady, focusing is fairly trivial. However, if the seeing is fluctuating wildly, you can end up changing focus with changes in the seeing with the result that you’re never really sure you’re focused. If the seeing is too bad to focus, it is probably too bad to image.
How big will the image of your target be? The following formula can be used either to predict how big a target will be on the chip or, as is more commonly done, to calculate the effective focal length of the imaging train from an existing image of a target.
The formula (taken from Anthony Wesley):
EFL (mm) = [206.265 x (object size in pixels) x (pixel size in microns)] / (object size in arc-sec)
So, if one has a camera with 4 micron pixels and images Mars at 10” with an EFL of 6000mm, the resulting image of Mars will be approximately 72 pixels in diameter. It will clearly fit on the chip of the imaging cameras discussed above. If ROI is available, a very small area can be chosen.
Alternatively, if one has an image of Jupiter at 40” (found using an ephemeris), made with the same 4 micron pixels, in which the diameter of the planet is 425 pixels, one can calculate that the imaging train had an EFL of 8766mm. This is probably the most reliable method for calculating the EFL of a given imaging train.
Finally, if the EFL and pixel size are known, one can calculate the apparent size of the object. For example, if you know your imaging train has an EFL of 7200mm and the disk of Saturn occupies 174 pixels, one can rearrange the formula to learn that the disk of Saturn has an apparent size of 19.9 arc-seconds.
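Wesley’s formula drops straight into a short script, and the three worked examples above (all assuming 4 micron pixels) fall out of it:

```python
# EFL (mm) = 206.265 * size_px * pixel_um / size_arcsec  (Anthony Wesley)
K = 206.265

def efl_mm(size_px, pixel_um, size_arcsec):
    """Effective focal length from a measured object size on the chip."""
    return K * size_px * pixel_um / size_arcsec

def size_px(efl, pixel_um, size_arcsec):
    """Predicted object size on the chip for a known EFL."""
    return efl * size_arcsec / (K * pixel_um)

def size_arcsec(efl, pixel_um, px):
    """Apparent size of an object from its pixel extent and a known EFL."""
    return K * px * pixel_um / efl

# The worked examples from the text:
print(size_px(6000, 4, 10))       # Mars at 10": ~72 px
print(efl_mm(425, 4, 40))         # Jupiter, 425 px at 40": ~8766 mm
print(size_arcsec(7200, 4, 174))  # Saturn's disk, 174 px: ~19.9 arcsec
```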
Once you have a planet (or moon or sun) on chip and focused, you need to acquire a video. Make adjustments to exposure and gain such that the histogram is not “full”. A full histogram is one in which the maximum pixel value is full up against the right side. When the histogram is full, data is lost at the bright end – the image is effectively overexposed. In this case, lower either the gain or the exposure (generally, leave the gamma at the default setting) so that the maximum pixel value is well off the right side. Various targets (and imagers) require different histogram fills. A (very) general guide is: Jupiter 70-80% full (that is, the maximum pixel value is 70-80% of the way from the left side to the right side), Mars 50%, Saturn IR, R and G 50%, Saturn B 40%. Again, these are very rough values – you should experiment.
In order to achieve a given frame rate, the exposure must be sufficiently low. That is, it is physically impossible to record 100 1/50s exposures in a second. So, if your exposure is set at 1/50s, 50 fps is the maximum frame rate you can use. Gain can be adjusted upward to allow shorter exposures and faster frame rates. However, increasing gain increases noise, which requires more frames to give a smooth image at the end. For dimmer objects, high gain is a must. For brighter objects, lowering gain can lead to a smoother final image. For the moon and sun, very low gain (and even 12- or 16-bit video) can be used, resulting in very smooth images with few frames used in the final image. Again, you can read all about this (endlessly, in fact) online but experimentation is key.
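Both constraints here – the exposure ceiling on frame rate, and the histogram fill – reduce to a couple of lines. A sketch (the fill targets in the comments are the rough guides above, not hard rules):

```python
import numpy as np

# Maximum achievable frame rate is capped by exposure: you cannot record
# more than 1/exposure frames per second, whatever the camera's own limit.
def max_fps(exposure_s, camera_fps_limit):
    return min(camera_fps_limit, 1.0 / exposure_s)

# Histogram "fill": where the brightest pixel sits relative to full scale.
def histogram_fill(frame, bit_depth=8):
    return frame.max() / (2 ** bit_depth - 1)

print(max_fps(1 / 50, 100))       # a 1/50 s exposure caps you at 50 fps

# Synthetic stand-in for a captured frame (8-bit values in a uint16 array).
rng = np.random.default_rng(2)
frame = rng.integers(0, 200, (480, 640), dtype=np.uint16)
fill = histogram_fill(frame)      # aim for ~70-80% on Jupiter, ~50% on Mars
print(f"fill: {fill:.0%}")
```

In practice your capture software shows this histogram live; the helper just makes explicit what you are reading off the screen.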
Next, you need to select a recording length. If you are using an equatorial mount, you can ignore effects of field rotation. Using an alt-az mount, I find videos longer than about 3 minutes (depending on a lot of factors) result in the stacking software (see below) not being able to fully overcome field rotation and the edges of an image (especially Saturn due to the ring-tips) blur. However, even without field rotation, something else is moving: the planet. Depending on focal length, you should limit the video length to minimize the effects of blur due to the planet itself rotating. There is software that can overcome even this (WinJupos, see below). But, in general, you will want to limit Mars and Saturn recordings to 5-8 minutes and Jupiter recordings to 2-3 minutes. These are maximum values and many imagers use shorter recordings. As focal length increases, the limit decreases.
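You can sanity-check those time limits with rough arithmetic: a feature at the centre of the disk drifts as the planet rotates, and once that drift approaches your telescope’s resolution limit, the stack smears. A sketch (the rotation period and 40-arcsecond Jupiter disk are assumed round numbers, not ephemeris values):

```python
import math

# A feature at disk centre drifts at roughly
# (angular diameter / 2) * (2*pi / rotation period).
def drift_arcsec(disk_arcsec, period_hours, capture_seconds):
    rate = (disk_arcsec / 2) * (2 * math.pi / (period_hours * 3600))
    return rate * capture_seconds

blur = drift_arcsec(40.0, 9.93, 180)   # Jupiter, 3-minute capture: ~0.6"
dawes = 116.0 / 250                    # Dawes limit of a 250 mm scope, ~0.46"
print(f'{blur:.2f}" of smear vs {dawes:.2f}" resolution limit')
```

With the smear already comparable to a 10-inch scope’s resolution at three minutes, the 2-3 minute Jupiter ceiling (and the shorter limits at longer focal lengths) follows naturally; slower rotators like Mars and Saturn get proportionally more time.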
Remember that if you’re combining three channels into an RGB image, you’ll want all three collected within the given time allotment. Even the Sun and Moon can give problems as the Sun is dynamic (especially in H-alpha, where you probably need to limit to 20-30 second recordings) and the incident sunlight on the Moon changes slowly. Lunar recordings using an equatorial mount can be 10-20 minutes long. Likewise for images of Uranus and Neptune, where there simply isn’t a whole lot of detail to blur.
Having selected a time limit, choose an extension (usually .avi or .ser) and…hit the start button. Be sure your tracking is good enough to keep the planet on the chip (you may want to use a hand control to keep it on chip, especially if you’re using a Dob where tracking isn’t quite as accurate) but there is no need to keep it as precisely guided as in DSO imaging. In fact, having the planet wander around the chip a bit is a de facto dithering, which will smooth out some noise and correct for hot or stuck pixels. If your tracking is good enough, this is a good time to head inside for a warm drink, check on the kids, hit the loo, etc. Most camera control software will let you set up automatic routines that record a series of video and include your filter wheel to get all the channels.
As in DSO imaging, the computer control can be set up remotely.
However, the most romantic method is to watch the planet glide through the heavens as your computer records the video. Not only will you be surprised how much detail is visible in the video feed, you may catch an impact event. Much more likely, you will be present when, inevitably, something somewhere fouls up.
Aligning, Stacking, Processing
Once you have a video (or, more likely, a bunch of videos), it’s time to extract an image. The first step is to produce a raw stacked image. The common programs that can do this are: Registax, Autostakkert and Avistack. The first two will accept either .avi or .ser files while the latter will only accept .avi. The goal of this guide is not to give you a play-by-play of how to use them: experiment (again). The basic gist, though, is that you load a video file into the program, select alignment points (or have the computer do it), set some values in the various boxes and hit start. The program will then rank each frame of the video (and, if multiple alignment points are used, each alignment point) and align them. Finally, you can tell the program how many frames to stack. Again (and again and again), experimentation is key. But, generally, if you had excellent seeing, use a high percentage of the frames, perhaps 75%. If you had sort of iffy seeing, use fewer. Try different numbers of frames to see the effect on the final image.
As a rule, the more frames you stack, the less noisy the image. The fewer frames you stack, the higher the quality of the frames used. You want to balance high quality frames with reasonable noise levels.
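The gist of what these stackers do, rank frames by quality, then average the best fraction, can be sketched in a few lines of Python. This is only a toy illustration (the function names and the Laplacian-variance sharpness metric are my own choices; the real programs also align each frame and weight multiple alignment points):

```python
import numpy as np

def sharpness(frame):
    """Score a frame by the variance of a simple Laplacian response;
    sharper (higher local contrast) frames score higher."""
    lap = (-4 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def stack_best(frames, keep_fraction=0.75):
    """Average the best keep_fraction of frames, ranked by sharpness."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    n = max(1, int(len(ranked) * keep_fraction))
    return np.mean(ranked[:n], axis=0)

# Toy demo: 100 noisy "frames" of the same synthetic planetary disc.
rng = np.random.default_rng(0)
y, x = np.mgrid[:64, :64]
disc = (((x - 32) ** 2 + (y - 32) ** 2) < 400).astype(float)
frames = [disc + rng.normal(0, 0.5, disc.shape) for _ in range(100)]
stacked = stack_best(frames, keep_fraction=0.75)
```

Averaging N frames shrinks random noise by roughly the square root of N, which is exactly the quality-versus-noise trade-off described above: keeping 75% of the frames here drops the per-pixel noise from about 0.5 to well under 0.1.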
As this is written in July 2013, Autostakkert is, in the writer’s opinion (and opinions do vary), the best program for stacking solar system images, but all three are fine programs and all are free. Don’t forget to donate.
The program will produce a stacked image as a .tif or .png file. That file then needs to be sharpened. It is in the sharpening that the image either comes to life or is violently killed. There are two methods for sharpening the raw stack: adjusting wavelets and deconvolution. These are powerful techniques and can easily be overdone. A good piece of advice is that if you think you’ve overdone it, you almost certainly have and should back off a lot. If you think it looks good, you should just back off a little.
For wavelet adjustment, the current (July 2013) standard is Registax 6. Open an image in Registax and then adjust the six sliders on the left. There are no simple guidelines for how to do this. (Experimentation…etc. etc.). You are looking to sharpen the image without introducing noise or other artifacts. Occasionally the stack has artifacts that don’t become apparent until sharpening. If this happens, go back and re-align and stack the image.
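Conceptually, those sliders split the image into detail layers at different spatial scales and boost each layer by a gain before recombining. The sketch below is a rough NumPy-only analogue of that idea, not Registax’s actual algorithm; the layer scales and gains are arbitrary illustrations:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian blur via a Gaussian transfer function in the
    Fourier domain (circular boundary conditions)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    h = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

def wavelet_style_sharpen(img, sigmas=(1, 2, 4), gains=(1.8, 1.3, 1.0)):
    """Split img into detail layers between successive blur scales,
    boost each layer by its gain, then recombine with the residual."""
    out = np.zeros_like(img, dtype=float)
    prev = img.astype(float)
    for sigma, gain in zip(sigmas, gains):
        blurred = gaussian_blur(prev, sigma)
        out += gain * (prev - blurred)  # detail at this scale, boosted
        prev = blurred
    return out + prev  # add back the unsharpened coarse residual

# Demo on a noise image: unit gains reproduce the input exactly,
# gains above 1 boost fine detail (and, inevitably, fine noise).
rng = np.random.default_rng(0)
img = rng.random((32, 32))
identity = wavelet_style_sharpen(img, gains=(1, 1, 1))
sharpened = wavelet_style_sharpen(img)
```

Note that with all gains set to 1 the layers recombine to the original image; everything above 1 amplifies detail and noise alike, which is why overdoing the sliders so quickly ruins a stack.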
For deconvolution, Astraimage (not free) is an excellent choice. Lucy-Richardson deconvolution is a powerful tool to sharpen planetary images and can be followed with light wavelets in Astraimage.
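Astraimage does this through its GUI, but the Lucy-Richardson iteration itself is a standard, published algorithm. A minimal NumPy sketch, assuming a known, symmetric (Gaussian) blur kernel, which real seeing never hands you, but which shows the mechanics:

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution; psf is the same shape as img and centered."""
    return np.real(np.fft.ifft2(np.fft.fft2(img)
                                * np.fft.fft2(np.fft.ifftshift(psf))))

def richardson_lucy(observed, psf, iterations=60):
    """Textbook Richardson-Lucy iteration. The PSF is assumed symmetric
    (true for a Gaussian), so the mirrored PSF equals the PSF itself."""
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        ratio = observed / np.maximum(fft_convolve(estimate, psf), 1e-12)
        estimate = estimate * fft_convolve(ratio, psf)
    return estimate

# Demo: blur a synthetic disc with a known Gaussian PSF, then restore it.
y, x = np.mgrid[:64, :64]
r2 = (x - 32) ** 2 + (y - 32) ** 2
psf = np.exp(-r2 / (2 * 2.0 ** 2))
psf /= psf.sum()
truth = (r2 < 200).astype(float)
blurred = fft_convolve(truth, psf)
restored = richardson_lucy(blurred, psf)
```

On real images the number of iterations plays the same role as the strength sliders: too few and nothing sharpens, too many and the noise gets amplified into artifacts.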
Be sure to keep notes so that when you hit on a good sharpening routine, you can replicate it.
The following images show the progression from video to raw stack to sharpened image. The first is the highest quality single frame, as judged by Autostakkert, of Jupiter in green light on December 14, 2012. The second is a raw stack of 40% of 17996 frames. The third is that stacked image after wavelet sharpening in Registax. Finally, an RGB image utilizing that G channel is included. Clearly, wavelets and deconvolution play a large role in producing a detailed final image. But, like any powerful tool, they can be overdone.
Astraimage is also a good program for aligning R, G and B channels into an RGB image, or L, R, G and B channels into an LRGB image. While it isn’t free, it’s worth the money. It is also a good program to adjust levels, crop, etc.
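The channels never land in exactly the same place (atmospheric dispersion and drift between filter runs see to that), so channel alignment amounts to estimating the pixel shift between channels and moving one onto the other. A minimal NumPy sketch of one standard approach, phase cross-correlation (integer-pixel only here; real tools align to sub-pixel precision):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Integer-pixel shift that aligns `moving` to `ref`, found at the
    peak of the normalized cross-power spectrum (phase correlation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    cross /= np.maximum(np.abs(cross), 1e-12)
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image to negative offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Demo: a "red channel" that is the "green channel" shifted by (5, -3).
rng = np.random.default_rng(1)
green = rng.random((64, 64))
red = np.roll(green, (5, -3), axis=(0, 1))
dy, dx = estimate_shift(green, red)
aligned_red = np.roll(red, (dy, dx), axis=(0, 1))
```

Once each channel is registered to a reference (the G channel is a common choice), they simply stack into the color composite.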
A program like Photoshop is quite useful (and often necessary) to clean up the final image. There aren’t as many good freeware options here (GIMP is, perhaps, the only one), but if you have Photoshop, or any of the higher-end astroimaging programs, they can be used.
Processing is, in many respects, a personal expression of the image. Moreover, it is complex enough that a number of books have been written about the subject. So, I won’t go into much detail here. As always, the key is to experiment and play with the software and be honest with yourself regarding the image quality.
A last (free) program that is very useful is WinJupos. This program does enough that I won’t even begin to describe all of it. It can be used to combine multiple images in such a way that planetary rotation doesn’t blur all the detail. It can also be used to calculate the central meridian (CM) of a planet at a given time, predict the location of planetary satellites, make maps of planetary surfaces from images, combine images into RGB or LRGB images, etc. It’s cool. But if you’re to the point of using WinJupos, there are really good tutorials out there and…well, you’re no longer just getting started, so you’ve gone past this guide.
Finally, you can use a wide variety of equipment and cameras to produce solar system images. The imaging can be done without regard to light pollution or the moon. Almost any image made with 2013 technology will blow away the very best amateur images from all but the most recent years. With the abundance of freeware supporting the field, it is a relatively inexpensive imaging area to enter (though, as in any imaging field, you will find money starts to flow away from you the longer you stay in it). Feel free to experiment and to search online for details to fill out the above guide. There are hours’ worth of reading freely available under each category above.
Once you have an image, what should you do with it? You could simply tuck it away on your computer, or email it to a few friends, but it seems like there should be more. You can, of course, post on the many online astronomy forums (such as Cloudynights), but the most useful thing you can do is submit the image to one (or more) of the astronomical societies that compile planetary and lunar data. These include ALPO (Association of Lunar and Planetary Observers), PVOL (Planetary Virtual Observatory and Laboratory) and the BAA (British Astronomical Association). Having your images – even mediocre images – posted on these sites will allow researchers access in the event they need to know what a planet was doing at a particular point in time. A surprising number of amateurs have made significant contributions to planetary science, and that usually starts with images posted to these archives. Be sure to keep good records as to the dates and times of your imaging runs.
Clear skies (and good seeing).
The author acknowledges Cloudynights users RedLionNJ, Sunspot, aaube, and WarmWeatherGuy for helpful discussion regarding this article as well as the many denizens of the Cloudynights Solar System Imaging Forum who have been an invaluable resource in the several years I’ve been trying to make such images. Thanks.