
Jupiter w/GRS, Oval BA, Novel NTrZ White Spot [DSLR]

dslr Maksutov planet astrophotography

#26 ponz

ponz

    Mariner 2

  • -----
  • Posts: 282
  • Joined: 18 Jul 2012
  • Loc: Kansas City, MO

Posted 26 August 2020 - 12:58 PM

Here we go. I found a much simpler method! lol.gif lol.gif

https://www.youtube....h?v=cixIEWl0ljY



#27 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 26 August 2020 - 06:38 PM

Here we go. I found a much simpler method! lol.gif lol.gif

https://www.youtube....h?v=cixIEWl0ljY

 

 

I definitely don't recommend eyepiece projection through a diagonal! There's no amount of duct tape that can keep that much torque on the optical train straight. Believe me, I've tried! lol.gif

 

BQ



#28 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 26 August 2020 - 07:21 PM

Interesting! I captured some data on Jupiter a few weeks ago that I used for my ham-handed attempt at stacking in AS!3, but I'll have to run through your workflow with Lynkeos. Your single frame is much cleaner than any of mine, but Jupiter is so low on the horizon here that I don't think there's any helping that. I did use a 2x Barlow with my 8" SCT to get to f/20 and shot at 5X live view through BYEOS, but I'll have to pick up one of those eyepiece projection adapters to compare. I'm pretty sure my older Meade Plossls are of better quality than the cheap barlow I bought...

 

Thanks for taking the time to write it up in so much detail!

No worries at all…I feel like I'm writing the practitioner's guide for Lynkeos—an addendum to the wiki.

 

Altitude isn't everything…last year I went almost the entire season with the jet stream roaring at 60-72 m/s overhead. Here's a sample from last June:

 

Jupiter Bad Seeing Animation
 
But I found that bad seeing is the best teacher. Here's what the workflow spit out that night:
 
post-273658-0-22696500-1561623995.jpg
 
The real magic of the workflow is in steps 3 and 4. Let me know when you're ready to proceed…
 
BQ
 
P.S. While complicated, the workflow is quite fast. I can pump out a single stack in 5 minutes—so from start of capture to pretty picture is less than 10 minutes. It takes me longer than that just to calibrate a 1-2 hour stack for a bright DSO—heck, it takes longer to just move 2 hours of RAWs from the SD card to the laptop than it takes to stack a planet!

Edited by BQ Octantis, 26 August 2020 - 09:28 PM.


#29 DubbelDerp

DubbelDerp

    Apollo

  • *****
  • Posts: 1,496
  • Joined: 14 Sep 2018
  • Loc: Upper Peninsula of Michigan

Posted 26 August 2020 - 07:53 PM

That’s pretty mind-blowing... the data I got a few weeks ago isn’t walking all over like that, but I couldn’t get nearly that amount of detail out of it. I’ll give your workflow a try as soon as I get a chance. I’m building a greenhouse over the next few days, but hopefully I can give it a try after that.

This is the best I could do with AS!3 and registax... it’s quite bad. If you’d be willing, I’d be really interested in uploading one of the video captures I got. It would be great to see what could be done by someone who knew what they were doing...

 

Attached Thumbnails

  • 7706F8C4-3D30-4368-A470-40928F9EDA2A.jpeg


#30 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 26 August 2020 - 09:44 PM

That’s pretty mind-blowing... the data I got a few weeks ago isn’t walking all over like that, but I couldn’t get nearly that amount of detail out of it. I’ll give your workflow a try as soon as I get a chance. I’m building a greenhouse over the next few days, but hopefully I can give it a try after that.

This is the best I could do with AS!3 and registax... it’s quite bad. If you’d be willing, I’d be really interested in uploading one of the video captures I got. It would be great to see what could be done by someone who knew what they were doing...

By all means! If I can find a free, anonymized hosting service that can handle .mp4s (Astrobin and CN can't), I can upload an .mp4 example of bad seeing and one of good seeing. But a video is really only necessary for step 2 (the analog would be all the RAWs vs. an unstretched 16-bit TIFF for DSOs). Several users in the solar system imaging forum (e.g., Tulloch's post here) have uploaded 16-bit TIFF or PNG stacks (though from planetary cameras, not DSLRs) for practice starting at step 3—and CN happily hosts those. I can even post a 16-bit TIFF of the test image above if you're interested. But as one of my best seeing captures, there's not much to learn from that one on steps 3 and 4…so maybe the bad one from last year?

 

BQ



#31 DubbelDerp

DubbelDerp

    Apollo

  • *****
  • Posts: 1,496
  • Joined: 14 Sep 2018
  • Loc: Upper Peninsula of Michigan

Posted 26 August 2020 - 10:08 PM

Thanks! I’ll take a look at those examples. I think my video files are in .avi format. I’ll take a look tomorrow to see if Lynkeos can handle them, and if not I’ll convert them to mp4 before uploading to my google drive folder.  I’d be interested in trying any data you’d be willing to share! 



#32 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 26 August 2020 - 10:46 PM

I think the typical pedagogical method is to inspire confidence with an easy exercise, tear down that confidence with a near-impossible exercise, and then rebuild with a final, achievable-but-real-world exercise. So I can create those three stacks. The stack I chose for this exercise was the easy one; here's that stack (a 16-bit per channel PNG—it turns out CN can't actually handle a TIFF):

 

https://www.cloudyni...412_1152564.png

 

See what you can do with your Registax workflow; I'll work on documenting steps 3 & 4…

 

BQ



#33 cdndob

cdndob

    Apollo

  • -----
  • Posts: 1,222
  • Joined: 28 Jul 2006
  • Loc: The Great White North

Posted 26 August 2020 - 11:04 PM

Altitude isn't everything…last year I went almost the entire season with the jet stream roaring at 60-72 m/s overhead. Here's a sample from last June:

Hey, that's about the same as my jet stream average recently! lol 

That animated gif does look rather close to my bad seeing days, like you're shooting Jupiter underwater.



#34 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 30 August 2020 - 05:55 AM

No takers on the PNG stack?

 

If you think my workflow is complicated, check this one out!

 

https://www.thelondo...anetary-imaging

 

BQ



#35 calypsob

calypsob

    Fly Me to the Moon

  • *****
  • Posts: 6,346
  • Joined: 20 Apr 2013
  • Loc: Virginia

Posted 30 August 2020 - 01:03 PM

The seeing was good, but not entirely stable:

Screen Shot 2020-08-24 at 9.47.02 PM.png

This was the longest continuous session I've ever assembled. Over a typical 10-20 minute session, I can use a single set of convolution/wavelets to equalize the sharpness across the session. But over this two hour session, it took four different sets. The big difference here was that it went from meh to spectacular (as in 1-pixel deconvolution and no wavelets!) to very good and back to meh. So the trick was finding those transition points.

This is my setup:

post-273658-0-96655700-1556698565.jpg

I do 5× zoom LiveView capture over USB with AstroDSLR 1.3—version 1.3 peaks at 9.5 fps. I shoot 200 second intervals; the output is 54MB .mp4 files, each with ~1900 1024x680 frames, all of them key frames. After alignment (I use Lynkeos 2.10 for its speed), I downselect to the best ~1024 frames for stacking. The rest is just sharpening and histogram tricks.

BQ


How did you get the data to map the seeing?

#36 calypsob

calypsob

    Fly Me to the Moon

  • *****
  • Posts: 6,346
  • Joined: 20 Apr 2013
  • Loc: Virginia

Posted 30 August 2020 - 01:13 PM

Ok, just remember you guys agreed to it. Just realize that every planetary imager will tell you everything in the workflow is wrong from top to bottom. So be it. I like the images it produces.

The flow starts with good seeing and good altitude on the planet. If it's below 30˚, it's going to be quite difficult to get good results. Ironically, zenith isn't a panacea either—the sweet spot seems to be between 40 and 60˚. I can only assume this is because of the rapid increase in air volume as you drop from 40˚ toward the horizon and the fast increase in tangential air-mass velocity as you climb from 60˚ to zenith. Chromatic dispersion by the atmosphere is also a problem below 40˚. The jet stream velocity is a big factor, too—I rarely even set up for planetary if it's above 50 m/s. And I've started setting up well away from my house—the metal roof gives off a tall thermal plume that takes hours to settle after sunset.

I already showed you my setup. With the 12.5mm eyepiece, I estimate I'm shooting at ~f/58. Yes, I meant to write f/58. I actually have no idea what the focal ratio is—I just try to get Jupiter across as many of the 1024×680 pixels of the 5× zoom as I can. And don't go quoting Nyquist on me—we had an intense debate on resolution vs. detection over in the planetary forum and concluded that an aperture can easily detect features to at least 10× Rayleigh—so there is your reference spatial information rate! I'm sampling somewhere around 4.4× Rayleigh, so I'm nowhere even close to the Nyquist criterion for that. And at Ts = 1/30sec, Jupiter has the SNR for f/58. Don't believe it? Look at the image. Truth be told, I shim the eyepiece with rubber bands, like this:

post-273658-0-83980600-1561769852.jpg

But this is just to make Jupiter a little smaller so I can keep it on the sensor during poor seeing and gusty ground winds that conspire to drive it off the sensor crop—a frequent occurrence in the outback.
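For the curious, the standard eyepiece-projection arithmetic can reproduce that ~f/58 figure. A minimal sketch; the 61 mm eyepiece-to-sensor distance here is an assumed value chosen to land near f/58, not a measurement:

```python
# Eyepiece projection: magnification M = (d - fe) / fe, so the effective
# focal ratio is the native ratio times M. The 61 mm distance is assumed.
def projection_f_ratio(native_f_ratio, fe_mm, d_mm):
    magnification = (d_mm - fe_mm) / fe_mm
    return native_f_ratio * magnification

print(projection_f_ratio(15, 12.5, 61))  # f/15 Mak + 12.5 mm EP -> ~f/58
```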

The capture and processing workflow is all simple after that:

gallery_273658_12412_28691.jpg

I included the version numbers for the software I use. AstroDSLR and Lynkeos are consciously older versions, simply for their speed (their subsequent upgrades were catastrophic).

Capture in AstroDSLR is fairly straightforward:

(Click for full size.)
gallery_273658_12412_280636.jpg

Note that I'm using AWB vice Daylight. This is because we're getting an 8-bit JPEG off the LiveView and not a 14-bit RAW. When I capture with Daylight, the blues and reds are quite compressed in the stack. Planetary targets are in full sun, and AWB makes better (or at least more complete) use of all three histograms. I use the ISO12800 setting because it gives me the greatest control over the actual gain of the sensor. Yes, we're actually only controlling ISO—the capture settings are simulated because the actual Ts (at least for LiveView off the 600D/T3i) is fixed at 1/30sec. For the optimal gain, I just adjust the Ts setting until the histogram max is just under 50,000. For this capture, Ts was 1/200sec. By simple math, this was a gain of ~ISO1920; with the eyepiece unshimmed, I'm usually at 1/160sec, or ISO2400. Make sure you click the Zoom button to get 5× zoom. And click the preview downscaling button—with the upscaling button, capture peaks at ~8 fps; downscaling takes it up to ~9.5. With all that set, just do a 200sec recording. Since I display seconds on my menu-bar clock, I just note the time I click the record button, add 3 minutes and 20 seconds, and click "Save" when the clock gets there.
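Here's that simple math as a snippet (a minimal sketch; the only real knob is gain, since the LiveView exposure is pinned at 1/30sec):

```python
# The sensor really exposes at a fixed 1/30 s, so a faster simulated Ts at
# the ISO 12800 setting just scales the applied gain down proportionally.
def effective_iso(iso_setting=12800, ts_actual=1/30, ts_simulated=1/200):
    return iso_setting * ts_simulated / ts_actual

print(effective_iso())                    # ~1920 at Ts = 1/200 s
print(effective_iso(ts_simulated=1/160))  # 2400 at Ts = 1/160 s (unshimmed)
```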

Once you've got the .mp4 saved, fire up Lynkeos 2.10. Drop the .mp4 on the window, and it will parse out all the frames. A curious note about Lynkeos: it will only load key frames. AstroDSLR seems to store every frame as a key frame, but the in-camera video does not. So if I want to use all the frames from an in-camera video at 30 fps, I actually have to rip the .mov into frames with a ripper. This approach is much simpler, and I get much better results. So here's the .mp4 loaded and parsed:

gallery_273658_12412_67912.jpg
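If you do go the in-camera .mov route mentioned above, here's a minimal ripping sketch. It assumes the ffmpeg command-line tool is installed; "capture.mov" is a placeholder filename:

```python
# Rip every frame of an in-camera .mov to PNGs so Lynkeos isn't limited
# to key frames. Assumes ffmpeg is on the PATH.
import os
import subprocess

os.makedirs("frames", exist_ok=True)
subprocess.run(["ffmpeg", "-i", "capture.mov", "frames/frame_%05d.png"], check=True)
```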

For alignment with Lynkeos, I find the alignment frame size needs to be about 50% bigger than the target's largest extent. It can be bigger—up to 300 pixels for a 5-pixel star—but it seems to be most accurate at 50%. In this case, that was 700 pixels. If your Jupiter drifted outside of that, start with a bigger box for a first pass, and then do it again with a 150% box.

A note about speed. If you want to watch the images update during the alignment, it can be mesmerizing to watch them all fly by. But if you want speed (as in the full stack aligned in seconds), turn this feature off in the preferences:

gallery_273658_12412_55493.jpg

Once aligned, we move on to downselection based on quality.

gallery_273658_12412_29548.jpg

I haven't figured out where the default settings come from, but the low setting of 0.08 doesn't correspond to anything I've ever shot. Just click the Preview check box and step the low frequency down until you find where you get the most inter-pixel detail in the Fourier transform view of the image. For mine, this was at 0.01. Then put the sampling box on the greatest extent you can fit on the disk—in this case, that was 300 pixels. Note that if you scroll down through the images, the Fourier image is out of alignment. This is because Lynkeos does the Fourier transform on the original frame, not the aligned one. So just make sure you're using your "Reference" alignment image (typically the first one, unless you selected a different one during alignment). Uncheck the "Preview" check box and click "Analyze". Seconds later, you'll have useful quality metrics:

gallery_273658_12412_49286.jpg

Once you've got your metrics, you just find the cutoff value that gives you ~1024 frames. Another amusing oddity about Lynkeos: it can't stack exactly 1024 frames. It will tell you it can, but when you get to stacking, it just won't finish. Like a soft divide-by-zero somewhere. So I target the cutoff that gives me the lowest frame count above 1024, which in this case gave me 1030. I promise you, 6 images won't make a difference in the stack. You can use the slider, but I just do this manually; I increment by 10, then by 5, then by 1, then by 0.1, and then by 0.01 to find the value (the search is sketched as code below). Once you've downselected, you're ready to stack:

gallery_273658_12412_99529.jpg
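That manual cutoff hunt can be written as a coarse-to-fine search. A minimal sketch, with the quality values assumed to come from the Analyze step:

```python
# Walk the cutoff down by coarser-to-finer steps until the frame count just
# exceeds the budget; Lynkeos balks at exactly 1024, so aim for just over it.
def find_cutoff(qualities, target=1024, steps=(10, 5, 1, 0.1, 0.01)):
    assert len(qualities) > target

    def count(c):
        return sum(q >= c for q in qualities)

    cutoff = max(qualities)
    for step in steps:
        while count(cutoff) <= target:
            cutoff -= step
        if step != steps[-1]:
            cutoff += step  # back off, then refine with the next, finer step
    return cutoff, count(cutoff)  # e.g., a cutoff passing 1030 frames here
```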

Stacking is as easy as it sounds. But if you have several images that you want to batch process (like I did the ones in the first post), you probably want to use a common size for all of them. In this case, 720×600 framed it nicely.

If you want to save your work, now is the time to do so. Once you stack the image, Lynkeos will include the stacked frame in the project file if you save it. Oddly enough, you can't actually get back to the stacked image—indeed, the stack is lost the moment you go to another pane, so you then need to restack. So if you want a project file that's several MB smaller (settings only), this is the point to save it. (It makes a much bigger difference for RAW stacks of the deep sky.)

Click Stack, and seconds later you'll have a stacked image:

gallery_273658_12412_92282.jpg

Save this as a 16-bit TIFF, and we're ready for the third step in the workflow…

Assuming there's still interest after all that smile.gif

BQ


Wow, Lynkeos is so intuitive in its quality analysis.
I find RegiStax and AutoStakkert to be vague in this regard.
AutoStakkert especially: I have no idea what the heck is going on when I use that program. My wife has a MacBook Pro, so I might give Lynkeos a go with an ASI camera.

#37 DubbelDerp

DubbelDerp

    Apollo

  • *****
  • Posts: 1,496
  • Joined: 14 Sep 2018
  • Loc: Upper Peninsula of Michigan

Posted 31 August 2020 - 09:59 AM

No takers on the PNG stack?

 

If you think my workflow is complicated, check this one out!

 

https://www.thelondo...anetary-imaging

 

BQ

I'm certainly going to take a crack at it! I've been offline for a few days, but hope to have some time to play around with it this week.



#38 DubbelDerp

DubbelDerp

    Apollo

  • *****
  • Posts: 1,496
  • Joined: 14 Sep 2018
  • Loc: Upper Peninsula of Michigan

Posted 01 September 2020 - 09:23 AM

Well, I went through the workflow, and I have to say that Lynkeos does a really nice job of aligning and grading the individual frames. I haven't used AS!3 enough to know whether it does any better or worse, but once the minimum frequency was set, I was able to confirm that the higher-scored frames were indeed the better frames for stacking.

 

Unfortunately it left me with a very blurry image of Jupiter, but I believe that's the fault of my data collection. I'll need to work on that first, I think. I did order the eyepiece projection adapter, so once that gets here I'll give it a go.

 

Here's where I am so far:

jup1.png

 

Compared to the stack you posted, this is just a blurry mess. Looking forward to parts 3 and 4 of the workflow, though! 



#39 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 02 September 2020 - 03:02 AM

Here's where I am so far:

 

Compared to the stack you posted, this is just a blurry mess.

Crikey, mate! I'm not sure you can pull much more than broader structures out of that stack (you'll have to post a 16-bit per channel PNG if you want me to try). If the seeing was really extreme, you might try stacking far fewer—maybe 256 to 500. The extreme spreading of the Airy disk by the atmosphere will make the largest achievable image much smaller. But the good news is that this works synergistically: fewer frames are needed to achieve good image quality at a smaller image size (just like DSOs).
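To make the synergy concrete, here's a minimal sketch assuming roughly shot-noise-limited frames, where stack SNR grows as the square root of the frame count and downsampling by a factor k averages k×k pixels:

```python
import math

# Stack SNR ~ sqrt(N); downsampling by k averages k*k pixels for ~k more SNR.
def relative_snr(n_frames, downsample=1):
    return math.sqrt(n_frames) * downsample

print(relative_snr(1024))               # 32.0 at full size
print(relative_snr(256, downsample=2))  # 32.0: 256 frames at half size keep pace
```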



#40 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 02 September 2020 - 03:41 AM

Step 3 is a pre-processing step that does two things. First, it centers the histogram for all of the subsequent sharpening algorithms to bring out the detail we want. Most sharpening algorithms work by pushing the center of the histogram outward—things to the right of center get brighter and things to the left of center get dimmer. Sure, you could use the exposure to set the midpoint—in step 2, you could just set the output level to 255 vice letting Lynkeos select it automatically. But you'd also be subject to the whims of the atmosphere and the altitude of the planet. Letting Lynkeos select it makes it completely repeatable.

 

So for 3a, I simply do a Levels with a gamma of 0.5, max output = 235. This could be sufficient, but I found I want less contrast out at the edges so the final doesn't sharply fade to black. And I don't want the highlights at the center of the disk to blow out. So I follow the Levels with a Curves with the points (0,0), (19,22), (159,142), (255,240):

 

gallery_273658_12412_60450.jpg

 

And here's the output:

 

gallery_273658_12412_31515.jpg
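In code terms, 3a is just a gamma-and-clamp followed by a tone curve. A minimal numpy sketch; np.interp is a piecewise-linear stand-in for Photoshop's smooth Curves spline, and the values are normalized to the 0-255 domain of the curve points:

```python
import numpy as np

def levels(img, gamma=0.5, out_max=235.0):
    # img: float array scaled to 0-255; gamma 0.5, output capped at 235
    return (img / 255.0) ** gamma * out_max

def curves(img, points=((0, 0), (19, 22), (159, 142), (255, 240))):
    xs, ys = zip(*points)
    return np.interp(img, xs, ys)  # linear stand-in for the Curves spline

stack = np.random.rand(600, 720, 3) * 255  # placeholder for the Step 2 stack
out = curves(levels(stack))
```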

 

For 3b, we're accommodating the fact that each of the channels is affected differently by the atmosphere, the optic, and the sensor. This means each channel can handle a different level of sharpening. While you could process each channel completely separately, a single pass of Smart Sharpen on each channel to bring them to roughly the same sharpness makes a single pass of identical deconvolution and wavelets across all channels much more effective. I've found that the green and blue channels can handle similar point-spread parameters (typically with blue wanting slightly less in amount), but red normally seems to want much less sharpening (about half) and at a smaller pixel radius (typically 0.5 pixels less).

 

To find the optimum pixel radius, I start with the green channel. I'll pull up the Smart Sharpen window and zoom the preview to 100%. I set the Shadow parameters to Fade Amount = 30%, Tone Width = 50%, Radius = 4px, and the Highlight parameters to Amount = 0%. I then temporarily set the Sharpen Amount = 400% so I can step through the Radius in 10-pixel increments to find the size that maximizes Jupiter's atmospheric details at the smallest scale possible. I'll then increment 5 pixels up and 5 pixels down to refine it further. And then up and down by 1 pixel. For my image, I found the optimum pixel size for green to be either 6.6 or 6.7:

 

gallery_273658_12412_27925.jpg

 

If we were just to use Smart Sharpen for the image, we could then just find the optimum amount and be done with it. And that would be fine if the seeing were quite static, with just one turbulence component. But it's not, so wavelets—equivalent to 8 Smart Sharpens—are much more helpful. Using Smart Sharpen for just the first-order component makes the wavelet parameters more uniform across all channels, as long as their sharpness levels are similar at the end. For a repeatable metric that works for all channels, I use the details on the NPR (North Polar Region). So I recenter the preview on the NPR and then step through the Amount in 50% increments until I find the point where I can just start to make out the structure of the NPR details over the blurriness. And then I increment by 10% to hone it. For my image, this happened for green at 230%:

 

gallery_273658_12412_64440.jpg

 

Repeating for the red channel, I concluded the optimum Smart Sharpen parameters are 160% at 6.2 pixels (with the same fade parameters). And for blue it is 230% at 6.1 pixels (this happened to be a rare time that the blue pixel size was smaller than the red). It then helps to click through the channels to ensure the NEB, SEB and NPR details are about the same to the eyeball across all the channels. If so, we can move on.
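Smart Sharpen itself is proprietary, so as a rough stand-in, here's a plain unsharp mask with those per-channel amounts and radii. A minimal sketch: the shadow-fade parameters are omitted, and Photoshop's radius doesn't map one-to-one onto a Gaussian sigma:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(channel, amount_pct, radius_px):
    blurred = gaussian_filter(channel, sigma=radius_px)
    return channel + (amount_pct / 100.0) * (channel - blurred)

img = np.random.rand(600, 720, 3)  # placeholder for the Step 3a output
params = {0: (160, 6.2), 1: (230, 6.6), 2: (230, 6.1)}  # R, G, B: (amount %, radius px)
for i, (amount, radius) in params.items():
    img[..., i] = unsharp(img[..., i], amount, radius)
```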

 

The last step (3c) is to align the channels. I just find a detail away from the limb on which to align the channels. In this case, the little dark spot to the east of the GRS is ideal—these tend to be gray spots. For a flattened image, you just select the move tool, and you can move the channel around with the arrow keys. Leave the green where it is, and align the red and blue to it with the arrow keys. For my image, I found I needed to move the red to the right by 1 pixel and the blue to the left by 1 pixel.
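And 3c in code is just whole-pixel channel nudges. A minimal sketch; the +1/-1 shifts are the offsets for this particular image:

```python
import numpy as np

def shift_channel(channel, dx=0, dy=0):
    return np.roll(channel, shift=(dy, dx), axis=(0, 1))  # positive dx = right

img = np.random.rand(600, 720, 3)                # placeholder for the 3b output
img[..., 0] = shift_channel(img[..., 0], dx=+1)  # red right by 1 px
img[..., 2] = shift_channel(img[..., 2], dx=-1)  # blue left by 1 px
```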

 

Here's the final output for Step 3:

 

gallery_273658_12412_34056.jpg

 

So you might be wondering why not just do it all in Photoshop. After all, Smart Sharpen is a very powerful tool. The answer is simple: artifacts. Smart Sharpen breaks the image into squares with sharp ridges, and an all-Photoshop workflow results in sharp lines appearing in the image—and the worse the seeing, the more pronounced they are in the final. But by combining different sharpening methods, I find that the artifacts tend to cancel each other out while the details keep improving.

 

BQ


  • DubbelDerp and b34k like this

#41 DubbelDerp

DubbelDerp

    Apollo

  • *****
  • Posts: 1,496
  • Joined: 14 Sep 2018
  • Loc: Upper Peninsula of Michigan

Posted 02 September 2020 - 09:43 AM

Crikey, mate! I'm not sure you can pull much more than broader structures out of that stack (you'll have to post a 16-bit per channel PNG if you want me to try). If the seeing was really extreme, you might try stacking far fewer—maybe 256 to 500. The extreme spreading of the Airy disk by the atmosphere will make the largest achievable image much smaller. But the good news is that this works synergistically: fewer frames are needed to achieve good image quality at a smaller image size (just like DSOs).

Indeed, I'll give it a try stacking fewer frames, but I think it's more a combination of a poor quality barlow, poor focus, and how low it is over the horizon - only about 25 degrees. When adjusting focus, it never snaps into good focus, but passes through varying degrees of blurry. But my new eyepiece projection adapter and threaded SCT visual back should be here just in time for the rain clouds coming up next week. I suspect there are alignment issues with the non-threaded barlow, since the planet moves around when I adjust focus. Hopefully I'll see some improvements with a better quality eyepiece.



#42 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 02 September 2020 - 02:59 PM

Indeed, I'll give it a try stacking fewer frames, but I think it's more a combination of a poor quality barlow, poor focus, and how low it is over the horizon - only about 25 degrees. When adjusting focus, it never snaps into good focus, but passes through varying degrees of blurry. But my new eyepiece projection adapter and threaded SCT visual back should be here just in time for the rain clouds coming up next week. I suspect there are alignment issues with the non-threaded barlow, since the planet moves around when I adjust focus. Hopefully I'll see some improvements with a better quality eyepiece.

When I was first experimenting, I also had a low-quality Barlow that just made things worse. Then I borrowed a Plössl set to experiment with magnifications, and even projection with those was far superior to the low-quality Barlow. I invested in narrow-FOV orthoscopics, which keep the rays tightly focused across the scene and give jet-black backgrounds around the planets compared to the Plössls. So hopefully your upgrade will yield a similar improvement. I also found that at higher magnification the depth of field is greater, so even with manual focusing I can find peak focus quite easily—assuming the seeing cooperates. And 25˚ is indeed a challenge!

 

BQ



#43 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 04 September 2020 - 07:18 PM

I came across an interesting post comparing the lens configuration and ray diagrams of a normal Barlow vs. a Powermate:

 

post-86050-0-66633200-1494860286.jpg

[Source]

 

Is that what you ended up getting?

 

Separately, I used this diagram to pick my orthos over any other eyepiece design for my planet killer (Figure 213):

 

https://www.telescop...berration_2.htm

 

If you compare the on-axis spread between the Plössl and the Abbe, they are quite similar; off-axis, the Abbe is superior to all. I would love to see a similar diagram for a Barlow and a Powermate for an apples-to-apples comparison…

 

BQ


Edited by BQ Octantis, 05 September 2020 - 06:25 AM.


#44 b34k

b34k

    Lift Off

  • -----
  • Posts: 11
  • Joined: 30 Jan 2020
  • Loc: San Diego, CA

Posted 06 September 2020 - 07:22 PM

I think the typical pedagogical method is to inspire confidence with an easy exercise, tear down that confidence with a near-impossible exercise, and then rebuild with a final, achievable-but-real-world exercise. So I can create those three stacks. The stack I chose for this exercise was the easy one; here's that stack (a 16-bit per channel PNG—it turns out CN can't actually handle a TIFF):

 

https://www.cloudyni...412_1152564.png

 

See what you can do with your Registax workflow; I'll work on documenting steps 3 & 4…

 

BQ

Well, looks like I'm a little late to the party, but I've been trying to really up my post-processing game over the past few weeks, so I figure if someone's offering me data that's way better than I can get with my current scope, I should try my hand at it!  Here's my best shot at using my standard post-stacking workflow (registax -> photoshop).

 

gallery 273658 12412 1152564 W Ps

 

And now that you've posted the secrets of part 3 of your workflow, I've tried applying that before wavelet sharpening... Holy moly what a difference!

 

gallery 273658 12412 1152564 Pp W Ps

 

Part of the issue is that so much of the color information is compressed into the upper end of the intensity levels in the original image. So without the initial gamma adjustment, going straight to wavelet sharpening blew out a lot of the color information. The pre-sharpening and color alignment also seem to add a lot to some of the finer details. Thanks for sharing this info, super helpful to have in my arsenal of processing techniques!



#45 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 06 September 2020 - 07:57 PM

Well, looks like I'm a little late to the party, but I've been trying to really up my post-processing game over the past few weeks, so I figure if someone's offering me data that's way better than I can get with my current scope, I should try my hand at it!  Here's my best shot at using my standard post-stacking workflow (registax -> photoshop).

 

And now that you've posted the secrets of part 3 of your workflow, I've tried applying that before wavelet sharpening... Holy moly what a difference!

 

Part of the issue is that so much of the color information is compressed into the upper end of the intensity levels in the original image. So without the initial gamma adjustment, going straight to wavelet sharpening blew out a lot of the color information. The pre-sharpening and color alignment also seem to add a lot to some of the finer details. Thanks for sharing this info, super helpful to have in my arsenal of processing techniques!

Well I was afraid this would happen…somebody actually made it to the end of step 3! Now I have to start working on documenting step 4. So much for procrastination! laugh.gif

 

Not to be critical, but I've often wondered about the many planetary images I've seen that lack finer details. It seems like most processing workflows rely on the focal ratio of the setup to set the brightness and contrast at the start of the workflow—so the tradeoff for the imager is extension tubes, camera gain, or a new camera with a different pixel pitch. I also wonder if many imagers are locked into the notion that the Rayleigh (or even Dawes) criterion is the limit to planetary detail (which is simply not the case), so they don't try anything different…?

 

BQ


Edited by BQ Octantis, 06 September 2020 - 10:47 PM.

  • b34k likes this

#46 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 06 September 2020 - 10:46 PM

Here is my starting point for Step 4. It is a 16-bit PNG for the sake of CN; my typical format between apps is 16-bit TIFF. Lynkeos can handle either.

 

https://www.cloudyni...2412_619756.png

 

The two things we're applying in Step 4 are deconvolution and wavelets—in that order. To start, simply open the file in Lynkeos and click on the deconvolution pane. The deconvolution pane has just two parameters: pixel radius and threshold. I start with a threshold of 0.3 and then step through the radius in whole-pixel increments, followed by 0.5-pixel increments, followed by 0.1-pixel increments to converge on the optimal size. Much like in Step 3, I'm searching for the smallest radius that brings out the detail on the planet. Too much, and the structures are thick and dark. Too little, and they're wispy and not intact. Once I've found it, I then step through the threshold in increments of 0.10, followed by 0.05, followed by 0.01. Again like in Step 3, I'm looking for the sweet spot between good detail and too much noise. I switch between 100% and 71% scaling to evaluate the parameters at both scales, knowing that my typical final for Jupiter is 50-100%, depending on seeing. For this one, I settled on a radius of 2.4 pixels and a threshold of 0.35:

 

gallery_273658_12412_60570.jpg
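Lynkeos doesn't document which deconvolution algorithm it uses here, so treat this as a stand-in sketch: Richardson-Lucy with a Gaussian PSF, where the PSF radius plays the role of the radius parameter (the threshold knob has no direct analogue in this sketch):

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(radius_px, size=15):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * radius_px**2))
    return psf / psf.sum()  # normalized Gaussian point spread function

channel = np.random.rand(600, 720)  # placeholder for one stacked channel (0-1 float)
sharpened = richardson_lucy(channel, gaussian_psf(2.4), num_iter=20)
```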

 

In processing, any step is a potential cutoff point. Deconvolution is a great tool, and its output essentially sets the extent of the larger structures for the final image—so this is a reasonable exit point in the workflow. But much like Smart Sharpen, too much deconvolution will create artifacts—particularly the edge-rind artifact due to the Gibbs effect. And proceeding to wavelets allows you to fine-tune the contrast structures at multiple scales—including lessening the amount of sharpening imparted by the previous steps.

 

So continuing to the Wavelets pane, I set the number of wavelets to 8. I find that I want greater wavelet scale step resolution ("Step") than the default of 2, so I go with 1.5. This automatically populates the spatial frequencies (Fréq) of the wavelets. I don't change them.

 

Screen Shot 2020-09-07 at 12.38.01 PM.png

 

The topmost wavelet (with a Fréq of 0,0000) is an overall brightness setting. For my capture setup and workflow up to this point, I find a setting of 0.80 to consistently set the brightness where I want it for Step 5. The next wavelets are fine tuning of the point spread at the next larger scale. The easiest way to converge on the ideal setting for a wavelet is to start by incrementing to 0.50 and then to 2.00. Toggling between those two settings, it should be obvious which direction to go. And Wavelet 2 is where things get random—the "ideal" parameter is totally dependent on seeing. Often, Wavelet 2 is down (less than 1.00), but sometimes it's up. So after the initial 0.50/2.00 toggle, I increment in the indicated direction by 0.10 and then by 0.01. Wavelets 3 and 4 are often optimal right around 1.00, but again, I start with a toggle between 0.50 and 2.00 to find the direction to go.

 

Wavelets 5, 6 and 7 need to be considered in tandem. The effect starting at Wavelet 4 is to pull details into little dots, but its larger effect is on larger structures (so its optimal setting rarely imparts dots); Wavelets 5, 6, and 7 will impart dots, and each subsequent wavelet splits up the prior wavelet's dots. So the set of three can handle being pushed a little more than each individually would indicate. Wavelet 7 needs to be evaluated at 100% image scale—it can impart too much speckle at 100%, which will get masked at smaller image sizes. Wavelet 8 is typically 1.00.
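The whole wavelet stage amounts to an à-trous-style decomposition with per-layer gains. A minimal sketch; the gain values below are placeholders, not the settled parameters in the screenshot that follows:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_sharpen(img, gains, step=1.5):
    # gains[0] scales the residual (the freq-0 "brightness" wavelet);
    # gains[1:] scale the detail layers from fine to coarse.
    residual = img.astype(float)
    details = []
    for i in range(1, len(gains)):
        blurred = gaussian_filter(residual, sigma=step ** i)
        details.append(residual - blurred)  # detail at this scale
        residual = blurred
    out = gains[0] * residual
    for gain, layer in zip(gains[1:], details):
        out += gain * layer
    return out  # with all gains at 1.0, this reconstructs img exactly

img = np.random.rand(600, 720)  # placeholder channel
result = wavelet_sharpen(img, gains=[0.80, 1.1, 0.95, 1.0, 1.0, 1.2, 1.2, 1.0])
```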

 

Again, I evaluate the parameters at multiple image scales. For wavelets I'll typically work at 71%, and check at 100%, 50%, and 35% scale. At 100%, the image will be quite grainy, but it should be smooth at 35%. Its acutance should be "in the ball park" at 50% and 71%. Here are the parameters I settled on:

 

gallery_273658_12412_51677.jpg

 

Saving it off as a TIFF, it's ready for Step 5. Here is the 16-bit output file (png):

 

https://www.cloudyni...2412_666210.png

 

BQ


Edited by BQ Octantis, 06 September 2020 - 10:51 PM.

  • DubbelDerp and b34k like this

#47 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 08 September 2020 - 06:51 PM

On the link ponz provided there was a selection for a 1.25" Fixed camera adapter.

I was about to recommend this to someone, and now it's gone—there's just the 2-inch and the Deluxe Tele-Extender. It just goes to my point on these fly-by-night drop shippers…

 

BQ



#48 DubbelDerp

DubbelDerp

    Apollo

  • *****
  • Posts: 1,496
  • Joined: 14 Sep 2018
  • Loc: Upper Peninsula of Michigan

Posted 09 September 2020 - 07:53 AM

I probably got the last one in stock...

 

I think I have everything I need. SCT - M42 adapter, eyepiece projection adapter, eyepiece (this one's a 17mm Plossl), t-adapter:

IMG_9633.jpg

 

Assembled, it's a nice little bundle:

IMG_9632.jpg

 

Clear skies hopefully Friday, so I'm going to give this a shot.



#49 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 09 September 2020 - 08:01 AM

I probably got the last one in stock...

 

I think I have everything I need. SCT - M42 adapter, eyepiece projection adapter, eyepiece (this one's a 17mm Plossl), t-adapter:

IMG_9633.jpg

 

Assembled, it's a nice little bundle:

IMG_9632.jpg

 

Clear skies hopefully Friday, so I'm going to give this a shot.

Woohoo! Rock on! rockon.gif

 

BQ



#50 BQ Octantis

BQ Octantis

    Skylab

  • *****
  • topic starter
  • Posts: 4,417
  • Joined: 29 Apr 2017
  • Loc: Red Centre, Oz

Posted 11 September 2020 - 03:12 AM

Wait, you've got more than just a 17mm, right? I shoot Saturn at 18mm for the SNR boost, but my Mak OTA is an f/15. Jupiter has enough signal to shoot with my 12.5mm EP. Assuming the laws of physics hold, the equivalent eyepieces would be 12mm and 8.3mm, respectively. Based on experiments with my f/6 SCT, I'd recommend starting with a 9mm EP for Jupiter and a 12mm for Saturn for an f/10 OTA.
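The scaling as arithmetic, a minimal sketch (it just holds the eyepiece focal length proportional to the OTA's focal ratio):

```python
# Equivalent eyepiece for a different focal ratio: fe' = fe * (f_to / f_from).
def equivalent_ep(fe_mm, f_from=15, f_to=10):
    return fe_mm * f_to / f_from

print(equivalent_ep(18))    # 12.0 mm: the Saturn eyepiece for an f/10 OTA
print(equivalent_ep(12.5))  # ~8.3 mm: the Jupiter eyepiece for an f/10 OTA
```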

 

BQ

 

P.S. Does this mean I need to start documenting Step 5? laugh.gif



