CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Mars 11/24/20 (with much trepidation)

24 replies to this topic

#1 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 25 November 2020 - 11:31 AM

Chastened by recent posts of 'worst planetary photos' looking better than many of my best, I'll dip my toe in the water after a long hiatus.

 

Two weeks ago I received my new ASI224MC and have been waiting for 'magical seeing' to come close to what many here post routinely. Paired with a 10" Meade LX200 SCT operating at f/20, I took a number of 1-minute color videos at 200 fps. Stacking was done in AutoStakkert 3.1.3; wavelets in Registax 6.1.0.8 and Photoshop CS5. Collimation is decent. Focus is always a challenge when working at f/20 while the atmosphere boils. A Bahtinov mask arrives today, so maybe that will add some quantitative input into an otherwise 'hunt and poke' exercise.

 

AutoStakkert runs much more quickly than the same function in Registax, so that's something, but I still haven't found an explanation of the 'Quality Graph' that lets me make informed decisions about how many frames of a 12,000-frame SER file to stack. I'm guessing the 'green' s-curve is a normalized ranking of 'sharpness' (frequency content of the images) and the 'grey' scatter line is the 'relative motion' of the image centroid? Should we expect a correlation between the two, e.g. when the atmosphere is stable, the centroid is motionless and the sharpness is high?
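To make my guess concrete, here is a rough sketch of how I imagine the two traces could be built (purely my assumption - Emil's actual code isn't public, so the quality metric and both plots here are stand-ins):

```python
import numpy as np

# Hypothetical per-frame quality scores for a 12,000-frame SER capture;
# the random values just stand in for whatever metric AS!3 computes.
rng = np.random.default_rng(0)
quality = rng.normal(0.5, 0.1, 12_000).clip(0, 1)

# Grey trace (my guess): per-frame quality in chronological order,
# normalized so the best frame reads 100%.
grey = 100 * quality / quality.max()

# Green curve (my guess): the same values re-sorted from best to worst.
green = np.sort(grey)[::-1]

# Stacking the 'best N' would then mean taking the top-N frame indices.
n_stack = 2500
best_idx = np.argsort(quality)[::-1][:n_stack]
```

If that model is right, a flat green curve would mean uniformly good (or uniformly bad) frames, and a steep tail-off would mean only a short spell of usable seeing.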

 

Moving on to Registax wavelets, processing proceeds as a 'monkey' would paint the Mona Lisa. After n-factorial random clicks and adjustments of sliders, a picture that resembles either 'pumpkin pie' or an 'orange bowling ball' emerges ... neither comes close to what I expect a 10" aperture should yield.

 

Last night's seeing was average, with high haze filtering in from the west-southwest keeping the temperature around 32F for the evening ... which for Connecticut typically spells better-than-average seeing. And yet I still get 'pumpkin pie' for an image. I've included two images from the same SER file: 1) 600 frames, 2) 2500 frames. As expected, the 2500-frame stack is 'smoother', though for my money the 600-frame stack, while 'noisier', shows a more faithful rendering of detail, i.e. the polar cap is round as opposed to triangular. Again, I think the clue here is understanding the 'quality graph' well enough to know the optimum number of frames to stack as a function of seeing. I include a screenshot of the AS3 setup screen just in case something jumps off the page to anyone in the know.

 

Can anyone direct me to a tutorial that actually explains what the software is doing as opposed to a 'cookbook' recipe for mediocrity?

 

Many thanks in advance

 

Bob R

Attached Thumbnails

  • Mars_192056 600.jpg
  • Mars_192056 2500.jpg
  • Untitled-2.jpg

  • Magellanico, Scott Mitchell, mikewayne3 and 2 others like this

#2 Borodog

Borodog

    Apollo

  • -----
  • Posts: 1,016
  • Joined: 26 Oct 2020

Posted 25 November 2020 - 12:26 PM

I wrote a long post and lost it. :Op Bah.

 

Anyway, I am following because I feel the same way lately.

 

Some questions:

 

1) What were your camera settings?

2) What are your alignment point settings?

3) Was your scope down to temperature when you shot?

4) Did you collimate your scope via camera with the actual optical path you imaged with? You don't want to collimate with a diagonal in and shoot with it out, for example.

 

Some things you might try next time:

 

1) Let the scope get down to temperature

2) If the Moon is up, use it to set your focus

3) Once focused, quickly image, stack, and post-process a star to check your collimation. You should only need on the order of a thousand frames, and maybe 5 alignment points: 4 in the corners of the diffraction pattern and one large one surrounding the whole pattern. Use the gamma correction in Registax to bring out the fainter diffraction rings and make sure they are centered on the star.

 

Full disclosure: I have been doing this for less than two months and have no idea what I am doing. I'm making this **** up as I go.


  • Bob R. likes this

#3 SarverSkyGuy

SarverSkyGuy

    Explorer 1

  • *****
  • Posts: 90
  • Joined: 28 Jun 2010

Posted 25 November 2020 - 12:43 PM

I can't answer your questions but I can add a point of comparison.  I think focus might be your issue.  I have the same optical setup and the same camera. I use PIPP to "debounce" the image in the avi file, then process as you do.  In Registax I typically move the sliders into the 80's or 90's.  In slightly better than normal seeing this is what I got....

 

Mars 2020-10-22 .jpg

 

 


  • Bob R. likes this

#4 RedLionNJ

RedLionNJ

    Skylab

  • *****
  • Posts: 4,193
  • Joined: 29 Dec 2009
  • Loc: Red Lion, NJ, USA

Posted 25 November 2020 - 12:45 PM

You told AutoStakkert to use APs when determining the 'best frames' - it would be helpful (as Borodog asserts) to see a sample of the AP size/distribution you used for this.

 

 

As far as interpreting the graph, it's really, really straightforward (unless I've been making mistakes all these years):

 

The grey points (usually forming jagged lines) represent the quality (sharpness) data in chronological order.

The green curve represents the quality data in decreasing quality order

 

For an OSC cam, I'd be looking to incorporate at least 5000 frames in my stack. You will likely not be able to stack only 600 without significant artifacts arising when you post-sharpen. Bear in mind you don't need all the included frames to be 'sharp' - you need them to be uniformly blurry by similar amounts and figures. That way, you can wavelet the resulting stack to sharpen with significant success.
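The payoff from a deep stack can be sketched with a back-of-envelope noise model (this assumes purely random, uncorrelated frame noise - a textbook approximation, not a claim about AS!3's internals):

```python
import math

# Mean-stacking N frames with independent random noise improves SNR
# by roughly the square root of N.
def stack_snr(frame_snr: float, n_frames: int) -> float:
    """Approximate SNR of a mean stack of n_frames equally noisy frames."""
    return frame_snr * math.sqrt(n_frames)

snr_5000 = stack_snr(1.0, 5000)   # ~71x the single-frame SNR
snr_600 = stack_snr(1.0, 600)     # ~24x - noticeably noisier
```

That roughly threefold difference in residual noise is why a 600-frame stack breaks up so much sooner under wavelet sharpening.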

 

In less-than-optimal seeing, a higher capture frame rate (with accordingly small ROI) can be your biggest friend.

 

 

 

Spend some time watching the preview before you start capturing. Repeatedly tweak the focus (with a hands-off focusing mechanism which does not move the primary) until you are sure you're at the best focus possible. Even then, there will be some short spells of better seeing than others.

 

And being located where you are - try to avoid imaging when the jet stream is nearby!


  • Bob R. likes this

#5 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 25 November 2020 - 12:50 PM

Borodog,

 

Some answers to your questions:

 

1) What were your camera settings? I set the exposure to 5 ms and use the gain to get the 'green' channel to about 60% of max. I then go into the sub camera settings to set the 'red' gain to 80% and the 'blue' to 40%.

2) What are your alignment point settings? I'm using 5 alignment points, giving 5 overlapping boxes: one for each quadrant and a center box covering the entire planet including the limb.

3) Was your scope down to temperature when you shot? I store the telescope outdoors in an unheated, unattached garage with plenty of ventilation. As this was early in the evening, yes, there could be some tube currents as the temperature dropped 10 degrees at sunset and these images were taken 2 hours later

4) Did you collimate your scope via camera with the actual optical path you imaged with? You don't want to collimate with a diagonal in and shoot with it out, for example. I always check collimation with the same diagonal and barlow with the mirror locked ... defocusing is accomplished at the drawtube.
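For what it's worth, the capture arithmetic in 1) is self-consistent: the exposure caps the frame rate, and a 1-minute capture at that cap gives exactly the 12,000-frame SER mentioned earlier (a trivial sketch; the helper names are mine):

```python
# Exposure time caps the achievable frame rate: each frame must fit
# its own exposure, so a 5 ms exposure allows at most 200 fps.
def max_fps(exposure_ms: float) -> float:
    return 1000.0 / exposure_ms

def frames_captured(duration_s: float, fps: float) -> int:
    return int(duration_s * fps)

n_frames = frames_captured(60, max_fps(5.0))   # 1-minute SER at 5 ms exposure
```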

 

Focusing on the moon is even more confounding, though I like your idea of using the diffraction rings of a star. The Bahtinov mask should achieve the same thing in 'real time'. I'm also thinking about putting a digital indicator on the drawtube so that I can monitor the focus position as the drawtube moves in and out. I hate flying blind, and this imaging 'rabbit hole' has so many variables that it's impossible to troubleshoot while standing on the proverbial shifting sand of a multivariable optimization problem :(

 

in solidarity ... thank you!



#6 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 25 November 2020 - 05:34 PM

I can't answer your questions but I can add a point of comparison.  I think focus might be your issue.  I have the same optical setup and the same camera. I use PIPP to "debounce" the image in the avi file, then process as you do.  In Registax I typically move the sliders into the 80's or 90's.  In slightly better than normal seeing this is what I got....

 

Mars 2020-10-22.jpg

Thanks SarverSkyGuy ... I use the autocentering feature in FireCapture, which I think accomplishes the same 'debounce' of the image you perform with PIPP (this is just a guess). I'm thinking this all comes down to jet-stream-influenced seeing. Last night, though the lower atmosphere was fairly stable, the jet stream was still 70-80 knots :( Such is the predicament of living in North America's atmospheric waste drain ...



#7 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 25 November 2020 - 05:56 PM

192301.jpg

You told AutoStakkert to use APs when determining the 'best frames' - it would be helpful (as Borodog asserts) to see a sample of the AP size/distribution you used for this.

 

 

As far as interpreting the graph, it's really, really straightforward (unless I've been making mistakes all these years):

 

The grey points (usually forming jagged lines) represent the quality (sharpness) data in chronological order.

The green curve represents the quality data in decreasing quality order

 

For an OSC cam, I'd be looking to incorporate at least 5000 frames in my stack. You will likely not be able to stack only 600 without significant artifacts arising when you post-sharpen. Bear in mind you don't need all the included frames to be 'sharp' - you need them to be uniformly blurry by similar amounts and figures. That way, you can wavelet the resulting stack to sharpen with significant success.

 

In less-than-optimal seeing, a higher capture frame rate (with accordingly small ROI) can be your biggest friend.

 

 

 

Spend some time watching the preview before you start capturing. Repeatedly tweak the focus (with a hands-off focusing mechanism which does not move the primary) until you are sure you're at the best focus possible. Even then, there will be some short spells of better seeing than others.

 

And being located where you are - try to avoid imaging when the jet stream is nearby!

Hey RLNJ, thank you for the info on the Quality Graph. Using your input, the QG suggests that the last 15 seconds of the 60-second recording had twice the quality of the first 45 seconds. I verified this by 'playing' the frame list while watching the frame number (the higher-quality frames were the ones at the end of the capture). This suggests I should stack 3,000 frames of the 12,000-frame recording. As requested, I've also included the 5 APs I'm using.

 

Looking at the color levels, it looks like I 'blew out' the red channel by going a bit too high on gain (66%). Perhaps I could improve image noise by dialing back the gain and lengthening exposure to 10 ms, sacrificing FPS, i.e. 100 FPS vs. 200? I'm a little reluctant to do that given the QG suggests better seeing for only approximately 15 seconds; at 100 FPS, that gives me just 1,500 frames to stack. That said, to my way of thinking, the purpose of stacking is to build up 'signal' and average out noise. However, if one's 'signal' is 'crap', all we're going to get with stacking is 'smooth-looking crap'. So going longer only dilutes 'better signal' with 'less good signal'.
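To put rough numbers on the 10 ms question, here is a back-of-envelope comparison (my assumptions, not measurements: signal scales linearly with exposure, per-frame noise is a fixed read/gain floor, and only the good 15-second window gets stacked):

```python
import math

def stack_snr(exposure_ms: float, window_s: float,
              read_noise: float = 1.0, signal_per_ms: float = 1.0) -> float:
    """SNR of stacking every frame captured in the good-seeing window."""
    fps = 1000.0 / exposure_ms            # exposure caps the frame rate
    n_frames = int(window_s * fps)
    frame_snr = (signal_per_ms * exposure_ms) / read_noise
    return frame_snr * math.sqrt(n_frames)

snr_5ms = stack_snr(5.0, 15)     # 3000 frames at 200 fps
snr_10ms = stack_snr(10.0, 15)   # 1500 frames at 100 fps
# Under these assumptions the 10 ms capture still wins by ~sqrt(2):
# doubling the per-frame signal beats halving the frame count.
```

If shot noise rather than read noise dominates, the advantage shrinks, so this is only an argument sketch, not a verdict.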

 

I'm beginning to have confidence the problem isn't with 'this monkey' but more with what I'm looking through. Do you have any suggestions on atmospheric forecasts/data for understanding 'seeing'?

Attached Thumbnails

  • 192301 APs.jpg


#8 Kokatha man

Kokatha man

    Hubble

  • *****
  • Posts: 15,082
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 25 November 2020 - 06:11 PM

Quickly: you need a lot more MAPs boxes than you have here, and the boxes you use are far too large.

 

I use a series of concentric circles (referencing the red dots at the centres of each MAP box) & typically use approx. 80 of these at size "48"...
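For anyone trying to reproduce that layout, here is one way to place ~80 AP centres on concentric rings (my own geometry sketch; `ring_ap_centres` is a hypothetical helper, not anything in AS!3, and the ring/point counts are just one split that yields 80):

```python
import math

def ring_ap_centres(cx: float, cy: float, disk_radius: float,
                    n_rings: int = 4, aps_per_ring: int = 20):
    """Evenly spaced alignment-point centres on concentric circles."""
    centres = []
    for r in range(1, n_rings + 1):
        radius = disk_radius * r / n_rings
        for k in range(aps_per_ring):
            theta = 2 * math.pi * k / aps_per_ring
            centres.append((cx + radius * math.cos(theta),
                            cy + radius * math.sin(theta)))
    return centres

# 4 rings x 20 points = 80 centres inside a 100 px planetary disk.
aps = ring_ap_centres(320, 240, 100)
```

Each centre would then anchor one size-48 MAP box in AS!3.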

 

Your quality graph would also suggest poor seeing. Your histogram value when capturing is what determines the amount of gain you employ, & our typical histo for the ASI224MC is no more than 60% for the dominant channel (red), although we set it by the un-debayered image's histo.

 

If no-one provides any examples I'll put together a suggested MAPs layout for AS!3 using an equivalent-sized Mars (not at 170% zoom ;)) later today our time... don't use any masks; learn to rely on your eyes' ability to focus, btw!



#9 Kokatha man

Kokatha man

    Hubble

  • *****
  • Posts: 15,082
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 25 November 2020 - 06:18 PM

ps: don't drizzle in poor seeing! ;) 



#10 kevinbreen

kevinbreen

    Mercury-Atlas

  • -----
  • Posts: 2,662
  • Joined: 01 Mar 2017
  • Loc: Wexford, Ireland

Posted 25 November 2020 - 07:16 PM

You told AutoStakkert to use APs when determining the 'best frames' - it would be helpful (as Borodog asserts) to see a sample of the AP size/distribution you used for this.


As far as interpreting the graph, it's really, really straightforward (unless I've been making mistakes all these years):

The grey points (usually forming jagged lines) represent the quality (sharpness) data in chronological order.
The green curve represents the quality data in decreasing quality order

For an OSC cam, I'd be looking to incorporate at least 5000 frames in my stack. You will likely not be able to stack only 600 without significant artifacts arising when you post-sharpen. Bear in mind you don't need all the included frames to be 'sharp' - you need them to be uniformly blurry by similar amounts and figures. That way, you can wavelet the resulting stack to sharpen with significant success.

In less-than-optimal seeing, a higher capture frame rate (with accordingly small ROI) can be your biggest friend.



Spend some time watching the preview before you start capturing. Repeatedly tweak the focus (with a hands-off focusing mechanism which does not move the primary) until you are sure you're at the best focus possible. Even then, there will be some short spells of better seeing than others.

And being located where you are - try to avoid imaging when the jet stream is nearby!


Grant, you say
"You told AutoStakkert to use APs when determining the 'best frames' - it would be helpful (as Borodog asserts) to see a sample of the AP size/distribution you used for this."

??? Have I been doing this wrong for years? I just drop AVIs into AS!3 and hit ANALYSE. Then I set APs.
Maybe I'm just overtired.....

#11 Borodog

Borodog

    Apollo

  • -----
  • Posts: 1,016
  • Joined: 26 Oct 2020

Posted 25 November 2020 - 07:35 PM

Kevin

 

You can either set AS3 to use individual APs to determine the best frames (the default) or "Global" which uses the entire frame. Note that when doing the former, each alignment point might use different frames to stack. I think.



#12 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 25 November 2020 - 09:35 PM

I use a series of concentric circles (referencing the red dots at the centres of each MAP box) & typically use approx. 80 of these at size "48"...

As pictures are worth a thousand words, here's my interpretation of 80 APs whose centers are located on 4 concentric circles ... is this what you mean?

 

Stacking 2500 such aligned frames and applying wavelets yields pretty much the same result:(

 

As you say, the Quality Graph indicates poor seeing. This statement suggests the 'grey' trace is an absolute assessment of quality? I'm not sure that can be true, as it would need absolute knowledge of the spatial frequency content of the object being imaged, wouldn't it? Or is it that the quality going from 25% to 50% (a 100% improvement) in the span of 15 seconds indicates variation in seeing?

Attached Thumbnails

  • Mars_192301 80 APs.jpg

  • Kiwi Paul likes this

#13 Borodog

Borodog

    Apollo

  • -----
  • Posts: 1,016
  • Joined: 26 Oct 2020

Posted 25 November 2020 - 09:41 PM

I think I know exactly what your problem is. It's the same as mine has been. I think your exposure is way too short in an attempt to get high frame rates. You are making up for this with gain, thinking, like I did, that the grainy high gain noise would cancel out. It will, but Autostakkert can't figure out which frames to stack because of all the noise. Hence you get a blurry mess. It appears to me now that at least in my own case 10 fps of high signal to noise ratio data is better than 130 fps of low signal to noise ratio data.


Edited by Borodog, 25 November 2020 - 09:42 PM.

  • Bob R. and Kiwi Paul like this

#14 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 25 November 2020 - 10:53 PM

I think your exposure is way too short in an attempt to get high frame rates. You are making up for this with gain, thinking, like I did, that the grainy high gain noise would cancel out. It will, but Autostakkert can't figure out which frames to stack because of all the noise. Hence you get a blurry mess.

 

This is a very strong possibility, as high frame rate at the expense of SNR was my goal. I'm thinking I can prove this out by taking videos of an image indoors under varying amounts of light, compensating with gain to achieve the same saturation level on the detector. I will process with AS3 and Registax and let the results speak for themselves. Thanks for the suggestion!


  • Kiwi Paul and Borodog like this

#15 Borodog

Borodog

    Apollo

  • -----
  • Posts: 1,016
  • Joined: 26 Oct 2020

Posted 26 November 2020 - 12:15 PM

I don't know that that will be a fair test, as your indoor image will not be boiling from atmospheric turbulence. I think the only way to be sure is to actually test it on Mars on a night with good seeing. That's what I'm planning to do.


  • Bob R. and Kiwi Paul like this

#16 Kokatha man

Kokatha man

    Hubble

  • *****
  • Posts: 15,082
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 26 November 2020 - 06:13 PM

Bob - Post #12

 

I'd move the outer ring out a tad & possibly add another set between the outer ring & the next one in, because of the way AS!3 sets the centre cross when loading (it isn't central). I try to create as even a spread as possible, but that's not really important tbh.

 

<"As you say, the Quality Graph indicates poor seeing. This statement suggests the 'grey' trace is an absolute assessment of quality? I'm not sure that can be true, as it would need absolute knowledge of the spatial frequency content of the object being imaged, wouldn't it? Or is it that the quality going from 25% to 50% (a 100% improvement) in the span of 15 seconds indicates variation in seeing?">

 

In your Post #7 the gray saw-tooth plot is how AS!3 has determined the quality of each individual frame of the entire capture, from go to whoa, against the reference frame it chooses - you can see that in the entire first half of the capture/recording only a couple of frames reached even 50% quality wrt the reference frame... hence my earlier appraisal. The green graph is the graded plot showing how the quality progressed "downhill" from the best frame to the worst - you can see that it tailed off very quickly, another reason I said the seeing would've been poor. ;)
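That relative-to-the-reference idea can be shown with toy numbers (my reading of the explanation, not AS!3's actual algorithm; the scores below are invented):

```python
import numpy as np

def relative_quality(scores, ref_index):
    """Express each frame's score as a percentage of the reference frame's."""
    scores = np.asarray(scores, dtype=float)
    return 100.0 * scores / scores[ref_index]

# Toy capture where seeing improves at the end, as in Bob's Post #7:
raw = [2.4, 2.0, 2.6, 2.2, 5.5, 6.0]
rel = relative_quality(raw, ref_index=5)   # best frame as reference
# The first-half frames all land below 50% of the reference frame,
# even though nothing about the scale is absolute.
```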

 

Kev Post #10

 

<"Grant, you say

"You told AutoStakkert to use APs when determining the 'best frames' - it would be helpful (as Borodog asserts) to see a sample of the AP size/distribution you used for this."

??? Have I been doing this wrong for years? I just drop AVIs into AS!3 and hit ANALYSE. Then I set APs.
Maybe I'm just overtired.....
">

 

I'm certainly over-tired atm Kev :lol: but your approach is quite fine - no MAPs boxes need to be set to let AS!3 "do its thing." This fact calls into question some of the rather curious explanations put forward as to how AS!3 actually works - Emil keeps all his coding very secretive, which is his right, but there appears to be some "smoke & mirrors" in how many folks generally believe it operates... with good reason, due to some of the rather vague explanations proffered on the internet! :D

 

Regardless, do a test as I have on numerous occasions over the years: it is a laborious & time-consuming exercise, but if you don't select a regular-sized stack and instead go for a hundred or two at most, you can "Analyse" without the MAPs boxes & also with them - for the same recording you will get exactly the same specific frames (AS!3 lists each frame by its position in the original recording) ordered in the same sequence/progression. ;)

 

Nonetheless, AS!3 remains the best stacking program with the simplest/easiest interface imo, although I have not looked at the French program as of yet...

 

 


  • kevinbreen and Kiwi Paul like this

#17 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 28 November 2020 - 02:55 PM

I don't know that that will be a fair test, as your indoor image will not be boiling from atmospheric turbulence. I think the only way to be sure is to actually test it on Mars on a night with good seeing. That's what I'm planning to do.

Borodog, fair point. I was thinking of simulating 'seeing' by imaging through a rotating piece of cellophane with a smudge of Vaseline. If it rotated at 10 rpm, a 1-minute video would provide good evidence of the robustness of AS3's quality assessment as a function of SNR, by looking for the 6-second periodicity of the 'Vaseline'. Not only would the indoor test afford me some control of the 'experiment', but my brain won't become befuddled by the cold ;)



#18 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 28 November 2020 - 03:10 PM

In your Post #7 the gray saw-tooth plot is how AS!3 has determined the quality of each individual frame of the entire capture, from go to whoa, against the reference frame it chooses - you can see that in the entire first half of the capture/recording only a couple of frames reached even 50% quality wrt the reference frame... hence my earlier appraisal. The green graph is the graded plot showing how the quality progressed "downhill" from the best frame to the worst - you can see that it tailed off very quickly, another reason I said the seeing would've been poor.

That's a very interesting observation: I guess I was anticipating that for 'average' or 'fair' seeing I would see a 'normal distribution' of quality - better and worse frames over the course of the recording - leading to the green 'S-curve'. Again, I was under the impression that the 'quality factor' is relative to AS3's pick of the reference frame (however that happens). But from what you're saying, the quality curve needn't be an 'S-curve' at all; e.g. if seeing were 'perfect', the quality curve would be a straight line at 100% across the board.

 

Do me (and all newbies like me) a big favor: would you post a picture of one of your quality curves resulting from exceptional seeing?

 

Indebted to your sound advice!



#19 Kokatha man

Kokatha man

    Hubble

  • *****
  • Posts: 15,082
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 28 November 2020 - 06:39 PM

Well Bob, contrary to a lot of comments our images receive, even taking the flattery out of the occasion :lol: we rarely experience "exceptional seeing."

 

I don't make too big an issue out of it, but from our perspective most of the seeing falls into what I term "decent" tbh - there are a very few times when we have had what I term "excellent" seeing, so I guess "decent" can be anywhere between "ok" & "pretty good" - not that this says terribly much to anyone else... everything is relative! ;)

 

A couple of those "excellent" situations stick in my memory...Saturn many years ago...& going back a few years, Neptune: the seeing with Neptune was so good that we believed we could see the Equatorial storm in the live feed. (see my signature below & the BAA eBulletin mentioned)

 

Also, when capturing an r-g-b sequence we could clearly differentiate the focus position of the r & g filters from that of the b filter - & the final colour image revealed said storm, as did the ir images - with Neptune that really is saying something! :o

 

A long time ago one quality graph was almost a straight line running at around 98% of the best frame's quality, but that was using R6 & it might be a different story run through AS!3... keep in mind that flat graphs can also occur when the reference frame & most of the other frames are similarly gauged - regardless of whether the Numero Uno is good or bad! :lol:

 

Looking up those old examples I've mentioned here is not easy btw...they would be on any of the stacks of external HD's (labelled) sitting on the top shelves of my "archive."

 

Much easier to show you a "decent" (although not at the high end of that rating) example: this AS!3 quality graph from 9th October at Bower in the Murray Mallee.

 

A good example for a few reasons: the image outcome was quite reasonable...it is a single r-g-b sequence & not a WinJUPOS integration of multiple captures...& it shows the type of graph that most of our acceptable results display.

 

This is the red filter plot, excuse the size of the AS!3 screenshot wrt the actual images of Mars...I had to make it thus to fit both images into the CN limits.

 

Remember also that this single capture did not represent that night's whole effort - this was a single 300-second set of r-g-b captures, the only decent one in a session that ran from a quarter past midnight until 1:50 am! So even "decent" is not easily won!

 

Click on the thumbnails:

 

QualityGraph.jpg

 

mars2020-10-09_16-12_rgb_dpm-SouthUp.png  

 

 

 

 

 

 


  • Bob R. likes this

#20 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 30 November 2020 - 09:34 PM

 

A long time ago one quality graph was almost a straight line running at around 98% of the best frame's quality, but that was using R6 & it might be a different story run through AS!3... keep in mind that flat graphs can also occur when the reference frame & most of the other frames are similarly gauged - regardless of whether the Numero Uno is good or bad! :lol:

Kokatha man, thank you so much. This is extremely helpful!!! What I'm seeing in the Quality Graph (the 'grey') is 'steady' seeing, as the mean doesn't vary much from 50%. This furthers my sense that the assessment of 'quality value' is not absolute but rather a calculation based on the distribution of quality across all frames of the data set. That the 'green' curve is a smooth 'S' suggests the quality of the frames followed a 'normal' distribution, i.e. +/- 1 standard deviation comprises 68% of the data set; +/- 2 std dev is 95%; +/- 3 std dev is 99%. Taking the best frames - those beyond +1 standard deviation - would yield about 15.9% of the 39,000-frame data set, or roughly 6,000 decent frames to stack!!!
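The tail arithmetic can be checked directly (a sketch that assumes frame quality really is normally distributed, which AS!3 does not guarantee):

```python
import math

def fraction_above(sigmas: float) -> float:
    """Fraction of a normal distribution lying above mean + sigmas * std."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

frames_total = 39_000
top_tail = fraction_above(1.0)             # ~15.9% of frames beat +1 sigma
stackable = int(frames_total * top_tail)   # ~6,200 'decent' frames
```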

 

My take aways:

 

(1) seeing seeing seeing

(2) make sure collimation isn't just concentricity of the defocused 'doughnut' but is also verified by the symmetry of the diffraction pattern of a focused star

(3) focus on a nearby star, lock the primary mirror, and move to the planet

(4) balance exposure with gain to ensure good SNR while still achieving a minimum of 1000 frames to stack without planet-rotation blur

(5) patience

 

Many thanks!



#21 Kokatha man

Kokatha man

    Hubble

  • *****
  • Posts: 15,082
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 01 December 2020 - 01:22 AM

Bob, some of that list can be rephrased as:

 

Seeing, seeing, seeing - collimation, collimation, collimation - focus, focus, focus. :lol:

 

Whilst the diffraction rings either side of focus will tell you more about your optics, for the purposes of this exercise either side of focus is sufficient. My "real view" page in my website tutorials gives a bit more info, but once you have established good collimation with the star defocused to around a half-dozen rings, focusing until there are only 1 or 2 rings will give you an even better appraisal of collimation. (imo an Airy disk is very difficult to achieve in-camera in an SCT... which is of course what I refer to when collimating)

 

Focus always on the planet - theory might make people dispute the notion that you'll get as good a focus on features of the planet's disk itself as on a star, but theory proves to be a poor master in this respect! ;)

 

Sufficient gain for high frame rates is also a non-issue - Mars would require only about 50% gain to achieve an uber-high fps... roughly 300-400 fps in your 10" SCT - & 50% gain is "peanuts" tbh.

 

Balancing gain with exposure is not really a problem with larger scopes, & with 5-6 minute capture times for Mars you'll get all the frames you require if the seeing cooperates. (the above image & all those on our website between September & late October used 5 minutes - longer earlier & later as Mars shrinks)

 

Patience is an absolute must - patience to put up with all the frustrating times when the weather refuses to cooperate after you feel you have the hang of things..! :lol:

 

 


  • Bob R. likes this

#22 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 01 December 2020 - 10:39 PM

Seeing, seeing, seeing - collimation, collimation, collimation - focus, focus, focus. :lol:

 

Patience is an absolute must - patience to put up with all the frustrating times when the weather refuses to cooperate after you feel you have the hang of things..! :lol:

Thank you for the encouragement. Tomorrow night it looks like the clouds will push out, though the seeing forecast suggests Sunday will present a better window of opportunity in the jet stream after Friday's snow. Tomorrow I think I will work on honing collimation and focus in spite of the spotty seeing, to prepare.

 

All these late nights and mysterious credit card expenditures, my wife is beginning to suspect I have a mistress in the backyard;) 


  • Borodog likes this

#23 Borodog

Borodog

    Apollo

  • -----
  • Posts: 1,016
  • Joined: 26 Oct 2020

Posted 02 December 2020 - 12:58 PM

One trick that I've discovered is that if, as you are checking collimation on a star, you can see plumes of heat rising in the unfocused donut image of the star, your scope is not down to temperature and your collimation will likely appear out of whack. Let it get down to temperature, when the plumes are gone, before making any adjustments.



#24 Bob R.

Bob R.

    Vostok 1

  • -----
  • topic starter
  • Posts: 172
  • Joined: 27 Jul 2004
  • Loc: Connecticut

Posted 06 December 2020 - 08:35 PM

... you can see plumes of heat rising in the unfocused donut image of the star, your scope is not down to temperature and your collimation will likely appear out of whack....

Though I'm not sure my collimation error was caused by tube thermals, in the first picture (unaided collimation) I think I can see some minor tube thermals. Also note that what I thought was 'good' collimation, once I integrated 800 frames to null out the effects of seeing, turned out to be a tad off to the right. For an f/10 SCT, this appears to correspond to a 3 arc-minute tilt in the optical train. According to "Star Testing Astronomical Telescopes" by Harold Suiter, this minor tilt costs 20% of the image contrast at the mid-range of the fraction of maximum spatial frequency in the modulation transfer function. I suspect that at f/20, where I image Mars, the 3 arc-minute tilt has an even greater impact, but I don't have an MTF for that f-number.

 

Clearly I needed to up my collimation skills, so I downloaded Metaguide to bring a little more quantitative assistance to this critical element of imaging. Metaguide clearly showed the Airy disk was indeed skewed. Correction took about 1/16th of a turn on the secondary adjustment screw. As I believe the Meade LX200 10" secondary relies on 32 TPI screws on a 1.84" equilateral triangle, 1/16th of a turn is equivalent to a 4.2 arc-minute tilt. The second image is the result of the Metaguide 'tweak' to collimation ... looks decent.
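For anyone wanting to check that figure, the tilt follows from the screw travel over the lever arm (assuming, as above, 32 TPI screws on a 1.84" equilateral triangle - verify against your own scope's hardware):

```python
import math

def tilt_arcmin(turn_fraction: float, tpi: float, triangle_side_in: float) -> float:
    """Secondary tilt from turning one adjustment screw a fraction of a turn."""
    travel = turn_fraction / tpi                  # axial screw travel, inches
    lever = triangle_side_in * math.sqrt(3) / 2   # screw to opposite-edge pivot
    return math.degrees(math.atan(travel / lever)) * 60

tilt = tilt_arcmin(1 / 16, 32, 1.84)   # ~4.2 arc minutes, matching the estimate
```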

 

Now what I need is a night where seeing (according to Metaguide) is less than 4-5 arc seconds ;) Monday night it looks like I will have a 3-hour window of opportunity ;) Fingers crossed!!!

Attached Thumbnails

  • Unaided Collimation lores.jpg
  • Metaguide Collimated lores.jpg


#25 Borodog

Borodog

    Apollo

  • -----
  • Posts: 1,016
  • Joined: 26 Oct 2020

Posted 06 December 2020 - 09:12 PM

Well done. I need to download Metaguide myself . . . Too many new things to learn!



