
Mars variable duration stacking experiments up to 10 minutes.

22 replies to this topic

#1 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 12:00 AM

There was a recent thread debating how long you can stack images in AutoStakkert!3, without derotating, before it becomes disadvantageous. The other night I had average seeing, but it remained consistent for a very long period, so on one of the 10 minute captures I decided to try variable duration stacking to see where it stopped improving. The result will vary wildly depending on how far away Mars is at the time and on your focal setup, so it pretty much only applies to that night with my equipment. That said, this was on the 19th of October with an estimated diameter of 21.84", on a CPC1100 with a 2x Powermate onto a ZWO ASI224MC.

 

These are the video reported statistics:

Camera=ZWO ASI224MC
Filter=L
Profile=Mars
Diameter=21.84"
Magnitude=-2.51
CM=338.8°  (during mid of capture)
FocalLength=5650mm
Resolution=0.14"
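
As a rough sanity check on how fast rotation should smear detail at this image scale: assuming Mars's sidereal rotation period of about 24.62 hours (my figure, not from the post) and a feature near the centre of the disc, the drift works out to about a third of a pixel per minute. A minimal sketch:

```python
import math

# Reported capture parameters (from the stats above)
diameter_arcsec = 21.84   # apparent diameter of Mars
image_scale = 0.14        # arcsec per pixel

# Mars sidereal rotation period, ~24.62 hours (assumed, not stated in the post)
rotation_period_min = 24.6229 * 60

# A feature at the centre of the disc drifts fastest; for small rotation
# angles dtheta (radians) its apparent shift is roughly R * dtheta.
radius_arcsec = diameter_arcsec / 2
dtheta_per_min = math.radians(360 / rotation_period_min)
smear_px_per_min = radius_arcsec * dtheta_per_min / image_scale

print(f"{smear_px_per_min:.2f} px/min")           # ~0.33 px of drift per minute
print(f"{smear_px_per_min * 4:.1f} px in 4 min")  # ~1.3 px over a 4-minute capture
```

That puts roughly 1.3 px of smear on a 4 minute capture, consistent with the 1-2 px rotation blur figures discussed in this thread.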

What follows is a stack of 1, 2, 3 and so on up to 10 minutes' duration, with exactly the same post-processing settings for comparison. There is almost certainly someone out there who could make these look better, but that's not the point of this exercise; it's simply to see what gave me the best quality. When I flick through these images directly, there is visible rotation moving from 1 to 2 minutes, along with less noise; after 2 minutes there is no further visible rotation, but progressively more blur. I then took this further and tried to pinpoint where image quality started to deteriorate: it improved up to 95 seconds (not shown here). Unless you scroll through them in the gallery in exactly the same position, it is very difficult to make a meaningful comparison.

 

Here's a link to just this gallery: https://www.cloudyni...ng-experiments/

 

60s

2020 10 19 1424 3 L Mars 060s limit000000 002100 L6 ap72g
 
120s
2020 10 19 1424 3 L Mars 120s limit000000 004200 L6 ap75g
 
180s
2020 10 19 1424 3 L Mars 180s limit000000 006300 L6 ap75g
 
240s
2020 10 19 1424 3 L Mars 240s limit000000 008400 L6 ap75g
 
300s
2020 10 19 1424 3 L Mars 300s limit000000 010500 L6 ap75g
 
360s
2020 10 19 1424 3 L Mars 360s limit000000 012600 L6 ap75g
 
420s
2020 10 19 1424 3 L Mars 420s limit000000 014700 L6 ap72g
 
480s
2020 10 19 1424 3 L Mars 480s limit000000 016800 L6 ap72g

 

540s

2020 10 19 1424 3 L Mars 540s limit000000 018900 L6 ap72g

 

600s

2020 10 19 1424 3 L Mars 600s L6 ap72g


  • Magellanico, Deven Matlick, Kenny V. and 7 others like this

#2 speedster

speedster

    Astronomy Architecture and Engineering at McCathren Architects

  • *****
  • Vendors
  • Posts: 549
  • Joined: 13 Aug 2018
  • Loc: Abilene, Texas

Posted 21 October 2020 - 12:14 AM

Howdy Ittaku!

 

I think the post you are referring to might have been mine.  The difference is that I was changing the number of frames stacked based upon the best frames rather than time.  Such as, the 100 best frames vs the 500 best frames vs the 1000 best frames stacks.  Those "best" shots, per AS!3, could have happened anytime in the 10 minute video.  So, stacking the first minute could be stacking a whole lot of the worst frames if seeing got better as the video progressed.  If you run out of something to do, please run stacks of your data using different numbers of "best frames".  I'd like to see if your results mirror mine.  My best stack was only the best 100 images out of 10,000.



#3 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 12:15 AM

speedster, on 21 October 2020 - 12:14 AM, said:

Howdy Ittaku!

 

I think the post you are referring to might have been mine.  The difference is that I was changing the number of frames stacked based upon the best frames rather than time.  Such as, the 100 best frames vs the 500 best frames vs the 1000 best frames stacks.  Those "best" shots, per AS!3, could have happened anytime in the 10 minute video.  So, stacking the first minute could be stacking a whole lot of the worst frames if seeing got better as the video progressed.  If you run out of something to do, please run stacks of your data using different numbers of "best frames".  I'd like to see if your results mirror mine.  My best stack was only the best 100 images out of 10,000.

Fair enough! I tried to keep the percentage stack the same instead, so I'm obviously comparing something different to yours since more minutes means more total frames.



#4 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 12:16 AM

For completeness, here's the 95 second stack.

 

2020 10 19 1424 3 L Mars limit000000 003325 L6 ap72g

Edited by Ittaku, 21 October 2020 - 12:26 AM.

  • Deven Matlick likes this

#5 Tom Glenn

Tom Glenn

    Gemini

  • -----
  • Posts: 3,151
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 21 October 2020 - 01:43 AM

Your experiment here is testing the variability and consistency of the seeing across the 10 minute span, rather than anything about rotation.  Rotation itself is easy to detect, even in individual frames without stacking, by comparing the first and last frames of even a short duration capture.  However, the alignment points are evaluated independently and undergo a warping process to match them to a reference frame during stacking.  In practice, this negates much of the effect of rotation within a given stack, but rotation is nevertheless readily visible when comparing different stacks.  I also don't notice much difference in quality between the outcomes here.  Your "optimal" 95s stack has arguably more detail than your 10m stack, but it also has more noise, so the differences are very small.  I would actually say that your 6m stack was superior to your 95s stack, with the same level of real detail but less noise.  Incidentally, many people routinely image Mars at 5-6 minute intervals with color cameras.


  • Foc and Ittaku like this

#6 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 01:50 AM

Tom Glenn, on 21 October 2020 - 01:43 AM, said:

Your experiment here is testing the variability and consistency of the seeing across the 10 minute span, rather than anything about rotation.  Rotation itself is easy to detect, even in individual frames without stacking, by comparing the first and last frames of even a short duration capture.  However, the alignment points are evaluated independently and undergo a warping process to match them to a reference frame during stacking.  In practice, this negates much of the effect of rotation within a given stack, but rotation is nevertheless readily visible when comparing different stacks.  I also don't notice much difference in quality between the outcomes here.  Your "optimal" 95s stack has arguably more detail than your 10m stack, but it also has more noise, so the differences are very small.  I would actually say that your 6m stack was superior to your 95s stack, with the same level of real detail but less noise.  Incidentally, many people routinely image Mars at 5-6 minute intervals with color cameras.

I understand that, which is why I pointed out that the seeing was almost constant... the quality graphs were almost unchanged over the 10 minutes. What I was comparing most of all was detail, and watching for when detail started to fall off in favour of less noise. I also acknowledge that "better" is a matter of taste, which is probably why I chose the one with the most detail.



#7 rkinnett

rkinnett

    Viking 1

  • *****
  • Posts: 653
  • Joined: 08 Aug 2018
  • Loc: La Crescenta, CA

Posted 21 October 2020 - 02:04 AM

Thanks for putting this together.  Nice photos by the way!  I like the idea of chopping up one long duration recording so you're not comparing from varying points in time over the course of an hour, although you still have to chop it up arbitrarily.

 

Just curious, what did your histogram look like?  What percentage did you stack?  How many total frames were in the full 10 min recording?

 

The whole idea behind shooting well beyond the 1x rotation blur limit (typically 3-4 mins) is that higher frame counts allow you to be more selective in your processing.  For most of us, seeing, collimation, focus, etc. keep us from really capturing 1px-scale detail anyway, so 1-2px of rotation blur is indiscernible and it's worthwhile to accept that rotation blur for the sake of capturing more frames.  But you don't necessarily want to waste that frame count advantage by stacking more frames than you need.  Let's say you capture 30k frames in 3 mins, 40k in 4, 50k in 5, etc.  Stacking 5% of each of these uses 1500 frames, 2000, and 2500, respectively.  You're not going to gain detail by stacking 2500 frames vs 1500 because you're already well over the number you need to minimize noise and represent detail stochastically.  What I'm trying to say is that you're not making the most of the frame count advantage of the longer duration sets by using constant percentage in this comparison.
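
The arithmetic in that paragraph, as a quick sketch (the 10k frames/min rate is just the example figure from the post; a real run would use the capture's actual frame rate):

```python
# Constant-percentage vs constant-count stacking for captures of
# increasing length, at 10,000 frames per minute (example figure).
frames_per_min = 10_000

for minutes in (3, 4, 5):
    total = frames_per_min * minutes
    pct_stack = int(total * 0.05)   # constant 5% of frames: 1500, 2000, 2500
    fixed_stack = 1_500             # constant count: always the best 1500
    print(minutes, total, pct_stack, fixed_stack)

# Constant percentage stacks more frames as the capture grows; constant
# count stacks the same number, so a longer capture only raises the
# quality of the frames selected, not how many are averaged.
```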

 

How many total frames were stacked in your 95 sec stack?  Consider stacking each of your recordings with that same frame count, rather than constant percentage.  Let's say that number is 500 frames.  The whole idea is that the best 500 frames across a 600 sec period should be substantially higher quality than the best 500 frames in a 95 sec period.  Does that make sense?

 

Setting that aside and just considering the results you've shown so far, you noted that you can discern more blur in the longer duration recordings when you flick through them.  It's very subtle.. I can't see it when scrolling through this page, and I'll bet that it would be less discernible if you didn't have such good data to begin with.  For many of us who feel like we're slogging through the mud with bad suburban seeing (not to mention room for improvement in skills and equipment), that subtle rotation blur should not affect our choice in recording duration up to 5 mins or so.  This point is even more important for folks who shoot well under 100 fps, where that extra duration is even more important for getting your total frame counts up.

 

I'm shooting 4 min recordings right now as I type.  I was going to shoot 5 mins based on the same recent discussions you were referring to, but I shied away from that only because 4 mins yields smoother animations.

 

Thanks again for compiling this.  Would love to see the same comparison with constant stack frame count rather than percentage, if it's not too much hassle?  I know this was a lot of work, and it's much appreciated!



#8 rkinnett

rkinnett

    Viking 1

  • *****
  • Posts: 653
  • Joined: 08 Aug 2018
  • Loc: La Crescenta, CA

Posted 21 October 2020 - 02:20 AM

Tom Glenn, on 21 October 2020 - 01:43 AM, said:

However, the alignment points are evaluated independently and undergo a warping process to match them to a reference frame during stacking.

Hi Tom, like we discussed in the other thread, this is simply not true.  Autostakkert does not warp frames, at least not at the AP level.  It may or may not apply total-frame skews, but I highly doubt it.. it's almost certainly simple translation based on least-error fit across the alignment points.. a common technique.  Emil can confirm if he comes across this.  The alignment process may compensate ever so slightly for motion blur, given that it tries to align all AP points including those at the center, but if you're using equally spaced APs then you have more around the edges than you do at the center, and depending on the weighting scheme, those likely have more influence.  If you were to set alignment points just around the center but not the limb, then maybe the alignment process would compensate for rotation blur, sacrificing your edges. But for general usage with evenly spaced APs, Autostakkert does not compensate for rotation blur in any appreciable way.
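
For illustration only (not AutoStakkert's actual code), a least-error fit of a pure translation across alignment points reduces to a weighted mean of the per-AP offsets; the offsets and weights below are made up:

```python
import numpy as np

# Hypothetical per-AP offsets (dx, dy) measured against the reference
# frame, e.g. by cross-correlating each AP patch. Units: pixels.
ap_offsets = np.array([
    [1.2, -0.4],
    [0.9, -0.1],
    [1.1, -0.3],
    [0.8, -0.2],
])
# Optional per-AP weights (e.g. local contrast); equal weights here.
weights = np.ones(len(ap_offsets))

# Least-squares fit of a single whole-frame translation t:
# minimize sum_i w_i * ||offset_i - t||^2  =>  t = weighted mean of offsets.
t = np.average(ap_offsets, axis=0, weights=weights)
print(t)  # -> [ 1.   -0.25], applied to the whole frame before stacking
```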



#9 BrettD

BrettD

    Explorer 1

  • -----
  • Posts: 87
  • Joined: 14 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 02:21 AM

Ittaku, on 21 October 2020 - 12:15 AM, said:

Fair enough! I tried to keep the percentage stack the same instead, so I'm obviously comparing something different to yours since more minutes means more total frames.

As Ryan was saying, by doing this it isn't a fair comparison, as the extra frames in the longer videos also reduce noise.

 

If you are trying to isolate rotation, you would need to stack the same number of frames each time (and ensure the stacked frames are evenly spread throughout the capture).

 

Maybe this could be done with PIPP, such that each AS!3 stack uses 100% of frames selected by PIPP evenly from several 1 min captures?



#10 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 02:23 AM

Thanks. I'll do a fixed number of frames comparison later as well.
  • rkinnett likes this

#11 Tom Glenn

Tom Glenn

    Gemini

  • -----
  • Posts: 3,151
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 21 October 2020 - 02:59 AM

rkinnett, on 21 October 2020 - 02:20 AM, said:

Hi Tom, like we discussed in the other thread, this is simply not true.  Autostakkert does not warp frames, at least not at the AP level.  It may or may not apply total-frame skews, but I highly doubt it.. it's almost certainly simple translation based on least-error fit across the alignment points.. a common technique.  Emil can confirm if he comes across this.  The alignment process may compensate ever so slightly for motion blur, given that it tries to align all AP points including those at the center, but if you're using equally spaced APs then you have more around the edges than you do at the center, and depending on the weighting scheme, those likely have more influence.  If you were to set alignment points just around the center but not the limb, then maybe the alignment process would compensate for rotation blur, sacrificing your edges. But for general usage with evenly spaced APs, Autostakkert does not compensate for rotation blur in any appreciable way.

Ryan, I do hope Emil comes across this, because I think you might be incorrect, although maybe we both are.  If there wasn't warping that occurred at the individual AP level, then you wouldn't get any benefit compared to global alignment.  What purpose would the APs serve?  The quality scores are calculated for each AP, rather than across the entire frame.  As you may be aware, Rolf Hempel has developed a competing freeware program for stacking, and has a very long thread about it here on CN.  His program uses the same basic mechanisms as Autostakkert, and one of the output files his program gives shows the actual distribution of pixel warp sizes across all APs.  It's literally in the name of the graph, as described in his documentation.  Shown below is an example of this, taken from Rolf's program, which I have used on the Moon.  Furthermore, Autostakkert has a step called "MAP recombination", which I interpreted as splicing the APs back together.  Additionally, there is empirical evidence that the AP alignment and stacking method does overcome many of the rotational problems that theory would predict.  So I do feel that the APs are treated individually, but I would welcome a complete explanation from someone "in the know".

 

post-290416-0-89813400-1578085528.jpg


Edited by Tom Glenn, 21 October 2020 - 03:00 AM.


#12 rkinnett

rkinnett

    Viking 1

  • *****
  • Posts: 653
  • Joined: 08 Aug 2018
  • Loc: La Crescenta, CA

Posted 21 October 2020 - 03:59 AM

APs serve the purpose of weighting frame alignments on points of interest so an optimizer can find a transformation which minimizes alignment error across those specific points of interest, whereas Global alignment searches for a transformation which minimizes pixel differences across the whole frame (or some similar whole-frame evaluation of goodness of match).  The AP method weighting scheme effectively provides "awareness of detail", if you will, to the alignment optimizer.  There is clearly value in that even if the product of both alignment methods is a simple transformation.



#13 Tom Glenn

Tom Glenn

    Gemini

  • -----
  • Posts: 3,151
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 21 October 2020 - 04:11 AM

rkinnett, on 21 October 2020 - 03:59 AM, said:

APs serve the purpose of weighting frame alignments on points of interest so an optimizer can find a transformation which minimizes alignment error across those specific points of interest, whereas Global alignment searches for a transformation which minimizes pixel differences across the whole frame (or some similar whole-frame evaluation of goodness of match).  The AP method weighting scheme effectively provides "awareness of detail", if you will, to the alignment optimizer.  There is clearly value in that even if the product of both alignment methods is a simple transformation.

So how do you explain the local warping indicated in the graph above?  I would recommend you read about Rolf's program, because it is open source.  Unfortunately, Autostakkert is not open source, so the code is not available and we don't really know exactly how it works.  Rolf's program, which produces very similar results and seems to operate under the same basic principles, appears to locally warp APs to conform to a reference frame, with the histogram distribution of warp shifts indicated in the graph I showed above.  Rolf's program is described in the thread below, although it has become massive.

 

https://www.cloudyni...ysystemstacker/



#14 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 04:16 AM

rkinnett, on 21 October 2020 - 02:04 AM, said:

Thanks for putting this together.  Nice photos by the way!  I like the idea of chopping up one long duration recording so you're not comparing from varying points in time over the course of an hour, although you still have to chop it up arbitrarily.

 

Just curious, what did your histogram look like?  What percentage did you stack?  How many total frames were in the full 10 min recording?

 

The whole idea behind shooting well beyond the 1x rotation blur limit (typically 3-4 mins) is that higher frame counts allow you to be more selective in your processing.  For most of us, seeing, collimation, focus, etc. keep us from really capturing 1px-scale detail anyway, so 1-2px of rotation blur is indiscernible and it's worthwhile to accept that rotation blur for the sake of capturing more frames.  But you don't necessarily want to waste that frame count advantage by stacking more frames than you need.  Let's say you capture 30k frames in 3 mins, 40k in 4, 50k in 5, etc.  Stacking 5% of each of these uses 1500 frames, 2000, and 2500, respectively.  You're not going to gain detail by stacking 2500 frames vs 1500 because you're already well over the number you need to minimize noise and represent detail stochastically.  What I'm trying to say is that you're not making the most of the frame count advantage of the longer duration sets by using constant percentage in this comparison.

 

How many total frames were stacked in your 95 sec stack?  Consider stacking each of your recordings with that same frame count, rather than constant percentage.  Let's say that number is 500 frames.  The whole idea is that the best 500 frames across a 600 sec period should be substantially higher quality than the best 500 frames in a 95 sec period.  Does that make sense?

 

Setting that aside and just considering the results you've shown so far, you noted that you can discern more blur in the longer duration recordings when you flick through them.  It's very subtle.. I can't see it when scrolling through this page, and I'll bet that it would be less discernible if you didn't have such good data to begin with.  For many of us who feel like we're slogging through the mud with bad suburban seeing (not to mention room for improvement in skills and equipment), that subtle rotation blur should not affect our choice in recording duration up to 5 mins or so.  This point is even more important for folks who shoot well under 100 fps, where that extra duration is even more important for getting your total frame counts up.

 

I'm shooting 4 min recordings right now as I type.  I was going to shoot 5 mins based on the same recent discussions you were referring to, but I shied away from that only because 4 mins yields smoother animations.

 

Thanks again for compiling this.  Would love to see the same comparison with constant stack frame count rather than percentage, if it's not too much hassle?  I know this was a lot of work, and it's much appreciated!

I live in a city of 5 million people spread over one of the largest urban sprawls in the world, with Bortle 8 skies in all directions for at least a half-hour drive, and some of the most unstable weather anywhere on earth, so I know your pain. As for the stacks, I only captured at 35 fps and stacked 18%, which corresponded to the 50% mark on the quality graph in AS!3.

 

Constant frame number stacks coming up shortly.


  • rkinnett likes this

#15 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 21 October 2020 - 04:32 AM

Here is the constant frame comparison - 380 frames. I simplified the post-processing slightly to make this faster for me, so the results are only comparable to each other, not to the first set. This time the comparison is quite different: noise never really goes down, even when stacking the best frames from all 10 minutes, but it doesn't get significantly blurrier with successively more frames either. At about (x=360, y=500) there are two almost mirror-image "hooks" that only show up in their full resolution once we stack the best from 5 minutes, and it's variable thereafter. At this point I'd call the 5 minute stack the best one, but it's noisier than the previous stacks.

 

One to Ten minutes successively:

2020 10 19 1424 3 L Mars limit000000 002100 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 004200 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 006300 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 008400 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 010500 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 012600 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 014700 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 016800 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 018900 L6 ap75g
2020 10 19 1424 3 L Mars limit000000 021000 L6 ap75g

  • Foc likes this

#16 Tom Glenn

Tom Glenn

    Gemini

  • -----
  • Posts: 3,151
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 21 October 2020 - 01:02 PM

rkinnett, on 21 October 2020 - 02:20 AM, said:

Hi Tom, like we discussed in the other thread, this is simply not true.  Autostakkert does not warp frames, at least not at the AP level.  It may or may not apply total-frame skews, but I highly doubt it.. it's almost certainly simple translation based on least-error fit across the alignment points.. a common technique.  Emil can confirm if he comes across this.  The alignment process may compensate ever so slightly for motion blur, given that it tries to align all AP points including those at the center, but if you're using equally spaced APs then you have more around the edges than you do at the center, and depending on the weighting scheme, those likely have more influence.  If you were to set alignment points just around the center but not the limb, then maybe the alignment process would compensate for rotation blur, sacrificing your edges. But for general usage with evenly spaced APs, Autostakkert does not compensate for rotation blur in any appreciable way.

Most experienced imagers here have known for quite some time that when using local AP alignment in AS!3, a different subset of frames is chosen to stack for each AP, and the final image is recombined.  This is why I have been confused by your posts that claim otherwise.  If you want to see a recent presentation by Emil about Autostakkert, see below.  In particular, starting at 17:45, he briefly describes local versus global alignment, and confirms what most of us have assumed: that the APs are evaluated independently, and that a different subset of frames is used for each AP.  This necessitates that the APs are broken apart and pieced back together.

 

https://www.youtube....eature=youtu.be

 

This dramatically impacts the ability to compensate for small amounts of rotation, which makes sense given the empirical evidence from many imagers, including those who produce very high resolution results, demonstrating that you can greatly exceed the recording length at which rotation becomes visible.  The other interesting point that Emil makes is that if you have a recording with variable transparency, you can occasionally run into trouble with local AP stacking and get a patchwork quilt, or "seam" artifacts.  This also confirms that the final image is composed of a mosaic of all the individual AP stacks.  This becomes very obvious on large lunar images, and in fact I made a post in which I was forced to use global alignment and stacking on a lunar image because of changing transparency during the recording.  In the local AP stacked image, you can see many artifacts corresponding to the shape of individual APs, because they represented different subsets of frames taken with different sky brightness, and so cannot be recombined without artifacts.

 

https://www.cloudyni...rocessing-tips/



#17 rkinnett

rkinnett

    Viking 1

  • *****
  • Posts: 653
  • Joined: 08 Aug 2018
  • Loc: La Crescenta, CA

Posted 21 October 2020 - 06:59 PM

Thanks Ittaku for that comparison.  You do fantastic work!  Also thanks for the additional acquisition details.

 

Stacking constant frame counts seems to have leveled the playing field and allowed the longer duration sets to exhibit their strength, subtle though it may be.  It's also fascinating that rotation blur is not pronounced.  That may have to do with frame selection, if the quality algorithm skews frame selection toward frames nearest the reference frame.  I'm speculating, which isn't particularly useful.  Bottom line is there does appear to be a slight benefit to shooting longer than the conventional ~4 mins when the higher total frame counts are leveraged by stacking more selectively (a lower percentage).

 

Thanks Tom for that link to Emil's interview, and to the thread on Rolf's PlanetarySystemStacker.  I'll check those out tonight.  Interesting point about AS using different frames for each AP.  I didn't know that, but I have not claimed anything to the contrary.  This description is very different from your earlier descriptions, in which you suggested AS artificially moves pieces of each frame around individually and "stitches" them back together.  Perhaps I misread your earlier posts?  Your experience is very highly regarded, so please don't read this as a slight: the "empirical" evidence you have mentioned repeatedly without reference is subjective and anecdotal unless you're referring to systematic studies like what Ittaku provided here, or consensus throughout this community.  Your experience indicates to you that longer captures are beneficial, and that's a highly valuable data point.  I know that approach works for you because you make great images.  But there are many renowned imagers who advise not exceeding ~4 mins based on their own experience and systematic studies (refs: Peach, Go, plus countless others in these forums and elsewhere).  It's not helpful to vaguely tout "empirical evidence" supporting your approach while disregarding their collective experiences.



#18 Tom Glenn

Tom Glenn

    Gemini

  • -----
  • Posts: 3,151
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 21 October 2020 - 07:37 PM

rkinnett, on 21 October 2020 - 06:59 PM, said:

Thanks Tom for that link to Emil's interview, and to the thread on Rolf's PlanetarySystemStacker.  I'll check those out tonight.  Interesting point about AS using different frames for each AP.  I didn't know that, but I have not claimed anything to the contrary.  This description is very different from your earlier descriptions, in which you suggested AS artificially moves pieces of each frame around individually and "stitches" them back together.  Perhaps I misread your earlier posts?

Perhaps we are having a terminology difference here, and I am not a software developer.  But from reading Rolf's descriptions of his program, and inferring how AS!3 works based upon what Emil has stated, it appears that the following steps take place.  For each video file, a number of frames, usually corresponding to all frames above a 50% quality score, are stacked and used to create a reference frame.  Each AP is then aligned independently to this reference frame using contrast features within the AP.  If an AP cannot be aligned because of unsuitable contrast features contained within, a larger AP covering the same area is used (if one was chosen initially) or the global frame is used for these "gap" regions.  But for all APs that are aligned, there is also a "dewarping" algorithm that applies pixel shifts to align each AP to the reference frame.  This is the local warping reported in Rolf's graph above.  This is also a detailed step in which I don't know the specifics, so perhaps if you are more familiar with this (or research the topic) you can report back on how it works.  But the individually aligned APs are then stacked and used to recreate the final image during MAP recombination.  Conceptually, I find this equivalent to "stitching" them together, and it is easy to find images with artifacts corresponding to the exact AP outlines, because each AP represents a stack of different frames, and so there is a possibility for artifacts in the final stack.
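
As a toy sketch of those steps on synthetic data (the quality metric, AP grid, and recombination here are simplified stand-ins for whatever AS!3 actually does, and the per-AP sub-pixel dewarp is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((20, 64, 64))   # 20 synthetic "video" frames
reference = frames.mean(axis=0)     # stand-in for the reference frame

def local_quality(patch):
    # Crude sharpness proxy: gradient energy inside the AP.
    gy, gx = np.gradient(patch)
    return float((gx**2 + gy**2).sum())

def stack_ap(frames, y, x, size, best_n):
    """Independently rank, select, and average the best frames for one AP.
    A real implementation would also shift each patch onto the reference
    (the 'dewarp' step) before averaging."""
    patches = frames[:, y:y+size, x:x+size]
    scores = np.array([local_quality(p) for p in patches])
    best = np.argsort(scores)[::-1][:best_n]  # a *different* subset per AP
    return patches[best].mean(axis=0)

# "MAP recombination" stand-in: paste each stacked AP back into the output.
out = np.zeros((64, 64))
size = 16
for y in range(0, 64, size):
    for x in range(0, 64, size):
        out[y:y+size, x:x+size] = stack_ap(frames, y, x, size, best_n=5)
```

Because each tile averages its own best-frame subset, mismatched subsets at tile boundaries are exactly where the "seam" artifacts Tom describes would appear.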

 

So, from your previous posts, it did appear that you were disputing that this was occurring, as you were suggesting that each frame is kept intact during stacking, with a simple translation applied, which does not appear to be the case.  Rolf also mentions that when a video file is stacked in AS!3 and in his PSS program, using identical parameters, the final stacked images are not identical, but appear to represent two planetary images that have undergone a slight rotation.  He attributes this to a different subset of frames being used to create the reference frame (possibly from different sections of the video), and further indicates that individual APs are mapped to this frame and seem to undergo some type of local distortion or dewarping during alignment.  But again, I would defer here to the software developers to better explain what their programs are doing.  If you play around with different AP sizes and create blinking animations of the results, you can see some surface features move significantly between the images, showing that the APs have individually been moved around during stacking.  This is especially true for large, high-res lunar images.

 

 

rkinnett, on 21 October 2020 - 06:59 PM, said:

the "empirical" evidence you have mentioned repeatedly without reference is subjective and anecdotal unless you're referring to systematic studies like what Ittaku provided here or consensus throughout this community.  Your experience indicates to you that longer captures are beneficial, and that's a highly valuable data point.  I know that approach works for you because you make great images.  But there are many renowned imagers who advise not exceeding ~4 mins based on their own experience and systematic studies (refs: Peach, Go, plus countless others in these forums and elsewhere).  It's not helpful to vaguely tout "empirical evidence" supporting your approach while disregarding their collective experiences.

To be fair, almost ALL evidence provided by any amateur imager is anecdotal (even that from the experts).  Further, there is widespread consensus throughout the community that you can record for longer than the theoretical maximums (rotation itself can easily be detected in individual frames in just 30-60s).  Further still, the ~4 min value you mention above is far greater than some of the claims parroted on this forum by folks who don't have any evidence whatsoever (some of those offering advice don't even image).  4 min for Mars is a reasonable limit, subject to wide variations due to the equipment used and the conditions, so there is really no disagreement there.  But there is a large difference between capturing with a C14 under good conditions and capturing with a C8 under mediocre conditions.  Nobody here would dispute the experience of seasoned imagers.  The reason many people get upset about some of the claims made here about spurious rotation effects is that they encourage beginning imagers to take very short videos, which virtually guarantees suboptimal results, so they get off to the wrong start in this hobby, and are possibly discouraged.  We see this countless times.  So, the issue here is more about setting the record straight for true beginners, and not so much the minor squabbling that occurs sometimes between more experienced imagers.
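For a sense of scale, here's a quick back-of-the-envelope calculation of how far a feature at the centre of the disk drifts due to rotation, using the capture parameters from the opening post (21.84" disk sampled at 0.14"/px) and Mars's ~24.62 h rotation period.  It ignores phase and axial tilt, so treat it as an order-of-magnitude sketch only.

```python
import math

def cm_drift_px(diameter_arcsec, period_hours, resolution_arcsec_px, seconds):
    """Apparent drift, in pixels, of an equatorial feature at the central
    meridian over a given capture span: drift = omega * R * t."""
    omega = 2 * math.pi / (period_hours * 3600)   # angular rate, rad/s
    radius = diameter_arcsec / 2                  # disk radius in arcsec
    return omega * radius * seconds / resolution_arcsec_px

# Values from the opening post: 21.84" disk at 0.14"/px, Mars at ~24.62 h.
for t in (30, 60, 240, 600):
    print(f"{t:4d} s  ->  {cm_drift_px(21.84, 24.62, 0.14, t):.2f} px")
```

At this image scale the CM drift is only about a third of a pixel per minute, so even a 10-minute span accumulates roughly 3 px at the disk centre, which helps explain why usable durations are longer than people often assume.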



#19 rkinnett

rkinnett

    Viking 1

  • *****
  • Posts: 653
  • Joined: 08 Aug 2018
  • Loc: La Crescenta, CA

Posted 21 October 2020 - 08:39 PM

I suspect the "warping" you're referring to is a measurement of where a correlated feature was found relative to the center of an AP.  I don't believe it reflects an AP-scale repositioning of the feature separate from the rest of a frame.  Your comment about changing one AP yielding a different result is not evidence one way or another.  Whether the alignment algorithm operates on whole frames or by repositioning pieces of frames, changing any single AP will invariably yield different AP weighting, different frame selection, different alignment measurements within the altered AP, and different whole-frame least-error alignment solutions.  There's no mystery here: different input yields different results.

 

All that aside, I fully agree with and appreciate your intent: disarming a common misconception (particularly among beginners) that they must avoid rotation blur even at the cost of frame count.  This discussion and Ittaku's analysis have been highly informative, and I hope someone reading this will be convinced that it's okay to shoot for more than a few minutes.  :)



#20 Tom Glenn

Tom Glenn

    Gemini

  • -----
  • Posts: 3,151
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 21 October 2020 - 11:28 PM

I suspect the "warping" you're referring to is a measurement of where a correlated feature was found relative to the center of an AP.  I don't believe it reflects an AP-scale repositioning of the feature separate from the rest of a frame.  Your comment about changing one AP yielding a different result is not evidence one way or another.  Whether the alignment algorithm operates on whole frames or by repositioning pieces of frames, changing any single AP will invariably yield different AP weighting, different frame selection, different alignment measurements within the altered AP, and different whole-frame least-error alignment solutions.  There's no mystery here: different input yields different results.

 

All that aside, I fully agree with and appreciate your intent: disarming a common misconception (particularly among beginners) that they must avoid rotation blur even at the cost of frame count.  This discussion and Ittaku's analysis have been highly informative, and I hope someone reading this will be convinced that it's okay to shoot for more than a few minutes.  :)

Ryan, I'm confused as to why you don't think the APs are stacked independently, especially given the direct quotes from Emil in the video.  But Autostakkert is not open source, whereas Rolf Hempel's PSS program is, and so this discussion has led me to look a bit more deeply into his documentation.  Rolf knows what he is talking about, and has done an excellent job not only with his software but also with its documentation, which sounds like it was actually his main objective: providing an open source format that we can all learn from (and potentially contribute to).

 

Please look at this document summarizing the algorithms that Rolf has used. 

 

https://github.com/R...thm_summary.pdf

 

The document is not very long, but I have called attention to a few relevant quotes below.  These are in the order they appear in the document, but there are gaps between them, so definitely read the source document for full details.

 

-from p. 12 "Ranking of Alignment Points":

...After all APs have been set, for each frame and each AP the image quality is computed, based on the alignment box around the AP. This is a very compute intensive operation...

 

...The qualities are stored for all frames in a list. The list is stored in the AP dictionary as “alignment_point[‘frame_qualities’]. A list of the best frame indices (up to the specified percentage of frames to be stacked) is computed and stored in the AP dictionary as “alignment_point[‘best_frame_indices’]. Note that these lists in general are different at different APs because of local seeing...

 

-from p. 13 "Frame Stacking":

...First, for every AP an array with the size of the AP patch is computed. It is filled with “weights” between 0 and 1. Weights are 0 outside the patch rim and increase linearly to 1 at the AP center...

 

...The weights for all points within the AP patch are stored with the AP at “alignment_point['weights_yx']”. In both coordinate directions the weights ramp up linearly from a small value on the lower patch boundary to 1 at the patch center, and from there ramp down again to a small value on the upper patch boundary...
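As an illustration of that weight ramp, here is a minimal reimplementation in numpy.  This is my own sketch of the description quoted above, not Rolf's actual code.

```python
import numpy as np

def ap_weights(n):
    """Separable linear weight ramp for an n x n AP patch: a small
    (but nonzero) value on the patch rim, exactly 1.0 at the centre.
    Illustrative sketch only, not PSS's implementation."""
    ramp = 1.0 - np.abs(np.linspace(-1.0, 1.0, n))
    ramp = np.maximum(ramp, 1.0 / n)   # keep a small value at the rim
    return np.outer(ramp, ramp)        # 2-D weights as an outer product
```

Multiplying each stacked AP patch by such a ramp before accumulation is what lets neighbouring patches fade into each other instead of producing seams.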

 

...If “number_stacking_holes” is zero, no background image is needed in stacking. In this case everything is set for stacking. If it is greater than zero, the stacked image contains holes, so it has to be blended with a background image. The background is computed as the average of the best frames. Only global shifts are applied, no warping. This image must be blended gradually with the stacked image...

 

...Next, the total shift at the AP is computed as the sum of the global frame shift and the local warp shift “[shift_y, shift_x]”. Using these shift values, function “remap_rigid” shifts the AP patch around the AP in the current frame and adds it to the AP’s stacking buffer...

 

-from p. 17 "Merging alignment patches":

...So far stacking was performed locally on the AP patches. Now those patches are blended into the global “stacked_image_buffer”. This is done by method “merge_alignment_point_buffers”. It is crucial at this step to avoid sharp transitions between patches. After all, they have been rigidly shifted, most likely using different shift values. Therefore, overlapping patches must be blended with each other. The difficulty is, however, that the program so far has no notion of AP neighborhood. This problem is solved by multiplying the AP patches with weight functions which smoothly go to zero on the patch rim...
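To see why weighted blending avoids sharp transitions, here is a toy merge of rigidly shifted patches into a global buffer, normalising by the summed weights.  The structure and names are mine, loosely modelled on the "merge_alignment_point_buffers" step quoted above.

```python
import numpy as np

def merge_patches(shape, patches):
    """Blend AP patches into one image: multiply each patch by its
    weights, accumulate, then divide by the summed weights so that
    overlapping patches fade smoothly into each other.  Sketch only."""
    acc = np.zeros(shape)
    wsum = np.zeros(shape)
    for (y, x), patch, w in patches:     # top-left corner, pixel data, weights
        h, wd = patch.shape
        acc[y:y + h, x:x + wd] += patch * w
        wsum[y:y + h, x:x + wd] += w
    # Where no patch contributed, leave zero (a real stacker blends in
    # the globally aligned background image here instead).
    return np.divide(acc, wsum, out=np.zeros(shape), where=wsum > 0)
```

With uniform weights, a region covered by two patches simply gets their average; with ramp weights, the patch whose centre is closer dominates, which is exactly what hides the patch boundaries.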


Edited by Tom Glenn, 21 October 2020 - 11:44 PM.


#21 rkinnett

rkinnett

    Viking 1

  • *****
  • Posts: 653
  • Joined: 08 Aug 2018
  • Loc: La Crescenta, CA

Posted 22 October 2020 - 12:51 AM

Whoa.  Fascinating!

 

You were right!  I sincerely apologize.

 

Some notes here confirm AS indeed follows a similar algorithm.

 

And now I think I concur with your suggestion that this piece-part alignment scheme compensates for rotation blur to an extent.



#22 Ittaku

Ittaku

    Viking 1

  • -----
  • topic starter
  • Posts: 654
  • Joined: 09 Aug 2020
  • Loc: Melbourne, Australia

Posted 22 October 2020 - 12:54 AM

So now that that's out of the way, what do the experienced imagers here think are reasonable upper limits for various objects/focal lengths?


  • rkinnett likes this

#23 Tom Glenn

Tom Glenn

    Gemini

  • -----
  • Posts: 3,151
  • Joined: 07 Feb 2018
  • Loc: San Diego, CA

Posted 22 October 2020 - 01:55 AM

Whoa.  Fascinating!

 

You were right!  I sincerely apologize.

 

Some notes here confirm AS indeed follows a similar algorithm.

 

And now I think I concur with your suggestion that this piece-part alignment scheme compensates for rotation blur to an extent.

No need to apologize.  As I said, I'm not a software developer, and so I always need to double check sources to make sure I haven't said anything that incorrectly attributes functions to software.  I was always working under the assumption that the APs serve to break apart a larger frame into individual "sub-images", and that these are independently aligned, stacked, and merged for a final result.  In this way, the final stack is something of a mosaic that represents the "best of the best" for every region of the image, throughout the duration of the recording.  It does appear this is correct.

Further, I think this absolutely explains why you can get good results stacking videos that are longer than you might expect if all the frames were aligned and stacked globally.  On Jupiter, for example, if you take a 3 minute recording, make one stack from the first 30s and another from the last 30s of the video, and then animate the result, you can easily see rotation.  So, you might expect that if you stack frames across the entire 3 minute span, you would see motion blur.  But at most focal lengths, and in most conditions, you don't (even with C14s in good conditions in many cases, e.g. Darryl frequently uses 3 minute captures on Jupiter).  This reflects the fact that the greatest apparent angular movement occurs near the central meridian, and the APs in this region can be aligned with reasonable accuracy over a short duration video, somewhat negating the rotation (up to a point).  The regions near the limb don't experience as much apparent shift, because of foreshortening.  So, ultimately, the final result is much better than you would theorize, and the longer recording allowed you to make use of more frames.
This represents a fundamental piece of information that is missing in many of the discussions that occur here, in which many people (some of whom don't even image) claim that you should never image Jupiter beyond 1 min, when we all know that is false.  This is what irritates many experienced imagers.
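The geometry behind "fastest at the central meridian, slowest at the limb" is easy to quantify: an equatorial feature at longitude λ from the CM sits at x = R·sin λ on the disk, so its apparent transverse speed is R·ω·cos λ.  A quick sketch, using approximate values I'm assuming (a ~46" Jupiter disk near opposition and a ~9.9 h rotation), not numbers from this thread:

```python
import math

def apparent_drift_arcsec(radius_arcsec, period_hours, lon_deg, seconds):
    """Transverse drift of an equatorial feature at lon_deg from the
    central meridian: x = R*sin(lon), so dx/dt = R*omega*cos(lon).
    Fastest at the CM, zero at the limb (foreshortening)."""
    omega = 2 * math.pi / (period_hours * 3600)
    return radius_arcsec * omega * math.cos(math.radians(lon_deg)) * seconds

# Jupiter near opposition: ~23" radius, ~9.9 h rotation, 180 s capture.
for lon in (0, 45, 80):
    d = apparent_drift_arcsec(23.0, 9.925, lon, 180)
    print(f"{lon:2d} deg from CM: {d:.2f} arcsec")
```

The cos λ falloff shows why per-AP alignment helps: the fast-moving APs near the CM track the feature motion, while APs near the limb barely move at all.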

 

 

So now that that's out of the way, what do the experienced imagers here think are reasonable upper limits for various objects/focal lengths?

Hard to say.  Best to experiment yourself, and to look around at what other people are doing.  When I first started imaging, I had come across much of the mythology, but one of the first people I saw repeatedly spelling this out for people here on CN was Darryl (Kokatha man).  At the time, he was advocating 3 minutes on Jupiter, 6 minutes on Saturn, and 6 minutes on Mars (for OSC cameras), although he was also careful to say that many people could go even longer.  Furthermore, he said that these values were based entirely on his own image results (of which you can find many examples) and not on any mathematical calculations.  So, this is what I started doing.  I've noticed that occasionally he has shortened his video durations, although not by much.  Many other people use even shorter captures, but take everything with a grain of salt.  The links provided earlier by Ryan include some decent ones, and some rather unhelpful ones.  The unhelpful ones include (unfortunately) most of the CN posts, because those are populated with many responses made without any actual imaging evidence.  The Christopher Go values, I believe, can be relaxed somewhat.  Chris is an amazing imager, but you need to keep in mind that he is imaging in very different (and better) conditions than most.  That said, his values were not too far off the norms that are commonly used.  But, as always, keep in mind that everything is anecdotal.  What works for someone else may not work for you.


  • rkinnett and Ittaku like this

