CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


Derotation of video stream vs images

9 replies to this topic

#1 chongo228

chongo228

    Viking 1

  • *****
  • topic starter
  • Posts: 522
  • Joined: 22 Jan 2017
  • Loc: Raleigh, NC

Posted 15 May 2019 - 11:36 AM

I've only experimented with derotation of still images in WinJUPOS. I was reading online that you can derotate the raw .SER or .AVI video stream. Has anyone tried both methods? Does one method have a clear advantage?

 

The video method outputs a "derotated video" that would then need stacking and post processing.

 

Would the following have the same outcome?

 

1) Take a nine minute video, derotate the video, stack the video and process as normal

 

or 

 

2) Take three individual videos that are three minutes long, stack them, derotate the three images, stack the three output images together and move on to post processing. 
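For a sense of why the capture length matters in either method, here is a rough sketch of how far a feature at the centre of the disc drifts during a capture. The numbers are assumptions for illustration: an apparent disc of roughly 45" and Jupiter's ~9.93 hour System III rotation period.

```python
import math

JUPITER_ROTATION_MIN = 9.925 * 60  # System III rotation period, in minutes

def central_smear_arcsec(capture_min, disc_diameter_arcsec=45.0):
    """Approximate drift of a feature at the disc centre during a capture."""
    dtheta = 2 * math.pi * capture_min / JUPITER_ROTATION_MIN
    return (disc_diameter_arcsec / 2) * math.sin(dtheta)

print(central_smear_arcsec(9))   # ~2.1 arcsec over nine minutes
print(central_smear_arcsec(3))   # ~0.7 arcsec over three minutes
```

So a single nine-minute video accumulates about three times the rotational smear of each three-minute segment, which is the distortion derotation (of either the video or the stacked images) has to undo.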

 

 

One of the threads talking about video derotation....

https://www.cloudyni...os-in-winjupos/

 

 

 



#2 ToxMan

ToxMan

    Skylab

  • *****
  • Posts: 4,226
  • Joined: 23 Jan 2011
  • Loc: Tucson, Arizona, USA

Posted 15 May 2019 - 11:44 AM

astrovienna has the most experience, I know of, using WinJUPOS derotation of video



#3 t_image

t_image

    Gemini

  • -----
  • Posts: 3,415
  • Joined: 22 Jul 2015

Posted 15 May 2019 - 01:02 PM

Without having worked with Winjupos (but familiar with it),

let me add from my experience with complex video editing...

Obviously with Jupiter, derotation refers to the planet's rotation, which shifts features during video capture.

But how much that change is apparent has to do with the spatial resolution each pixel is representing in your process,

as smaller sensor pixels and a larger FL will betray more movement earlier.
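That spatial-resolution point can be made concrete: the same few tenths of an arcsecond of rotational drift spans very different numbers of pixels depending on focal length and pixel size. The two setups below are made-up examples, not anyone's actual rig:

```python
def pixel_scale_arcsec(focal_length_mm, pixel_um):
    """Image scale in arcseconds per pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_length_mm

# Hypothetical setups: identical ~0.7" of rotational drift over 3 minutes,
# but very different drift measured in pixels.
smear_arcsec = 0.71
for fl_mm, px_um in [(2000, 5.86), (6000, 2.9)]:
    drift_px = smear_arcsec / pixel_scale_arcsec(fl_mm, px_um)
    print(f"FL {fl_mm} mm, {px_um} um pixels: drift ~{drift_px:.1f} px")
```

The long-focal-length, small-pixel system sees several times the pixel-level movement in the same interval, so it "betrays more movement earlier," as noted above.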

 

I would say that, depending on the de-rotation algorithm, you will get better results by de-rotating a video of any given length before stacking. The process can be thought of as aligning each frame of the video, given the distortion that planetary rotation introduces, so that you can then effectively stack the aligned frames afterwards, rather than stacking non-aligned frames and trying to de-rotate after the fact - because stacking before de-rotation, depending on spatial resolution, bakes in the very distortion the de-rotation procedure exists to reduce.

 

However, a video can easily be too long, or at least need to be separated into different image stacks... For example, if you take a video so long that the GRS moves across too much, the de-rotation procedure will not only be challenged to align the data from the frames, but will have to approximate data that was never collected.

So I'm sure there is a sweet spot of most usefulness, but again I imagine it would vary with the spatial resolution of the system.....



#4 Kokatha man

Kokatha man

    Hubble

  • *****
  • Posts: 15,176
  • Joined: 13 Sep 2009
  • Loc: "cooker-ta man" downunda...

Posted 15 May 2019 - 08:12 PM

"t_image" - allow me to observe that you're living up to your moniker of a contrarian to the hilt...arguing with your own conclusions!

 

Personally I've never found any difference between stacking "regular" timespan videos ("derotation of images" or "derotation of R/G/B images") and the derotation of videos, with what I see as all the extra work/time of video derotation.

 

"Your mileage may vary" of course, as they say, but looking back at Kevin's (Astrovienna) old thread - apart from Ferret slipping in one of his old Saturns for the 11 millionth time at the end of that old thread (I have issued him a severe warning about ever doing that again, incidentally!) - the other salient aspect is one that Mike Phillips raised in that discussion...

 

Seeing is so dynamic a factor, as anyone who has done any planetary imaging will know: even over a 2 or 3 hour session with very good seeing, it is often the case that a single or very small group of captures still stands out from the rest!

 

What this means is that regardless of mono or colour camera captures, there is a finite time to grab optimum data - capturing longer than (say) 3 x 60" R-G-B takes or 1 x 180" OSC takes runs the risk of losing possible short periods of "best seeing" in any single session. In my well-distributed tome on WinJUPOS I do state that the derotation-of-image-stacks approach can employ 3 x 180" each of R, G & B captures for a mono camera, besides the stacking of multiple 180" totals of R-G-B takes - but these days I would never recommend that, for the same reason I'd dissuade folks from derotating longer video streams: spending time on "longer-than-usual" captures suffers innately from the seeing aspects I mention, where, should the seeing drop significantly during these elongated captures, you might be "caught holding the baby!"


No offence to babies btw!

 

Hence my firm belief that video derotation offers nothing except more work - but as said, "your mileage may vary!"

 


  • RedLionNJ and t_image like this

#5 ToxMan

ToxMan

    Skylab

  • *****
  • Posts: 4,226
  • Joined: 23 Jan 2011
  • Loc: Tucson, Arizona, USA

Posted 15 May 2019 - 09:02 PM


 

"Hence my firm belief that video derotation offers nothing except more work - but as said, 'your mileage may vary!'"

 

Basically, I think that was astrovienna's conclusion, too.


  • t_image likes this

#6 yock1960

yock1960

    Fly Me to the Moon

  • *****
  • Posts: 5,065
  • Joined: 22 Jun 2008
  • Loc: (Crossroad of clouds) Ohio, USA

Posted 16 May 2019 - 05:53 AM

I will very often derotate and stack a number of images, as it is a relatively quick, painless task. Video derotation on the other hand is neither.

 

Steve


  • RedLionNJ likes this

#7 chongo228

chongo228

    Viking 1

  • *****
  • topic starter
  • Posts: 522
  • Joined: 22 Jan 2017
  • Loc: Raleigh, NC

Posted 16 May 2019 - 06:56 AM

Thanks guys.....that's what I was looking for.

 

I'm going to shoot with a 12" SCT, ASI174mc, and 2x powermate.  I'm planning to capture three or four 180 second .SER files, stack them, derotate/combine those images into one final image and post process that final image. 

 

Is nine to twelve minutes total capture enough? Should I go for more time? Any other bits of advice would be appreciated. 
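As a rough sampling check on that rig, the standard image-scale formula gives roughly 0.2"/pixel. The figures below are assumptions - the nominal 3048 mm focal length of a 12" f/10 SCT and the ASI174's published 5.86 µm pixel size - so the actual optical train may differ:

```python
def pixel_scale_arcsec(focal_length_mm, pixel_um):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

fl_mm = 3048 * 2   # nominal 12" f/10 SCT focal length with a 2x Powermate
px_um = 5.86       # ASI174 pixel size, microns
print(round(pixel_scale_arcsec(fl_mm, px_um), 3))  # ~0.198 arcsec/pixel
```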



#8 RedLionNJ

RedLionNJ

    Skylab

  • *****
  • Posts: 4,247
  • Joined: 29 Dec 2009
  • Loc: Red Lion, NJ, USA

Posted 17 May 2019 - 08:30 AM

chongo228 said:

Thanks guys.....that's what I was looking for.

 

I'm going to shoot with a 12" SCT, ASI174mc, and 2x powermate.  I'm planning to capture three or four 180 second .SER files, stack them, derotate/combine those images into one final image and post process that final image. 

 

Is nine to twelve minutes total capture enough? Should I go for more time? Any other bits of advice would be appreciated. 

This is a good strategy, well-suited to the equipment in use. All you can do is hope the seeing lives up to the instrumentation.

 

As some have demonstrated (notably Christophe), AS!2 or AS!3 does a great job of aligning data over a 2 or 3 minute period. However, it does not necessarily align the features to the mid-point of the capture. The consequence is that three AS!3 outputs may not combine cleanly in WinJUPOS afterwards - the alignments may be "off" based purely on the timestamp portion of the filenames.

 

A "resetting" of the stacked filename to match a "best guess" (say, 20% of the way through the capture timeframe) would be a nice feature for AS!3, in my opinion. However, since the frames get quality-ordered, the meaningfulness of such a time would be lost.
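For context, WinJUPOS conventionally treats a stacked image's observation time as the mid-point of the capture, which is a trivial calculation from the filename timestamp. The sketch below (the start time is a made-up example) shows the mid-point alongside the 20%-through guess suggested above - though, as noted, quality ordering of frames means neither is guaranteed to match where the features actually ended up:

```python
from datetime import datetime, timedelta

def effective_time(start, duration_s, fraction=0.5):
    """Nominal observation time for a stacked image: a chosen fraction
    of the way through the capture (0.5 = the conventional mid-point)."""
    return start + timedelta(seconds=duration_s * fraction)

start = datetime(2019, 5, 17, 2, 10, 0)    # hypothetical capture start (UT)
print(effective_time(start, 180))           # mid-point: 02:11:30
print(effective_time(start, 180, 0.2))      # 20% through: 02:10:36
```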

 

If you're not following this, it's likely my inability to explain my thoughts :(

 

Grant



#9 CrazyPanda

CrazyPanda

    Mercury-Atlas

  • *****
  • Posts: 2,825
  • Joined: 30 Sep 2012

Posted 17 May 2019 - 10:39 PM

I've found you can extract more usable data if you do derotation of the video stream in WinJupos and let it split the data into three separate color channels.

 

While I don't have a side-by-side comparison, a derotation of a 3 minute video of Jupiter at a 4000mm FL and 3.75 micron pixel sensor does improve sharpness of features. The three color channels that are produced can be processed independently in AutoStakkert, and combined in photoshop. This approach also seems to reduce noise slightly, allowing for slightly more aggressive wavelet or deconvolution settings.

 

I've also found a quirk with AutoStakkert where varying the size of the alignment points actually produces a slightly different noise pattern. Depending on the scale of your image, you can process the same video stream 6-7 times with different-sized alignment points. The level of detail will be the same, but the grain will vary. Varying grain is good, because you can go into Photoshop and use Median Stack Mode to average out that grain, smoothing the image.
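Median Stack Mode is simply a per-pixel median across the layers. A minimal numpy sketch of the same idea, using synthetic stand-in frames (the real inputs would be the AutoStakkert outputs):

```python
import numpy as np

def median_stack(frames):
    """Per-pixel median across a list of equally-sized images -
    the operation Photoshop's Median Stack Mode applies to a layer stack."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Synthetic stand-ins: the same underlying image plus uncorrelated grain,
# like 7 AutoStakkert runs of one stream with different AP sizes.
rng = np.random.default_rng(42)
base = np.full((8, 8), 100.0)
frames = [base + rng.normal(0.0, 5.0, base.shape) for _ in range(7)]
combined = median_stack(frames)
```

Because the grain is uncorrelated between the runs while the detail is shared, the median suppresses the noise without softening the detail the way a blur would.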

 

So my workflow for Jupiter is typically this:

 

* 3 x 180s videos

* Align and crop tightly in PIPP to speed up further processing

* Derotate each in WinJupos, creating separate RGB color channels for each stream (9 streams total)

* Process each stream a total of 6-7 times in AutoStakkert, varying the size of the APs. Produces anywhere from 54 to 63 stills.

* For each color from each stream, put all of the 6-7 stills into layers in a single image, and then use Median Stack Mode to average out the noise. Now back down to 9 stills.

* For each set of colors from a given stream, use Photoshop to recombine into RGB. Now down to 3 stills (one for each stream).

* Go back into WinJupos and combine/derotate the three stills to further reduce noise.

* Do some wavelet and deconv in AstraImage (which I've found to work MUCH better than wavelets in Registax for my data).

* Some final tweaks to color, noise reduction, and unsharp masking in Photoshop.

 

It's a LOT of processing, and probably unnecessary to go to such lengths if you live under better air, but this is the only way I even remotely extract decent data out of the short exposures and high gain I need to compensate for my bad air.

 

Example of this process:

 

Jupiter_2018_May_31_2.png

 

That was pulled out of maybe Pickering 5-6 seeing using an 8" LX90 and ASI224MC.

 

Without doing the RGB split from video de-rotation in WinJupos, and without doing several passes in AutoStakkert to get randomized noise per stream, the noise that would be present in this image would be substantially worse than it is.


Edited by CrazyPanda, 17 May 2019 - 10:46 PM.


#10 chongo228

chongo228

    Viking 1

  • *****
  • topic starter
  • Posts: 522
  • Joined: 22 Jan 2017
  • Loc: Raleigh, NC

Posted 18 May 2019 - 12:01 PM

 

 

CrazyPanda said:

So my workflow for Jupiter is typically this:

 

* 3 x 180s videos

* Align and crop tightly in PIPP to speed up further processing

* Derotate each in WinJupos, creating separate RGB color channels for each stream (9 streams total)

* Process each stream a total of 6-7 times in AutoStakkert, varying the size of the APs. Produces anywhere from 54 to 63 stills.

* For each color from each stream, put all of the 6-7 stills into layers in a single image, and then use Median Stack Mode to average out the noise. Now back down to 9 stills.

* For each set of colors from a given stream, use Photoshop to recombine into RGB. Now down to 3 stills (one for each stream).

* Go back into WinJupos and combine/derotate the three stills to further reduce noise.

* Do some wavelet and deconv in AstraImage (which I've found to work MUCH better than wavelets in Registax for my data).

* Some final tweaks to color, noise reduction, and unsharp masking in Photoshop.

 

 

So you derotate the video and then come back a second time and derotate the stacked images? I never thought about doing it that way.



