
AI based wave front sensing and collimation

Collimation
610 replies to this topic

#26 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 08 April 2021 - 09:49 AM

I assume that this can also be used to make very precise tip-tilt adjustments for the popular CMOS cameras that have that capability.   Can I also check the collimation of my refractor?  Sorry if I missed that in the thread. 

 

jg

Yes, a tilted sensor plane will result in astigmatism-like defocused star shapes.



#27 rockstarbill

rockstarbill

    Voyager 1

  • *****
  • Posts: 11,480
  • Joined: 16 Jul 2013
  • Loc: United States

Posted 08 April 2021 - 01:37 PM

Any update on availability?

#28 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 08 April 2021 - 02:32 PM

Any update on availability?

Following our beta tests we received very interesting feedback from users on how to improve our software and the user experience. We are implementing most of their suggestions. We should be able to release the first version by the end of April.

For information, here is the GUI for our Collimator tool, among other tools such as PSF 2D and 3D:

SKWCollimator.jpg
 


  • psandelle and rockstarbill like this

#29 rockstarbill

rockstarbill

    Voyager 1

  • *****
  • Posts: 11,480
  • Joined: 16 Jul 2013
  • Loc: United States

Posted 08 April 2021 - 11:55 PM

Following our beta tests we received very interesting feedback from users on how to improve our software and the user experience. We are implementing most of their suggestions. We should be able to release the first version by the end of April.

For information, here is the GUI for our Collimator tool, among other tools such as PSF 2D and 3D:

SKWCollimator.jpg
 

Looks good. I have a need for something like this soon. :) 



#30 xthestreams

xthestreams

    Messenger

  • -----
  • Posts: 439
  • Joined: 18 Feb 2020
  • Loc: Melbourne, Australia

Posted 23 April 2021 - 08:46 PM

It will be worth the wait, that’s all I can say. 



#31 Peteram

Peteram

    Mariner 2

  • *****
  • Posts: 255
  • Joined: 03 Sep 2016

Posted 22 May 2021 - 03:31 PM

Any updates on availability?



#32 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 24 May 2021 - 08:08 PM

Any updates on availability yet?

We expect to release the SKW in a few weeks. The basic SKW capability will also be integrated into SkyGuide and SkyGuard.

This and the documentation update have taken a bit more time than anticipated.


  • xthestreams likes this

#33 Peteram

Peteram

    Mariner 2

  • *****
  • Posts: 255
  • Joined: 03 Sep 2016

Posted 25 May 2021 - 03:43 PM

Thanks!



#34 mkalika

mkalika

    Lift Off

  • -----
  • Posts: 17
  • Joined: 17 Oct 2013

Posted 31 May 2021 - 11:52 AM

We expect to release the SKW in a few weeks. The basic SKW capability will also be integrated into SkyGuide and SkyGuard.
This and the documentation update have taken a bit more time than anticipated.


Looking forward to it! I will be happy to be your beta tester. I own a GSO RC 8” scope which is 90%-95% collimated using a Howie Glatter laser, a Tak collimation scope, as well as the DSI method using a centered star. But something is still missing. I would love to try your new software ASAP 😀.

Will it come with instructions for which mirror I should adjust - secondary or primary? Also, will it be able to tell me if I have tilt and in which direction?

Thank you!!!

#35 pathint

pathint

    Mariner 2

  • *****
  • Posts: 296
  • Joined: 13 Nov 2019

Posted 08 June 2021 - 11:09 PM

Great use of AI. Can your NN algorithm solve for WF errors across the whole imaging field reasonably fast? If yes, you can build PSF models across the whole field. After a wavelet deconvolution can you then use this AI method to digitally correct the aberrations in any image?


  • RossW likes this

#36 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 09 June 2021 - 08:49 AM

Looking forward to it! I will be happy to be your beta tester. I own a GSO RC 8” scope which is 90%-95% collimated using a Howie Glatter laser, a Tak collimation scope, as well as the DSI method using a centered star. But something is still missing. I would love to try your new software ASAP.

Will it come with instructions for which mirror I should adjust - secondary or primary? Also, will it be able to tell me if I have tilt and in which direction?

Thank you!!!

SKW collimator is a quantitative tool. It provides feedback for collimation by the numbers. SKW features a score (ranging from 0 to 10) which is related to the scope's optical performance under a user-defined seeing. The idea is that you want to reach the level of collimation which fits your seeing conditions, essentially matching collimation with seeing.

Of course you could set the local seeing to any level you want, for instance FWHM_Seeing=0" (no seeing), but this has little value if your scope has a DL, say at 0.5" (first zero of the Airy disk), and your local seeing is 2". Yet SKW will let you choose whatever you like.
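For readers who want to check the numbers above: the first zero of the Airy disk sits at an angular radius of 1.22 λ/D. A quick sketch (the aperture and wavelength values below are illustrative, not tied to any specific scope in this thread):

```python
import math

def airy_first_zero_arcsec(aperture_m, wavelength_m=550e-9):
    """Angular radius of the first zero of the Airy disk, in arcseconds.
    1.22 * lambda / D gives radians; 206265 arcsec per radian."""
    return 1.22 * wavelength_m / aperture_m * 206265.0

# An 8" (0.203 m) aperture at 550 nm:
print(round(airy_first_zero_arcsec(0.203), 2))  # 0.68 arcsec
```

At 550 nm, a 0.5" first zero corresponds to roughly a 0.27 m (~11") aperture, so the 0.5" DL quoted above is in the range of common amateur scopes.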

SKW also features a target with a history (scatter plot) of the collimation score so that you can watch your progress live.

 

SKW_collimation_History.jpg

 

Above is an example: in the upper left corner is the SKW collimator target tool, and the blue spots are the history of the collimation process (here an 8" SCT under 1.5" of seeing). The bottom left corner shows the DL scope PSF under FWHM=1.5" seeing (SR=100%).

The center two images show the initial collimation status with a score of 0.5, a lot of astigmatism and some coma. The other two images show the result after collimation with a score of 10 for this seeing. Although the scope's DL PSF is not perfect (SR=67%), the end result for this seeing is essentially indistinguishable from the same scope diffraction limited (DL).

From a practical standpoint the collimation is good enough when using this scope under 1.5" (FWHM) seeing. Trying to reach an SR>80% would be a waste of time unless one plans to bring the scope to a much better seeing site, or into space.

 

Below is the defocused star after collimation (SR=67%) under 1.5" of seeing (30 second exposure).

 

SCTDefocused.jpg

 

One can see that the central obstruction (CO) shadow IS NOT centered. This is not a collimation issue but an actual mechanical offset of the secondary mirror mount and baffles; trying to center the CO shadow would lead to coma and a degradation of the scope's performance. It would add about 0.1 wave rms of horizontal coma and result in an SR=45%. Therefore, when well collimated, it IS NORMAL and expected for the CO shadow to be offset for this scope.
This is a very common situation for most scopes I have seen. The classical qualitative star test (by human inspection) assumes (and aims at) a centered CO shadow and may lead to a sub-optimal collimation. SKW accounts for such offset errors and uses the actual scope optical performance for collimation.
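The SR figures in the post are consistent with the Maréchal approximation, with independent aberrations adding in quadrature. A small sanity check (this is the standard textbook formula, not necessarily SKW's internal computation):

```python
import math

def strehl(rms_waves):
    """Marechal approximation: SR ~ exp(-(2*pi*sigma)^2), sigma in waves rms."""
    return math.exp(-(2 * math.pi * rms_waves) ** 2)

def rms_from_strehl(sr):
    """Invert the Marechal approximation to get rms wavefront error in waves."""
    return math.sqrt(-math.log(sr)) / (2 * math.pi)

# Scope currently at SR = 67%; add 0.1 wave rms of coma.
# Independent aberrations add in quadrature:
sigma = math.hypot(rms_from_strehl(0.67), 0.10)
print(round(strehl(sigma), 2))  # 0.45, matching the SR=45% quoted above
```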

For now SKW provides quantitative collimation feedback, which means that when using any collimation strategy one has direct and constant information about the scope's performance to drive the process, as discussed above.

 

In the future SKW will offer even more support by providing the direction of corrections.

We are working with people from ESO to implement the technology used in the VLT (and others) for active optics (this is different from AO; the goal of active optics is to keep the scope collimated in real time).

This technology can be used for any Cassegrain telescope (including SCTs and RCTs).

Using the wavefronts (WF) from several stars in the field (at once) one can compute the tilt/tip and offset of the mirrors (the mechanical collimation errors) with sign and magnitude. From there SKW will be able to tell the user which mechanical corrections are needed to improve the collimation.

 

This is possible since SKW has the capability to compute the WFs of many stars at once (from the same frame) across the field. It should be understood that the WF information (aberration kinds, magnitudes and directions) is paramount and the key here; this is very different from computing the star FWHM values across the field. The former provides the necessary information for collimation error calculations; the latter does not. The FWHM value is a summary of the aberrations' effects (the size of the blur) but it does not carry any information about the WF, and therefore about the mirror misalignment itself.
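To illustrate why the WF matters more than a FWHM summary: under scalar diffraction the pupil wavefront fully determines the PSF, and two aberrations of equal rms (hence similar blur size) produce differently shaped PSFs. A minimal numerical sketch (all values illustrative):

```python
import numpy as np

# Unit-circle pupil on a square grid
N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)

def psf_from_wf(wf_waves):
    """PSF as |FFT of pupil * exp(i*2*pi*WF)|^2, zero-padded for sampling."""
    field = pupil * np.exp(2j * np.pi * wf_waves)
    return np.abs(np.fft.fftshift(np.fft.fft2(field, s=(2 * N, 2 * N)))) ** 2

perfect = psf_from_wf(np.zeros_like(pupil))

# 0.1 wave rms of astigmatism vs 0.1 wave rms of coma: same rms error,
# but different wavefront shapes, hence different PSFs
astig = 0.1 * np.sqrt(6) * (X**2 - Y**2) * pupil       # Zernike Z(2,2), unit-rms scaled
coma = 0.1 * np.sqrt(8) * (3 * R2 - 2) * X * pupil     # Zernike Z(3,1), unit-rms scaled
sr_astig = psf_from_wf(astig).max() / perfect.max()
sr_coma = psf_from_wf(coma).max() / perfect.max()
print(sr_astig, sr_coma)
```

Both Strehl ratios come out similar (the rms is the same), yet the two PSF images differ in shape, which is exactly the directional information a single FWHM number throws away.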


Edited by Corsica, 09 June 2021 - 09:07 AM.

  • mkalika, risteon and severinbb like this

#37 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 09 June 2021 - 09:02 AM

Great use of AI. Can your NN algorithm solve for WF errors across the whole imaging field reasonably fast? If yes, you can build PSF models across the whole field. After a wavelet deconvolution can you then use this AI method to digitally correct the aberrations in any image?

Good point, this is in fact already part of our pending patent on this technology.

 

The short answer is YES.
The NN computation time is around 0.1 second per star on a standard laptop.

Most of the computing time is spent downloading the frame, pre-processing the image, and detecting stars.

Those steps, depending on the camera, USB speed and PC, take a few seconds (say 3 to 10 seconds depending on how many usable stars are in the field).
 


Edited by Corsica, 09 June 2021 - 09:11 AM.

  • pathint likes this

#38 pathint

pathint

    Mariner 2

  • *****
  • Posts: 296
  • Joined: 13 Nov 2019

Posted 11 June 2021 - 01:18 AM

Good point, this is in fact already part of our pending patent on this technology.

The short answer is YES.
The NN computation time is around 0.1 second per star on a standard laptop.

Most of the computing time is spent downloading the frame, pre-processing the image, and detecting stars.

Those steps, depending on the camera, USB speed and PC, take a few seconds (say 3 to 10 seconds depending on how many usable stars are in the field).
 

Very exciting! That could also be the end of many people's obsession with astrophotography here :D



#39 sixela

sixela

    James Webb Space Telescope

  • *****
  • Posts: 17,955
  • Joined: 23 Dec 2004
  • Loc: Boechout, Belgium

Posted 11 June 2021 - 05:14 AM

Of course you could set the local seeing to any level you want, for instance FWHM_Seeing=0" (no seeing), but this has little value if your scope has a DL, say at 0.5" (first zero of the Airy disk), and your local seeing is 2"

 

It depends -- not everyone is doing long exposure astrophotography. If you're imaging planets at 60fps, getting a perfectly round fuzzy blob for long exposures isn't "good enough".


Edited by sixela, 11 June 2021 - 05:14 AM.


#40 sixela

sixela

    James Webb Space Telescope

  • *****
  • Posts: 17,955
  • Joined: 23 Dec 2004
  • Loc: Boechout, Belgium

Posted 11 June 2021 - 05:17 AM

This is a very common situation for most scopes I have seen. The classical qualitative star test (by human inspection) assumes (and aims at) a centered CO shadow and may lead to a sub optimal collimation. SKW accounts for such offset errors and uses actual scope optical performance for collimation.

 

The classic qualitative test done wrongly, sure. But if you do it correctly you don't stop here, you come closer to focus and tweak until it's good there, possibly finishing off in focus if the seeing is good enough, looking at the appearance of the first diffraction ring. I didn't invent this, it's been done for ages, see e.g. 

http://www.astrophoto.fr/collim.html

 

But it requires a lot of experience (and luck). I can't do it well if I'm tired, to give just one example. And it's really hard to quantify anything you're doing and when you should stop, so a tool that works 100% of the time and gives you quantitative data is indeed invaluable.

 

But since the brain is definitely a neural network, should I be worried about infringing on your patent when I star test ? ;-)


Edited by sixela, 11 June 2021 - 05:21 AM.


#41 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 11 June 2021 - 06:52 AM

The classic qualitative test done wrongly, sure. But if you do it correctly you don't stop here, you come closer to focus and tweak until it's good there, possibly finishing off in focus if the seeing is good enough, looking at the appearance of the first diffraction ring. I didn't invent this, it's been done for ages, see e.g. 

http://www.astrophoto.fr/collim.html

 

But it requires a lot of experience (and luck). I can't do it well if I'm tired, to give just one example. And it's really hard to quantify anything you're doing and when you should stop, so a tool that works 100% of the time and gives you quantitative data is indeed invaluable.

 

But since the brain is definitely a neural network, should I be worried about infringing on your patent when I star test ? ;-)

You are correct, I totally agree.

My comment was essentially about raising awareness of some limitations of human inspection, or of alignment tools based on the defocused star symmetry and/or optical path pattern. I would consider this a first coarse step, which is useful for starting close to the solution.

When doing collimation using only human qualitative evaluation it is certainly important to do more, such as comparing intra and extra focal images as well as looking at the final PSF (in and near focus), on and off axis, for fine tuning.

Yet it may be hard to spot a small amount of aberration even under good seeing, where its impact is greatest, especially when the CO is offset.

Below is an actual example from a DK 20" scope in Chile under 0.6" of seeing:

 

OffsetCOandComa.jpg

 

The collimation was done by the local support team using the traditional star test (qualitative collimation).

As one can see, the defocused star pattern looks quite good and the PSF (in focus) seems good as well, but looking more carefully one can spot some horizontal coma. Yet this is hard to tell, since the seeing effect changes from frame to frame, and it would be even harder if the seeing were just a bit worse.

 

The SR is 67%; below is the 3D plot of the scope's MTF.

For comparison I have also added, below this, the DL MTF and the DL defocused star for this scope.

 

I have to say, to be honest, that when I started using WF analysis, first with a Shack-Hartmann (SH) analyzer and then with our AI based WF sensor technology, I was surprised to see how common this situation is, where the CO is mechanically offset.

Like most of us I was expecting and looking for a nice symmetrical defocused star pattern with concentric rings, only to find out that the end result, the WF, was not as good as it could have been.

 

The dominant aberration of a misaligned optic is usually coma on axis. In the context of a perfectly mechanically centered obstruction (and baffles, ...) coma leads to an offset CO in the defocused star pattern.

When perfectly collimated the CO shadow should indeed be centered. If the CO is mechanically offset, on the other hand, this is not true. It is quite natural to correct the perceived offset of the CO shadow in the defocused star when using the classical star test. As a matter of fact this is doable, but at the expense of some coma. Assuming the symmetry of the star pattern and shooting for a centered CO shadow during collimation can be misleading.
The most important aspect is the intensity pattern gradient; it should be uniform (from an axial-symmetry standpoint), which is what the ITE tells us (see my other post on this matter). That is more relevant than the pattern shape itself, but under seeing the scintillation may limit how well one can assess the intensity gradient, at least for short exposures or when using an eyepiece.

Having said that, before using any WF sensing technique I would recommend aiming for a symmetrical pattern (or using a collimation tool, for that matter) as a coarse step. The WF is then used for fine tuning.

 

SKW should be seen as a WF sensor tool without any dedicated hardware, such as an SH. When using metrology to quantify a scope's alignment one needs to be realistic, since we'll always find some error; no scope is perfect.
One should manage expectations and consider the task relative to the local seeing, which at the end of the day limits the long-exposure performance (except for LI).

 

As mentioned before, doing WF analysis across the field (multi-star) at once opens the door to optimal collimation, basically active optics.

One can use this information to infer the mirrors' misalignment parameters, angle and offset values, which in turn are used to control actuators (active optics) or to provide quantitative guidance to the user (which knob to turn in which direction).

The goal of SKW is to eventually provide field dependent WF/aberration and correction feedback on top of quantitative scope performance data. We are working to implement this in future SKW updates after validation using the VLT collimation strategy.


Edited by Corsica, 11 June 2021 - 10:37 AM.


#42 FredOS

FredOS

    Messenger

  • *****
  • Posts: 457
  • Joined: 16 Feb 2017

Posted 11 June 2021 - 06:56 AM

Well, it would be great if this were available for the summer - that is when there is more time to do this fine-tuning work. Looking forward to availability.



#43 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 11 June 2021 - 07:06 AM

It depends -- not everyone is doing long exposure astrophotography. If you're imaging planets at 60fps, getting a perfectly round fuzzy blob for long exposures isn't "good enough".

This is a very good point.

 

As mentioned you can set the seeing in SKW to any level you may like, including 0.

 

In the same context, we are considering eventually using this technology to boost the performance of lucky imaging (LI) for planets.

The AI based WF analysis can be done with extended sources as well, like a planetary disk.

The LI rate of success in the visible band is only a few percent, depending on the threshold used to retain a frame, usually set near the DL.

 

If we knew the WF for each frame, we could use this information to process (deconvolve) some of the frames which are not too badly aberrated yet fall below the threshold. This would work as long as the related MTF does not exhibit extreme drops relative to the noise floor.

My guess is that we could then recover many more frames and raise the LI rate of success into the range of 30%, depending on the seeing, while systematically correcting for any permanent (small) scope aberrations, such as TDE.
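Classical LI frame selection is just thresholding on a sharpness metric. The sketch below (a hypothetical `lucky_select` helper, using peak intensity as the metric) keeps the best few percent of frames, which is the baseline that WF-aware deconvolution would improve on:

```python
import numpy as np

def lucky_select(frames, keep_fraction=0.05):
    """Rank frames by a simple sharpness proxy (peak intensity) and keep the
    best few. Classical lucky imaging discards everything below the cut;
    WF-aware deconvolution could, per the post, rescue many rejected frames."""
    scores = np.array([f.max() for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:n_keep]  # indices of sharpest frames
    return [frames[i] for i in best]

# Toy demo on random "frames"
rng = np.random.default_rng(0)
frames = [rng.random((16, 16)) for _ in range(100)]
print(len(lucky_select(frames)))  # keeps 5 of 100
```

In practice the metric would be something better behaved than a raw peak (e.g. a Strehl-like estimate on a reference star), but the selection logic is the same.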


Edited by Corsica, 11 June 2021 - 07:08 AM.


#44 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 11 June 2021 - 07:27 AM

But since the brain is definitely a neural network, should I be worried about infringing on your patent when I star test ? ;-)

I do not think you should be worried about this any time soon ;-)

The patent application relates to unambiguous phase retrieval from intensity images (incoherent light), and the like, using engineered sources such as a defocused star, and to their applications. This approach is not specifically tied to NNs, but rather to the framework of inverse models, function approximation and regression. A NN may be used as a tool but this is not a limitation.

 

Accessing the WF is key for many applications, such as active and adaptive optics, or when making optical components (mirrors, lenses, ...), especially in real time and across the field at once. For instance we could use this approach, instead of a hardware based WF sensor, for live control of a mirror's figure during its production, working at the center of curvature.

As an example, we are involved in an active optics solution for the 4 m Turkish telescope (DAG project) using this AI WF sensing approach, taking images in the engineering field (off axis), outside the science field.

It is a quantitative technology, related to metrology, and in this regard I do not think that we should compare it with the human qualitative approach using a star test. Those are two different valid approaches to reach the same goal.

 

Although I do agree that the brain uses biological NNs, I am not sure this is the whole story (from a scientific standpoint); there is still much more at play in natural intelligence than that, but that is a totally different topic...


Edited by Corsica, 11 June 2021 - 07:36 AM.


#45 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 11 June 2021 - 08:22 AM

Well, it would be great if this would be available for the summer - this is when there is more time to do of this fine tuning work. Looking forward to availability.

Beta testing is well under way. The SKW collimator version should be released this summer, soon now.

COVID-19 did not help us speed up the process, but we are committed to doing this ASAP while making the best product by leveraging beta testers' feedback.
 



#46 sixela

sixela

    James Webb Space Telescope

  • *****
  • Posts: 17,955
  • Joined: 23 Dec 2004
  • Loc: Boechout, Belgium

Posted 11 June 2021 - 09:23 AM

This approach is not specifically tied to NNs, but rather to the framework of inverse models, function approximation and regression. A NN may be used as a tool but this is not a limitation.

How does the patent deal with things like WinRoddier (and the Roddier test in general) as prior art? Or is the crux that it's possible to do it in a computationally less expensive way?


Edited by sixela, 11 June 2021 - 09:24 AM.


#47 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 11 June 2021 - 10:06 AM

How does the patent deal with things like WinRoddier (and the Roddier test in general) as prior art? Or is the crux that it's possible to do it in a computationally less expensive way?

Roddier's method (curvature sensing, CS) uses the irradiance transport equation (ITE), a differential equation which is solved at run time through numerical non-linear optimization using this direct model. The ITE is an approximation, only valid under some conditions.
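For reference, the ITE discussed here is commonly written, in the paraxial geometrical-optics regime (Teague/Roddier form; notation may differ from SKW's documents), as:

```latex
% Irradiance (intensity) transport equation, paraxial regime:
% k = 2\pi/\lambda, I = irradiance, \phi = phase, \nabla_{\perp} = transverse gradient
k \,\frac{\partial I}{\partial z}
  = -\nabla_{\perp}\!\cdot\!\left(I\,\nabla_{\perp}\phi\right)
  = -\left(\nabla_{\perp} I \cdot \nabla_{\perp}\phi + I\,\nabla_{\perp}^{2}\phi\right)
```

Curvature sensing inverts this relation numerically from intra/extra-focal intensities, which is the run-time optimization being contrasted with the learned inverse model.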

 

Instead our AI based WF sensing (AIWFS) learns the inverse model, which is known to exist as long as the defocus (phase diversity) is larger than the considered aberrations (Hickson & Burley, 1994). The model is built beforehand during the training phase, once and for all, using only synthetic data computed from scalar diffraction theory (including noise).

 

At run time there is no iteration, nor any optimization for that matter, since we have already learned the relationship between the relative intensity (irradiance) and the wavefront.

Because there is, in general, no analytic solution for the ITE, we have expressed it instead using samples and machine learning in the form of a NN.

Computing the WF using an analytic equation (that we do not have) or the NN is basically the same thing. One can see the NN, after training, as an empirical model which was trained to output the expected Zernike coefficients just as an analytic solution would have done.

Our NN is a feedforward structure: its outputs (in our case the annular Zernike coefficients) are computed directly in one pass, with a demonstrated accuracy (in the lab) of around a few hundredths of a wave.
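As a toy illustration of the train-once / one-pass idea (NOT SKW's actual network: here a plain linear least-squares fit stands in for the NN, with a made-up three-mode aberration basis and synthetic defocused-star images):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)

# Three low-order modes (defocus, two astigmatisms) as a toy aberration basis
modes = np.stack([2 * R2 - 1, X**2 - Y**2, 2 * X * Y]) * pupil

def defocused_star(coeffs, diversity=1.0):
    """Synthetic defocused-star intensity: pupil phase -> far field (scalar diffraction).
    'diversity' is the deliberate defocus (phase diversity) in waves."""
    phase = 2 * np.pi * (diversity * modes[0] + np.tensordot(coeffs, modes, axes=1))
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fft2(field, s=(2 * N, 2 * N))) ** 2

# "Training", done once: random small aberrations -> images, fit a linear
# inverse model (a stand-in for the NN trained on synthetic data)
train_c = rng.uniform(-0.05, 0.05, size=(2000, 3))
train_x = np.array([defocused_star(c).ravel() for c in train_c])
A = np.hstack([train_x, np.ones((len(train_x), 1))])
W, *_ = np.linalg.lstsq(A, train_c, rcond=None)

# "Run time": a single matrix multiply, no iteration or optimization
img = defocused_star(np.array([0.02, -0.01, 0.015])).ravel()
pred = np.append(img, 1.0) @ W
print(pred.shape)  # (3,)
```

The point of the sketch is structural: all the expensive work happens before run time, and inference is one pass through the learned model, mirroring the direct-vs-inverse-model contrast described above.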

 

Also, the AIWFS uses a single defocused star; there is no need to consider intra AND extra focal images for the calculation as CS does, at least not in the context of active optics. I should also mention that we have trained the NN to deal with some error in the defocus set by the user (the phase diversity), which is basically an aberration like any other. This means we do not need to be super accurate when defocusing the telescope. In principle, when using CS one should get both images (intra/extra focal) with exactly the same amount of defocus; with SKW we do not have this constraint, and there is only one image anyway.

 

So in short, CS and AIWFS are two different approaches: the former solves the direct model each time, while the latter does it once and then at run time directly outputs the solution using the learned inverse model.

 

As a summary, below are two slides from one of my SPIE lectures on this topic:

 

DirectandInverseModels.jpg


 


Edited by Corsica, 11 June 2021 - 10:55 AM.

  • sixela likes this

#48 ChrisMoses

ChrisMoses

    Apollo

  • *****
  • Posts: 1,184
  • Joined: 22 Oct 2014
  • Loc: Fort Wayne, IN, USA

Posted 11 June 2021 - 05:09 PM

Any updates on availability, even as a beta tester? I love my ONAG and can't wait to get some quantification of my new set-up.



#49 ChrisMoses

ChrisMoses

    Apollo

  • *****
  • Posts: 1,184
  • Joined: 22 Oct 2014
  • Loc: Fort Wayne, IN, USA

Posted 18 June 2021 - 11:42 AM

So, if I understand things correctly, a trained NN is required for each scope. Is that correct? If so, can you tell me if the Tak FSQ-106EDX4 will be one of those available?

Thanks

#50 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,186
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 01 July 2021 - 08:29 PM

So, if I understand things correctly, a trained NN is required for each scope. Is that correct? If so, can you tell me if the tak fsq 106-edx4 will be one of those available?

Thanks

You are correct, there is a model per scope. We can generate models for any scope (refractor or reflector). For now we have a library for reflectors, since those are the primary target in the context of collimation, but there is no limitation to building models for refractors, which will be available too.

We are working on the documentation and tutorial for the software release.
 



