AI based wave front sensing and collimation

Collimation
624 replies to this topic

#1 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 22 February 2021 - 01:13 PM

Innovations Foresight is excited to introduce a patent-pending AI-based wavefront sensing technology (AIWFS) which provides an innovative, powerful, quantitative way to analyze telescope performance and collimation (alignment) status without any extra hardware!
It uses your own imaging camera in the process.

 

SkyWave (SKW) is a Windows based application which uses a defocused image of a star (actual or artificial) in the form of a FIT file.

SKW is in its final beta test phase.

 

For further information and details please visit our education page here:

 

https://www.innovati...opecollimation/

 

Below is a SKW screenshot taken during the analysis of a 17" Corrected Dall-Kirkham telescope.

 

SKGScreenshot.jpg

 

Here is the summary of the initial analysis prior to proper collimation of this scope:

 

CDK17SKW.png

 

The defocused star image (FIT file) on the left is the raw image from the telescope, taken through a red filter. The second image from the left is the simulated image reconstructed from the Zernike coefficients and related polynomials found by the SKW mathematical model, after adding the estimated seeing and the spider diffraction pattern. The simulation is monochromatic, while the raw image is polychromatic and therefore has less contrast. Both images are structurally identical, which shows how good the analysis and the retrieved WF, as well as the related aberrations, are, even under seeing-limited conditions.
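For readers who want a feel for how a handful of Zernike coefficients turn into a defocused star pattern, here is a rough, illustrative Python sketch using a simple Fourier-optics model. It is not SKW's engine; the aperture geometry and coefficient values are made up for the example:

```python
# Illustrative sketch (not SKW's algorithm): simulate a defocused star image
# from a few Zernike coefficients with a simple Fourier-optics model.
import numpy as np

N = 256                                    # pupil grid size
y, x = np.mgrid[-1:1:1j*N, -1:1:1j*N]      # normalized pupil coordinates
r, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = (r <= 1.0) & (r >= 0.35)           # annular aperture (central obstruction assumed)

# A few Zernike terms (unnormalized radial forms, for simplicity)
def zernike_defocus(r, t):  return 2*r**2 - 1
def zernike_coma_y(r, t):   return (3*r**3 - 2*r)*np.sin(t)
def zernike_sph(r, t):      return 6*r**4 - 6*r**2 + 1

# Hypothetical coefficients in waves: large defocus plus small coma/spherical
coeffs = {zernike_defocus: 8.0, zernike_coma_y: 0.15, zernike_sph: 0.05}
phase = sum(c*z(r, theta) for z, c in coeffs.items())      # wavefront in waves

# Far-field intensity of the aberrated pupil (monochromatic, no seeing)
field = pupil * np.exp(2j*np.pi*phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.max()
print("Simulated defocused star image computed, shape:", psf.shape)
```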

Our technology has been tested against an interferometer and a Shack-Hartmann (SH) sensor, showing an rms accuracy on the order of 10 milliwaves (~5 nm), or better (see references below).
The last two images are, from left to right, the scope PSF without any seeing (space conditions) and the WF error heat map.

 

 

References: SPIE proceedings and videos.

 

Low cost wavefront sensing using artificial intelligence (AI) with synthetic data

Dr. Gaston Baudat

Proceedings Volume 11354, Optical Sensing and Detection VI; 113541G (2020) https://doi.org/10.1117/12.2564070
Event: SPIE Photonics Europe, 2020, Strasbourg, France

 

A star test wavefront sensor using neural network analysis

Dr. Gaston Baudat and Dr. John B. Hayes

Proceedings Volume 11490, Interferometry XX; 114900U (2020) https://doi.org/10.1117/12.2568018
Event: SPIE Optical Engineering + Applications, 2020, San Diego, CA, USA

 

More information on the product options will be posted after the beta test phase is over, expected by the end of March 2021.

Feel free to contact us at any time.


Edited by Corsica, 22 February 2021 - 01:26 PM.

  • George N, Yu Gu, Phil Cowell and 9 others like this

#2 FredOS

FredOS

    Messenger

  • *****
  • Posts: 457
  • Joined: 16 Feb 2017

Posted 22 February 2021 - 05:40 PM

I assume you will explain in more detail how one would use this technology in a practical way, and what the potential limitations are. Is the objective, via trial and error, to increase the Strehl ratio, or to target a wavefront error level? Also, you seem to refer to using two cameras and a splitter.


Edited by FredOS, 22 February 2021 - 05:47 PM.

  • psandelle likes this

#3 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 22 February 2021 - 10:15 PM

I assume you will explain in more detail how one would use this technology in a practical way, and what the potential limitations are. Is the objective, via trial and error, to increase the Strehl ratio, or to target a wavefront error level? Also, you seem to refer to using two cameras and a splitter.

Our AI-based wavefront sensing (AIWFS) technology, used by SKW, processes FIT images from the user's imaging camera, acquired with whatever software the user prefers.
There is no need for any extra hardware beside the user's imager, focuser, and the SKW software.

 

There is another approach for retrieving a wavefront from a defocused star, known as curvature sensing (CS), proposed by Roddier and Roddier in the context of AO for astronomy.

It requires TWO defocused images of the same star, usually taken at the same time (at least in the AO context), before and after the focal plane. This means using two extra cameras when running CS in an AO configuration (to cancel scintillation), or at the very least moving the focuser to two different positions and exposing the same defocused star twice, long enough to average out the short-term seeing.

 

SKW does not need any of this, as mentioned above. Also, SKW outputs the wavefront information at run time without solving any differential equation, as CS does; that work has been done beforehand during the training of the artificial neural network, which is another fundamental difference from CS. Our AIWFS approach can run at video rates if needed. Finally, SKW has a built-in tolerance for quite large defocus errors (unlike CS) on the star used for the analysis/collimation; in the WF calculation, defocus error is just an aberration like the others.

SKW will come in different versions, at least two.

 

A basic one, for collimation purposes only. This version will feature a collimation score (a number between 0 and 10) and a related target display, as shown below. The target expresses the collimation score (distance to center) as well as the direction of the dominant WF aberrations.

The goal is to center the black dot inside the green area (green = excellent, yellow = good, orange = fair, red = poor collimation score). The collimation score is computed from the wavefront data while taking into account the scope specifications and the local seeing given by the user.

A score around 8 or above (green zone) means that, for this scope and the user-defined site seeing, the level of collimation reached is at the operational limit of the scope under that seeing; further collimation effort will not significantly change the image quality under those conditions.

Of course the user can change the seeing level, and therefore the resulting score, adjusting the collimation sensitivity up to the diffraction limit (DL).

In short, the targeted WF error level is related to the seeing and the scope, and is up to the user. The goal is to provide a practical, operational measurement and estimation of the scope's performance in its context and conditions of use.

Although SKW can be used to reach the DL (SR >= 80%), even under seeing-limited conditions, this goal may require quite some time and collimation effort for very little practical gain, if any, in the final image quality. Scopes with apertures around 2" or more are already seeing limited at most sites and in most conditions.
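To make the idea of a seeing-aware score concrete, here is a purely illustrative toy function. SKW's actual scoring formula has not been published; the thresholds and scaling below are assumptions for demonstration only:

```python
# Illustrative only: a toy seeing-aware collimation score on a 0-10 scale.
# The real SKW formula is not public; thresholds here are assumptions.
import math

def toy_collimation_score(wfe_rms_waves, aperture_mm, seeing_fwhm_arcsec,
                          wavelength_nm=550.0):
    """Map a measured RMS wavefront error to a 0-10 score.

    The tolerance grows with seeing: when the seeing disk dominates the
    diffraction limit, residual aberrations matter less.
    """
    # Diffraction-limited FWHM (arcsec) for this aperture: ~1.03 lambda/D
    dl_fwhm = 1.03 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265.0
    # Seeing-to-diffraction ratio; > 1 means the site seeing dominates
    seeing_ratio = max(seeing_fwhm_arcsec / dl_fwhm, 1.0)
    # Allowed RMS wavefront error: Marechal criterion (~0.075 wave) scaled
    # by how strongly seeing already blurs the image (assumed scaling).
    tolerance = 0.075 * seeing_ratio
    score = 10.0 * math.exp(-(wfe_rms_waves / tolerance) ** 2)
    return round(score, 1)

# Example: 0.12 wave RMS of coma on a 17" (432 mm) scope under 2" seeing
print(toy_collimation_score(0.12, 432, 2.0))
```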

 

BasicTargetPattern73.png

 

The basic version will be free, with an integrated simulation mode so users can learn how to use it; then they can buy a mathematical model for their scope.

 

A more advanced, pro version of SKW (for sale) will provide, beside the same collimation tool, much more information, such as aberrations (types and values), wavefront, SR, MTF, and so on. It will essentially be a wavefront sensor system which can be used on the sky or in the lab for metrology too (telescope assembly and quality control, optics testing, quantitative feedback for surface polishing/figuring, ..., for professionals and amateurs alike).

The SKW screenshot in my first post shows the current beta of the pro version.

 

As far as collimation is concerned, the tool provides quantitative feedback on the collimation process. We will eventually add more capabilities, like guiding the user during collimation by looking at the changes between successive corrections and the resulting WF/score, as well as more direct information on how to turn the actual knobs.

 

I hope this post helps answer some of your questions.
I'll release more information and examples in the coming weeks on this thread.


Edited by Corsica, 22 February 2021 - 11:29 PM.

  • George N, psandelle and Gary Z like this

#4 akulapanam

akulapanam

    Mercury-Atlas

  • *****
  • Posts: 2,922
  • Joined: 26 Aug 2012

Posted 23 February 2021 - 12:12 AM

Looks great!  Can't wait until it is available for purchase.



#5 FredOS

FredOS

    Messenger

  • *****
  • Posts: 457
  • Joined: 16 Feb 2017

Posted 23 February 2021 - 05:05 AM

Sounds very exciting. You mention video vs single frame. I suppose that video would offer superior results in terms of minimising seeing related issues?



#6 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 23 February 2021 - 12:14 PM

Sounds very exciting. You mention video vs single frame. I suppose that video would offer superior results in terms of minimising seeing related issues?

I mentioned that our AIWFS technology can run at video rates or faster to highlight the fact that the run-time calculations of this AI solution are fast and straightforward, using a feed-forward neural network. This is a significant advantage over classical approaches, such as CS, which iteratively solve a differential equation. It means that AIWFS is well suited to AO applications, where speed is a key factor.

 

In the context of telescope collimation we want to minimize the scope aberrations from misalignment, not to correct for the short-term seeing as AO does.

The good news is that on "long" exposures, say 10 to 20 seconds, the seeing averages out: the resulting time-integrated wavefront of a star is basically a plane wave at the entrance of the telescope pupil.

Therefore, in SKW we process defocused star images taken under such conditions, with exposures of 10 to 30 seconds, which is also useful to boost the SNR if required. The NN was trained with various seeing levels, including their relation to the telescope's actual aperture D. The key parameter is the ratio D/r0, where r0 is the Fried parameter for a given seeing, somewhere between one and a few inches. This approach makes SKW essentially immune to seeing in the wavefront calculation and the related collimation feedback.
SKW automatically does the necessary mathematics using your scope information and your local seeing estimate (FWHM) at the time of collimation. The user-provided FWHM value does not need to be very precise; a guess is good enough.
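For those who like to check the numbers, here is a small sketch using the standard seeing relations (generic textbook formulas, not SKW code) to turn a seeing FWHM guess into the Fried parameter r0 and the D/r0 ratio mentioned above:

```python
# Rough sketch: estimate the Fried parameter r0 from a seeing FWHM guess
# and form the D/r0 ratio. Standard approximations, not SKW's internals.
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def fried_parameter_m(seeing_fwhm_arcsec, wavelength_nm=500.0):
    """r0 such that the seeing-limited FWHM ~= 0.98 * lambda / r0."""
    return 0.98 * wavelength_nm * 1e-9 / (seeing_fwhm_arcsec * ARCSEC_TO_RAD)

def d_over_r0(aperture_mm, seeing_fwhm_arcsec, wavelength_nm=500.0):
    return (aperture_mm * 1e-3) / fried_parameter_m(seeing_fwhm_arcsec, wavelength_nm)

# Example: a 17" (432 mm) scope under a 2" seeing guess
r0 = fried_parameter_m(2.0)
print(f"r0 ~ {r0*100:.1f} cm, D/r0 ~ {d_over_r0(432, 2.0):.1f}")
```

At 2" seeing this gives r0 of roughly 5 cm (about two inches), consistent with the "one to a few inches" range mentioned above.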
 


Edited by Corsica, 24 February 2021 - 08:19 AM.

  • George N, psandelle and xiando like this

#7 FredOS

FredOS

    Messenger

  • *****
  • Posts: 457
  • Joined: 16 Feb 2017

Posted 23 February 2021 - 02:02 PM

Thank you for the detailed explanations. Looking forward to the formal release.

#8 TxStars

TxStars

    Gemini

  • *****
  • Posts: 3,477
  • Joined: 01 Oct 2005
  • Loc: Lost In Space

Posted 23 February 2021 - 10:12 PM

I did not see, or perhaps I missed, the computer specs.

How much CPU / GPU power is needed to run this?

Will it run in a short time? I don't want to spend all day waiting...



#9 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 24 February 2021 - 07:48 AM

I did not see, or perhaps I missed, the computer specs.

How much CPU / GPU power is needed to run this?

Will it run in a short time? I don't want to spend all day waiting...

Good question.

 

The computing time itself (star location, extraction, pre-processing and WF analysis) is about a few seconds on an average laptop using only the CPU; mine is a Lenovo T540p, not the latest and greatest by a long shot. I (still) run Windows 7 on this machine.

A GPU is not required; I do not use one.

 

The time to load the frame from the imager is of course up to the imaging software you are using.

SKW, on request, can watch a given directory (folder) for new FIT images (this includes a filter on the file name, if any). When a new frame is available it will auto-load and analyze it, if set to do so by the user. Alternatively, the user can load a frame manually. SKW does not connect to any hardware and therefore does not need to deal with any driver or related platform such as ASCOM; it works only with FIT files. Therefore it works with any imaging software, as long as that software can output monochromatic (B&W) FIT files (8-bit, 16-bit, or float format). If you use an OSC camera, just convert one of the color channels, usually red to minimize seeing effects, into a monochromatic image. In most cases a luminance (L) frame can be used too, but we recommend using a color filter to narrow the raw image bandwidth and increase the contrast when doable.
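As an example of the kind of pre-processing a user might do outside SKW, here is a short Python sketch using astropy to pull one channel out of a debayered OSC frame and save it as a monochromatic FIT file. The file names and the (3, H, W) channel layout are assumptions; adapt them to your own data:

```python
# Hedged example (not part of SKW): prepare a monochromatic FIT file from a
# debayered OSC frame so that any FITS-based tool can consume it.
from astropy.io import fits
import numpy as np

def extract_red_as_mono(rgb_path, mono_path):
    with fits.open(rgb_path) as hdul:
        data = hdul[0].data          # assumed shape (3, H, W): R, G, B planes
        header = hdul[0].header
    red = np.asarray(data[0], dtype=np.float32)   # channel 0 assumed to be red
    header["FILTER"] = "R (extracted from OSC)"
    fits.PrimaryHDU(data=red, header=header).writeto(mono_path, overwrite=True)

# Hypothetical usage:
# extract_red_as_mono("defocused_star_rgb.fit", "defocused_star_red.fit")
```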

 

One should understand that running the trained neural network (NN) is very fast; the longest step in the process, after a frame has been acquired and loaded, is locating and preprocessing the defocused star(s) in the frame. The NN is a feed-forward one; if I remember correctly, the time it takes to actually run the NN code itself is on the order of 0.1 s or less on a laptop like mine.

 

Building the training databases for a class of telescope/optics (learning, validation and test data), as well as the actual training of the NN, is a different story. Those tasks are done beforehand by us in the cloud; it may take days to weeks, depending on the size of the databases and the cloud computing resources allocated. We usually work with at least several hundred thousand to millions of samples.

However, this is totally transparent to the user. When you buy a mathematical model for your scope(s) from us to run with SKW, it comes in the form of an encrypted file which is decoded using your local SKW machine license key (the SKW basic free version still requires a license key from us). The resulting process extracts the model data used by SKW for the related scope. The model data file is only a few hundred MB, and the NN run time inside SKW is negligible.

 

It is worth noting that the SKW basic version (free) can analyze only one star at a time. There is a star selector which either automatically picks a star based on user preferences (say the one closest to the center, for an on-axis analysis) or lets the user select the star manually (from a star list provided by SKW or with the mouse on the frame). The pro version of SKW will offer multi-star analysis at once, within the same frame (assuming there is more than one star in the frame, of course). This approach provides field-dependent (on- and off-axis) aberrations, like field curvature, aberration maps (2D and 3D), and so on. Below is an example of a field curvature analysis, including the Petzval surface, while testing an achromatic lens on the bench in double pass using 7 artificial stars; the figure is extracted from the SPIE proceedings [1].

 

AIWFS_FieldCurvature.jpg

 

At the top of the figure there is the raw image of the 7 defocused artificial stars (pinholes) arranged in a vertical line.

The bottom 2D plot is the field curvature for this achromat along this line.

In red is the monochromatic (520 nm) achromat model curve, plotted using the OSLO optical design program fed with the Zemax data for this achromat; in black is the AIWFS solution (7 points) using a green filter centered at 520 nm.

The X (horizontal) axis is the field angle in degrees (0 being on axis), while the Y axis is the curvature in microns. Both curves are quite close; the difference can be traced back to the production tolerances of such an achromat and the use of a polychromatic source (green filter) for the AIWFS data acquisition. We could also build AIWFS mathematical models for polychromatic sources, but this is more work and, for most practical applications, the difference is not enough to justify it.
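To illustrate the kind of fit behind such a plot, here is a tiny sketch that fits a parabola to (field angle, focal-surface sag) samples. The numbers are invented for the example and are not the measured data from the figure:

```python
# Hedged sketch (made-up numbers, not the figure's data): fit a parabola to
# (field angle, sag) samples, as one could do with 7 artificial-star points,
# to estimate the field curvature along a line through the field.
import numpy as np

field_deg = np.array([-0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3])     # hypothetical
sag_um    = np.array([95.0, 42.0, 11.0, 0.0, 10.0, 44.0, 93.0])  # hypothetical

a, b, c = np.polyfit(field_deg, sag_um, deg=2)   # sag ~ a*theta^2 + b*theta + c
print(f"quadratic field-curvature coefficient: {a:.1f} um/deg^2")
```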

[1]

 

Low cost wavefront sensing using artificial intelligence (AI) with synthetic data

Dr. Gaston Baudat

Proceedings Volume 11354, Optical Sensing and Detection VI; 113541G (2020) https://doi.org/10.1117/12.2564070
Event: SPIE Photonics Europe, 2020, Strasbourg, France


Edited by Corsica, 24 February 2021 - 12:02 PM.


#10 Lead_Weight

Lead_Weight

    Gemini

  • *****
  • Posts: 3,241
  • Joined: 21 Jul 2016
  • Loc: Houston

Posted 25 February 2021 - 10:00 AM

This looks pretty amazing, and I look forward to trying it out on my Edge 11 at some point. My personal concern is not plugging in the telescope's information, but plugging in the seeing information, since it could be subjective. Is there any reliable way to determine seeing? Is there any way for the software to "observe a star" and determine the seeing? How much of a factor does seeing play in dialing in the hardware collimation? Could you start the software out with a large 3-4"/px seeing and keep lowering it, trying to dial in the collimation until it can get no better?


  • rferrante likes this

#11 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 25 February 2021 - 12:31 PM

This looks pretty amazing, and I look forward to trying it out on my Edge 11 at some point. My personal concern is not plugging in the telescope's information, but plugging in the seeing information, since it could be subjective. Is there any reliable way to determine seeing? Is there any way for the software to "observe a star" and determine the seeing? How much of a factor does seeing play in dialing in the hardware collimation? Could you start the software out with a large 3-4"/px seeing and keep lowering it, trying to dial in the collimation until it can get no better?

The seeing, in arc-seconds ("), provided to SKW has only a marginal impact on the model calculation and wavefront in the context of collimation.
Therefore you do not need to be very accurate about its value; the usual local average seeing will do, and for most of us it would be around 2" anyway.

The main impact of the seeing value is on the collimation score: the better the seeing, the lower the score for a given level of aberration.

The whole idea of this collimation score (between 0 and 10) is to tell you when there is no added value in trying to improve your collimation any further.

You can certainly shoot for the diffraction limit (DL) by setting the seeing value to 0" in SKW, but this makes little sense if you are seeing limited.

Anyway, SKW lets you place the bar (the seeing value) at the level you like, but I would suggest selecting the average local value during the collimation process; this leads to a sensitivity well matched to the scope's performance under that seeing.

If you want to know how your scope and the related score would look under a different seeing, better or worse, you can do so at any time by changing the seeing value.

 

Below is an example for a 17" scope before collimation. The leftmost image shows the scope PSF without any seeing (as from space); the score is poor at 2.4. We can clearly see coma and astigmatism, as well as some spherical aberration (the green circle is the limit of the Airy disk for that scope, the core of the PSF).

The second image from the left is the same scope with the same level of aberration under 1" seeing; the score is already 5.5 (fair), and we still see the dominant effect of the coma. The next image (from left to right) is with seeing at 1.5", leading to a score of 6.6 (good); it is hard to see any obvious distortion of the star shape, just a little coma left.
The next image is for seeing at 2" with a score of 7.3, and finally, with seeing of 2.5", the score is 7.8, almost perfect.

There are almost no visible differences, beside the size of the star, between seeing of 1.5", 2" and 2.5" for this scope, even though it is poorly collimated when checked against the DL (no seeing).
We consider the collimation as near "perfect", for a given seeing, when the score hits 8. This is because, even if you do not see much star distortion above a score of 6, different types of aberration affect the star shape differently, and off-axis performance may still be degraded.

We recommend, after on-axis collimation, using an off-axis star and comparing. The SKW pro version will process all the stars in the field to provide field-dependent aberrations and scores (in the form of a map) at once.

 

CollimationScore_vs_Seeing.jpg

 

Let me know whether this post answered your question.


Edited by Corsica, 25 February 2021 - 12:36 PM.

  • psandelle, dcornelis, happylimpet and 1 other like this

#12 mikeyL

mikeyL

    Apollo

  • *****
  • Posts: 1,177
  • Joined: 17 Dec 2007
  • Loc: Longmont, CO, USA

Posted 25 February 2021 - 01:06 PM

Gaston,

 

You guys have been busy! This looks very cool, and from my excellent experiences with the ONAG and SkyGuard I know this will be an amazing product once it is available. I look forward to running this on my 2 OTAs at some point.

 

Probably too early to share, but do you have any information yet on the cost of the pro version?

 

Best Regards,

 

ML



#13 FredOS

FredOS

    Messenger

  • *****
  • Posts: 457
  • Joined: 16 Feb 2017

Posted 25 February 2021 - 01:08 PM

Do you need a database of scopes, or a database per type/aperture of scope? I'm asking because I hope it will work with my PlaneWave CDK 12.5...

 

Thanks



#14 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 25 February 2021 - 01:47 PM

Gaston,

 

You guys have been busy! This looks very cool, and from my excellent experiences with the ONAG and SkyGuard I know this will be an amazing product once it is available. I look forward to running this on my 2 OTAs at some point.

 

Probably too early to share, but do you have any information yet on the cost of the pro version?

 

Best Regards,

 

ML

We are still in beta with people around the world for the SKW pro version. I do not have a final answer here but we may go for an annual subscription for this version. The basic version will be available first, sometime next month.
 


  • psandelle likes this

#15 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 25 February 2021 - 01:57 PM

Do you need a database of scopes, or a database per type/aperture of scope? I'm asking because I hope it will work with my PlaneWave CDK 12.5...

 

Thanks

We can create a model for any scope. All we need is the scope aperture (diameter), focal length and central obstruction (relative to the aperture diameter, not the area). If for any reason somebody has a custom request for a special setup, say a non-circular aperture, we can do this too.

 

Here is the basic procedure: after downloading and installing SKW you would first request a license key (even though the basic version of the software is free of charge).

This key unlocks SKW; then you can start using the simulation mode. When ready to buy a model, you request a model license using the instrument parameters defined in the instrument settings tab for this scope (you can have several scopes and models within the same SKW installation; you pay per model).

We send you the model as an encrypted file, which is decrypted by your installed copy of SKW. Licenses and keys are machine dependent.



#16 rockstarbill

rockstarbill

    Voyager 1

  • *****
  • Posts: 11,480
  • Joined: 16 Jul 2013
  • Loc: United States

Posted 26 February 2021 - 11:45 PM

What is the ballpark cost of the model for a single scope? I.e I have 2 AGO iDK scopes. 10" and 12.5", roughly what would I be looking at to use this for Collimation?

#17 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 02 March 2021 - 03:44 PM

What is the ballpark cost of the model for a single scope? I.e I have 2 AGO iDK scopes. 10" and 12.5", roughly what would I be looking at to use this for Collimation?

We'll offer different types of models. A collimation model would be in the range of $200 or so. Customers will have the opportunity to buy a bundle of several models at discounted prices. This type of model is mainly designed to be used with SkyWave Lite, which itself will be free (with a license key).

More advanced models, with more spatial resolution or specific figures of merit, outputting more information with higher accuracy, will also be available for use with SkyWave Pro. We'll also offer custom models for testing optical systems other than telescopes, such as mirror/lens figuring, to support the production/polishing process, quality control, or other relevant applications. We'll be open and flexible about providing solutions for various applications, on demand.
If anybody has a suggestion or a request for using this technology for a specific application, please feel free to contact us at any time.

 


  • xthestreams likes this

#18 rockstarbill

rockstarbill

    Voyager 1

  • *****
  • Posts: 11,480
  • Joined: 16 Jul 2013
  • Loc: United States

Posted 02 March 2021 - 04:27 PM

If you can confidently say that $200 leads to exceptionally good digital collimation, with a money-back guarantee, then I'd say that's reasonable. RC designs usually give people fits, so I could see this being good for them. Similarly, some people are very OCD about gear and would likely appreciate this as well.

The pricing is a bit steep for one computer to collimate one telescope. But if it's that deadly accurate and relatively easy to use then one could conclude it to be worth the cost.

I'd have to try it though.

Edited by rockstarbill, 02 March 2021 - 04:28 PM.

  • xthestreams likes this

#19 akulapanam

akulapanam

    Mercury-Atlas

  • *****
  • Posts: 2,922
  • Joined: 26 Aug 2012

Posted 02 March 2021 - 10:40 PM

Hmm, I think the $200 should get you a bit more than just collimation, especially since so many of the scopes will be the same. Also, the challenge people have with the RC is more about how to fix the problem vs. seeing that there is a problem, in my view.
  • Midnight Dan likes this

#20 xthestreams

xthestreams

    Messenger

  • -----
  • Posts: 439
  • Joined: 18 Feb 2020
  • Loc: Melbourne, Australia

Posted 08 March 2021 - 09:33 PM

Sign me up for testing! I've been using the GoldFocus mask on my RCT and it's not a terribly enjoyable experience!

 

Also, one thing to note: as I am collimating, the star moves. How would the software handle this?

 

Similarly, on- and off-axis collimation with an RCT usually requires that the star(s) under test be variously on axis (primary) and in the corners (secondary). Can it deal with both degrees of freedom, or is it better suited to CDKs and SCTs with spherical mirrors?

 

Final question: the "trick" with collimating long focal length trains is often dealing with the other sources of aberration, e.g. focal length being out of spec (mirror spacing), focuser centricity, focuser collimation, sensor tilt, etc. Are these all going to need to be eliminated before the tool has any useful data to share?

 

Not trivialising the great work you've done Gaston, it's a gnarly problem and I'm trying to understand what to expect.


  • rockstarbill likes this

#21 Corsica

Corsica

    Vendor (Innovationsforesight)

  • *****
  • Vendors
  • topic starter
  • Posts: 1,197
  • Joined: 01 Mar 2010
  • Loc: Columbus Indiana - USA

Posted 10 March 2021 - 12:27 PM

Sign me up for testing! I've been using the GoldFocus mask on my RCT and it's not a terribly enjoyable experience!

 

Also, one thing to note: as I am collimating, the star moves. How would the software handle this?

 

Similarly, on- and off-axis collimation with an RCT usually requires that the star(s) under test be variously on axis (primary) and in the corners (secondary). Can it deal with both degrees of freedom, or is it better suited to CDKs and SCTs with spherical mirrors?

 

Final question: the "trick" with collimating long focal length trains is often dealing with the other sources of aberration, e.g. focal length being out of spec (mirror spacing), focuser centricity, focuser collimation, sensor tilt, etc. Are these all going to need to be eliminated before the tool has any useful data to share?

 

Not trivialising the great work you've done Gaston, it's a gnarly problem and I'm trying to understand what to expect.

Many good questions!

 

I'll attempt to answer some.

SKW is a stand-alone piece of software which takes FIT files as inputs; therefore it does not connect to any hardware, nor does it control the mount (there is no driver involved). FIT files are either manually selected or automatically loaded by SKW when a new frame becomes available in a user-defined folder.
Tracking/slewing/centering and imaging are done by whatever application the user may have.
Having said that, SkyGuide and SkyGuard (SKG) will both eventually support the SKW Lite capability as a specific tab in the GUI. In that case our full-frame guiding technology will be used to track the defocused star and recenter it in between collimation adjustments, as long as the star remains inside the imager's FOV. SKG can guide on objects of any type and shape as long as there is something other than noise in the frame; it does not need a guide star, nor does it use any centroid.

As far as collimation of a reflector goes, there are two aspects: one is the alignment of the mirror optical axes (offset and tilt in the general case), the other is the correct spacing between the mirrors.

With a spherical secondary mirror we do not care about optical axis tilt (at least as long as vignetting is not an issue); only offset matters, since any radius of a sphere is an optical axis. In this context the tilt/tip of the secondary is used to control its offset, bringing its optical axis (one of the mirror radii) coincident with the primary mirror's optical axis. This makes collimation much easier, since we have only 3 degrees of freedom to deal with: tilt/tip and spacing.

With an RC scope, for instance, we have two hyperbolic mirrors, each with a unique optical axis (unlike a sphere). This means we now have 5 degrees of freedom to deal with: offset (X/Y), tilt/tip and spacing. To make matters worse, most amateur RCTs do not provide any secondary (or primary) offset adjustment, only tilt/tip.
As a result, if the secondary mirror has any offset relative to the primary (so far I have assumed the image plane is square with the primary mirror), the inevitable induced aberrations (mainly coma) can only be mitigated by tilt/tip adjustments.
The good news is that tilt/tip of the secondary (relative to the primary) can correct most of the offset error, up to some limit, making the scope DL (SR > 80%). This is one of the reasons why amateur RCTs usually do not offer any offset adjustment, unlike high-end scopes (RCOS, ...). I have seen many RCTs with secondary mirrors and/or mounts offset by a few millimeters; it is a very common occurrence.
The trick, with an offset secondary, is that such a DL-collimated scope will exhibit a de-centered central obstruction shadow in the defocused star pattern. When doing collimation we usually aim to center the secondary shadow, either during the initial daytime coarse collimation step, using lasers and/or the Takahashi collimation telescope and similar tools, or during the nighttime fine collimation step with a defocused star. This can make RCT collimation confusing and difficult without direct access to the telescope's actual optical performance as feedback (aberrations, wavefront, ...); see the example below.

Accessing the telescope's wavefront (WF), and therefore its aberrations by type and magnitude, not only provides quantitative information, it also allows the problem to be clearly separated into its parts. I have collimated many RCTs using a Shack-Hartmann (SH) WF analyzer, and I can tell you this makes a fundamental difference in the process, speed and quality of the result.
Collimation using the WF is quite a different approach from traditional collimation. Using the WF makes sense during the fine collimation step, which I would consider the actual optical collimation; the coarse step is all about the mechanical alignment of the optical components, mounts and baffles. From an optical standpoint this is only a coarse collimation, but it is an important step which should be done first, whatever tool one may use for the optical collimation, with an actual star on the sky or an artificial one in the lab.

First, especially with an RCT, you want to get the spacing between the two mirrors right. Looking at the scope's actual focal length provides a general idea, but since there are inevitable tolerances on the mirror figures, the resulting FL is usually off by a few percent relative to the scope specification.

In professional telescopes (RCOS, to pick an example) the master optician will mark the edge of each primary mirror with the back working distance (BWD) value between the primary's back and the image plane; this is the key parameter. When proper spacing is achieved, the FL may be (and will be) different from the nominal one, and this is fine. Using the FL as a predictor for mirror spacing is useful as a coarse adjustment step, but not enough.
There are various optical methods for checking the mirror spacing with higher accuracy: one is the use of a dedicated Ronchi ocular (PWI uses this for their scopes); another is the WF. From a WF one can compute aberrations, typically using the Zernike polynomials.

When the Zernike primary spherical term (balanced 3rd-order spherical) is as small as possible, the mirror spacing is optimal. The beauty of the Zernike polynomials is that one can look at a given type of aberration, for a given purpose, regardless of the others; they can be separated. This can be traced back to the orthogonal nature of the Zernike decomposition, a very useful property.
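As an illustration of how one might extract that term from a wavefront map, here is a rough least-squares sketch over a small, unnormalized Zernike basis. This is generic numerical optics, not SKW's implementation:

```python
# Illustrative sketch (not SKW code): least-squares fit of a few Zernike terms
# to a wavefront map sampled over the pupil, then read out the primary
# spherical coefficient used as the mirror-spacing indicator described above.
import numpy as np

def fit_zernike_terms(wf_map, obstruction=0.0):
    """wf_map: 2D array of wavefront error (waves), NaN outside the pupil."""
    n = wf_map.shape[0]
    y, x = np.mgrid[-1:1:1j*n, -1:1:1j*n]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    mask = np.isfinite(wf_map) & (r <= 1.0) & (r >= obstruction)

    # Small unnormalized Zernike basis: piston, defocus, x/y coma, primary spherical
    basis = {
        "piston":    np.ones_like(r),
        "defocus":   2*r**2 - 1,
        "coma_x":    (3*r**3 - 2*r)*np.cos(t),
        "coma_y":    (3*r**3 - 2*r)*np.sin(t),
        "spherical": 6*r**4 - 6*r**2 + 1,
    }
    A = np.column_stack([b[mask] for b in basis.values()])
    coeffs, *_ = np.linalg.lstsq(A, wf_map[mask], rcond=None)
    return dict(zip(basis.keys(), coeffs))

# Usage idea: adjust mirror spacing until abs(fit["spherical"]) is minimal.
# fit = fit_zernike_terms(my_wavefront_map, obstruction=0.35)
# print(fit["spherical"])
```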

 

It is important to understand that SKW is a true WF analyzer; we have shown (see the SPIE proceedings and conferences in the initial post) that our technology competes with interferometry, SH WF sensors and the like. SKW provides the WF and related aberrations without the need for any dedicated WF sensor, using your imager instead. Traditional WF sensors/analyzers, such as an SH WF sensor, cost at least several thousand dollars and require removing part of the user's setup to be replaced by the WF sensor hardware. SKW offers the same performance at a fraction of the cost, on the order of a few hundred dollars, without touching the user's setup at all. The SKW Lite and Pro versions both use the same WF engine, the difference being the GUI and the level of information/capability offered. Even the basic version (SKW Lite) provides WF-driven collimation feedback.

I am working on some tutorial material for SKW, and I do not want to add too much more to this already long post, but here is an example showing what value WF-driven collimation adds.
As I mentioned before, there are many amateur-level RCTs with very good optics facing mechanical limitations, such as mirror offsets.
Below is the defocused star image of an RCT (taken under seeing-limited conditions) having an offset secondary mirror and central obstruction. This scope is well collimated and DL (SR = 99.43%):

RCTOffset.jpg

One can clearly see that the secondary obstruction shadow is not centered and some of the rings are not concentric, yet the scope is perfectly collimated; I have faced this situation many times.
Below is its WF, retrieved using our AI-based WF sensing technology (to learn more about the WF please visit our education page at https://www.innovati...opecollimation/ ):

RCTOffsetWF.jpg

 

Now, if we did not have access to the scope's WF and the related aberrations/SR, one might conclude that the scope is badly collimated and needs some adjustment.

Without any other feedback we would usually aim at centering the secondary obstruction shadow while making the outer rings as concentric and circular as possible, using the classical qualitative collimation method. Below is the result after tilt/tip correction of the mirrors:

RCTOffsetTilt.jpg

 

This looks much better than the initial defocused star image. However, the resulting WF analysis tells us a different story:

RCTOffsetTiltWF.jpg

 

The collimation adjustments have created about 0.12 wave rms of vertical coma, leading to an SR of 57.84%. Below we can see the resulting comatic PSF; the scope is no longer DL:

 

RCTOffsetTiltPSF.jpg

 

I think this illustrates the value of using the WF (or similar tools) and aberrations to express collimation results.

This kind of situation can make RCT collimation very difficult and frustrating.

As you mentioned, there may be many different sources of error in the optical path. We already talked about using the Zernike spherical aberration term to set the correct spacing between the mirrors, which I would recommend doing first, solving that one problem on its own. Another good example would be a possible tilt of the image plane, for whatever reason.

A perfectly collimated RCT does not have any primary coma in the field (ON or OFF axis); this is one of the fundamental properties of RC scopes (assuming correct spacing between the mirrors, though). The RC design trades coma for astigmatism: without any corrector, an RCT exhibits astigmatism off axis, as well as field curvature (defocus), but no coma. Since access to the WF gives each aberration on its own, one can use this to advantage when collimating an RCT, for instance.

Even if the imager is tilted, so that a star centered on the chip is off axis relative to the scope, one can still collimate by looking only at the coma term from the WF. If we are off axis there may be some astigmatism, but we do not care about it at this stage; we just collimate the scope until the coma is removed (remember, RCTs have no coma on AND off axis).
Then we can look at the astigmatism (as well as the field curvature) across the field (in the corners); if it is not symmetrical we know that our imager is indeed off axis, and we can act accordingly, say by adjusting a tilt/tip correction device in the optical path.
Such a strategy, looking at one type of aberration at a time to solve one problem at a time, can only be implemented if one has access to the WF/aberrations in a quantitative way; this is a fundamental difference versus traditional collimation. It is also the way opticians align complex optics. For instance, we have the DoD as one of our customers, using an SH WFS (StarWave product) for collimation; it was bought before our AI-based WF technology was designed, but it does the same thing, providing quantitative WF information and aberrations.
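Purely as an illustration of that strategy, here is a toy decision helper built on Zernike coefficients like the ones fitted in the earlier sketch. The thresholds are hypothetical, and this is not how SKW is implemented:

```python
# Illustrative pseudo-workflow (assumptions, not SKW's implementation): remove
# coma first, then check corner astigmatism symmetry, as described above.
COMA_OK = 0.03            # waves RMS, hypothetical "good enough" threshold
ASTIG_ASYMMETRY_OK = 0.05  # waves RMS, hypothetical threshold

def next_collimation_step(on_axis, corners):
    """on_axis: dict with 'coma_x'/'coma_y' coefficients (waves).
    corners: list of dicts with 'astig_0'/'astig_45' measured in the field corners."""
    coma = (on_axis["coma_x"]**2 + on_axis["coma_y"]**2) ** 0.5
    if coma > COMA_OK:
        return f"Adjust secondary tilt/tip: residual coma {coma:.3f} waves"
    # RCT with no coma: compare corner astigmatism; asymmetry hints at sensor tilt
    astig = [(c["astig_0"]**2 + c["astig_45"]**2) ** 0.5 for c in corners]
    if max(astig) - min(astig) > ASTIG_ASYMMETRY_OK:
        return "Coma removed; asymmetric corner astigmatism -> check imager tilt"
    return "Coma removed and corner astigmatism symmetric: collimation done"
```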

I hope this big, long post provides more insight into the value of using quantitative aberration feedback for collimation.
 


Edited by Corsica, 11 March 2021 - 06:21 AM.

  • psandelle, BobT, akulapanam and 7 others like this

#22 xthestreams

xthestreams

    Messenger

  • -----
  • Posts: 439
  • Joined: 18 Feb 2020
  • Loc: Melbourne, Australia

Posted 10 March 2021 - 04:34 PM

Amazing! I have a GSO RCT that is in the final 5-10% of perfecting collimation and I can’t wait to get started testing this product. Looking forward to it Gaston.


Edited by xthestreams, 10 March 2021 - 09:36 PM.


#23 xthestreams

xthestreams

    Messenger

  • -----
  • Posts: 439
  • Joined: 18 Feb 2020
  • Loc: Melbourne, Australia

Posted 10 March 2021 - 09:43 PM

I've just read the post on your website. Twice. All I can say is, that PhD of yours was well earned! Mind-blowing (and brain-crushing) work. I don't think I will ever be smart enough to understand more than 10% of it. Genius-level stuff.


  • carver2011 and Myk like this

#24 Gleason

Gleason

    Viking 1

  • *****
  • Posts: 519
  • Joined: 03 Jan 2013
  • Loc: SF Bay Area

Posted 19 March 2021 - 05:10 PM

I assume that this can also be used to make very precise tip-tilt adjustments for the popular CMOS cameras that have that capability.   Can I also check the collimation of my refractor?  Sorry if I missed that in the thread. 

 

jg



#25 Peteram

Peteram

    Mariner 2

  • *****
  • Posts: 256
  • Joined: 03 Sep 2016

Posted 06 April 2021 - 08:57 AM

Any updates on availability yet?



