
Interferometry Comparison of Pre-Synta and Synta C11s and C8s

catadioptric Celestron SCT
20 replies to this topic

#1 glmorri

glmorri

    Sputnik

  • *****
  • topic starter
  • Posts: 30
  • Joined: 24 Feb 2013

Posted 13 December 2017 - 06:03 PM

An entertaining and educational cloudy-night activity is to go through the interferometry tests of various telescopes. Recently, while trying to decide whether to purchase a Celestron C11 SCT, I found myself looking at C11 interferometry as well as searching out opinions on the internet. EdgeHD versus standard C11? Are post-Synta-acquisition SCTs optically superior to pre-Synta Celestrons? Is the C11 a meaningful improvement over a C8 in light gathering and resolution? Will greater sensitivity to seeing largely nullify the optical advantages of the C11 at my location? Opinions differed greatly, and even objective interferometry test results were highly variable. I decided that a summary compilation of test results for C8s and C11s would be helpful.

 

I simply went through the major interferometry test sites on the internet and found the highest reported Strehl value, regardless of wavelength.  Only Strehl values without any correction/removal of detected spherical aberration, coma or astigmatism were used (tilt and defocus may have been removed).  Most of these were for green light.

 

Trying to determine whether an SCT was of Synta origin wasn't straightforward. I decided that only Celestron SCTs with the Synta-style back cell would be classified as Synta (see picture). This means that some older Syntas may have been misclassified, but it still permits looking at the often-expressed belief that the more recent Synta Celestron SCTs have superior optics. Strehls were omitted if there was no picture of the back cell or other indication of its origin (e.g., Edge).

 

These Strehl values come from three sources: 1) Wolfgang Rohr/Hassfurt, Germany (http://r2.astro-fore...astrofotografie), 2) fidgor.narod/Moscow, Russia (http://fidgor.narod.ru/Observers/test.html), or 3) AiryLab/Gréoux-les-Bains, France (https://airylab.com/astronomy-test-reports/)

 

 

C11 and C8 Strehl ratios: overall averages and standard
deviations, overall and per interferometry test lab.

 

 

C11s (Strehl)            pre-Synta    Synta*    EdgeHD

AiryLab average              0.931     0.893     0.853
n                                3         4         2
standard deviation           0.012     0.092     0.095

fidgor.narod average         0.802     0.902     0.926
n                                4        16         4
standard deviation           0.135     0.049     0.029

Wolfgang Rohr average        0.949     0.962     0.952
n                                4         6         1
standard deviation           0.029     0.015       —
-----------------------------------------------------
Overall C11 average          0.890     0.914     0.909
n                               11        26         7
standard deviation           0.103     0.054     0.059


C8s (Strehl)             pre-Synta    Synta*    EdgeHD

AiryLab average              0.957     0.960     0.960
n                                1         1         1

fidgor.narod average         0.698     0.901     0.878
n                                4        17         4
standard deviation           0.198     0.044     0.083

Wolfgang Rohr average        0.962     0.965       —
n                                3         1       —
standard deviation           0.022       —         —
-----------------------------------------------------
Overall C8 average           0.837     0.904     0.894
n                                8        19         5
standard deviation           0.181     0.046     0.065

*including EdgeHD
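As a sanity check, the overall rows are just n-weighted means of the per-lab averages. A minimal Python sketch (the small discrepancies in the third decimal come from rounding in the per-lab figures):

```python
# Reproduce the "Overall C11 average" row as an n-weighted mean of the
# per-lab averages quoted in the table above. Rounding in the per-lab
# figures means the result may differ slightly from the posted value.

def weighted_mean(pairs):
    """pairs: list of (lab_average, n) tuples."""
    total_n = sum(n for _, n in pairs)
    return sum(avg * n for avg, n in pairs) / total_n

# pre-Synta C11: AiryLab, fidgor.narod, Wolfgang Rohr
pre_synta_c11 = [(0.931, 3), (0.802, 4), (0.949, 4)]
# Synta C11 (including EdgeHD)
synta_c11 = [(0.893, 4), (0.902, 16), (0.962, 6)]

print(round(weighted_mean(pre_synta_c11), 3))  # 0.891 (posted: 0.890)
print(round(weighted_mean(synta_c11), 3))      # 0.914 (posted: 0.914)
```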

 

 

There is little difference between the pre-Synta and Synta C11 averages (Strehls: .890 vs .914). However, the variability (standard deviation) of the pre-Syntas is roughly twice that of the Synta Celestron SCTs (.103 vs .054). This suggests that manufacturing consistency and QC have improved. The widely accepted manufacturers' criterion for diffraction-limited telescope performance is a Strehl of .8 or higher. Improved consistency in manufacturing means that there are fewer 'duds', but possibly fewer 'exceptional' SCTs as well.
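For reference, the conventional link between Strehl ratio and RMS wavefront error (in waves) is the extended Maréchal approximation, under which the .8 criterion corresponds to roughly λ/14 RMS. A minimal sketch:

```python
import math

def strehl_from_rms(sigma_waves):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma)^2),
    with sigma in waves RMS. Reasonable for small aberrations."""
    return math.exp(-(2 * math.pi * sigma_waves) ** 2)

print(strehl_from_rms(1 / 14))  # ~0.82: the usual 'diffraction limited' mark
print(strehl_from_rms(1 / 20))  # ~0.91
```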

 

The picture for the C8s is a little different. The variability of the pre-Synta C8s is, again, greater than for the Synta Celestrons (standard deviations: .181 vs .046). Furthermore, the average Strehl for the pre-Synta C8s is lower than for the Synta Celestron C8s (.837 vs .904). If these differences are real, they suggest that more care was taken in the manufacture of pre-Synta C11s than C8s, but that both are made to the same standards in the Synta era.

 

The Edge SCT test results were included in the Synta category but were also looked at separately. Edge SCT average Strehls did not appear different from those of all Syntas for either the C8s (.894 vs .904) or the C11s (.909 vs .914). This is not surprising, since the purpose of the Edge optics is to correct off-axis aberrations inherent in conventional SCTs, while interferometry evaluates optics on-axis. Indeed, if we compare on- and off-axis Strehl ratios for a conventional SCT (figure 9.6, Telescopes, Eyepieces and Astrographs) versus an aplanatic flat-field SCT (figure 9.17) that "…differs only slightly from that of Celestron's EdgeHD" (p. 256), we see that the Edge design has a slightly lower (−.08) Strehl on-axis, but a better and much wider acceptable performance range off-axis.

 

Are these C8s and C11s representative of all Celestron SCTs? The numbers would suggest not, as there were fewer tested C8s (27) than C11s (37). This doesn't reflect the production numbers, but may result from the willingness of owners/buyers of the more expensive C11s to pay for interferometry. Also, there were far fewer pre-Synta than Synta 'scopes being tested. Access to interferometry is relatively new, and understandably there seems to be less interest in paying to have an old 'scope tested.

 

Differences in the average Strehl values reported by the test sites may be partially explained by differences in test equipment and protocols. In my inexpert opinion, Wolfgang Rohr's methodology seems comparable to the others' and, of itself, shouldn't explain the difference between his C11 ratings and the other test sites'. What accounts for his higher Strehls for the tested C11s? Possibly it is a sampling issue. His test reports suggest that his clients are often demanding amateurs who already possess an excellent instrument that is being tested as part of a repair, or who just want verification that their prized telescope is as good as they believe (or, perhaps, are obsessing over an imagined defect, something that many amateurs, myself included, can relate to). On the other hand, the fidgor.narod site often indicates that theirs are new telescopes being tested prior to sale, or used 'scopes being evaluated as part of a resale. Consequently, theirs may be a more representative sample of Celestron SCTs.

G. S. Smith, R. Ceragioli & R. Berry, Telescopes, Eyepieces and Astrographs: Design, Analysis and Performance of Modern Astronomical Optics, Willmann-Bell, 2012.

Attached Thumbnails

  • IMG_1435.JPG

  • JMP, TG, Live_Steam_Mad and 11 others like this

#2 Cpk133

Cpk133

    Apollo

  • -----
  • Posts: 1318
  • Joined: 14 Mar 2015
  • Loc: SE Michigan

Posted 13 December 2017 - 09:13 PM

Nice job. Can you make a histogram for each? It must have taken a bit of time to assemble the data. It would be interesting to see how the same scope would test at each place, or to see a gauge R&R for any of these test setups.


  • Live_Steam_Mad likes this

#3 beanerds

beanerds

    Viking 1

  • -----
  • Posts: 985
  • Joined: 15 Jul 2008
  • Loc: Darwin Australia

Posted 14 December 2017 - 03:27 AM

Thanks for the post , very interesting .

 

Any chance of this being done using C9.25's as I have a pre Synta XLT and I would estimate it to be about as good as the best here .

 

I once owned a Takahashi M210 at the time of buying my C9.25 ( had the cash and was interested in seeing if the hype was true ? ) and tested them side by side visually many , many times on most objects using TV eyepieces  and the C9.25 is that good I sold the Takahashi .

 

Thanks in advance .

 

Beanerds .


  • Mark Harry and Live_Steam_Mad like this

#4 luxo II

luxo II

    Apollo

  • -----
  • Posts: 1082
  • Joined: 13 Jan 2017
  • Loc: Sydney, Australia

Posted 14 December 2017 - 08:32 AM

What is the sample size for each type of scope ? If the sample size is 1 the data is meaningless frankly.

To achieve a desired level confidence that the results are within a specific uncertainty (ie accuracy) requires a number of samples in each case ...

Edited by luxo II, 14 December 2017 - 08:35 AM.


#5 TG

TG

    Vanguard

  • *****
  • Posts: 2372
  • Joined: 02 Nov 2006
  • Loc: Latitude 47

Posted 14 December 2017 - 09:40 AM

What is the sample size for each type of scope ? If the sample size is 1 the data is meaningless frankly.

To achieve a desired level confidence that the results are within a specific uncertainty (ie accuracy) requires a number of samples in each case ...


Actually reading the OP's post would be helpful.
  • eros312, Procyon and outofsight like this

#6 rmollise

rmollise

    ISS

  • *****
  • Posts: 22874
  • Joined: 06 Jul 2007

Posted 14 December 2017 - 10:16 AM

 

What is the sample size for each type of scope ? If the sample size is 1 the data is meaningless frankly.

To achieve a desired level confidence that the results are within a specific uncertainty (ie accuracy) requires a number of samples in each case ...


Actually reading the OP's post would be helpful.

 

 

Actually, reading the post doesn't help. He says he went to "the major interferometry test sites on the internet and found the highest reported Strehl value." Which sites? How many? Why choose the highest rather than an average?

 

The results are interesting, and pretty much confirm what most of us believe. But that is it..."interesting." ;)



#7 glmorri

glmorri

    Sputnik

  • *****
  • topic starter
  • Posts: 30
  • Joined: 24 Feb 2013

Posted 14 December 2017 - 10:40 AM

I appreciate the feedback.  I wish I had thought to include the 9.25 scopes when I was extracting the numbers from the test sites.  I have been intrigued by its reputation as having superior optics (resulting from a slightly longer FL on the primary mirror?).   I will find time to go back through and will post those numbers.  Thanks for the suggestion.

 

Grant


  • beanerds likes this

#8 Wildetelescope

Wildetelescope

    Apollo

  • -----
  • Posts: 1460
  • Joined: 12 Feb 2015
  • Loc: Maryland

Posted 14 December 2017 - 01:09 PM

 

 

What is the sample size for each type of scope ? If the sample size is 1 the data is meaningless frankly.

To achieve a desired level confidence that the results are within a specific uncertainty (ie accuracy) requires a number of samples in each case ...


Actually reading the OP's post would be helpful.

 

 

Actually, reading the post doesn't help. He says he went to "the major interferometry test sites on the internet and found the highest reported Strehl value." Which sites? How many? Why choose the highest rather than an average?

 

The results are interesting, and pretty much confirm what most of us believe. But that is it..."interesting." ;)

 

The OP does list the sources of the measurements and the sample number n for each category of scope. However, as some have noted, analysis of the presented data makes a number of assumptions, including: 1. the sample distribution is normal, or in other words can be treated as more or less random; 2. the number of samples is sufficient for the statistical analysis to be meaningful; and 3. each of the testing houses can be considered essentially identical with respect to the quality of their testing. There are of course other things I am sure I have missed. The point is that it is difficult to say much at all regarding the variability reflected by the standard deviation given the information presented.

 

For example, if we look at the C11 data generated by a single test house, fidgor.narod, we see a 3-fold reduction in the standard deviation when comparing pre-Synta and post-Synta scopes. However, this reduction corresponds to a 4-fold INCREASE in the number of samples tested. Can we say that Synta had better quality control than the US operation? Or are we simply seeing the effect of having more samples to evaluate?

 

Conversely, if we compare results from all three test houses for the pre-Synta C11s (where sample sizes are comparable), we see that AiryLab's and Rohr's standard deviations are MUCH smaller than the Russian house's. Does that mean that fidgor.narod got a bad set of samples? Or is there something different about their testing procedures?

 

What I see is that the coefficient of variation for the measurements ranges from 5-10%, which is not at all unreasonable for a mass-production process (5% is obviously better). What I take from the data as a whole is that Celestron's claim of diffraction-limited or better is generally achieved, with the occasional outlier. And that is not a bad thing to know. However, I do not think that you can go much beyond that.
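Computing the coefficient of variation from the overall C11 rows is a one-liner. A sketch using the table's figures (note the pre-Synta C11 CV actually lands a bit above 10%):

```python
def coefficient_of_variation(mean, sd):
    """CV: standard deviation expressed as a percentage of the mean."""
    return 100 * sd / mean

# Overall C11 rows from the first post's table: (label, mean, sd)
rows = [("pre-Synta", 0.890, 0.103),
        ("Synta",     0.914, 0.054),
        ("EdgeHD",    0.909, 0.059)]

for label, mean, sd in rows:
    print(f"{label}: CV = {coefficient_of_variation(mean, sd):.1f}%")
```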

 

Certainly worth the post, and I thank the OP for putting things together, but I would caution strongly against over interpretation.  As Uncle Rod Says.... Interesting....

 

Cheers!

 

JMD


  • Rustler46 likes this

#9 treadmarks

treadmarks

    Viking 1

  • -----
  • Posts: 959
  • Joined: 27 Jan 2016
  • Loc: Boston MA

Posted 14 December 2017 - 02:49 PM

Appreciate the summary of testing results. When I've looked at some of these sites directly, they are not very user friendly.

 

These tests confirm what I've heard before: the C8 averages around 1/6 wave PV. Good, but just short of great. It strikes a balance of performance, price, and practicality that seems like a good strategy for becoming the best-selling telescope of all time.

 

Given its track record, it would be odd if the C8 had poor optics. I think it's safe to dismiss such claims as people with axes to grind.


  • Asbytec and beanerds like this

#10 Eddgie

Eddgie

    ISS

  • *****
  • Posts: 24670
  • Joined: 01 Feb 2006

Posted 14 December 2017 - 06:31 PM

I won't get into the mechanics of statistics. I know little about the subject.

 

What I do know is that I have owned a lot of SCTs over the decades, and this is what I would say about them: they vary a great deal in quality, and the distribution has always seemed to me to be a bell curve centered in the .9 range.

 

Sadly, a telescope with a 33% obstruction, .8 Strehl mirrors, and 85% transmission is a very poor performer. This is why we read so many reports of C8s being beaten by 4" Apos on planets, and very few will beat a top-end 5" Apo on planets.
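As a rough illustration of how those numbers compound, here is a back-of-envelope sketch using the figures quoted above; it only covers light loss, not the separate contrast cost of the obstruction:

```python
# Rough throughput arithmetic for a 33% (by diameter) central obstruction
# and 85% optical transmission. This ignores the contrast penalty the
# obstruction imposes on the MTF, which matters just as much visually.
obstruction = 0.33    # fraction of the aperture diameter
transmission = 0.85   # overall optical transmission

clear_area_fraction = 1 - obstruction ** 2          # unobstructed aperture area
light_delivered = clear_area_fraction * transmission

print(f"{light_delivered:.0%} of the incoming light reaches the eyepiece")
```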

 

I had serious doubts that the change in manufacturing location would make a substantial difference. It is impossible to build telescopes with optics this size on a large scale and get a consistently much higher Strehl than this. It just takes far too much labor to budge average quality from .90 to .95. They would either have to charge more for them or lose money making them.

 

Nothing surprising here. It is a volume business model working in an arena where we measure absolute quality in 1/100ths of a single wavelength of light, and those two facts are quite in conflict with one another.


Edited by Eddgie, 14 December 2017 - 07:05 PM.

  • bobhen, JMP, Live_Steam_Mad and 2 others like this

#11 moshen

moshen

    Apollo

  • *****
  • Posts: 1275
  • Joined: 17 May 2006
  • Loc: San Francisco, CA

Posted 14 December 2017 - 07:55 PM

Interesting data, thanks for sharing!



#12 TG

TG

    Vanguard

  • *****
  • Posts: 2372
  • Joined: 02 Nov 2006
  • Loc: Latitude 47

Posted 14 December 2017 - 08:15 PM

 

 

What is the sample size for each type of scope ? If the sample size is 1 the data is meaningless frankly.

To achieve a desired level confidence that the results are within a specific uncertainty (ie accuracy) requires a number of samples in each case ...


Actually reading the OP's post would be helpful.

 

 

Actually, reading the post doesn't help. He says he went to "the major interferometry test sites on the internet and found the highest reported Strehl value." Which sites? How many? Why choose the highest rather than an average?

 

The results are interesting, and pretty much confirm what most of us believe. But that is it..."interesting." ;)

 

Rod, to you too I would say: please (re-)read the original post. The sites are listed there and are pretty much the only ones who regularly test amateur scopes. Their test equipment and methodology may differ but the OP was careful to not compare numbers between them and only show the difference for each scope group. One hopes that each lab's methodology is consistent enough that such comparisons make sense.

 

IMHO, there are concerns, already acknowledged by OP, but to call them out, I think they are these:

  

  1. Selection bias: The labs whose test results are quoted are not picking samples off the production lines. They are not even a dealer testing scopes they receive. Rather people are sending them scopes to test. When this happens, the telescope tested tends to be on the extremes: either very good or very bad.
  2. Smallish sample size: Even though the OP has been diligent in providing standard deviations, the definition of standard deviation (σ) for normal distributions means that any particular scope in the tested population has a 68% chance of being within ±σ of the average. E.g., for one sample set with average 0.926 and σ = 0.029, you have a mere 68% chance that the actual value lies between 0.897 and 0.955. Further, there's a 95% chance (the one we'd like) that the actual value lies within ±2σ. But this means a larger range: 0.868 to 0.984. Even with a larger sample set, σ may still be large, but that would just mean that there is a large variance in quality.
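The ±σ arithmetic in point 2 can be reproduced directly. A minimal sketch (the normality assumption is the same one flagged above):

```python
def sigma_range(mean, sigma, k):
    """Interval mean +/- k*sigma: covers ~68% of a normal
    population for k=1 and ~95% for k=2."""
    return (mean - k * sigma, mean + k * sigma)

# The fidgor.narod EdgeHD C11 figures used in the example above.
mean, sigma = 0.926, 0.029

print(tuple(round(x, 3) for x in sigma_range(mean, sigma, 1)))  # (0.897, 0.955)
print(tuple(round(x, 3) for x in sigma_range(mean, sigma, 2)))  # (0.868, 0.984)
```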

Even with the small data set, I think the OP showed that the quality of C11s has tended to stay the same while C8s improved slightly in moving from US-made to Synta, and that EdgeHD didn't bring any great increase in quality. For this I thank the OP.

 

Tanveer.


  • JMP, Live_Steam_Mad and fred1871 like this

#13 Cpk133

Cpk133

    Apollo

  • -----
  • Posts: 1318
  • Joined: 14 Mar 2015
  • Loc: SE Michigan

Posted 14 December 2017 - 09:21 PM

If you want to draw any conclusions about the averages, you need to do a two-sample t-test. If you want to compare the std dev between samples, you can use an F-test. The sample sizes are indeed small, and I doubt you can draw any conclusions with a high degree of confidence.

All of this assumes that the measurement systems are repeatable and capable of discriminating between good and bad. I wouldn't put any faith in any of the data unless I knew that the measurements were repeatable. Using Shainin Black Belt methodology, which is really simple and practical, you would do an isoplot to evaluate the measurement system relative to sample variation. To do an isoplot, you measure each sample, randomize them, and measure again. The delta between the first and second reading of each sample represents the measurement-system error. The delta between samples represents the process error, or in this case, the optical figure. The measurement error should be at least 6x less than the difference in optical figure. That's just one simple way to analyze the system.

The frequency distribution of the measurements from the individual inspection houses would be interesting to see. I would not combine data from the different inspection houses; the combined data won't be normally distributed. As I stated in a previous post, I'd love to see how the same sample would measure at each lab.
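A sketch of the two tests named above, fed with the OP's overall C11 summary figures. This leans on SciPy's summary-statistics t-test and assumes roughly normal samples, which is exactly the assumption in question:

```python
from scipy import stats

# Overall C11 summary figures from the first post's table.
pre = dict(mean=0.890, sd=0.103, n=11)   # pre-Synta
syn = dict(mean=0.914, sd=0.054, n=26)   # Synta (incl. EdgeHD)

# Two-sample t-test on the means from summary stats
# (Welch's version: no equal-variance assumption).
t, p_t = stats.ttest_ind_from_stats(pre["mean"], pre["sd"], pre["n"],
                                    syn["mean"], syn["sd"], syn["n"],
                                    equal_var=False)

# F-test on the variances: ratio of sample variances against an
# F distribution, doubled for a two-sided p-value.
F = pre["sd"] ** 2 / syn["sd"] ** 2
p_f = 2 * min(stats.f.cdf(F, pre["n"] - 1, syn["n"] - 1),
              stats.f.sf(F, pre["n"] - 1, syn["n"] - 1))

print(f"Welch t = {t:.2f}, p = {p_t:.2f}")      # means: not significantly different
print(f"F = {F:.2f}, two-sided p = {p_f:.3f}")  # spreads: significantly different
```

On these numbers the means don't differ significantly, but the variances do, which matches the OP's "consistency improved" reading.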


Edited by Cpk133, 14 December 2017 - 10:29 PM.


#14 Cpk133

Cpk133

    Apollo

  • -----
  • Posts: 1318
  • Joined: 14 Mar 2015
  • Loc: SE Michigan

Posted 14 December 2017 - 09:53 PM

I'll also say that it's a shame that this "industry" is so small.  There's no economy of scale.  In the auto industry, the investment in measurement systems is astronomical, no pun intended.  An interferometer is chump change.  I guess you don't put your whole family in a Celestron or a Meade and go 80mph so they can get away with it.



#15 TheFacelessMen

TheFacelessMen

    Viking 1

  • -----
  • Posts: 514
  • Joined: 15 Sep 2014
  • Loc: Canberra, ACT

Posted 16 December 2017 - 02:34 AM

Thanks for the post , very interesting .

 

Any chance of this being done using C9.25's as I have a pre Synta XLT and I would estimate it to be about as good as the best here .

 

I once owned a Takahashi M210 at the time of buying my C9.25 ( had the cash and was interested in seeing if the hype was true ? ) and tested them side by side visually many , many times on most objects using TV eyepieces  and the C9.25 is that good I sold the Takahashi .

 

Thanks in advance .

 

Beanerds .

On other posts here and other forums you mention you sold the Mewlon and Purchased the XLT afterwards.....so how could you have run "many, many" side by side tests ?????  

 

Here is a quote of what you recently posted:

 

"Actually on that I had a sweet Takahashi M210 that was so sharp you could have sworn you were looking through a quality APO but  , I hated the diffraction spikes it gave to bright objects , some don't mind but I don't like them so sold it to get my C9 , its 1/2 the price and 99% of the Tak's performance .

Brian "

 

So which story is the true one ??



#16 LewisM

LewisM

    Surveyor 1

  • *****
  • Posts: 1724
  • Joined: 23 Jan 2013
  • Loc: Somewhere in the cosmos...

Posted 16 December 2017 - 03:49 AM

 

Thanks for the post , very interesting .

 

Any chance of this being done using C9.25's as I have a pre Synta XLT and I would estimate it to be about as good as the best here .

 

I once owned a Takahashi M210 at the time of buying my C9.25 ( had the cash and was interested in seeing if the hype was true ? ) and tested them side by side visually many , many times on most objects using TV eyepieces  and the C9.25 is that good I sold the Takahashi .

 

Thanks in advance .

 

Beanerds .

On other posts here and other forums you mention you sold the Mewlon and Purchased the XLT afterwards.....so how could you have run "many, many" side by side tests ?????  

 

Here is a quote of what you recently posted:

 

"Actually on that I had a sweet Takahashi M210 that was so sharp you could have sworn you were looking through a quality APO but  , I hated the diffraction spikes it gave to bright objects , some don't mind but I don't like them so sold it to get my C9 , its 1/2 the price and 99% of the Tak's performance .

Brian "

 

So which story is the true one ??

 

It's OK, Brian is getting on in years...and he's a Kiwi, so it explains that bit ;)



#17 punk35

punk35

    Apollo

  • *****
  • Posts: 1142
  • Joined: 26 Jan 2005
  • Loc: Adrian Michigan

Posted 16 December 2017 - 04:28 PM

How do I know if I have a pre or post Synta C11? Thanks.



#18 LewisM

LewisM

    Surveyor 1

  • *****
  • Posts: 1724
  • Joined: 23 Jan 2013
  • Loc: Somewhere in the cosmos...

Posted 16 December 2017 - 04:45 PM

Made in USA vs Made in China is a start :)

#19 glmorri

glmorri

    Sputnik

  • *****
  • topic starter
  • Posts: 30
  • Joined: 24 Feb 2013

Posted 16 December 2017 - 05:26 PM

I had the same question: "Trying to determine whether an SCT was of Synta origin wasn't straightforward. I decided that only the Celestron SCTs with Synta style back cell would be classified as Synta (see picture)." (from the original post)

 

I've read that Synta was a supplier to Celestron before they acquired the company and eventually moved most of the manufacturing to China.  I decided to go with the point of OTA redesign, as shown in the above picture.



#20 Mark Harry

Mark Harry

    Cosmos

  • *****
  • Posts: 8224
  • Joined: 05 Sep 2005
  • Loc: Northeast USA

Posted 17 December 2017 - 09:40 AM

"An interferometer is chump change."
********
Excuse me???
   Who you trying to kid?
Price out a Zygo with a $15k reference optic, CERTIFIED to 1/20th wave SURFACE error or better. The homebuilt garage IF's are just toys.
  If you are going to make a serious objective test, "toys" should never be used, for any dispute could never be meaningfully defended from that side of things.



#21 Cpk133

Cpk133

    Apollo

  • -----
  • Posts: 1318
  • Joined: 14 Mar 2015
  • Loc: SE Michigan

Posted 17 December 2017 - 06:49 PM

"An interferometer is chump change."
********
Excuse me???
   Who you trying to kid?
Price out a Zygo with a $15k reference optic, CERTIFIED to 1/20th wave SURFACE error or better. The homebuilt garage IF's are just toys.
  If you are going to make a serious objective test, "toys" should never be used, for any dispute could never be meaningfully defended from that side of things.

So what do you think, how much?  I looked around and all I could find were used prices.  From what I deduced, chump change.  So do tell and I'll let you know if I change my opinion.  

 

Edit: Oh, I forgot, I also checked out John Hayes' setup where he used a phase cam to test a C14.  His estimate for all the equipment, $300K.  That doesn't change my opinion.  Keep it in context.  I'm talking in terms of the auto industry or other big industry.  The consumer telescope "industry" is teeny weenie, a boutique business.  


Edited by Cpk133, 17 December 2017 - 07:40 PM.


