
COMPARING THE MASUYAMA 25MM 52°, 25MM 65°, AND 26MM 85°


#1 BillP

BillP

    Hubble

  • *****
  • topic starter
  • Posts: 18518
  • Joined: 26 Nov 2006
  • Loc: Spotsylvania, VA

Posted 24 November 2017 - 09:45 AM

The modern incarnation of the Masuyama eyepieces seems to have carried forward their excellent reputation for providing high apparent-contrast views. Indeed, for the globulars and nebulae observed, the Masuyama 85° quickly became my favorite during the testing, showing them brightly, with richer detail than the other eyepieces, and with the largest contextual TFOV.

Click here to view the article
  • WRose, dpastern, Nikolas234 and 1 other like this

#2 junomike

junomike

    Hubble

  • *****
  • Moderators
  • Posts: 16293
  • Joined: 07 Sep 2009
  • Loc: Ontario

Posted 24 November 2017 - 10:40 AM

Another great write-up, Bill!


  • tdeclue likes this

#3 Astrojensen

Astrojensen

    Voyager 1

  • *****
  • Posts: 11805
  • Joined: 05 Oct 2008
  • Loc: Bornholm, Denmark

Posted 24 November 2017 - 11:15 AM

Highly interesting! But I am surprised that the Masuyamas show such relatively obvious off-axis astigmatism, according to the review. Has anyone tried them in longer focal ratio telescopes, such as f/12 maksutovs, f/15 refractors or the like? 

 

 

Clear skies!
Thomas, Denmark


  • dpastern likes this

#4 ratnamaravind

ratnamaravind

    Mariner 2

  • *****
  • Posts: 294
  • Joined: 25 Dec 2015
  • Loc: San Diego, CA

Posted 24 November 2017 - 11:25 AM

Very good review, Bill. The couple I've looked through have been very good on-axis, but at the end of the day the Masuyamas are hindered in off-axis performance by the 5-element/3-group design.



#5 WRose

WRose

    Apollo

  • *****
  • Posts: 1470
  • Joined: 08 Jul 2005
  • Loc: Colorado, USA

Posted 24 November 2017 - 01:39 PM

Highly interesting! But I am surprised that the Masuyamas show such relatively obvious off-axis astigmatism, according to the review. Has anyone tried them in longer focal ratio telescopes, such as f/12 maksutovs, f/15 refractors or the like? 

 

 

Clear skies!
Thomas, Denmark

Yes, with an OMC 200 Mak & AP 152 StarFire, f/12.
To some extent the off-axis distortion is lessened by slower (longer focal ratio) scopes, but as ratnamaravind indicates, the design has certain inherent qualities. EP designers have to balance the TFOV against the distortion characteristics in any EP design. They could have reduced the FOV so you just wouldn't see it, but they chose to open up the FOV and allow for some small edge distortion.

 

 

Thanks for the excellent review Bill!  Appreciate your time and effort bringing this together. 


Edited by WRose, 24 November 2017 - 01:43 PM.


#6 BillP

BillP

    Hubble

  • *****
  • topic starter
  • Posts: 18518
  • Joined: 26 Nov 2006
  • Loc: Spotsylvania, VA

Posted 24 November 2017 - 02:21 PM

Highly interesting! But I am surprised that the Masuyamas show such relatively obvious off-axis astigmatism, according to the review. Has anyone tried them in longer focal ratio telescopes, such as f/12 maksutovs, f/15 refractors or the like?

 

Astigmatism is at times not readily seen, especially when the magnification is low. This is one of the differences between "testing" and "using" eyepieces, and why some eyepieces that test less than perfectly may still be considered quite good in use. When testing, one critically examines performance using the specific object types that accentuate a particular aberration most, like brighter stars for astigmatism. But when you focus most of your attention on the observation rather than on any aberrations in the eyepiece, the aberrations often go unnoticed. That is why, in the end, each of us must judge for ourselves how impactful any test report's results are.

For me, I liked the Masuyamas a lot, and if I had the disposable income I would add them to the stable, because they have some really nice attributes, like an uncommonly beautiful on-axis. The authority with which they rendered the faintest star points so clearly was really something special. When I was on target with my favorite open cluster, the Double Cluster, which I am most familiar with, I found it quite moving how authoritatively they showed some of its faintest stars. With performance attributes like that, for me anyway, I don't care much about what is going on in the far off-axis, because what I am observing is dead center, where it should be for critical scrutiny.


Edited by BillP, 24 November 2017 - 02:22 PM.

  • doctordub, tdeclue and Tyson M like this

#7 Stargazer3236

Stargazer3236

    Soyuz

  • *****
  • Posts: 3558
  • Joined: 07 Aug 2010
  • Loc: Waltham, MA

Posted 25 November 2017 - 07:10 AM

What are the Masuyama eyepieces going for these days? Are they similar to Televue EP's in price?



#8 BillP

BillP

    Hubble

  • *****
  • topic starter
  • Posts: 18518
  • Joined: 26 Nov 2006
  • Loc: Spotsylvania, VA

Posted 25 November 2017 - 12:57 PM

26T5 82 degree Nagler - $615

26mm 85 degree Masuyama - $280

 

The Nagler is better corrected off-axis, as would be expected.



#9 ThomasM

ThomasM

    Viking 1

  • *****
  • Posts: 592
  • Joined: 19 Apr 2009

Posted 25 November 2017 - 01:03 PM

Very nice review. I hope that someday there will be a 1.25" 20mm 85 degree Masuyama. What do you think, will that happen?

 

Thomas

 

p.s. What is the field stop diameter of the 26mm 85 degree eyepiece?


Edited by ThomasM, 25 November 2017 - 05:17 PM.


#10 areyoukiddingme

areyoukiddingme

    Skylab

  • *****
  • Posts: 4085
  • Joined: 18 Nov 2012

Posted 25 November 2017 - 02:03 PM

The Masuyama performance sounds rather similar to the Meade QX 26mm I once had.

 

Makes me wonder how well the Masuyama would perform in 'perceived contrast' if the name on the side was Meade.


  • EuropaWill likes this

#11 Jawaid I. Abbasi

Jawaid I. Abbasi

    Gemini

  • *****
  • Moderators
  • Posts: 3031
  • Joined: 19 Jun 2007
  • Loc: LEVITTOWN, PA

Posted 26 November 2017 - 09:01 PM

Bill,

Beautifully written, without choking or breaking up the article, and executed in a way that the reader perceives themselves being right there with the author conducting the review.

Thank you as always



#12 BillP

BillP

    Hubble

  • *****
  • topic starter
  • Posts: 18518
  • Joined: 26 Nov 2006
  • Loc: Spotsylvania, VA

Posted 27 November 2017 - 10:50 AM

The Masuyama performance sounds rather similar to the Meade QX 26mm I once had.

 

Makes me wonder how well the Masuyama would perform in 'perceived contrast' if the name on the side was Meade.

 

Nothing wrong with Meade! I love Meade products! I love ES products as well, but the Masuyama clearly bested it. The only way to tell for your Meade would be to take a Meade and a Masuyama, turn them toward the Trifid Nebula and the Swan Nebula 10-20 times or more over 8-12 weeks, and see which one showed the dark lanes and dark features best the vast majority, if not all, of the time (FYI, the latter was true for this test report; I never report a finding unless it is repeatable in 90% or more of the observations over many weeks, and I recommend you do the same). And if the two eyepieces are of similar form factor, you will likely need to take the eyepiece out of the focuser to determine which one you just observed with; you will find this happens a lot in testing (FYI, it happened a lot for me with the 26mm Masuyama vs. the 25mm ES 68 with the 2" converter attached). I also recommend recording all field test results on a voice recorder in real time as the observations occur; when you are done, the voice files can be reviewed, transcribed, and assimilated into a final written report. Let me know when you plan to do the Meade QX vs. Masuyama test...look forward to it!
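
If it helps, here is a minimal sketch (in Python) of that repeatability tally; the logging format and session counts below are purely illustrative, not taken from my actual notes:

```python
# Illustrative only: tally per-session "which eyepiece showed the feature better"
# calls and apply the 90%-repeatability rule before treating a finding as reportable.
from collections import Counter

def repeatable_finding(session_calls, threshold=0.90):
    """session_calls: one winner string per observing session."""
    counts = Counter(session_calls)
    best, best_n = counts.most_common(1)[0]
    fraction = best_n / len(session_calls)
    return best, fraction, fraction >= threshold

calls = ["masuyama"] * 14 + ["meade_qx"] * 2   # e.g. 16 sessions over ~10 weeks
winner, frac, reportable = repeatable_finding(calls)
print(f"{winner}: won {frac:.0%} of sessions -> reportable: {reportable}")
```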


Edited by BillP, 27 November 2017 - 11:03 AM.

  • csa/montana likes this

#13 csa/montana

csa/montana

    Den Mama & Gold Star Award Winner

  • *****
  • Moderators
  • Posts: 100202
  • Joined: 14 May 2005
  • Loc: montana

Posted 27 November 2017 - 11:25 AM

Having several of the original Masuyamas, I very much enjoyed your report, Bill!



#14 jrbarnett

jrbarnett

    Eyepiece Hooligan

  • *****
  • Posts: 30001
  • Joined: 28 Feb 2006
  • Loc: Petaluma, CA

Posted 27 November 2017 - 12:25 PM

The Masuyama performance sounds rather similar to the Meade QX 26mm I once had.

 

Makes me wonder how well the Masuyama would perform in 'perceived contrast' if the name on the side was Meade.

Well, if the guts of the Meade-branded units were the same - nice polish, made by folks who enjoy a healthy margin on their products and invest some of those spoils in superior design, superior materials, superior QA and QC - then I would expect the Meade-branded unit to review identically. Unfortunately, several lines of China-made Meade eyepieces have been anything but that.

 

I once received a Chinese-made Meade 32mm Series 4000 Super Plossl *missing the eye lens*.  :foreheadslap:  Methinks Meade's declining reputation has been earned by misdeeds, inconsistent sourcing, and lapses in quality assurance, rather than being unfairly applied. Dunno about the QXs though. Not sure who the source is; GSO in Taiwan, maybe?

 

Best,

 

Jim 



#15 areyoukiddingme

areyoukiddingme

    Skylab

  • *****
  • Posts: 4085
  • Joined: 18 Nov 2012

Posted 27 November 2017 - 04:01 PM

 

The Masuyama performance sounds rather similar to the Meade QX 26mm I once had.

 

Makes me wonder how well the Masuyama would perform in 'perceived contrast' if the name on the side was Meade.

 

Nothing wrong with Meade! I love Meade products! I love ES products as well, but the Masuyama clearly bested it. The only way to tell for your Meade would be to take a Meade and a Masuyama, turn them toward the Trifid Nebula and the Swan Nebula 10-20 times or more over 8-12 weeks, and see which one showed the dark lanes and dark features best the vast majority, if not all, of the time (FYI, the latter was true for this test report; I never report a finding unless it is repeatable in 90% or more of the observations over many weeks, and I recommend you do the same). And if the two eyepieces are of similar form factor, you will likely need to take the eyepiece out of the focuser to determine which one you just observed with; you will find this happens a lot in testing (FYI, it happened a lot for me with the 26mm Masuyama vs. the 25mm ES 68 with the 2" converter attached). I also recommend recording all field test results on a voice recorder in real time as the observations occur; when you are done, the voice files can be reviewed, transcribed, and assimilated into a final written report. Let me know when you plan to do the Meade QX vs. Masuyama test...look forward to it!

 

 

I have nothing against Meade (or Masuyama, for that matter), I just don't like astigmatism. In fact, I'd go so far as to say that obvious astigmatism starting relatively close to the center is a deal breaker for me.

 

The 26mm Meade (and the Orion Q70 version) is very comfortable to use, and it has pretty good on-axis sharpness. But the astigmatism was too much to bear, so I sold one and gave the other away. My guess is that they have more astigmatism than the Masuyama, so I'd think the comparison would require little more than a few seconds to figure out.

 

Having said that, a double blind study would be fun if it were possible to exchange housings for Meade and Masuyama. We could then separate the effects of branding on perceptions from performance. And no need for the qualitative research--simple numbers would suffice. A small sample of observers, and we could conduct a signal detection analysis.
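
As a rough illustration, the signal detection part could be as simple as a d-prime per observer per eyepiece; the yes/no task and the hit/false-alarm rates below are invented purely for illustration:

```python
# Illustrative d-prime calculation for a yes/no detection task
# ("is the dark lane visible?"), with target-present and target-absent trials.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate); higher means better discrimination."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print("Eyepiece A d':", round(d_prime(0.85, 0.20), 2))
print("Eyepiece B d':", round(d_prime(0.70, 0.25), 2))
```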



#16 BillP

BillP

    Hubble

  • *****
  • topic starter
  • Posts: 18518
  • Joined: 26 Nov 2006
  • Loc: Spotsylvania, VA

Posted 27 November 2017 - 08:55 PM

 

...a double blind study would be fun if it were possible to exchange housings for Meade and Masuyama. We could then separate the effects of branding on perceptions from performance. And no need for the qualitative research--simple numbers would suffice. A small sample of observers, and we could conduct a signal detection analysis.

 

 

Unfortunately, none of this would really tell you anything. The housing internals are integral to various performance parameters of an eyepiece (e.g., baffling, blackening, internal housing shapes, precision alignment of the optics). And given that one cannot change the look of the eye lens or the feel of the eye relief, you can never really blind the experiment well enough unless you use folks who are complete newbies with no foreknowledge of the brands. And if you did use newbies, their observation skills would be undeveloped, so their results would be meaningless to the average amateur with a few years under their belt.

After all the trouble of properly blinding the experiment, which means encasing all the eyepieces in a generic form factor over their existing one, and so defeating any superiority in eye guard ergonomics or eye relief one may have over another (which people want to know about), you would still need quite a number of observers of various skill levels, health profiles, and genders, plus scores of samples of each eyepiece type to eliminate production variations, to get statistically valid results. So really, any "proper" double-blind experiment comparing eyepieces would be a massive undertaking and would cost quite a bit of time and money. And in the end you would have results for maybe one optical parameter, like whether it looked brighter or more contrasty (and the observers would all have to be trained on what that means, so they report to a standard and not just their impression, btw). Basically, you would get much less information for what would amount to hundreds of times the effort.

I often hear folks chime in about doing double-blind tests, but doing such a thing is really impossible without funding, especially if the observers would have to evaluate the myriad criteria readers of reviews want. A massive undertaking. And I know this may sound extreme, but if you do not do the double-blind study exactly correctly, taking all these precautions and using a statistically proper population of observers and number of eyepieces, then all the results become worthless because of the bad design or the inability to produce statistical significance. So for me, no thank you; qualitative field tests tell much more of a story.


Edited by BillP, 28 November 2017 - 08:33 AM.

  • csa/montana, desertlens and rogeriomagellan like this

#17 areyoukiddingme

areyoukiddingme

    Skylab

  • *****
  • Posts: 4085
  • Joined: 18 Nov 2012

Posted 28 November 2017 - 03:55 PM

While I agree with many of your sentiments about the difficulty of conducting such a study, I think double-blind comparisons can be much more tractable than you argue.

 

The comparisons in a double-blind study are repeated measures: the same participant is subject to all treatments, and the treatments are kept independent (though sources of bias should be decreased, as you point out). That means that so long as we have a reasonably representative sample, we do not need to control for age, gender, eyesight, etc. Participants serve as their own controls.

 

The statistical power of the test is not determined just by sample size; it is also critically determined by the size of the experimental effect. It would not be difficult to find 20-30 astronomers interested in participating in such a test, and my seat-of-the-pants power analysis suggests that this would probably be sufficient for basic analyses.
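
For illustration, a seat-of-the-pants power check for a within-subject comparison, under an assumed standardized effect size, might look like this:

```python
# Rough power check for a within-subject (paired) comparison using the normal
# approximation: power ~ Phi(d * sqrt(n) - z_crit). The effect size d is assumed.
from scipy.stats import norm

def approx_power(d, n, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)          # two-sided critical value
    return norm.cdf(d * n ** 0.5 - z_crit)

for n in (20, 25, 30):
    print(n, "observers -> approximate power", round(approx_power(d=0.6, n=n), 2))
```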

 

Also, as for the biases, while eyepiece casing certainly will vary, I suspect a lot of your concerns could be allayed with careful choice of eyepieces. While it certainly would be difficult as a routine approach, it might be attempted in other ways.

 

For example, it would be possible to devise a simple cloth barrier around an eyepiece, and procedures for participants to approach the eyepiece without inspecting it, only opening their eyes once an experimenter has placed them in position for viewing. Following that protocol, it would not even be necessary to swap eyepiece cases. That design would be a standard "Pepsi challenge".
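
To keep the presentation order itself unbiased, a viewing schedule could be pre-generated and handed only to the person swapping eyepieces; here is a rough sketch with placeholder labels:

```python
# Sketch: pre-generate a blinded, shuffled presentation order per participant so
# the experimenter seating observers never chooses which eyepiece comes next.
# "EP_A"/"EP_B" are placeholder labels held by a third party, not shown to observers.
import random

def presentation_schedule(participants, labels=("EP_A", "EP_B"), repeats=3, seed=42):
    rng = random.Random(seed)
    schedule = {}
    for p in range(1, participants + 1):
        order = list(labels) * repeats
        rng.shuffle(order)
        schedule[f"observer_{p:02d}"] = order
    return schedule

for observer, order in presentation_schedule(4).items():
    print(observer, order)
```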

 

One variation of the experiment would be to provide no information about the eyepieces; a standard battery of questions could be devised--perceived contrast, ability to get the exit pupil, aberrations, and an overall score out of 10.

 

It would also be possible to test whether people are able to correctly guess the make/model of the eyepieces; that information could be statistically controlled for, or participants with intimate knowledge of a particular eyepiece could be dropped. I bet most people would not be able to accurately infer the focal length or brand of the eyepiece, and I bet there would be a good bit of inaccuracy in judging the size of the apparent field.
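
Checking whether those guesses beat chance would then be a one-liner, assuming a two-eyepiece forced choice (the counts here are made up):

```python
# Quick check of whether observers identify the brand better than chance,
# assuming a two-alternative forced choice (chance = 0.5). Counts are invented.
from scipy.stats import binomtest

correct_guesses = 17
total_guesses = 30
result = binomtest(correct_guesses, total_guesses, p=0.5, alternative="greater")
print("correct:", correct_guesses, "/", total_guesses, "  p-value:", round(result.pvalue, 3))
```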

 

A second variation could be to experimentally manipulate the eyepiece brand with the same protocol. Tell people that they are comparing X vs. Y, but do not specify which is which. That would allow testing for branding effects independent of performance effects.

 

The best participants in that case would be 'intermediate' astronomers who have developed viewing skills and the ability to spot eyepiece aberrations, but who are not necessarily familiar with every eyepiece known to man.

 

And all of that being said, we don't need to stop enjoying qualitative analysis like that found in your excellent and enjoyable review. The approaches are complementary, not competing.

 

Having said all of that, I do agree with you about the difficulty of doing these tests compared with your approach. While you have obviously put a huge amount of time and effort into your comparisons and write-up, that is still relatively tractable, as it is clearly a labor of love.

 

Herding 20-30 cats and having the buggers follow instructions--especially men--is a serious challenge.

 

Anyhow, one of these days I will put my money where my mouth is and do one of these kinds of tests. It should be a fun project that could be done with my local club as participants. If sufficiently clever, it may even be possible to get a publication out of it.



#18 starcam

starcam

    Apollo

  • *****
  • Posts: 1030
  • Joined: 24 Sep 2007
  • Loc: MD

Posted 28 November 2017 - 11:48 PM

Thank you Bill, very enlightening.

I find it disconcerting that a lot of people think an eyepiece is only good if it has a perfect off-axis. The Televue syndrome.

Never mind that the ones with astigmatism off-axis can have better contrast and light transmission on the center axis, where you frame the object.

And then there's the same guy telling you, over and over after each of your reports, that you don't know how to test eyepieces. Then he should do it himself, since he thinks it's so easy. It seems there are always several people who want to shoot you down to make a name for themselves. The "they can do it better but never do" syndrome.


  • csa/montana and jjack's like this

#19 BillP

BillP

    Hubble

  • *****
  • topic starter
  • Posts: 18518
  • Joined: 26 Nov 2006
  • Loc: Spotsylvania, VA

Posted 04 December 2017 - 03:20 PM

Thank you Bill, very enlightening.

I find it disconcerting that a lot of people think an eyepiece is only good if it has a perfect off-axis. The Televue syndrome.

Never mind that the ones with astigmatism off-axis can have better contrast and light transmission on the center axis, where you frame the object.

And then there's the same guy telling you, over and over after each of your reports, that you don't know how to test eyepieces. Then he should do it himself, since he thinks it's so easy. It seems there are always several people who want to shoot you down to make a name for themselves. The "they can do it better but never do" syndrome.

 

Thanks Steve. And yes, a bunch of us, when we talk, always make your point: the people who seem so critical of testing never bother to do and publish any testing themselves. So true; if they want a different approach, they are more than welcome to put in all the effort doing it their way and share it with us all. With setup time, observing time, break-down time, logging observations, harmonizing observations with re-tests if there are any inconsistencies, then drafting and finalizing the article with photos, it is easily 120 hours of solid work. That's three weeks of full-time 9-5 work! If I were to do a double-blind test correctly, the setup and masking would take lots of ATM time, and corralling and training lots of folks, etc., would probably make it easily 5-10 times the effort. I am not against the idea of a double-blind test, just aware that it would be a grand effort. So if my work rate is $50/hr and a double-blind test would probably take me at least 600 hours total, then who wants to fork over the estimate? That would be $30,000! I'm sure not putting that amount of effort into something for free! The first step in the process would be the test design, and that alone would take at least 40 hours to nail down into a proper paper that could then be distributed for peer review. A lot of work!
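
Spelling out that arithmetic, with the rates and multipliers exactly as above:

```python
# The effort/cost estimate above, spelled out.
qualitative_hours = 120          # setup, observing, breakdown, logging, write-up
rate_per_hour = 50               # USD/hr
blind_multiplier = 5             # low end of the 5-10x estimate
blind_hours = qualitative_hours * blind_multiplier                       # 600 hrs
print("qualitative report:", qualitative_hours * rate_per_hour, "USD")   # 6000
print("double-blind study:", blind_hours * rate_per_hour, "USD")         # 30000
```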

 

And yes...1000% agree that the off-axis is not the be-all and end-all. I know from my testing and experience that many eyepieces with a stellar off-axis are often beaten by others on-axis in one way or another. And many of the perfect off-axis wares seem to suffer from a less-than-friendly exit pupil design or poor eye relief. I've tried most of the eyepieces out there...so far no one has come close to designing and executing what I would call a "perfect" eyepiece, which would have 1) a very crisp and resolute on-axis with superb contrast and focus snap, 2) tight star points to the field stop with no visible lateral color, 3) a completely uniformly dark background FOV, 4) very comfortable eye relief as measured from the top of the housing/eye guard, 5) no sensitivity to eye placement, whether in blackouts or in how well controlled the view is, 6) minimal RD so the FOV is close to orthoscopic, 7) less than the typical scatter around stars and planets that I see in most eyepieces (the ZAOs are an example of how minimal scatter should be), and of course 8) zero unwanted light artifacts. Numbers 3, 5, and 7, IMO, are the things all makers seem to have the hardest time with.


  • csa/montana likes this

#20 BillP

BillP

    Hubble

  • *****
  • topic starter
  • Posts: 18518
  • Joined: 26 Nov 2006
  • Loc: Spotsylvania, VA

Posted 04 December 2017 - 03:48 PM

While I agree with many of your sentiments about the difficulty of conducting such a study, I think double-blind comparisons can be much more tractable than you argue.

 

Possibly.  It definitely is doable, but my biggest point is that it will take a lot of work to put together, and time is money!  Some of the most obvious confounding variables to think about are of course: atmosphere, base equipment, test equipment, and the testers. 

 

Any tests would have to be repeated quite a number of times to help mitigate variances in seeing. Easy enough when one person is doing it all, but getting a corral of equipment and people to repeat a single observational test 10 times or more over several evenings would be quite an undertaking, especially when there are so many things one looks for in testing an eyepiece (scatter, FC, RD, astigmatism, lateral color, AMD, multiple light artifacts, star color rendering, dimmest-star performance, dimmest-extended-object performance, contrast tests, just to name a few).

 

Then think about how, if one is going through all this effort with, as you say, 25 people, we should probably have two scopes to make more efficient use of time. That means we would need bench test reports on the two optics to ensure equivalence, and we would also have to verify that each is thermally acclimated the same (which means FLIRs and such to take readings).

 

Now, these 25 people would all have to be trained on how and what to look for in each test. We can leave nothing to the assumption that they will know how to look for, say, astigmatism. Everyone must conduct the test uniformly, so there would be all that training time to show them what the aberration looks like, how to perform the test, and to make sure they are all conducting it the same way. It would also be helpful to have a recent eyesight exam for each person and to ensure they are well hydrated, well rested, and uniformly dark adapted before the experiment begins.

 

With the test equipment, if we are comparing, say, two different eyepieces, we need more than just one of each because of production differences, so we would need several of each. Statistically we would probably need 20 or so of each, but let's shortcut and get, say, 5 of each. That is 10 eyepieces total of the two brands being tested. OK...if we do an off-axis test for astigmatism, then each person needs to do that test 10 times because there are 10 eyepieces. Of course we will need multiple runs to ensure a result is not a one-off, and I like repeating a test at least 10 times, so each person will test for astigmatism 100 times in total. The same goes for all the other tests.
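
The trial counts in that paragraph multiply up quickly; spelled out:

```python
# Per-observer trial counts for the scenario above (criteria count is an assumption,
# roughly matching the list of test criteria mentioned earlier in the thread).
brands = 2
units_per_brand = 5          # 5 samples of each brand
repeats_per_unit = 10        # repeat each test ~10 times per unit
criteria = 10                # astigmatism, scatter, FC, RD, lateral color, etc.

looks_per_criterion = brands * units_per_brand * repeats_per_unit   # 100
looks_per_observer = looks_per_criterion * criteria                 # 1000
print("looks per criterion, per observer:", looks_per_criterion)
print("looks per observer across all criteria:", looks_per_observer)
```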

 

All these eyepieces also mean we will have to mask them all, and get a separate group of people to assist the testers and record their observations. If we have two scopes, we would need two staff dedicated to taking observation notes and two staff dedicated to running the test. Remember that those running the test cannot know anything about the eyepieces, since they need to be blinded as well. So a separate group of staff would have to prepare all the masking of the equipment and set it out for the blinded staff to hand to the test subjects.

 

Masking the equipment becomes the most difficult task, IMO. Since we want real amateur astronomers and not newbies, they will know a lot about eyepiece branding. So we would basically have to completely encase each eyepiece in some sort of enclosure that shows only a peephole onto the eye lens and nothing else. No part of the upper housing or eye guard can be visible, and of course all the eye lens holes in the masks would have to be the same. We would even have to ensure that the height of the eye lens above the diagonal was uniform, as we cannot have people seeing that one shrouded eyepiece is taller than another. They must see nothing that lets them think they can physically identify any differences. We would also have to ensure that none of the testers talked to each other during the test, or afterwards until all tests were completed.

 

Now, we all know that even when one person is doing all the testing, it can never be done in a single evening. So given all these folks, all this equipment, and all this prep, even with just two brands of eyepieces to compare we'd have to corral everyone out for extended viewing on many, many evenings. Given the weather and people's work schedules, how long do you think this double-blind would take? I'm thinking many months just to get all these folks out in the field maybe 4 or 5 times.

 

Hmmm....maybe this should be full-time work for whoever runs it. I think either a Kickstarter or a GoFundMe page should be the start of this effort.

 

p.s. Thinking about this more, since we're going to all this trouble, I would tell the testers they are testing three eyepieces. However, I would substitute one of the brands, giving it to them twice and saying it is a third brand. The reason is that I would want to see how much variation they report between the two identical eyepieces they thought were different. In a way, that would be a control to highlight any unknown or unanticipated confounding variables, or possibly some flaw in the base design of the experiment.
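
As a quick sanity check on that hidden-duplicate control, one could compare the scores the two identical units receive; the ratings below are purely invented:

```python
# Illustrative check: scores for the two physically identical units (presented
# under different "brand" labels) should not differ beyond noise.
from scipy.stats import ttest_rel

unit_1_scores = [7, 8, 6, 7, 9, 7, 6, 8, 7, 5]   # labelled "brand C"
unit_2_scores = [6, 8, 7, 7, 8, 6, 7, 8, 6, 7]   # same eyepiece, labelled "brand A"
t_stat, p_value = ttest_rel(unit_1_scores, unit_2_scores)
print("paired t-test p-value:", round(p_value, 3))
# A clearly "significant" difference here would flag hidden bias or a design flaw.
```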


Edited by BillP, 04 December 2017 - 03:57 PM.

  • csa/montana and dpastern like this

#21 csa/montana

csa/montana

    Den Mama & Gold Star Award Winner

  • *****
  • Moderators
  • Posts: 100202
  • Joined: 14 May 2005
  • Loc: montana

Posted 04 December 2017 - 09:28 PM

(applause)



#22 dpastern

dpastern

    Apollo

  • -----
  • Posts: 1241
  • Joined: 01 Jan 2009
  • Loc: Brisbane, Australia

Posted 11 December 2017 - 05:42 AM

Excellent review Bill - thank you.  

 

Do you think the contrast differences between the Masuyama and ES eyepieces are simply down to coating quality, or to the optical design, or a combination of both?



#23 RichA

RichA

    Apollo

  • *****
  • Posts: 1025
  • Joined: 03 Jun 2010
  • Loc: Toronto, Canada

Posted 27 December 2017 - 02:39 PM

Interesting, the colour of the coatings on these eyepieces: similar to the old Celestron Ultima eyepieces, or to some Zeiss optics, whereas most optical components we see today from China or Taiwan seem to have uniform greenish coating reflections. I wonder if it's an indication of the number or thickness of the multicoating layers?



#24 Luca Artesky

Luca Artesky

    Vendor - Artesky

  • -----
  • Vendors
  • Posts: 28
  • Joined: 23 Aug 2016
  • Loc: Milan, Italy

Posted 14 March 2018 - 04:08 AM

Amazing review Bill!

 

Can I translate it into Italian and publish it on my website, with all the references?



