Small Binoculars - The Score

24 replies to this topic

#1 EdZ

EdZ

    Professor EdZ

  • *****
  • Posts: 18820
  • Joined: 15 Feb 2002
  • Loc: Cumberland, R I , USA42N71.4W

Posted 02 March 2008 - 09:48 AM

Included in this list (see attachment above) are 34 binoculars tested over a period of six months. The project started out as a test of a variety of 8x and 10x binoculars under $200. Eventually it grew to include 7x50, 8x40, 8x42, 10x42, 10x50, and 12x50 models, a small collection of roof prism models, and even a few uncommon sizes. The list also grew to include some very important benchmark binoculars, as it became evident that, in order to rank all these binoculars, it was important to know the quality range of performance in each category. Other than the benchmark binoculars (Nikon SEs, Fujinon FMT-SXs), every binocular in this list cost less than $300. A few models cost less than $100.

I rank about 15 different aspects of each binocular. Several are mechanical and several are optical. I rated all of the mechanical properties on a simple 1-3 scale.

For some of the optical properties, the score is based on a formula that takes into consideration such things as importance to the overall view, variance from spec, and variance above or below the mean. For instance, there is a great deal of discussion about coatings. True, coatings are important, but I rated coatings subjectively on a simple 1-3 basis. However, I felt that light transmission was better reflected in the values I obtained for internal vignette and illumination, both of which have a definitive measured value and a calculated score. You can have what look like the best coatings in the world, but if the binocular system does a poor job of delivering all that light to the exit pupil, scoring the coatings highly doesn't do you much good. Also, by reducing the weight given to the subjective inspection of coatings, I give more weight to a measurable, objective score.
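To make the idea concrete, a weighted category score of this general form might be computed as in the sketch below. This is purely illustrative; the actual formula, weights, and category names used in the report are not published here.

```python
def category_score(measured, spec, group_mean, weight, max_score=5.0):
    """Hypothetical category score: compare the measured value to the spec and
    to the group mean, cap the result, then weight by importance to the view."""
    spec_ratio = measured / spec          # variance from the specified value
    mean_ratio = measured / group_mean    # variance above or below the mean
    raw = 0.5 * spec_ratio + 0.5 * mean_ratio
    return round(min(raw, 1.0) * max_score * weight, 1)

# Example: an illumination reading of 47 (arbitrary units) against a spec of 50
# and a group mean of 48, in a category weighted at full importance.
print(category_score(measured=47, spec=50, group_mean=48, weight=1.0))  # 4.8
```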

There are several aspects of binoculars measured and included in this report that I have not seen measured anywhere else. Some appear to me to have a significant impact on the performance of the binocular system. It is my opinion that these are very important to the overall measure of performance, and therefore they are included here. For instance, while we often see measures of exit pupil, sometimes actually measured and other times simply reported as aperture divided by magnification, few binocular studies go to any length to verify magnification and then verify effective aperture. This study does so. Also, many studies indicate that field sharpness has been checked, and it is sometimes reported as the outer 1/3 of the fov or from about 60% out, but few if any others actually measure distortion in the fov. This study does. None that I know of actually take measures of internal vignette for an off-axis beam, showing tilt in the optical axis and the percentage of rays actually reaching the exit pupil. And fewer still use limiting magnitude to attempt to confirm the effects of vignette and illumination.
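For readers who want the arithmetic: the nominal exit pupil is simply aperture divided by magnification, and the same relationship can be run in reverse, so a measured exit pupil combined with a verified magnification yields the effective aperture. A minimal sketch with made-up numbers:

```python
def exit_pupil(aperture_mm, magnification):
    """Nominal exit pupil diameter = aperture / magnification."""
    return aperture_mm / magnification

def effective_aperture(exit_pupil_mm, true_magnification):
    """Working backwards: measured exit pupil x verified true magnification."""
    return exit_pupil_mm * true_magnification

print(exit_pupil(50, 10))              # 5.0 mm for a nominal 10x50
print(effective_aperture(4.4, 10.2))   # ~44.9 mm effective aperture
```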

Binocular aperture was measured and verified by three different methods. Scores were penalized for binoculars that fell short of their specified aperture by more than 1-2mm. A binocular may score very well in many categories, but if it is supposed to be a 50mm aperture and the aperture measures only 44mm, it loses a lot of points in the aperture category and gets knocked down in overall rank.
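To put such a shortfall in perspective, light grasp scales with the square of the aperture, so the loss is larger than the difference in diameter suggests. A quick illustrative calculation:

```python
# Light grasp scales with aperture area, i.e. with diameter squared.
spec_mm, measured_mm = 50, 44
light_fraction = (measured_mm / spec_mm) ** 2
print(f"{light_fraction:.0%} of the specified light grasp")  # ~77%
```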

Often we read claims of field sharpness that are grossly exaggerated. There are actually very few binoculars that are "sharp to the edge", and nearly all of them are in the class of "benchmark" binoculars. In this report, field sharpness, the collective presentation of all aberrations across the outer fov, is actually definitively measured as to position and degree. Field sharpness scores benefit when the Afov is wide AND field sharpness is very good, two qualities that seldom appear together in a binocular. Quite often we see a very sharp fov in a binocular that has a near-orthoscopic view. That is much easier to do than it is to get that sharp fov in a wide-angle view. So in this report we may have two binoculars that both have equal field sharpness at 75-80% out, but one binocular accomplished it with a 64° Afov and the other with only a 52° Afov. The wider Afov would score much higher.

Some categories take into consideration several different measures before assessment for scoring that category. For instance, focus. To score focus, I considered ease of reaching and moving the focus dial, stability (did it ever move after I set focus), close focus distance, and whether either side failed to reach perfect focus. I had measured a great deal of data on slow focusing and fast focusing, but considering that slow is more desirable for astronomy and fast is more desirable for terrestrial use, I did not include it in the score. Another is eyecups. Are the eyecups comfortable, do they shield the eye well, do they provide the proper distance so there are no blackouts, do they permit seeing the entire fov, were there several settings, did they move once set? Eye relief is yet another. Not only is it important to have sufficient eye relief, but it is also very important for eyeglass wearers that the eye lens be recessed deep enough that eyeglasses do not touch either the metal rim or the lens. A few binoculars lost points for that.

Resolution is considered very important. I recorded three different measures for resolution: normal power resolution, 6x boosted power resolution, and normal power handheld resolution. Then I look not only at the base resolution in arcseconds, but also at apparent resolution, so binoculars of all different powers can be compared to each other. I put more weight on normal power resolution and much less weight on boosted resolution, simply because boosted values can never be achieved in actual use. While boosted resolution tells me something about the performance of the system, in practice it is normal power that gives the view.
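Apparent resolution here means the measured resolution scaled by magnification, which puts instruments of different power on a common footing. An illustrative calculation with made-up numbers:

```python
def apparent_resolution(base_arcsec, magnification):
    """Scale measured resolution by magnification so different powers compare."""
    return base_arcsec * magnification

print(apparent_resolution(5.5, 8))    # 44.0 arcsec apparent for an 8x
print(apparent_resolution(4.5, 10))   # 45.0 arcsec apparent for a 10x
```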

I tested binocular magnification. Although that value is not scored, it is very important to one method of testing aperture by measurement of the exit pupil, and it is of the utmost importance to measuring resolution. Comparative resolution values are all factored by the magnification at which the reading is obtained, and without a true measure of magnification, that comparison cannot be made.

Interestingly, for the testing of magnification, even the tester needed to be tested. The stated magnification of the small monocular used to test both magnification and boosted resolution could not be taken at face value. Indeed, it was actually higher than stated. After determining the actual magnification of the monocular tester, it was taped permanently in that position and has remained there ever since.

The scoring for the most part does not discriminate against smaller sizes in preference of larger sizes. However, size is important and not ignored completely, and consideration is taken to give some credit where larger size can result in better performance. For instance, a larger size would be expected to produce finer resolution or a deeper limiting magnitude. If all scoring were based on ultimate performance, all the largest binoculars would be at the top of the scores and all the smallest would be at the bottom. In categories where this would have occurred, the score is given some weight for overall performance and greater weight for performance relative to the binocular's size group. This seems to have effectively leveled the sizes, in that there are a number of 8x40/42s near the top of the scores.
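One plausible way to blend those two weights is sketched below. The 70/30 split and the function name are hypothetical, chosen only to illustrate the idea, not taken from the report.

```python
def blended_score(overall_score, size_group_score, group_weight=0.7):
    """Hypothetical blend: weight performance within the size group more heavily
    than raw overall performance, which is driven largely by aperture."""
    return group_weight * size_group_score + (1 - group_weight) * overall_score

# Example: an 8x42 that ranks mid-pack overall (3.2) but near the top of
# its own size group (4.6).
print(round(blended_score(3.2, 4.6), 2))  # 4.18
```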

For terrestrial use, I rated pincushion and depth of field. Strong pincushion gets a low rating. Good depth of field is almost always associated with significant field curvature. For astronomy use, viewers might be inclined to choose a binocular with a flat field, which would almost always be associated with a lower depth-of-field score. However, more important for astronomy is not just the influence of curvature, but the collective distortion of all aberrations across the fov, and this is scored in field sharpness.

Most of the optical scores are measured and reported objectively. There is little room for the influence of subjective input. Either the binocular can see the test object or it cannot. However, there is some room for subjective influence in some of the scoring, such as fit-and-feel type properties. For this reason, scores should not be taken as a definitive value specifically placing one model directly above or below another. Scores should be considered in view of the range in which a particular model ended up. It may certainly be reasonable that a particular model could move 1 or 2 points up or down. It would be highly unlikely that any binocular scored in this list would move 4-5 points up or down.


Finally, I'll add that I refrained for a long time from posting these scores, for several reasons. They include: adding missing pieces of data, refinement of measurement methods, refinement of the scoring method, retesting of models that seemed to be askew in the data (in very few instances did they change), disputes about measuring methods (some of those got worked out based on discussions during the 6-month test period), and criticism from the masses for not seeing favorites where they might hope they would end up. Well, this list represents the fairest process and most unbiased reporting that I could come up with. Results are what they are. Nothing in this list is purposely skewed. I hope you all find this information useful.

A few things I discovered throughout the course of this study:
8x40, 8x42 and 10x42 can perform very well for astronomy. These sizes also provide a very good cross-over choice for terrestrial use.
Robustness is desirable, but is not always an indicator of good performance.
Some roof prism binoculars in both 8x42 and 10x42 sizes perform very well for astronomy.
Roof prism models can be better; Porro prism models can be better. IMO, it is not the design, it's the quality.
Specifications can often be misleading.
Most quality is not apparent on the outside.
There is no one best size.
There are a lot of surprises along the way to discovery.

Legend
Build = build quality
Er = eye relief with and without glasses, eye lens depth
Cups = eye cups design and function, cups position to eye relief
Focus = focus reach, function, ease, and stability, close focus
Diop = diopter adjustment, ease of use, single and individual focus
Coat = coatings quality (reflectivity) for lenses and prisms
Pincu = pincushion, higher score for less pincushion
Dof = depth of field, lower dof often associated with flat field
Res = normal resolution and boosted resolution, USAF charts
Reshh = normal resolution hand held
Fov = true field of view and Apparent fov, +/- from specified
Sharp = Outer field sharpness, % fov usable
Vign = internal vignette, axial tilt, but not aperture stops
Illum = Illumination of the Exit Pupil, light meter test
LM = Limiting magnitude of faintest stars
Aper = reduction in aperture due to mis-sized baffles, prism stops and prism cutoff

Mech = Lower importance score of mechanical functions, build quality
Optic = Higher Importance scores of optical aspects and performance
Total = Total Score
% = percentage of maximum possible score
Cost = cheapest available cost I found to purchase these items.


Links to Small Binocular Tests and Pictures

edz

Attached Files



#2 Luigi

Luigi

    Fly Me to the Moon

  • *****
  • Posts: 5320
  • Joined: 03 Jul 2007
  • Loc: MA

Posted 02 March 2008 - 10:34 AM

Excellent work, and thanks for making your results public. Another excellent comparison was done at Cornell with respect to bird watching. It's at:

http://www.birds.cor.../Age_Binos.html

#3 ngc6475

ngc6475

    Fearless Spectator

  • *****
  • Posts: 5026
  • Joined: 02 Mar 2002
  • Loc: 38°21'N 120°55'W

Posted 02 March 2008 - 11:38 AM

Whew! What a job! Thanks for the excellent report, Ed! :bow:

#4 alins

alins

    Explorer 1

  • -----
  • Posts: 59
  • Joined: 23 Feb 2008
  • Loc: Seattle, WA

Posted 02 March 2008 - 01:10 PM

Thank you very much Ed. A lot of hard work went into this obviously.

A legend for the test report would be nice (e.g. what is "score er" or "score LM"). I am a newcomer as you know, so apologies if these are standard abbreviations understood by all.

Also what do the colors mean? (in the chart itself, the red and green, and in the bino list, the color grouping).

Finally, can you please release the raw numbers in a text or Excel file? People can then use these for their own (different) formulas and rankings.

Thank you again

#5 EdZ

EdZ

    Professor EdZ

  • *****
  • Posts: 18820
  • Joined: 15 Feb 2002
  • Loc: Cumberland, R I , USA42N71.4W

Posted 02 March 2008 - 01:58 PM

A legend for the test report would be nice

Also what do the colors mean?

Finally, can you please release the raw numbers in a text or Excel file? People can then use these for their own (different) formulas and rankings.

Thank you again


I'll provide a legend.

The master file has conditional formatting: values a certain amount above or below the mean trigger either red for poor or green for very good.

I will not release the raw data for people to generate their own formulas and rankings.

edz

#6 alins

alins

    Explorer 1

  • -----
  • Posts: 59
  • Joined: 23 Feb 2008
  • Loc: Seattle, WA

Posted 02 March 2008 - 02:03 PM

I will not release the raw data for people to generate their own formulas and rankings.


I did not mean the raw measurement numbers from your tests, just the numbers already released in the PDF file. For example somebody might want to rank based on resolution only. Having the numbers already released means you can rank them easily in Excel in different ways.

#7 EdZ

EdZ

    Professor EdZ

  • *****
  • Posts: 18820
  • Joined: 15 Feb 2002
  • Loc: Cumberland, R I , USA42N71.4W

Posted 02 March 2008 - 02:18 PM

All of this information is pre-released here as part of the contents for a CN Report. Without the raw data, people have a tendency to attempt to manipulate data in a way that sometimes produces false results. I reserve the rights to the data and will publish my own results.

All the data is there for anyone to easily see the results you request.

edz

#8 alins

alins

    Explorer 1

  • -----
  • Posts: 59
  • Joined: 23 Feb 2008
  • Loc: Seattle, WA

Posted 02 March 2008 - 02:40 PM

In my humble opinion, the benefits of releasing the full data far outweigh any negative consequences of a few people manipulating them to produce false results. Such open discourse and release of information is in fact the norm in scientific and technical circles.

Now, I fully understand if you are holding the data back for a to-be-published official CN report, and the data released so far are just meant as a preview and point of discussion in an informal forum. I hope that you do release the full data in that future report.

My 2 cents and last words. Thanks again.

#9 ronharper

ronharper

    Vanguard

  • *****
  • Posts: 2205
  • Joined: 14 Feb 2006

Posted 03 March 2008 - 01:26 AM

Ed,
Would it seem unwarranted to infer that you rather enjoy looking through binoculars? Us too, for sure, but this report is something else again.

The amount of critical observation, and quantification, contained in this report is awesome. We who know from your astronomical reports that your vision is keen, and know your passion for accuracy, will regard this as the state of the art.

While your report is a delight for today's consumer and enthusiast, I think it could achieve more. Long after these binos have ceased production, and the eyes of all of us present here have faded with age, your methods and obvious love and care for this pursuit should live on among those who are stimulated by the powers of two-eyed optical aid. There is a danger, however, that it will be lost in the increasing maze of web info. There is so very much said on the web concerning binoculars. But the number of books is probably less than twenty. You know you really ought to think about a book. Sorry, and just when you wanted to relax!

And, when do we expect the report on large binoculars?

Please get to work, OK?
Ron

#10 avare

avare

    Sputnik

  • -----
  • Posts: 34
  • Joined: 07 Aug 2007

Posted 03 March 2008 - 01:46 AM

I don't know what to write. Thank you, Ed.

Andre

#11 WRose

WRose

    Apollo

  • *****
  • Posts: 1377
  • Joined: 08 Jul 2005
  • Loc: Colorado, USA

Posted 03 March 2008 - 02:43 AM

Hear, hear, Ron - I agree wholeheartedly!

Superb report Ed, thank you! :goodjob: (great job actually :bow:)

#12 Dave Hederich

Dave Hederich

    Ranger 4

  • -----
  • Posts: 396
  • Joined: 12 Sep 2007

Posted 03 March 2008 - 09:28 AM

A book wouldn't really be as difficult and time-consuming for Ed if he simply compiled and edited all of the text he has written to date. It would still require a good deal of effort, but nowhere near what it would take to write a book from scratch.

#13 dOP

dOP

    Explorer 1

  • -----
  • Posts: 78
  • Joined: 07 Dec 2007
  • Loc: Portugal

Posted 03 March 2008 - 10:44 AM

Well, this is a great amount of interesting work. Well done.

I just have one observation to make:

Looking at the results, it seems to me that there are some inconsistencies. Let me give one example:

Overall, the Nikon AE 12x50 scores 79% while the Nikon SE 12x50 scores 82%. That's only a 3% difference between these two models, which definitely doesn't do justice to the perceived difference that I (and probably the majority of the members) see between them. I could be wrong, but in fact the retail price supports what I'm saying: there's a $750 gap between them, the AE costs $158 and the SE $900, so it looks like Nikon agrees with me.

Keep in mind that I am, by no means, trying to raise the least amount of doubt about your measurements; in fact, I hold them in the highest regard. I think this is all about the weight you decided to give to each of the individual scores. The SE scores 5.0 vs. the AE's 4.0 in LM, 4.4 vs. the AE's 4.0 in resolution, 4.4 vs. the AE's 2.8 in sharpness, and 3.0 vs. the AE's 2.0 in coatings. These are the most important optical qualities (at least to me), and yet they're completely offset by the slightly worse diopter adjustment (SE's 2.5 vs. AE's 3.0), eye relief (SE's 2.5 vs. AE's 3.0), or even eye cups (SE's 2.0 vs. AE's 2.5).

That $750 difference is precisely in the coatings, resolution and sharpness! (and some other things)

#14 EdZ

EdZ

    Professor EdZ

  • *****
  • Posts: 18820
  • Joined: 15 Feb 2002
  • Loc: Cumberland, R I , USA42N71.4W

Posted 03 March 2008 - 12:02 PM

That $750 difference is precisely in the coatings, resolution and sharpness! (and some other things)



You may find it interesting that the Nikon SE 12x50 just barely outperforms the Nikon AE 12x50 in resolution tests and internal vignette. The Nikon AE outperforms in the illumination test. Yet the SE has better contrast.

The SE is notorious for its overly long eye relief, and this has been noted by any number of users over the years. I find eye placement very difficult in the SE, but placement is no issue at all in the AE. Yes, the SE is one of those few binoculars that can truly be considered sharp to the edge, but the AE is not very far behind.

The AE performs nearly as well in Limiting Magnitude, but not quite as well in low contrast objects.

Had you selected an average-performing binocular to compare what you consider inconsistencies and shown a near-equivalent total score, I might agree. But the Nikon AE 12x50, outside of the benchmark binoculars, is the finest performing instrument in this list. However, you are correct that inconsistencies are present, so it is worth repeating this, which I posted in my explanation of the report:

Most of the optical scores are measured and reported objectively. There is little room for the influence of subjective input. Either the binocular can see the test object or it cannot. However, there is some room for subjective influence in some of the scoring, such as fit-and-feel type properties. For this reason, scores should not be taken as a definitive value specifically placing one model directly above or below another. Scores should be considered in view of the range in which a particular model ended up. It may certainly be reasonable that a particular model could move 1 or 2 points up or down. It would be highly unlikely that any binocular scored in this list would move 4-5 points up or down.


Finally, let me add that this report is generated from approx. 100 pages of notes, nearly 3000 bits of input data, nearly all of it measured, and nearly 2000 bits of output, some manually scored, but most calculated. The total number of observations far outnumbers the number of input data points. I'm sure there will be more occasion to point out more inconsistencies. Frankly, they can't possibly all be eliminated.

edz

#15 Mark9473

Mark9473

    Cosmos

  • *****
  • Posts: 8554
  • Joined: 21 Jul 2005
  • Loc: 51°N 4°E

Posted 03 March 2008 - 02:05 PM

That is a great result, EdZ. The table format works just fine. A superb resource - too bad I'm not in the market for a 10x binocular.

Just out of curiosity I looked up again the ranking produced by the Polish binocular reviewers who used to participate here in the past:
http://www.optyczne....etek-10x50.html
Interesting to compare...

#16 dOP

dOP

    Explorer 1

  • -----
  • Posts: 78
  • Joined: 07 Dec 2007
  • Loc: Portugal

Posted 03 March 2008 - 02:11 PM

Had you selected an average performing binocular to compare what you consider as inconsistences and shown near equivalent total scoring, I might agree. But the Nikon AE 12x50, outside of the benchmark binoculars, is the finest performing instrument in this list.



Yes, there are other examples. I chose these two because they're both from Nikon, same aperture and magnification.

Anyway, I didn't know that the "12x50 Action Extreme" was such a good performer. Considering the price, it looks like it is the best buy in your list.

#17 EdZ

EdZ

    Professor EdZ

  • *****
  • Posts: 18820
  • Joined: 15 Feb 2002
  • Loc: Cumberland, R I , USA42N71.4W

Posted 03 March 2008 - 03:29 PM

That is a great result, EdZ. The table format works just fine. A superb resource - too bad I'm not in the market for a 10x binocular.

Just out of curiosity I looked up again the ranking produced by the Polish binocular reviewers who used to participate here in the past:
http://www.optyczne....etek-10x50.html
Interesting to compare...


Very interesting. Four 10x50s, the Nikon Action Extreme, Pentax PCF WP, Leupold and Bushnell Legend, are ranked very similarly in both our studies.

They rank the Legend considerably lower than I did. However, it appears they may be taking a double reduction for the intrusion of the prisms into the light path: they take points off in both their exit pupil score AND their aperture score. I simply calculate the area lost and deduct it to get effective aperture. So, given that, our assessment of the problem in the Bushnell is identical, but I deduct points only once.
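For a sense of how that single deduction works, the area clipped by the prisms can be converted back into an equivalent circular aperture. The numbers below are purely illustrative, not values from either study:

```python
import math

def effective_aperture_from_loss(nominal_mm, clipped_area_mm2):
    """Convert the remaining clear area back to an equivalent circular aperture."""
    nominal_area = math.pi * (nominal_mm / 2) ** 2
    clear_area = nominal_area - clipped_area_mm2
    return 2 * math.sqrt(clear_area / math.pi)

# Example: a nominal 50 mm objective with ~300 mm^2 clipped by the prisms.
print(round(effective_aperture_from_loss(50, 300), 1))  # ~46.0 mm effective
```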

Also, as I did, they rank the Fujinon 10x50 extremely high, exceeded only by a Swarovski SLC.

edz

#18 hallelujah

hallelujah

    Fly Me to the Moon

  • *****
  • Posts: 5026
  • Joined: 14 Jul 2006
  • Loc: North Star over Colorado

Posted 03 March 2008 - 04:21 PM


Also, as I did, they rank the Fujinon 10x50 extremely high, exceeded only by a Swarovski SLC.

edz


According to the point system both were rated the same; the Swarovski had the higher cost (cena, i.e. price).

#19 EdZ

EdZ

    Professor EdZ

  • *****
  • Posts: 18820
  • Joined: 15 Feb 2002
  • Loc: Cumberland, R I , USA42N71.4W

Posted 03 March 2008 - 05:08 PM



Also, as I did, they rank the Fujinon 10x50 extremely high, exceeded only by a Swarovski SLC.

edz


According to the point system both were rated the same, the Swarovski had the higher cost (cena).


A good example that a binocular which costs only a third as much can perform equally well.

#20 Rick

Rick

    Gemini

  • *****
  • Posts: 3291
  • Joined: 12 Apr 2005
  • Loc: Tokyo, Japan

Posted 03 March 2008 - 06:50 PM

Great job Ed as always :waytogo:

clear skies,
Rick

#21 hallelujah

hallelujah

    Fly Me to the Moon

  • *****
  • Posts: 5026
  • Joined: 14 Jul 2006
  • Loc: North Star over Colorado

Posted 03 March 2008 - 09:13 PM

a good example that a binocular which cost only 1/3rd as much can perform equally as well.



Yes, it is a very good example, especially when we consider the fact that a $660 Porro can "hold its own" against a top-notch $1,800 roof prism like the Swarovski SLC. :waytogo:

I mention that because there are some who think that Porro prism binoculars are doomed to extinction, just like the dinosaurs.

#22 Eliscope

Eliscope

    Lift Off

  • -----
  • Posts: 1
  • Joined: 28 Jan 2008

Posted 12 May 2008 - 10:01 PM

Ed:

I'd love to see a version of your results tabulation that could be manipulated by the user, in terms of which scores are used in the final % ranking. Some binocular characteristics, of course, are more important to some people and less important to others. People in the market for binoculars could see how the rankings change when they eliminate from consideration one or more things they feel are less critical to them. Thanks again for all your work.

#23 backwoody

backwoody

    Apollo

  • -----
  • Posts: 1476
  • Joined: 08 Jan 2007
  • Loc: Idaho USA

Posted 12 May 2008 - 10:25 PM

Bravo, EdZ.

clear skies and steady hands,

#24 EdZ

EdZ

    Professor EdZ

  • *****
  • Posts: 18820
  • Joined: 15 Feb 2002
  • Loc: Cumberland, R I , USA42N71.4W

Posted 13 May 2008 - 05:29 AM

I made some changes to the scoring system since this was published. You are correct that there are aspects that are more important to either terrestrial viewing or astronomical viewing, but not important to both. There are some that are detrimental to one or the other. For instance, field curvature is beneficial to terrestrial viewing but detrimental to astro viewing. Pincushion is a benefit to terrestrial viewing but has absolutely no effect on astro viewing.

These changes are incorporated into separate terrestrial score and astro score in the documents for the final article. While it does make a slight difference in specific ranking, it makes almost no difference in general placement on the chart. The top binoculars are still all near the top and the bottom binoculars are still all near the bottom.

edz

#25 Wes James

Wes James

    Fly Me to the Moon

  • *****
  • Posts: 5504
  • Joined: 12 Apr 2006

Posted 13 May 2008 - 03:34 PM

EdZ-
Your work is as close to computer generated accuracy as one could hope to find in comparing so many items where so many things seem so subjective. To me, optics are as personal as any object- yet you manage to consistently break it down to numbers and figures that are as objective as can be. I don't think I've ever read anyone who remains so machine-like in their evaluations of ANYTHING! :bow:





