CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.

The flashlight Test for Aperture - Illustrated

This topic has been archived. This means that you cannot reply to this topic.
420 replies to this topic

#401 Asbytec

Asbytec

    Guy in a furry hat

  • *****
  • Posts: 16,142
  • Joined: 08 Aug 2007

Posted 15 September 2013 - 06:24 AM

Guys, there has got to be a way to settle the question of the test's reliability. I would propose setting up an experiment, based on the mathematical model, that is not only guaranteed to vignette the effective aperture but to do so in a very predictable way. Push a laser down the exit pupil and see if the results match predictions.

I performed the test as a newbie, as carefully as I could, and found it to be consistent for reasons Frank said it would. It was also consistent with other measurable tests to the extent those measurements could be accurate. It seems to be consistent with visual inspection of the internal baffles and stops (more qualitative evaluation.) And the mathematical model produced similarly consistent results. (By consistent, I mean all measures and models gave an approximate effective aperture to within a couple of mm, plus or minus, between 140 and 143mm.)

Glenn, as I understand it, tested the close-to-focus issue and found that shadow to be sharp, and has undoubtedly done the test numerous times without stumbling over a valid reason to question it. I don't remember Frank's test results, but we can trust they were consistent with the model he described above.

So, we have a model, a way to make a prediction, and a way to test those predictions. If that method to produce an effective aperture cannot produce results consistent with the mathematical model (based in optical theory), then it doesn't work. So far, the preponderance of the evidence seems to suggest the test is valid to within a negligible degree of error.

Has anyone else, besides maybe Ed Holland a year or so ago, performed this test and has results to share? Otherwise, the banter can continue until people tire of the inconclusiveness, tempers erupt, or we begin dreaming of beer suds instead. It might be time to draw a conclusion and wrap this up rather than repeat the same arguments without achieving something near universal consensus.

My conclusion is, notwithstanding Frank's valid points, in accord with personal experience and the learned testimony of others: a carefully performed laser test is robust enough to be of practical use.
 

#402 jpcannavo

jpcannavo

    Apollo

  • *****
  • Posts: 1,034
  • Joined: 21 Feb 2005

Posted 15 September 2013 - 07:51 AM

Has anyone else, besides maybe Ed Holland a year or so ago, performed this test and has results to share? Otherwise, the banter can continue until people tire of the inconclusiveness, tempers erupt, or we begin dreaming of beer suds instead. It might be time to draw a conclusion and wrap this up rather than repeat the same arguments without achieving something near universal consensus.


The two issues I see are the thread itself and the test. The thread has evolved into a state where it's basically an ongoing iteration of "A is the case, no B is the case, no A is the case, no B is the case, no A is the case..." Not that there is a problem with that (but it is starting to remind me a bit of political discourse!). And then there is the test itself. At this point I would offer this: reconsider the content of this thread in terms of broad brush strokes (i.e. the kinds of arguments/points that are being raised), but now substitute, say, the barlowed laser test for the flashlight test. What conclusions would one now draw with respect to the manner in which a community, such as this, should deal with the introduction of a new method?
 

#403 Dave O

Dave O

    Viking 1

  • *****
  • Posts: 624
  • Joined: 21 Dec 2011

Posted 15 September 2013 - 07:55 AM

Frank,
I seem to recall that this report on the C11's seeming full aperture performance with a 2" diagonal, yet reduced aperture with the 1.25" diagonal, did not fully disclose all pertinent details, such as a check on beam taper. Before gleefully embracing one contradictory test result as affirmation of one's prejudices, best to first ascertain that operator error is not the culprit.


Well, I think that this may be Frank's point. The 'instructions' for performing the test (as given in the first post) do not ensure that the initial conditions required for an accurate measurement using this test will be met, perhaps leading to the inaccurate result as measured by the operator.

One failed test following the given procedure is proof that the test itself requires further modification to prevent such failures. Without the 'theory' to understand that what they were reporting made no real sense, they naturally assumed that what they measured was a reduced working aperture resulting from their smaller diagonal.
 

#404 Asbytec

Asbytec

    Guy in a furry hat

  • *****
  • Posts: 16,142
  • Joined: 08 Aug 2007

Posted 15 September 2013 - 08:15 AM

What conclusions would one now draw with respect to the manner in which a community, such as this, should deal with the introduction of a new method?

Well, if it's relatively easy, reliable, and accurate when done with care to minimize operator error, we should embrace it - if one has cause to use such a test. Such was my case: questions about SCT vignetting at longer back focus, the size of a Newt diagonal, or even a home-built refractor baffle all apply.

Much of this can be done with math if distances are known, and often they are. For example, calculating the size of a Newt secondary depends on distances we either measure or set. If not, it seems plausible a laser can do the job fairly well, if not very well.
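A quick sketch of that secondary-size arithmetic, using the standard minimum-minor-axis formula s = i + l(D - i)/f. The example numbers are made up for illustration, not figures from this thread:

```python
def min_secondary_minor_axis(D, f, l, i):
    """Minimum Newtonian secondary minor axis (mm) for a fully illuminated
    field of diameter i, given aperture D, focal length f, and the
    secondary-to-focal-plane distance l (all in mm)."""
    return i + l * (D - i) / f

# Hypothetical example: 200mm f/6 Newt (f = 1200mm), focal plane 250mm from
# the secondary, 15mm fully illuminated field.
print(round(min_secondary_minor_axis(200, 1200, 250, 15), 1))  # -> 53.5
```

So the distances really do determine the answer directly, which is the point: if you can measure them, you can compute the vignetting without a laser at all.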

I dunno, time to move on, I think. I appreciate the debate; a lot of good information has been presented (Vla, Glenn, Frank, et al.). Really, we should be nearing a point of general agreement. Seems that's not going to happen, so little new will come of he-said/he-said. I can only draw on what was said and my own results.

Besides, I'm out of popcorn. :) You?
 

#405 jpcannavo

jpcannavo

    Apollo

  • *****
  • Posts: 1,034
  • Joined: 21 Feb 2005

Posted 15 September 2013 - 08:25 AM

Frank,
I seem to recall that this report on the C11's seeming full aperture performance with a 2" diagonal, yet reduced aperture with the 1.25" diagonal, did not fully disclose all pertinent details, such as a check on beam taper. Before gleefully embracing one contradictory test result as affirmation of one's prejudices, best to first ascertain that operator error is not the culprit.


Well, I think that this may be Frank's point. The 'instructions' for performing the test (as given in the first post) do not ensure that the initial conditions required for an accurate measurement using this test will be met; perhaps leading to the inaccurate result as measured by the operator.

One failed test following the given procedure, is proof that the test itself requires further modification to prevent such failures. Without the 'theory' to understand that what they were reporting made no real sense, they naturally assumed that what they measured was a reduced working aperture resulting from their smaller diagonal.


Fine.
But that can all evolve in the context of ongoing usage in the field. Insisting that everything new (as opposed to those things that bear on particularly high-stakes contexts) must emerge fully elaborated can have the effect of straitjacketing progress. What if Dobson's first scopes had been resisted by this community, with endless naysayers essentially insisting on a level of elaboration/perfection that implicitly called for everything from truss poles to flotation cells to equatorial platforms, before they would consider any field use?

Without digressing too much along lines that bear on other fields - such as my own - progress in a culture requires a healthy balance of intrepid innovative spirit and stringent methodical control. Too much of the latter and progress gets straitjacketed into a collection of nonstarters; too much of the former and sloppy, dangerous results emerge as standard practice. Remember, we are talking about a context where nylon slings are proposed for suspending Newtonian optics, as opposed to keeping people from flying through windshields.
 

#406 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 9,117
  • Joined: 12 Oct 2007

Posted 15 September 2013 - 08:26 AM

My personal need for any sense of forward progress has been the same from day one. I need to see a model for the measurement that includes the bare minimum of relevant parameters and systematic errors. These are:

Parameters:
Diameter
Focal Length
Size of obstruction
Distance of obstruction from focus

Errors (these are inter-related):
Shadow diameter measurement
Taper of the beam
Error in measuring distance from focus

The model needs to be in some form that allows a calculation of the corresponding error in the entrance pupil diameter.

In my case I have mostly been ignoring error in the shadow measurement itself, and instead just asking for tolerance on the allowed taper. The last value I heard on that was 2', which is quite big.

But you could also provide a separate model, which is the exact procedure for "collimating" the beam - and its corresponding errors. That would allow an estimation of the resulting taper based on the model for the collimation procedure.

It's just not allowed to pull errors out of the air and "confirm" them by isolated measurements. That "confirmation" doesn't tell anything about how well it will work in a different scenario, or as performed by a different person.
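To make the request concrete, here is one possible first-order model of the kind being asked for. It is an assumed sketch, not Frank's actual model: take the inferred aperture as D = d - s*theta (shadow diameter d at throw distance s, full-angle taper theta) and combine the independent error terms in quadrature. The numeric tolerances are invented for illustration:

```python
import math

def aperture_error(sigma_d, s, sigma_theta, theta, sigma_s):
    """1-sigma error in the inferred aperture D = d - s*theta, combining the
    shadow-measurement error, the taper uncertainty over the throw, and the
    throw-distance uncertainty in quadrature (first-order propagation)."""
    return math.sqrt(sigma_d**2 + (s * sigma_theta)**2 + (theta * sigma_s)**2)

# Invented example: 0.5mm ruler error, 3m throw, taper known only to
# 2 arcminutes (5.8e-4 rad), nominal taper zero, throw known to 10mm.
err = aperture_error(sigma_d=0.5, s=3000.0, sigma_theta=5.8e-4,
                     theta=0.0, sigma_s=10.0)
print(round(err, 2))  # -> 1.81 (mm); the taper term, 3000*5.8e-4, dominates
```

Even this toy version shows the structure of the argument: with a 2' taper allowance, the taper term swamps the ruler error at any long throw.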

Frank
 

#407 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 9,117
  • Joined: 12 Oct 2007

Posted 15 September 2013 - 08:39 AM

What is the accuracy in a value for magnification? 1%? Does it depend on eyepiece focal length? Telescope focal length?

M = f/e

dm = 1/e*df - f/e^2*de

dm/M = df/f - de/e

This shows how errors in the two values affect the result. The error in mag. doesn't depend just on one value or the other - but both.

Interestingly, the fractional change in mag. is equally dependent on the fractional error in both focal lengths.

None of this insight into the expected error could be derived without a model and error propagation, so without it you can't really say what the error in mag. is except in isolated test cases where somehow you know the exact answer.
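The same propagation as a quick numeric check. The worst-case form adds the two fractional errors; the example tolerances are made up:

```python
def mag_fractional_error(f, df, e, de):
    """Worst-case fractional magnification error for M = f/e:
    |dM/M| = |df/f| + |de/e| when the sign of each error is unknown."""
    return abs(df / f) + abs(de / e)

# Invented example: 1200mm objective known to +/-12mm (1%),
# 10mm eyepiece known to +/-0.2mm (2%).
print(round(100 * mag_fractional_error(1200, 12, 10, 0.2), 1))  # -> 3.0 (%)
```

Which is Frank's point in numbers: neither focal length dominates by itself; both fractional errors feed through at full weight.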

Frank
 

#408 GlennLeDrew

GlennLeDrew

    Hubble

  • *****
  • topic starter
  • Posts: 15,849
  • Joined: 17 Jun 2008

Posted 15 September 2013 - 08:44 AM

A 2 arcminute beam taper is 'quite big'??
 

#409 jpcannavo

jpcannavo

    Apollo

  • *****
  • Posts: 1,034
  • Joined: 21 Feb 2005

Posted 15 September 2013 - 08:55 AM

My personal need for any sense of forward progress has been the same from day one. I need to see a model for the measurement that includes the bare minimum of relevant parameters and systematic errors. These are:

Parameters:
Diameter
Focal Length
Size of obstruction
Distance of obstruction from focus

Errors (these are inter-related):
Shadow diameter measurement
Taper of the beam
Error in measuring distance from focus

The model needs to be in some form that allows a calculation of the corresponding error in the entrance pupil diameter.

In my case I have mostly been ignoring error in the shadow measurement itself, and instead just asking for tolerance on the allowed taper. The last value I heard on that was 2', which is quite big.

But you could also provide a separate model, which is the exact procedure for "collimating" the beam - and its corresponding errors. That would allow an estimation of the resulting taper based on the model for the collimation procedure.

It's just not allowed to pull errors out of the air and "confirm" them by isolated measurements. That "confirmation" doesn't tell anything about how well it will work in a different scenario, or as performed by a different person.

Frank

Frank
There is a distinction here that must be made clear. I don't object to the parameters you describe as useful endpoints. What I object to is an implicit claim that reduces the test to a nonstarter in their absence. The notion that the test must first come with a "spec sheet" - including everything from sensitivity to negative predictive value - is, I feel, frankly misguided.

I am also curious about your usage of the word "need" above. In exactly what sense is it being used? That is: all that you are insisting on is needed so that...?
 

#410 GlennLeDrew

GlennLeDrew

    Hubble

  • *****
  • topic starter
  • Posts: 15,849
  • Joined: 17 Jun 2008

Posted 15 September 2013 - 09:02 AM

This topic would seem to have long since run its course. Given that a request for contributions in the way of data has resulted in essentially nothing, it's evident that the only interest in the subject is as a forum for the dissemination of theory. Without data, all the blather in the world must be taken with the proverbial grain of salt. Until meaningful data appear here, I will enjoy a sabbatical.
 

#411 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 9,117
  • Joined: 12 Oct 2007

Posted 15 September 2013 - 09:27 AM

I am also curious about your usage of the word "need" above. In exactly what sense is it being used. That is: All that you are insisting on is needed so that...?



From the very beginning, I have been addressing the fact that an error of 1mm is stated for the accuracy of the test. It is not my test and it is not my statement of error. If you state an expected error, and if you state that the test has general application, then you are obligated to show your homework that the error has some basis. The only way I know to do that is with a model and with error propagation - which is standard and required practice in experimental science. Anyone who has said something about how science or experiments work should be fully on my side in requesting this.

For the people who think this test is fine and good enough - is it good enough for the maksutov in my first question? Let's say with an aperture 500mm from focus? And let's say with Glenn's reaffirmed taper tolerance of 2'?

One answer some people might have is: you have no idea until you have done the experiment. In some sense that is OK - but then they have to say the test should not be regarded as reliable in a general situation, which makes it somewhat useless.

For those who feel strongly they can extrapolate success with an 8" sct and 120mm from focus, to this system with a 2' beam taper - can you tell me how the taper would affect the measurement?

This is all getting back to my very first question - which has never really been addressed except by me.

There is a separate pedagogical aspect of this that I think both Glenn and I share - which is to convey some understanding of the entrance pupil. The primary enlightening thing I would want people to know is that it has a clear definition in optics, and it has both a size and location that are very important. I'm not sure how many people got that message - but you can read about it in any textbook.

Frank
 

#412 wh48gs

wh48gs

    Surveyor 1

  • *****
  • Posts: 1,840
  • Joined: 02 Mar 2007

Posted 15 September 2013 - 02:40 PM

Consider the canonical example I introduced in my first post in this thread: a 500mm (aperture) f/15 maksutov. Place the stop some "comfortable" distance from focus such as 500mm. Then determine the distance to the entrance pupil. Furthermore, since there is a test of collimation required, by making sure the shadow size is constant over a long throw, you must measure it both in front of the telescope, and again very far away - far enough to confirm there is minimal taper to alter the shadow size. That even increases the distance to the pupil - and any alteration of the diffraction effects over that throw would make it hard to confirm the beam is "collimated" and the same size at two points. This is where the 25% vs. 50% geometrical edge could bias both the collimation assessment, and the final measurement.



That is not needed if the setup starts with a telescope focused at infinity and collimated light entering the eye lens of the ep. A 1/100 lens-f.l. axial shift in the location of the ep-formed point source (Airy pattern) would produce an exit beam at the objective converging or diverging as if coming from an object 100 f.l. away. In this particular case, that would produce a 1% change in the beam width at 7.5m in front of the objective. Not very practical, and not needed, since it is pretty much impossible to cause anything remotely close to a 7.5mm shift of the ep front focus in such a setup (assuming reasonable ep f.l., generally less than 20mm).

To the other extreme, an 80mm f/5 refractor will have a 100 lens-f.l. convergence/divergence factor of the beam projected back through the lens with as little as a 0.4mm shift in the axial position of the point source from the infinity focus. It could be measured at a few lens focal lengths in front of it. But even for that, the light falling onto the ep would have to be as if spreading from 25 f.l. away for a 10mm ep (about 10 inches). Not even close to collimated.
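The relation behind these numbers is Newton's longitudinal equation in its small-shift form: a source displaced by delta from the focal plane yields a beam converging or diverging as if from a distance L ~ f**2/delta. A sketch of that relation, assuming a shift of f/100 = 75mm for the 500mm f/15 Mak discussed earlier (the shift value is my assumption for illustration):

```python
def apparent_source_distance(f, delta):
    """Apparent object distance (same units as f) for an axial source shift
    delta from the focal plane: Newton's relation, small-shift form."""
    return f**2 / delta

def beam_width_change(f, delta, s):
    """Fractional beam-width change at distance s in front of the objective."""
    return s / apparent_source_distance(f, delta)

# 500mm f/15 Mak (f = 7500mm): a 75mm (= f/100) shift puts the apparent
# source at 100 focal lengths, i.e. a 1% width change 7.5m out.
print(round(100 * beam_width_change(7500.0, 75.0, 7500.0), 2))  # -> 1.0 (%)
```

This makes the sensitivity explicit: the longer the focal length, the larger the axial shift needed to produce any measurable taper over a practical throw.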

If one wants to make sure the setup is accurate in this respect, why not simply point the telescope at a distant object right after the test?

Since the shadow is round and not planar it certainly isn't a direct match to the semi-infinite plane - but for a fairly large aperture over a long throw I expect it to be in the ballpark of the *scale* of the blurring of the diffraction edge.


Why "shadow"? Light beam projection is light; an opaque obscuration produces shadow.

In reality, aberration of the pupil may completely dominate other sources of error - I'm not sure. It is certainly another source of error in an imperfect optical system expected to create a crisp and magnified shadow image a long distance away.



All pupil aberrations are dependent on the height in the image plane. As long as we remain on axis, they're all zero. Geometrical pupils are not relevant. The only "pupil" we're interested in is the projection of replicated beam onto the front telescope surface, where all possible aberrations are dwarfed by the defocus error.

Vla
 

#413 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 9,117
  • Joined: 12 Oct 2007

Posted 15 September 2013 - 03:57 PM

Ironically I may be one of the more experimentally minded people in this thread. I don't think in terms of "collimated" - I think in terms of nearly collimated - to within an experimental tolerance. "Collimated" is only a theoretical ideal on paper. Glenn has stated that 2 arc-minutes collimation is an acceptable tolerance. So what are the implications for the Mak system I asked about on day one? What if the true entrance pupil is 490mm - just under the 500mm front opening?

As for pupil aberrations - I have been trying to step slowly from pure Gaussian imaging, to Gaussian imaging with diffraction - and then at least a mention of aberrations of the pupil. Any time you have a real system doing some kind of imaging with light moving through human-made glass surfaces - there will be aberrations.

It is only a crisp shadow of the exact size of the entrance pupil - on paper. A realistic measurement will involve non-negligible errors that end up playing a bigger role when the focal length is long and the pupil distance is large.

If the pupil distance is very short - all of this stuff is negligible. That is the case for wrong-sized binoc. objectives, and even a maksutov with the stop at the primary. But for a C11 with possible stop in the diagonal, or the 500mm mak I asked about - they can be quite large for something as small as a 2' taper.

Frank
 

#414 wh48gs

wh48gs

    Surveyor 1

  • *****
  • Posts: 1,840
  • Joined: 02 Mar 2007

Posted 16 September 2013 - 02:02 PM

So what are the implications for the Mak system I asked about on day one? What if the true entrance pupil is 490mm - just under the 500mm front opening?



With a point-like source at the focus emitting light backwards through the system, it will stop down the slightly converging cone from the primary, and the true entrance aperture will be projected onto the front meniscus surface (it may be a bit tricky to separate from up to a few mm wider beam projection onto the inner meniscus surface; don't have a Mak, so can't tell).

As long as the point-like source is at the focus, the only thing that matters is the physics of the diverging beam. For any given aperture restriction, it matters little where the stopping baffle is located. It is the energy field configuration after it that produces the final beam projection, and that configuration is not affected by the baffle location (it's illustrated on my previous attachment). It will be a grossly defocused PSF of the source, with the effective f-number determined by the stop.

This attachment illustrates how defocused PSF behaves with increasing defocus (generated by Suiter's Aperture; horizontal scale is in units of LambdaF). For relatively small (in this context) defocus values, some energy spills over the edge of the geometric image radius, which is given by 4W, in units of LambdaF, W being the p-v wavefront error of defocus. As the defocus error increases, this energy is gradually pulled back into the geometric blur. At about 80 waves defocus, and larger, the two are identical.

Secondary to it, the relative width of the zone of decreasing intensity toward the edge is shrinking and becoming steeper, making the edge better defined (note that this is logarithmic intensity scale, so the nominal intensity decreases much faster, but what is perceived by eye is closer to the logarithmic line). Since the defocus error at the front surface will be always much larger than 100 waves, we can expect a true-to-its-geometric size beam projection with well defined edges.

This is in agreement with what the Fresnel diffraction imaging concept in Wyant's paper predicts (the only difference is that it assumes point source of diverging light, not an Airy pattern, but it can be neglected). The farther away from source opening, the more Fresnel zones it contains, and the closer is diffraction image in the plane of observation to the beam's geometric projection.
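A quick numeric check of the "always much larger than 100 waves" claim, using the standard longitudinal-defocus relation W = delta/(8*lambda*F**2) in waves (lambda = 550nm assumed):

```python
def defocus_waves(delta_mm, F, lam_mm=0.00055):
    """P-V defocus wavefront error, in waves, for an observation plane a
    distance delta_mm from focus in a system of focal ratio F."""
    return delta_mm / (8.0 * lam_mm * F**2)

# 500mm f/15 Mak: front surface roughly one focal length (7500mm) from focus.
print(round(defocus_waves(7500.0, 15.0)))  # -> 7576 waves, far beyond 100
```

So by Vla's own criterion (geometric and diffraction blur merging above ~80 waves), the projection onto the front surface is deep in the geometric regime.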

Vla

Attached Thumbnails

  • 6084121-defo.PNG

 

#415 GlennLeDrew

GlennLeDrew

    Hubble

  • *****
  • topic starter
  • Posts: 15,849
  • Joined: 17 Jun 2008

Posted 16 September 2013 - 03:54 PM

As Vla and I are saying, the location of the entrance pupil is irrelevant in this test. If beam taper is brought to an acceptably small amount, the beam diameter at the last optical surface upon emergence is the effective aperture.

I've already found that for a 2,000mm f.l. system, an obstructor located from the focus at a distance of only 5% of the focal length still results in a beam measurable to within about 1mm, or to about 99.5% accuracy.

I would say a 500mm aperture Mak is measurable to 99% accuracy.
 

#416 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 9,117
  • Joined: 12 Oct 2007

Posted 16 September 2013 - 04:46 PM

Fresnel diffraction is a secondary effect, Glenn. I have only mentioned diffraction as something that - by itself - blows your original 1mm stated accuracy.

Please show your model for the size of the shadow as a function of beam taper and location of the aperture stop near focus - assuming only geometric optics. If you want to include diffraction or anything else - that is fine - but you have already allowed 2' taper as acceptable - so you would need to model that.

What is the equation you use to state 99.5% as the expected accuracy in a very general sense - including my mak with a stop 500mm from focus, and a beam taper with your accuracy of 2'?

Can you draw a diagram that shows how such a taper affects the measurement? I have drawn many diagrams to convey these ideas - whereas you only show diagrams with a collimated beam. What happens with a 2' taper for my Mak?

The fact that you are using "percents" is exactly my point. You can't use percents here - and you can't use them with a spherometer. You can only use percents if you have a model that says the error is proportional to something.

The beam coming out could be divergent or convergent, and the sigma is 2 arc minutes. How does that affect the shadow size for my mak? What is the model you use for this scenario?

I thought you were on sabbatical - but if you have rejoined the discussion - note that once again I ask the same question I have always been asking.

What is the model you use as a basis to make general statements about expected accuracy when the beam is not collimated and has experimental error?
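One back-of-envelope answer to that question - my own assumed model, not one stated by either side in the thread: full-angle taper theta implies an effective axial source displacement delta ~ theta*f**2/D, and a stop at distance z from focus then picks up a fractional shadow-size (hence aperture) error of roughly delta/z:

```python
import math

def taper_fractional_error(D, f, z, theta):
    """Approximate fractional error in the inferred aperture from a residual
    full-angle beam taper theta, for a limiting stop at distance z from focus
    in a system of aperture D and focal length f (assumed small-angle model)."""
    delta = theta * f**2 / D  # effective axial displacement of the source
    return delta / z          # shadow scales roughly as z/(z - delta)

# Frank's canonical case: 500mm f/15 Mak (f = 7500mm), stop 500mm from
# focus, and the 2-arcminute taper tolerance discussed in the thread.
theta_2arcmin = 2.0 * math.pi / (60 * 180)
print(round(100 * taper_fractional_error(500.0, 7500.0, 500.0, theta_2arcmin), 1))
# -> about 13 (%), under this simplified model
```

Under these assumptions, a 2' taper would be anything but negligible for a long-focal-length system with the stop near focus, while for a stop at the objective (z comparable to f) the same formula gives a far smaller error, which is consistent with both positions in the thread.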

Frank
 

#417 GlennLeDrew

GlennLeDrew

    Hubble

  • *****
  • topic starter
  • Posts: 15,849
  • Joined: 17 Jun 2008

Posted 16 September 2013 - 05:43 PM

Earlier on I presented a set of numbers for a 25" f/4 with a 1.25" focuser 3mm too long (i.e., protruding 3mm into the on-axis light cone), and the resulting effect on beam diameter error when 2' of beam taper is present. This would constitute a fairly severe test, where the fast f/ratio and still fairly long focal length make for not inconsiderable sensitivity to incorrect axial location for the light source image. See if these results accord with your own calculations (on purely geometrical grounds.)
 

#418 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 9,117
  • Joined: 12 Oct 2007

Posted 16 September 2013 - 06:08 PM

Why did you come back from sabbatical if you still don't have a model for how this test works? You can't use words like "severe" if you have no idea what does and does not matter in the test.

What is the model you are relying on to say taper is severe in one situation and not in another?

What is needed even to say the word "severe" is a model for the system including experimental errors - and worked calculus showing the error propagation. This is high school stuff so I am not asking for much - and it is *basic* to experimental measurement.

If you think everything is proportional to diameter - or something - you need a model to justify that. If you think fast f/ratio matters when the taper is already specified - you need a model for that.

Frank
 

#419 kingjamez

kingjamez

    Vanguard

  • *****
  • Posts: 2,160
  • Joined: 03 Oct 2006

Posted 16 September 2013 - 06:46 PM

Wow, well I've learned that apparently I need to submit everything for peer review before posting anything I've found helpful that I want to share with others.... Guess I won't be doing that anytime soon. Thanks for teaching me Frank. I sure did miss a lot in "high school" :foreheadslap:

I just used this test to discover that my "9x63" binoculars from a well known vendor only have 55.0003mm of useable objective diameter. I took 4 measurements so I know it's not 55.0004mm....

In all seriousness, it was good to be able to test and explain why I've always thought that pair of binoculars were a bit sub par.

-Jim
 

#420 GlennLeDrew

GlennLeDrew

    Hubble

  • *****
  • topic starter
  • Posts: 15,849
  • Joined: 17 Jun 2008

Posted 16 September 2013 - 08:35 PM

Frank,
I have provided a few experimental results, using equipment in hand, which so far support my thesis. I have provided a hypothetical case involving a large aperture system, with detailed calculations based on geometry. In all instances, sufficient data has been provided for independent verification or refutation.

I have made my case. Others seem to be in general agreement. You are the outlier. If you think I am wrong, prove it with experimental evidence.

Or is your motivation merely to try and catch me out on points of detail?

My prime purpose is to provide a means whereby the user might determine, to within 1mm or 1%, whichever is the larger, the working aperture of an optical system.

This 'project' started out with smaller systems in mind. By including ever larger systems, the accuracy has been realized and acknowledged as being potentially worse. Hence the evolution. If you insist on holding my feet to the fire by excluding all that followed the very first post, that would be unfair. This is an evolving discussion, not a peer-reviewed paper.

It is hoped that the interested worker will be cognizant of this aspect of varying accuracy with system size. And if suitably motivated, to investigate means of improving accuracy.

But for just about any real world system used by amateurs, the result of this test, using collimated laser light, will be accurate to the 99% level, or better. Sure, one can invoke gargantuan systems with ridiculously near-to-focus obstructors. But let's be realistic...

Now, YOU prove this to be wrong. Provide testable EVIDENCE, as I have done.
 

#421 richard7

richard7

    Not Quite

  • *****
  • Posts: 5,927
  • Joined: 02 Nov 2007

Posted 16 September 2013 - 08:53 PM

Seeing that this thread is heading nowhere it's about time to say goodnight.
:lock2:
 

