CNers have asked about a donation box for Cloudy Nights over the years, so here you go. Donation is not required by any means, so please enjoy your stay.


80 cm >> or > over 70 cm ?

13 replies to this topic

#1 Lucullus

Lucullus

    Viking 1

  • -----
  • topic starter
  • Posts: 817
  • Joined: 14 Feb 2012

Posted 31 March 2020 - 06:14 AM

There are many many things one can do with a telescope to contribute to science: astrometry of asteroids, photometry of all sorts of things from asteroids, comets, to variable stars, lightcurve measurements, stellar occultations by asteroids, impact events on the moon during meteor showers, monitoring of the planets, spectroscopy of all sorts of objects etc.

With focal ratio, photographic equipment, tracking/guiding quality, and all specifications the same, while one telescope has 80 cm aperture and the other 70 cm and the whole weights are within the respective equatorial mounts' photographic weight bearing capacity, and the pixel scale well sampling the usual atmospheric conditions, by how far would the larger telescope surpass the smaller and in which applications?



#2 TOMDEY

TOMDEY

    Fly Me to the Moon

  • *****
  • Posts: 6,374
  • Joined: 10 Feb 2014
  • Loc: Springwater, NY

Posted 31 March 2020 - 08:37 AM

I wrote a brief white-paper on that generalized subject long ago. Up to the resolution limit of the field-configured instrument, the cosmic information that an instrument is capable of collecting goes quintic with aperture diameter; beyond that it diminishes to cubic. The knee in that curve is of course quite rounded, because lateral spatial resolution is not either-or abrupt, but throttled by the entire Modulation Transfer Function of the system. Note that (80/70)^5 is 1.95 and (80/70)^3 is 1.49. So, in your example of aperture selection alternatives, the bigger scope outperforms the smaller by somewhere between 49% guaranteed and 95% possible. This quantitatively galvanizes the qualitative assertion that "aperture rules". Daniel Malacara and I wrote and presented a closely-related white-paper in that era, "Telescope Quality - How Good is Good Enough". Astronomer Martin Harwit extended, generalized, and formalized the topic, covering it in detail, in his delightful seminal tome "Cosmic Discovery". It's a great read; it will take a few readings to absorb... small doses recommended. Mathematically reserved, logically rigorous, but philosophically complex. True Gestalt. Your perspective will never be the same after absorbing his thesis.
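Tom's ratios are easy to check numerically; a quick sketch of the arithmetic (just the power laws stated above, nothing from the white-paper itself):

```python
# Scaling of available cosmic information with aperture diameter,
# per the quintic/cubic argument above (diameters in cm).
d_big, d_small = 80.0, 70.0

quintic = (d_big / d_small) ** 5  # below the resolution limit
cubic = (d_big / d_small) ** 3    # beyond the resolution limit

print(f"quintic gain: {quintic:.2f}x")  # ~1.95x, i.e. ~95% more
print(f"cubic gain:   {cubic:.2f}x")   # ~1.49x, i.e. ~49% more
```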

 

These considerations motivated me to upgrade from my trusty 74cm to my current 91cm.   Tom

Attached Thumbnails

  • 33 aperture fever presentaton handout 80.jpg
  • 32 Aperture Fever fan giant fish tank aquarium 88.jpg

  • NerfMonkey and Lucullus like this

#3 Lucullus

Lucullus

    Viking 1

  • -----
  • topic starter
  • Posts: 817
  • Joined: 14 Feb 2012

Posted 01 April 2020 - 06:28 AM

I wrote a brief white-paper on that generalized subject long ago. Up to the resolution limit of the field-configured instrument, [...]

Very interesting, thank you Tom.

Can your white-paper be accessed publicly? I didn't find it on Google or other sources.
 



#4 TOMDEY

TOMDEY

    Fly Me to the Moon

  • *****
  • Posts: 6,374
  • Joined: 10 Feb 2014
  • Loc: Springwater, NY

Posted 01 April 2020 - 07:08 AM

Very interesting, thank you Tom.

Can your white-paper be accessed publicly? I didn't find it on Google or other sources.
 

I made the rounds with that decades ago among the Niagara Frontier Council of Amateur Astronomy Associations, a northeast association of clubs. We shared speakers etc. and the collaboration worked nicely. I'm not aware of that still being an active mechanism. Buffalo, Syracuse, Rochester, Toronto (?) etc. I blended Harwit's "Observational Phase-Space" with the instrumentation available to amateur astronomers of that era. That's where my aperture-diameter quintic-to-cubic function arose, in the context of available, collectable information. The link was the (reasonable) assumption that local spacetime is isotropically and homogeneously populated with information on the large scale. Zip that together with Emmy Noether's theorem... and the vacuum-quintic, ground-based-cubic advantage curve pops out as a necessary consequence of information theory. Back in those days, very few people thought in terms of info theory or phenomena quantification. Astronomers intuitively wanted bigger and bigger telescopes, but had a hard time explaining why.

 

Your 80/70 posit directly relates, though. And the 95% potential improvement is significant!

 

I can't find my old notes, other than that one-sheet handout above. But Harwit's book is still available, used on AbeBooks, for as little as $5, including shipping!    Tom

Attached Thumbnails

  • 41 Harwit Cosmic Discovery 80 80.jpg
  • 42 Harwit on Cosmological VLF EM Limit 75 80.jpg


#5 robin_astro

robin_astro

    Viking 1

  • -----
  • Posts: 934
  • Joined: 18 Dec 2005

Posted 01 April 2020 - 08:34 AM

What about field of view? Other things being equal (detector area, focal ratio), halving the aperture quadruples the area surveyed in a given time (which is finite for all of us).
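The geometry behind Robin's point, sketched under the stated assumptions (fixed detector area and focal ratio, so focal length scales with aperture and plate scale goes as 1/D):

```python
# With detector size and focal ratio fixed, focal length f = N * D,
# so the linear field of view on the sky scales as 1/D and the
# solid angle surveyed per exposure scales as 1/D**2.
def survey_area_ratio(d1, d2):
    """Relative sky area covered per exposure, aperture d2 vs d1
    (same focal ratio and detector; illustrative helper, not from the thread)."""
    fov_linear = d1 / d2   # linear field-of-view ratio
    return fov_linear ** 2  # solid-angle ratio

print(survey_area_ratio(80, 40))  # halving the aperture -> 4.0x the area surveyed
```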

 

Robin



#6 robin_astro

robin_astro

    Viking 1

  • -----
  • Posts: 934
  • Joined: 18 Dec 2005

Posted 01 April 2020 - 08:54 AM

The information available also depends on the signal-to-noise ratio (i.e. the signal relative to the sky background), not just the signal, and that relative level is independent of aperture.



#7 Lucullus

Lucullus

    Viking 1

  • -----
  • topic starter
  • Posts: 817
  • Joined: 14 Feb 2012

Posted 01 April 2020 - 03:23 PM

Is it possible to make a performance estimate similar to Tom's, concerning SNR for various telescope apertures, based on information theory?


Edited by Lucullus, 01 April 2020 - 03:23 PM.


#8 robin_astro

robin_astro

    Viking 1

  • -----
  • Posts: 934
  • Joined: 18 Dec 2005

Posted 01 April 2020 - 07:24 PM

It is possible to calculate SNR for a particular object brightness, telescope and sky conditions. A good source for this is Prof. Michael Richmond's photometry calculator here

http://spiff.rit.edu...nd/signal.shtml

If you plug in some typical figures and choose a target SNR at the limit of usefulness, say SNR = 3, you find that doubling the aperture from, say, 20 to 40 cm allows you to go about 1 magnitude fainter, or in terms of distance, 1.6x further.
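The 1.6x figure follows from the inverse-square law; a minimal sketch of the standard magnitude-to-distance relation (not of the calculator itself):

```python
# Pogson's relation: a gain of delta_mag corresponds to a flux ratio of
# 10**(0.4 * delta_mag). For a fixed luminosity, distance scales as the
# square root of the flux gain, i.e. 10**(delta_mag / 5).
def distance_factor(delta_mag):
    """How much farther the same object can be detected, given a
    limiting-magnitude gain of delta_mag (illustrative helper)."""
    return 10 ** (delta_mag / 5.0)

print(f"{distance_factor(1.0):.2f}")  # ~1.58x, i.e. about 1.6x farther
```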

 

If you are interested in spectroscopy then other factors need to be considered. I have found these calculators by Christian Buil to be reasonably reliable

http://www.astrosurf...ute/compute.htm

 

Robin


Edited by robin_astro, 01 April 2020 - 07:26 PM.


#9 robin_astro

robin_astro

    Viking 1

  • -----
  • Posts: 934
  • Joined: 18 Dec 2005

Posted 01 April 2020 - 09:09 PM

Is it possible to make a performance estimate similar to Tom's, concerning SNR for various telescope apertures, based on information theory?

To talk about information we need to consider what the telescope is actually doing. It collects a number of photons in a given time from the universe within a given solid angle. For each photon we can measure the following parameters:

 

The time of arrival

The energy (wavelength)

The direction it came from (which translates into the X,Y coordinates in the image)

 

(Note the distance each photon has travelled is not known. It could have come from our own solar system or the Big Bang) 

 

If we change the aperture, the time of arrival and the energy of each photon are unchanged, and provided we are seeing-limited, the precision of the X,Y coordinates is also unchanged.

 

The only thing that changes with aperture is the number of photons collected per unit time, which is proportional to the collecting area, i.e. the aperture squared (so the information increases with aperture squared, not cubed as suggested).

 

(Note this simple analysis ignores the effect of unwanted photons from the  sky background)
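Robin's scaling, sketched as simple collecting-area geometry (assuming an unobstructed circular aperture; the helper name is mine, not from the thread):

```python
import math

def photon_rate_ratio(d1, d2):
    """Ratio of photons collected per unit time, aperture d2 vs d1,
    for an unobstructed circular aperture."""
    area = lambda d: math.pi * (d / 2.0) ** 2  # collecting area
    return area(d2) / area(d1)

# 80 cm vs 70 cm: photon rate (and hence information rate, per the
# argument above) goes as the aperture squared.
print(f"{photon_rate_ratio(70, 80):.2f}")  # ~1.31x
```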

 

Robin



#10 TOMDEY

TOMDEY

    Fly Me to the Moon

  • *****
  • Posts: 6,374
  • Joined: 10 Feb 2014
  • Loc: Springwater, NY

Posted 01 April 2020 - 09:12 PM

The quintic transitioning to cubic is the theoretically-available information, assuming that object-space is temporally-invariant aka not detectably varying on time scales shorter than our total assumed collects, for each/any/every targeted field. For nearly all traditional amateur observations, this is the case. To the extent that is valid... this means that, although it may take a long time to survey the entire sky, if one is patient, he ultimately realizes the quintic-cubic advantage; big tortoise inexorably winning the arbitrarily long marathon, over any smaller hare. Unlike the professionals, we guys embrace astronomy as a casually-paced, relaxing hobby, so max burn rate ostensibly takes a back seat to the pleasure of embracing the Cosmos, on Mother Nature's terms. At least that's the working model of the avocation.

 

I know, I know... in actuality, many of us get impatient waiting for those sparse photons to come lazily drizzling down the tube and into the detector array... so information rate, which goes quadratic with aperture and quadratic with field (aka, rigorously, étendue) takes precedence. And, no coincidence... Emmy Noether's theorem formalizes that invariant. So, in the limit, we craft bigger/faster systems and fly-eye hyperhemispherical parquets, giant cryo max-QE gigapixel arrays, to suck in information as much and as fast as possible.

 

And yet, the quintic/cubic ceiling remains, despite all that --- as the limiting arbiter, regardless. Bigger will always win, given enough time... [Martin Harwit's Thesis covers all this kinda stuff. The nifty thing about his Gestalt is that our minutia computations that we routinely labor over, in no way alter that invariant conclusion.]    Tom



#11 robin_astro

robin_astro

    Viking 1

  • -----
  • Posts: 934
  • Joined: 18 Dec 2005

Posted 02 April 2020 - 06:44 AM

Hi Tom,

 

 Bigger will always win, given enough time...

Yep I agree but to what extent depends on the question asked of the data.

 

Although the available data increases as the square of the aperture, as I demonstrated above, in the case here where you are asking the specific question "how many discrete sources can we detect", then in a seeing-limited situation, and given a homogeneous and isotropic distribution of sources, your estimate of the number of sources detected increasing as the cube of aperture is not unreasonable. (Though Herr Olbers might also have something to say about it!) This comes about because of the redundancy in the data: put crudely, in this case we are only interested in the first photon arriving from each source, and any others are just redundant, so this favours the more distant sources, increasing the volume of space accessed.
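Put as arithmetic, and assuming the limiting distance scales linearly with aperture (the dark-sky limit worked out further down the thread), the cube follows from the volume:

```python
# If limiting distance ~ D, a homogeneous, isotropic source population
# gives detectable sources N ~ volume ~ D**3 (illustrative helper,
# not from the thread).
def source_count_ratio(d1, d2):
    """Relative number of detectable sources, aperture d2 vs d1."""
    return (d2 / d1) ** 3

print(f"{source_count_ratio(70, 80):.2f}")  # ~1.49x as many sources
```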

 

The astrometrist, photometrist, spectroscopist, variable star observer, planetary observer etc. would disagree with you (and with each other) about the redundancy, however, and each would see the effect of aperture on their measurement differently.

 

 

Cheers

Robin


  • Stu Todd likes this

#12 Lucullus

Lucullus

    Viking 1

  • -----
  • topic starter
  • Posts: 817
  • Joined: 14 Feb 2012

Posted 02 April 2020 - 07:17 AM

It is possible to calculate SNR for a particular object brightness, telescope and sky conditions. A good source for this is Prof. Michael Richmond's photometry calculator here

http://spiff.rit.edu...nd/signal.shtml

If you plug in some typical figures and choose a target SNR at the limit of usefulness, say SNR = 3, you find that doubling the aperture from, say, 20 to 40 cm allows you to go about 1 magnitude fainter, or in terms of distance, 1.6x further.

 

If you are interested in spectroscopy then other factors need to be considered. I have found these calculators by Christian Buil to be reasonably reliable

http://www.astrosurf...ute/compute.htm

 

Robin

In other words you say that doubling the aperture allows you to see the very same object 1.6x farther away (1 magnitude dimmer). Tom argues a factor of 2x. Where does this difference come from?
 



#13 robin_astro

robin_astro

    Viking 1

  • -----
  • Posts: 934
  • Joined: 18 Dec 2005

Posted 02 April 2020 - 08:08 AM

In other words you say that doubling the aperture allows you to see the very same object 1.6x farther away (1 magnitude dimmer). Tom argues a factor of 2x. Where does this difference come from?
 

Tom's calculation is for the theoretical situation where the exposure time is infinite, the detector has no noise and there is no noise contribution from the sky.

 

To detect an object you need to be able to distinguish it from the sky background, so in practical situations the noise adversely affects the fainter objects. You can explore this using the calculator by changing the sky brightness and leaving other factors constant. If you set all camera noise contributions to zero and set the sky to a very high value (i.e. a very dark sky), you get a 1.5 magnitude gain, which equates to 4x the signal, as expected for the increased aperture area, which in turn equates to sqrt(4) = 2x the distance for the same luminosity.

 

A worked example using the calculator calculating the limiting magnitude for SNR=3

 

A 10 sec exposure with a perfect camera with QE = 1 and no read noise, measured at the zenith (air mass = 1) with 2 arcsec seeing, measured over a radius of 2 arcsec.

 

First, with a typical dark-sky brightness of 21 mag/arcsec²:

 

20 cm aperture: limiting magnitude (SNR=3) 20.3
40 cm aperture: limiting magnitude (SNR=3) 21.1

a difference in magnitude of 0.8 = 1.5x further for the same luminosity

 

Now, with an impossibly dark sky brightness of 30 mag/arcsec²:

 

20 cm aperture: limiting magnitude (SNR=3) 22.6
40 cm aperture: limiting magnitude (SNR=3) 24.1

a difference in magnitude of 1.5 = 2x further for the same luminosity
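The magnitude differences above convert to distance factors via d2/d1 = 10^(Δm/5); a quick check of the arithmetic (helper name is mine, not the calculator's):

```python
# Converting limiting-magnitude gains into distance factors for a
# source of fixed luminosity: distance ratio = 10**(delta_mag / 5).
def distance_gain(m1, m2):
    """How much farther the fainter limit m2 reaches compared to m1."""
    return 10 ** ((m2 - m1) / 5.0)

dark_sky = distance_gain(20.3, 21.1)  # typical sky: 0.8 mag gain
no_sky = distance_gain(22.6, 24.1)    # negligible sky: 1.5 mag gain
print(f"{dark_sky:.2f}, {no_sky:.2f}")  # ~1.45x and ~2.00x
```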


Edited by robin_astro, 02 April 2020 - 08:16 AM.

  • Stu Todd likes this

#14 TOMDEY

TOMDEY

    Fly Me to the Moon

  • *****
  • Posts: 6,374
  • Joined: 10 Feb 2014
  • Loc: Springwater, NY

Posted 02 April 2020 - 08:36 AM

When resolution is (substantially) seeing-limited, the information rate is quadratic, but the total available information is cubic. These are limits; actuality will be in between. To Robin's good point regarding astrometry, etc. --- of course, those time-sensitive exceptions apply, as I noted --- so those specialists agree with my qualifier. Indeed, one of my early specialties was ... object-space event metering! Starting with photogrammetry of the earth, from air-breathers to imaging satellites... eventually to GPS operations and remote docking. Here's a cartoon that I came up with for one of my road-show talks. The topic was ~Telescope Quality~ in the context of Emmy's theorem. At the time, I was investigating/quantifying the impact of ~cosmetic flaws~ on cameras' performance, with surface and coating blemishes center-stage, and planet-finder missions of special interest. I had also been a vacuum coating engineer/scientist for twelve years, so had lotsa experience in those residual imperfections affecting mission performance and data integrity. When the audience was astronomers, I coined the phrase "Cosmic Cosmetic Flaws", so they would forever remember the alliteration, hence the counsel to ... keep their grubby mitts off the mirrors and lenses... a kinda ~social distancing~!   Tom

Attached Thumbnails

  • 45 remote sensing Toms chart.jpg
  • 46 90 toms cosmetic flaws study mangin mirror.jpg


