
Nyquist Sampling Theory - Empirical Research - Double Star - High Resolution


#1 GA-HAMAL (Messenger; topic starter; Posts: 455; Joined: 18 Aug 2014; Loc: Poland)

Posted 12 September 2019 - 01:26 PM

Hi :)
 
I have read a lot of theory and discussion on this topic, so I decided to investigate the issue empirically. I found a binary star, not too bright and not too faint, that with my telescope and camera would initially be undersampled.
 
The equipment used: telescope 320/1500, camera ASI290MM-C with 2.9 µm pixels.
Double star: HIP 72153
 
Parameters of the studied pair of stars:

Sep    Mag1   Mag2   Delta   Spectr   Const   RA            Dec            Name
0.5"   7.75   8.66   0.91    F6V      Boo     14:45:29.74   +42:22:56.4    HIP 72153
 
The biggest challenge was the imaging at the native 1500 mm focal length: the pair was recorded on just 2 pixels, a 1x2 patch. Below is the result of stacking the best 200 frames out of 10,000. For stacking such demanding material I used the old, reliable RegiStax 5.
 
Imaging parameters: 320/1500 / 2.9 µm - HIP 72153 - 0.5"
 
[Image: Boo-HIP-72153-320-1500-hor-ver-dia.png]
 
The pair of stars aligned on the camera pixels horizontally, vertically, and diagonally; each panel is a stack of the best 200 frames out of 10,000.
 
I also attach short AVI fragments: HOR - VER - DIA.
 
As you can see, for my setup (320/1500 telescope + 2.9 µm pixel camera, ASI290MM-C), the native focal length of 1500 mm does not capture all the information that is available. Two stars with a separation of 0.5" merge into a 2x1-pixel dash. As we will see shortly, this does not have to be the case: it is possible to record a 3x1-pixel detail, light pixel - dark pixel - light pixel. For that, a focal length of 2000 mm is needed, which can be obtained with a 1.3x Barlow lens.
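A minimal sketch of the sampling arithmetic behind these focal-length steps (editor's illustration; the only ingredient is the small-angle relation scale["/px] = 206265 x pixel size / focal length):

```python
# Sketch: how many pixels the 0.5" pair spans at each focal length used below.
RAD_TO_ARCSEC = 206265.0

def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcsec per pixel (small-angle approximation)."""
    return RAD_TO_ARCSEC * (pixel_um * 1e-6) / (focal_mm * 1e-3)

sep = 0.5      # separation of HIP 72153, arcsec
pixel = 2.9    # ASI290MM pixel size, microns

for f in (1500, 2000, 3375, 4000):
    s = pixel_scale(pixel, f)
    print(f"f={f:4d} mm  scale={s:.3f}\"/px  separation spans {sep/s:.2f} px")

# At 1500 mm the pair spans ~1.25 px (it merges into a dash);
# at 2000 mm it spans ~1.67 px, enough for a light-dark-light pattern.
```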
 
Imaging parameters: 320/2000 / 2.9 µm - HIP 72153 - 0.5"
 
[Image: Boo-HIP-72153-320-1500-SONY1,3.png]

 

The distance between the centers of the stars is 1 pixel, so you can see that we have recovered information that was lost to undersampling in the previous sample.
A stronger Barlow lens (2.25x) gives an even higher sampling frequency; the gap between the stars is now 2 pixels.
 
Imaging parameters: 320/3375 / 2.9 µm - HIP 72153 - 0.5"
 
[Image: Boo-HIP-72153-320-1500-Hyperion2,25.png]
 
A Barlow lens of even greater magnification (2.7x) makes the gap clearer still, but reveals no new information.
 
Imaging parameters: 320/4000 / 2.9 µm - HIP 72153 - 0.5"
 
[Image: Boo-HIP-72153-320-1500-SONY2,7.png]
 
In this experiment, the first three steps are the most interesting.
 
In conclusion, it is still difficult to pin down exactly where undersampling ends and oversampling begins, but it is clear that with my setup, in order not to lose data, I have to use a focal length of at least 2000 mm.
 
Table / summary for the 320/1500 optics + 2.9 µm pixels, for different resolution criteria and different wavelengths of light.
 
[Image: Tab-320-1500-Ang.png]
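For reference, the standard criteria that appear in such tables can be computed directly. A sketch (editor's illustration; it assumes the usual constants - Rayleigh 1.22·λ/D, diffraction FWHM ≈ 1.02·λ/D, and the empirical Dawes limit 116/D[mm] - which may differ slightly from the ones used in the table above):

```python
# Sketch: resolution criteria for a 320 mm aperture at a few wavelengths.
RAD_TO_ARCSEC = 206265.0
D_mm = 320.0

def rayleigh(wavelength_nm):
    """Rayleigh criterion 1.22*lambda/D, in arcsec."""
    return RAD_TO_ARCSEC * 1.22 * (wavelength_nm * 1e-9) / (D_mm * 1e-3)

def fwhm(wavelength_nm):
    """Approximate diffraction FWHM ~ 1.02*lambda/D, in arcsec."""
    return RAD_TO_ARCSEC * 1.02 * (wavelength_nm * 1e-9) / (D_mm * 1e-3)

dawes = 116.0 / D_mm   # empirical Dawes limit, arcsec

print(f"Dawes limit: {dawes:.2f}\"")
for wl in (450, 550, 650):   # blue, green, red
    print(f"{wl} nm  Rayleigh={rayleigh(wl):.2f}\"  FWHM={fwhm(wl):.2f}\"")
```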

 

By the way…

An interesting phenomenon is the rainbow drawn by the rings of the Airy disk.
Observed on the example of the same pair: HIP 72153 - 0.5"
 
[Image: Boo-HIP-72153-320-1500-SONY2,7-SLOAN.png]
 
The effect arises because different wavelengths of light produce rings of different sizes.
The imaging was done with SLOAN filters, because they span a wider range of wavelengths than RGB filters.
 
All images are shown enlarged to 800% of the native data.


Edited by GA-HAMAL, 12 September 2019 - 02:47 PM.

  • Stelios, Gregory, ManuelJ and 9 others like this

#2 exaxe (Ranger 4; Posts: 399; Joined: 14 Nov 2012; Loc: France)

Posted 12 September 2019 - 03:08 PM

What a job!



#3 psandelle (Mercury-Atlas; Posts: 2854; Joined: 18 Jun 2008; Loc: Los Angeles)

Posted 12 September 2019 - 05:13 PM

How cool is that! Nicely done.

 

Paul



#4 freestar8n (Vendor - MetaGuide; Posts: 8821; Joined: 12 Oct 2007)

Posted 12 September 2019 - 05:42 PM

Hi GA-HAMAL,

 

This is a nice presentation of a complete end-to-end view of raw results when imaging a double star.  It includes atmosphere, sensor oddities, stacking errors - and sensor noise.  And in the end you don't resample the result - you just look at the pixels and assess what info you can derive.  I think you're saying that smaller pixels have a benefit and it's not clear where things get worse - and that is what I would expect.  As long as you are simply looking at the pixels and using your eyes to interpret the picture, Nyquist doesn't apply directly - because with Nyquist sampling you take discrete samples and then low-pass filter to a continuous representation.  In that case you can recover the exact true signal with no error at all.  But you aren't doing that here, and imagers never do that.  They take an image and look at the pixels - and errors are introduced throughout on the pixel scale.

 

One subtlety that also comes up in many ways is the use of Rayleigh and other criteria for assessing resolution.  It's important to realize they are just criteria set as a ballpark value based on some resolution heuristic - and they don't represent a fundamental limit or brick wall that can't be passed.  The same applies for 1/4 wave wavefront error, 80% Strehl, and other things.

 

The Nyquist theorem is very different - and it says that beyond a certain fineness of sampling - you get no gain at all because you have all the info you need to recover the exact signal.  But again - that doesn't apply when you just leave the pixels as they are - and registration errors and sensor artifacts play a role.

 

For the particular case of measuring double stars, I would take it as given that the image is of a double star, then use a model fit to determine the separation, PA, and magnitudes - and compare the result with ground-truth professional measurements.  In that case you will always have some estimate of the double star parameters, even when the separation is way below the Sparrow criterion.  But with better optics and smaller pixels you will have a more accurate measure of the separation and the other parameters.  And in that model fit you will have uncertainties to let you know how confident you are in the result.

 

Note that in that case, even a small amount of astigmatism or alignment error would skew the result - but that has nothing to do with pixels or Nyquist.  It has to do with the assumption that your PSF is nice and round.  You could include uncertainty in that roundness and stacking errors also - if you can assess it somehow - and that would increase the error bars on your estimates.

 

So it would be interesting to use your data and do model fits - to see the accuracy and uncertainties as a function of pixel size.
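To make that suggestion concrete, here is a minimal sketch of such a model fit (my illustration, not GA-HAMAL's pipeline; it assumes a Gaussian stand-in for the PSF and uses scipy's curve_fit, whose covariance output supplies the error bars mentioned above):

```python
# Sketch: least-squares fit of a two-star model to an image patch,
# returning best-fit parameters, 1-sigma uncertainties, and separation.
import numpy as np
from scipy.optimize import curve_fit

def two_star_model(coords, x1, y1, x2, y2, f1, f2, sigma, bg):
    """Two Gaussian 'stars' (a stand-in for the real PSF) plus flat background."""
    x, y = coords
    g1 = f1 * np.exp(-((x - x1) ** 2 + (y - y1) ** 2) / (2 * sigma ** 2))
    g2 = f2 * np.exp(-((x - x2) ** 2 + (y - y2) ** 2) / (2 * sigma ** 2))
    return (g1 + g2 + bg).ravel()

def fit_double_star(image, p0):
    """Fit the model; p0 is the initial guess (x1, y1, x2, y2, f1, f2, sigma, bg)."""
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    popt, pcov = curve_fit(two_star_model, (x, y), image.ravel(), p0=p0)
    perr = np.sqrt(np.diag(pcov))                    # 1-sigma error bars
    sep_px = np.hypot(popt[2] - popt[0], popt[3] - popt[1])
    return popt, perr, sep_px
```

Multiplying sep_px by the pixel scale gives the separation in arcsec; repeating the fit at each focal length would show how the uncertainties depend on sampling.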

 

But also note that the state of the art for measuring double stars is a form of speckle interferometry - where you don't align and stack the images in the first place.  As long as you have enough signal in each exposure you will have a benefit with smaller pixels.  The real trade off becomes SNR vs. pixel size - which again has nothing to do with Sparrow or Nyquist.

 

Your final image with colors is nice - and even if you skip the modeling aspect - to me it would be more interesting and compelling if the pixels were 1/4 size or so.  This isn't a criticism - it's just saying the blockiness of the image detracts from what it's showing.  In terms of the benefits of smaller pixels when presenting an image on the scale of the pixels - it's as simple as that.

 

Frank


  • GA-HAMAL likes this

#5 GA-HAMAL (Messenger; topic starter; Posts: 455; Joined: 18 Aug 2014; Loc: Poland)

Posted 13 September 2019 - 01:52 AM

This study is not about splitting the double star; the double star is only a tool to illustrate the issue.

 

A better test target would be two closely spaced stars against a nebula background; they would then behave more like image detail rather than two independent points exposing all the large-scale optical effects.
As I wrote, the first three steps are the most interesting - although in reality the first two are, because it is hard to prove the superiority of step 3 over step 2, or of step 4 over step 3; there is no doubt, however, that step 2, thanks to its higher sampling frequency, rises above step 1.

A double star reveals too large a pixel, or too short an imaging focal length, far more clearly than other methods. Perhaps it is not the best vehicle, but indirectly it transfers the theory of too-low and too-high sampling into practice.
 

Your final image with colors is nice - and even if you skip the modeling aspect - to me it would be more interesting and compelling if the pixels were 1/4 size or so.

:D
 
[Image: Boo-HIP-72153-320-1500-SONY2,7-SLOANx.pn]
 
Just a joke - an 800% resize :)
 
What next? The Moon? Or perhaps some other approach to the problem?


Edited by GA-HAMAL, 13 September 2019 - 02:51 AM.


#6 freestar8n (Vendor - MetaGuide; Posts: 8821; Joined: 12 Oct 2007)

Posted 13 September 2019 - 02:51 AM

Hi-

 

Well, resizing to 800% does look nicer - and I realize you did it somewhat in jest.  But people may think that the lower-res data has all the info needed to create a true high-res version - by interpolation.  The problem is that simply interpolating to a finer scale isn't what the Nyquist theorem is talking about.  Nyquist involves a very specific form of filtering on an array of original point samples - and recreates the original continuous signal exactly.  If you simply interpolate a finer version based on pixel-area measurements it may look nice - but it doesn't have the exactness that is so specific to Nyquist.
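For anyone who wants to see the difference, here is a minimal sketch of the reconstruction Nyquist actually prescribes - Whittaker-Shannon sinc interpolation of ideal point samples of a band-limited signal (my toy example, not image data):

```python
# Sketch: rebuild a continuous band-limited signal from point samples
# via sinc interpolation. This is the step pixel interpolation skips.
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Whittaker-Shannon: x(t) = sum_n x[n] * sinc((t - nT)/T)."""
    n = np.arange(len(samples))
    # np.sinc(x) = sin(pi*x)/(pi*x), so this is the textbook kernel
    return np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

f0 = 3.0                       # 3 Hz tone, band-limited well below Nyquist
T = 0.1                        # 10 Hz sampling > 2*f0, so Nyquist holds
n = np.arange(64)
x = np.sin(2 * np.pi * f0 * n * T)

t = np.linspace(0.5, 5.5, 500)                 # evaluate inside the window
x_hat = sinc_reconstruct(x, T, t)
print(np.max(np.abs(x_hat - np.sin(2 * np.pi * f0 * t))))
# Error is small mid-window and vanishes in the infinite-sample limit.
```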

 

For double stars, and in thinking in terms of the core "information" in the image - I would do a model fit and see how the double star parameters depend on pixel size.  The visual impression of how well "split" the stars appear doesn't really capture the inherent information that is in the image.  But a model fit will do that.  And I expect that as long as you have a sufficient number of frames for analysis - there will be continued improvement beyond what Nyquist would predict.

 

The main thing is - Nyquist assumes no noise in the measurement, and a very specific form of post-processing to create a continuous version of the discrete samples.  Imaging with noisy sensors and pixel scale artifacts - and then just looking at the pixels - is completely different.

 

Frank



#7 GA-HAMAL (Messenger; topic starter; Posts: 455; Joined: 18 Aug 2014; Loc: Poland)

Posted 13 September 2019 - 03:11 AM

Well, resizing to 800% does look nicer - and I realize you did it somewhat in jest.  But people may think that the lower-res data has all the info needed to create a true high-res version - by interpolation.  The problem is that simply interpolating to a finer scale isn't what the Nyquist theorem is talking about.  Nyquist involves a very specific form of filtering on an array of original point samples - and recreates the original continuous signal exactly.  If you simply interpolate a finer version based on pixel-area measurements it may look nice - but it doesn't have the exactness that is so specific to Nyquist.

That is why I wrote that it was a joke; obviously you would need yet another image, at a focal length of 9000 mm, to satisfy your argument :)
 

The main thing is - Nyquist assumes no noise in the measurement, and a very specific form of post-processing to create a continuous version of the discrete samples.  Imaging with noisy sensors and pixel scale artifacts - and then just looking at the pixels - is completely different.


Doesn't the search for a pixel size, or a telescope focal length, sufficient to sample the image at the appropriate resolution embody, in your opinion, the idea of the Nyquist theorem?



#8 freestar8n (Vendor - MetaGuide; Posts: 8821; Joined: 12 Oct 2007)

Posted 13 September 2019 - 05:30 PM

That is why I wrote that it was a joke; obviously you would need yet another image, at a focal length of 9000 mm, to satisfy your argument :)
 


Doesn't the search for a pixel size, or a telescope focal length, sufficient to sample the image at the appropriate resolution embody, in your opinion, the idea of the Nyquist theorem?

I mainly try to make the point that in many cases there may be a theoretical basis for something being optimal - or as good as it possibly can be - but real-world factors not included in the theory also play a role.  So theory only serves as a guide, and you need to be aware of all the assumptions in the theory that don't hold up in practice.

 

In Nyquist theory everything is very simple and well defined.  You don't need to talk vaguely about sampling enough to capture "the information" in an image because by only sampling a few points you can recover the full image exactly - if all assumptions hold and you do all the right things with the samples.

 

But you aren't doing that with double stars.  You are taking many images and stacking them - and looking at them and visually assessing if the split is cleaner or not.  If your only goal is something that looks better to your eye - then I guess all you can do is try different pixel sizes and see what works best.  But as long as you have enough signal I don't know why you wouldn't find continued improvement with smaller and smaller pixels.  There is no sudden point, in reality, where things stop improving - contrary to Nyquist, which only applies in theory.

 

My other point is that all this stuff can be quantified if you do a model fit - and calculate the parameters for the double star separation and other values.  If you also calculate error bars on those values, then you really do have a basis for empirically finding an optimum pixel size for measuring double stars.  That should capture all the funny business going on in terms of noise, registration error - and sensor artifacts.  But it will probably depend on the brightness of the stars and what exposure you need to use - among other things.

 

Another way to get a view of how this stuff works in practice is to see what pixel size the planetary imagers use.  They aren't just going by Nyquist assumptions - they are competing with each other to get the best results possible.  The planets they are dealing with require relatively long exposures - but even so I think they benefit from very highly oversampled imaging.

 

Frank

 

Addendum:  To put this in concrete terms - some planetary imagers are using a C9.25 with a 3x Barlow and an ASI290 camera with 2.9 µm pixels.  That configuration would have a diffraction-limited FWHM of 0.46" and pixels of 0.085 arc-sec.  That is a factor of 5.4 smaller than the FWHM - and much finer than a typical Nyquist factor of 2.  In addition, this is referenced to the FWHM and not the diameter of the Airy ring - which is sometimes used in these discussions.  The Airy ring is about 1" in diameter - so Nyquist would "say" there is no benefit going beyond 0.5" per pixel.  But that is wrong on many counts - by a factor of about 6.
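A quick sketch of that arithmetic (editor's check; the exact FWHM depends on the constant and wavelength assumed - 1.02·λ/D at 550 nm gives ~0.49", landing near the 0.46" figure at a slightly bluer wavelength):

```python
# Sketch: C9.25 (D = 235 mm, 2350 mm native) + 3x Barlow + 2.9 um pixels.
RAD_TO_ARCSEC = 206265.0
D = 0.235                      # aperture, m
wl = 550e-9                    # assumed wavelength, m
f = 2.350 * 3                  # effective focal length, m

fwhm = RAD_TO_ARCSEC * 1.02 * wl / D        # ~0.49" diffraction FWHM
scale = RAD_TO_ARCSEC * 2.9e-6 / f          # ~0.085 "/pixel
airy = RAD_TO_ARCSEC * 2 * 1.22 * wl / D    # Airy ring diameter, ~1.2"
print(f"FWHM {fwhm:.2f}\", scale {scale:.3f}\"/px, {fwhm/scale:.1f} px per FWHM")
# ~5.8 px per FWHM with these constants; 5.4 follows from the 0.46" figure.
```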


Edited by freestar8n, 13 September 2019 - 06:15 PM.

  • R Botero and David-LR like this

#9 Shiraz (Viking 1; Posts: 605; Joined: 15 Oct 2010; Loc: South Australia)

Posted 14 September 2019 - 06:39 AM

Very nice work - a really good demo of Nyquist theory working out in practice. Using lucky imaging techniques to get the atmospheric blurring out of the way is a clever approach to remove the major unknown.

 

A back of the envelope calculation suggests that your four images have about 0.9, 1.2, 2 and 2.3 pixels across a resolution element (the diffraction FWHM). The first two images are clearly undersampled, but you state that no new information is revealed going from the 2 pixel to the 2.3 pixel results, which is consistent with the Nyquist approach. Your 2.3 result looks to me to be just slightly better resolved than at 2 and, if this is the case, it matches pretty well with the generally accepted 2.5-3 pixels per FWHM required to take into account the effects of band-limit and sampling approximations that affect the application of Nyquist theory to real imaging systems. 
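Those back-of-the-envelope figures are easy to reproduce. A sketch (editor's illustration, assuming a diffraction FWHM of ≈1.02·λ/D at 550 nm for the 320 mm aperture):

```python
# Sketch: pixels across the diffraction FWHM at each focal length used.
RAD_TO_ARCSEC = 206265.0
D, wl, pixel = 0.320, 550e-9, 2.9e-6   # aperture (m), wavelength (m), pixel (m)

fwhm = RAD_TO_ARCSEC * 1.02 * wl / D   # ~0.36" for the 320 mm aperture
for f_mm in (1500, 2000, 3375, 4000):
    scale = RAD_TO_ARCSEC * pixel / (f_mm * 1e-3)
    print(f"f={f_mm} mm: {fwhm/scale:.1f} px per FWHM")
# -> roughly 0.9, 1.2, 2.0, 2.4 px per FWHM (the last value shifts a little
#    with the wavelength and FWHM constant assumed).
```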

 

Of course, with long exposure imaging, the atmosphere will determine the (much larger) FWHM and much shorter focal lengths would be required to get to 2.5 - 3 pixels per FWHM as suggested by Nyquist theory.

 

Nice result. Thanks for testing the Nyquist theory so neatly and posting the results. Cheers Ray.


Edited by Shiraz, 14 September 2019 - 04:12 PM.

  • Jon Rista and GA-HAMAL like this

#10 CygnusBob (Vostok 1; Posts: 157; Joined: 30 Jun 2018; Loc: Las Vegas, NV)

Posted 15 September 2019 - 11:38 AM

In principle there is no limit to how fine a resolution could be obtained for double stars with smaller and smaller pixels (not by eye!).  Nyquist sampling does not prevent this.  If you KNOW that what you are dealing with is just 2 unresolved point sources and you KNOW the telescope/camera/atmospheric-seeing point spread function (PSF) exactly, you can perform a least-squares best fit for 2 separated point sources and estimate the separation.  If you also have a high enough SNR and small enough pixels, you can get a correct answer much better than lambda/D.  If it turns out that there really are 3 or more stars present, then you will get the wrong answer.

 

Professional astronomers calculate star centroid positions with much better precision than lambda/D all the time!  They know (or guess) that they are dealing with a single unresolved object.
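As an illustration of that point, a toy sketch (editor's, with made-up numbers): a single Gaussian "star" on coarse pixels with noise, whose centroid comes back to a small fraction of a pixel:

```python
# Sketch: sub-pixel centroid recovery for a known single point source.
import numpy as np
rng = np.random.default_rng(1)

def make_star(x0, y0, sigma=2.0, peak=5000.0, size=32, noise=5.0):
    """One Gaussian star plus Gaussian read noise."""
    y, x = np.mgrid[0:size, 0:size]
    img = peak * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return img + rng.normal(0, noise, img.shape)

def centroid(img):
    """Intensity-weighted centroid after crude background subtraction."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img - np.median(img)
    w[w < 0] = 0
    return (x * w).sum() / w.sum(), (y * w).sum() / w.sum()

true_x, true_y = 15.3, 16.7
cx, cy = centroid(make_star(true_x, true_y))
print(f"centroid error: {cx - true_x:+.3f}, {cy - true_y:+.3f} px")
# With decent SNR the error is a few hundredths of a pixel - far finer
# than lambda/D - because the single-source model is assumed known.
```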

 

The key idea here is that knowing that your object is just 2 unresolved point sources, lets you go past the limits imposed by Nyquist sampling.

 

Of course we never have super high SNR and atmospheric turbulence limits how well we can KNOW the system PSF.  So in practice there are limits, but the limits are not a result of Nyquist sampling theory.

 

Bob



#11 Shiraz (Viking 1; Posts: 605; Joined: 15 Oct 2010; Loc: South Australia)

Posted 15 September 2019 - 11:14 PM

In principle there is no limit to how fine a resolution could be obtained for double stars with smaller and smaller pixels (not by eye!).  Nyquist sampling does not prevent this.  If you KNOW that what you are dealing with is just 2 unresolved point sources and you KNOW the telescope/camera/atmospheric-seeing point spread function (PSF) exactly, you can perform a least-squares best fit for 2 separated point sources and estimate the separation.  If you also have a high enough SNR and small enough pixels, you can get a correct answer much better than lambda/D.  If it turns out that there really are 3 or more stars present, then you will get the wrong answer.

 

Professional astronomers calculate star centroid positions with much better precision than lambda/D all the time!  They know (or guess) that they are dealing with a single unresolved object.

 

The key idea here is that knowing that your object is just 2 unresolved point sources, lets you go past the limits imposed by Nyquist sampling.

 

Of course we never have super high SNR and atmospheric turbulence limits how well we can KNOW the system PSF.  So in practice there are limits, but the limits are not a result of Nyquist sampling theory.

 

Bob

hi Bob

 

Agree that astrometry can be carried out to much higher resolution, but the images used for precise sub-pixel astrometry are generally also sampled using a Nyquist approach. High astrometric precision does not require finer-than-Nyquist sampling, since the centroid of such a PSF can be accurately located to a small fraction of a pixel (as you say, provided that the PSF is well understood and the SNR is OK). E.g. the Hubble WFC3 samples at roughly 1.7-2.2 pixels per FWHM (i.e. close to Nyquist) but, with PSF fitting, can apparently provide precision astrometry down to ~1/100 of the size of one of those pixels.

 

Amateur astro imaging deals with "what does it look like" and Nyquist theory gives guidance on how to sample in order to show the maximum available detail, given the instrument and atmospheric characteristics. The OP shows that around 2+ pixels per FWHM gives the max useful info on the nature of the chosen scene - which is just what Nyquist's theory would suggest. It is a neat validation of the theory applied in our environment.

 

Cheers Ray


Edited by Shiraz, 16 September 2019 - 01:23 AM.


#12 Camissa (Lift Off; Posts: 22; Joined: 13 Jul 2019)

Posted 16 September 2019 - 12:52 AM

Hi Ga-Hamal,

 

Very interesting presentation. Thank you very much for showing this, and the rainbow Airy disk is just beautiful - both as an image and as a nice demonstration of the wave nature of light.

 

Frank, thank you for the additional information. 

 

Clear skies,

Ecki



#13 GA-HAMAL (Messenger; topic starter; Posts: 455; Joined: 18 Aug 2014; Loc: Poland)

Posted 17 September 2019 - 03:54 AM

Thank you all.
We can argue about substantive matters, but thank you for recognizing my efforts in this field smile.png



#14 freestar8n (Vendor - MetaGuide; Posts: 8821; Joined: 12 Oct 2007)

Posted 19 September 2019 - 04:02 AM

I've been traveling and unable to reply in detail - but there are some points I would like to clarify.

 

The whole idea of "capturing all the information" is somewhat misleading when we are asked to assess the separation of a given double star.  In that case if all you need to do is estimate the separation - you don't need fine sampling at all, and Nyquist is overkill.

 

The ultimate goal in high res imaging is to determine the true profile of photons landing on the sensor - before they are sampled on the pixel scale.  If you sample with pixels of a given size and you think you 'have all the information' in the image - you should be able to recreate that profile exactly in a continuous form.  And that means explicitly undoing the square sampling that the pixel did.

 

So instead of asking if a double star is split - the test I prefer is to ask whether a given spot of light in an image is a star - or a galaxy.  This is a very real scientific question in analyzing astronomical images - and it benefits from small pixels so you have more information to compare a given "star" profile with the assumed PSF.  

 

One example I have is here, at 0.4" per pixel:

 

[Image: example frame at 0.4" per pixel]

 

As for applying Nyquist as a general 'law' - here is a nice write-up of its pitfalls from an engineering perspective.  It doesn't talk about noise or the imaging context where the signal is sampled as pixels rather than points - and there are registration errors in stacking - but it points out many of the practical factors that point to the need for 'oversampling' and why Nyquist doesn't apply in a simple manner:

 

https://www.wescottd...ng/sampling.pdf

 

In the specific context of the double star - where you are given information about the object and asked to find its parameters - an extreme example of that would be if someone presented a sine wave signal and told you its exact frequency - and you are asked to determine the whole continuous signal exactly from a set of measurements.  In that case all you need to find is the amplitude and phase - which can be done with just two measurements placed arbitrarily far apart - even light years.  No need to sample at twice the frequency for a long time.  Just two measurements far apart - if you know the exact frequency a priori.
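That extreme example is easy to make concrete: since x(t) = a·cos(2πft) + b·sin(2πft) is linear in the two unknowns, two samples at known times pin down amplitude and phase exactly. A sketch (editor's toy numbers; the only requirement is that the two sample times give a non-singular 2x2 system):

```python
# Sketch: with the frequency known a priori, two arbitrary, widely
# separated samples determine the entire continuous sine wave.
import numpy as np

f = 0.1                                    # known frequency, Hz
t = np.array([3.0, 1234.5])                # two sample times, far apart
true_A, true_phi = 2.0, 0.7
x = true_A * np.cos(2 * np.pi * f * t + true_phi)   # the two "measurements"

# x(t) = a*cos(wt) + b*sin(wt), linear in (a, b): solve the 2x2 system.
M = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
a, b = np.linalg.solve(M, x)
A, phi = np.hypot(a, b), np.arctan2(-b, a)
print(A, phi)   # recovers 2.0 and 0.7 - no Nyquist-rate sampling needed
```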

 

If instead of asking what the separation is, you ask if there are any nearby quasars, or if the stars have faint filaments extending from them - that's where you need to use those blocky pixel measurements to deduce the true continuous signal that was landing on those pixels.  And that is where small pixels help.

 

If additional references are needed I can provide them.  But I'm still puzzled why these concepts don't make sense directly - even though they go against 'what Nyquist says.'

 


 

 

From the linked write-up: "Solving problems in sampling and anti-alias filtering is not amenable to rules of thumb, at least not general ones. When you're solving sampling problems you're best off working from first principles and solving the problem by analysis."

 

Frank



#15 CygnusBob (Vostok 1; Posts: 157; Joined: 30 Jun 2018; Loc: Las Vegas, NV)

Posted 19 September 2019 - 11:56 AM

There is an MTF benefit to having smaller pixels.  In the spatial frequency domain, the system MTF goes like Hoptical(Fx,Fy) · Hdetector(Fx,Fy) · Hseeing(Fx,Fy), where Fx, Fy are spatial frequencies (e.g. cycles/arc-second).  Now, Hoptical is band-limited, so we can Nyquist-sample the image so that all of the information is in principle captured; however, within that passband of the optics the detector MTF response is rolling off.  In one dimension it goes as sin(pi*Fx*IFOV)/(pi*Fx*IFOV), where the IFOV is the angular size of a pixel (the plate scale in arc-seconds).  The smaller the pixel, the less MTF rolloff occurs in the optical passband.
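A sketch of that roll-off (editor's illustration; np.sinc(x) is sin(πx)/(πx), and the "passband edge" here is just a rough 1/FWHM stand-in for the optical cutoff):

```python
# Sketch: detector MTF = |sinc(Fx * IFOV)| across the optical passband,
# for the pixel scales at f = 1500 mm and f = 4000 mm in this thread.
import numpy as np

def detector_mtf(Fx, ifov):
    """Pixel-aperture MTF; Fx in cycles/arcsec, ifov in arcsec."""
    return np.abs(np.sinc(Fx * ifov))   # np.sinc(x) = sin(pi*x)/(pi*x)

cutoff = 1 / 0.36                       # rough band edge ~ 1/FWHM, cycles/arcsec
Fx = np.linspace(0, cutoff, 6)
for ifov in (0.399, 0.150):             # "/px at f=1500 and f=4000
    print(f"IFOV={ifov}\":", np.round(detector_mtf(Fx, ifov), 2))
# The smaller pixel keeps the detector MTF high right up to the band edge;
# the larger one falls toward its first null within the passband.
```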

 

However, fewer photons will be collected per second by the smaller pixels, but there will be more pixels available for a least-squares fit.  So there may be a trade-off for what is optimal when you consider things like sky-background shot noise and readout noise.

 

Bob


Edited by CygnusBob, 19 September 2019 - 12:14 PM.

  • Shiraz likes this

#16 freestar8n (Vendor - MetaGuide; Posts: 8821; Joined: 12 Oct 2007)

Posted 20 September 2019 - 03:19 AM

There is an MTF benefit to having smaller pixels.  In the spatial frequency domain, the system MTF goes like Hoptical(Fx,Fy) · Hdetector(Fx,Fy) · Hseeing(Fx,Fy), where Fx, Fy are spatial frequencies (e.g. cycles/arc-second).  Now, Hoptical is band-limited, so we can Nyquist-sample the image so that all of the information is in principle captured; however, within that passband of the optics the detector MTF response is rolling off.  In one dimension it goes as sin(pi*Fx*IFOV)/(pi*Fx*IFOV), where the IFOV is the angular size of a pixel (the plate scale in arc-seconds).  The smaller the pixel, the less MTF rolloff occurs in the optical passband.

 

However, fewer photons will be collected per second by the smaller pixels, but there will be more pixels available for a least-squares fit.  So there may be a trade-off for what is optimal when you consider things like sky-background shot noise and readout noise.

 

Bob

Yes, you can fold the size of the pixels in as a separate MTF - and you can also undo the blurring caused by the pixel size if you do a proper deconvolution of the pixel's impact - but no one ever does that.  They just look at the pixels - and as a result they see the data blurred on the scale of the pixels.

 

But the other thing in terms of Nyquist is that the fundamental bandwidth limit is set by diffraction and not by seeing.  Seeing may limit the FWHM of the PSF - but the actual shape of the PSF includes spatial frequencies out to the diffraction limit.  So people who take "Nyquist" blindly as a rule should sample at twice the frequency set by diffraction - and seeing plays no role in that sampling limit.  That limiting spatial period is lambda*fnum, so for an f/7 system at 550 nm it would be 3.85 µm.  And based on 'what Nyquist says' you should cut that in half - and use pixels that are 1.9 µm.  Even if your seeing is 8".
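In code, that blind rule is a one-liner (editor's sketch, using only lambda*fnum / 2; the f/4.7 line is my reading of the OP's 320/1500 setup):

```python
# Sketch: the "blind Nyquist" pixel size - the finest spatial period the
# optics pass is lambda * f_number; halve it to get the pixel size.
def nyquist_pixel_um(wavelength_nm, f_number):
    return (wavelength_nm * 1e-3 * f_number) / 2   # microns

print(nyquist_pixel_um(550, 7.0))   # ~1.9 um for an f/7 system at 550 nm
print(nyquist_pixel_um(550, 4.7))   # ~1.3 um for a 320/1500 (f/4.7) setup
```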

 

If you don't sample finely enough, you won't capture the true PSF - and you won't be able to tell stars from small galaxies.

 

And once you image with those small pixels, you need to deconvolve the result with a sharpening kernel to remove the impact of the pixel area sampling.  But even that assumes no registration error - so it is still optimistic as a pixel size beyond which you would see no benefit.

 

In some sense sampling at 1/2 the measured fwhm is a minimum acceptable sampling rate - as opposed to optimal.  It's like saying minimum wage is the optimal salary one should strive for.

 

Frank


  • happylimpet likes this

