Under and Over Sampling

104 replies to this topic

#1 m1618

    Viking 1
  • Vendors
  • topic starter
  • Posts: 777
  • Joined: 18 Sep 2012
  • Loc: Orange County - CA

Posted 19 March 2016 - 10:44 AM

There was an article in S&T (April Issue) about choosing a camera and it touched upon the topic of "sampling."
Could someone here elaborate on under and over sampling? An explanation or example of the formulas used, as well as of "seeing conditions," would be welcome.

Side note:
I sometimes wish for new vocabulary. Does anyone else find words like "fast" vs. "slow" and f-numbers counter-intuitive, or zoom magnification vs. eyepiece focal length in mm? Then there is sampling.

 

 



#2 pfile

    Fly Me to the Moon
  • Posts: 5129
  • Joined: 14 Jun 2009

Posted 19 March 2016 - 11:52 AM

well, "sampling" comes from information theory - you are sampling an analog phenomenon every so often, taking a reading, and converting the reading to a digital number. there are formulas that show how much of the original signal's frequency components you can recover by sampling at a particular rate. for instance, if you are trying to record an audio waveform that the human ear can hear, sampling the audio at 44.1khz is enough to reconstruct every frequency that your ear might hear, from just those samples. any more than that and you have oversampled the data. any less than that and you have undersampled it and can't reproduce the original waveform to the accuracy needed to "fool" the brain. for a 1 dimensional signal like audio, the data must be sampled at 2x the frequency of the highest frequency you hope to reconstruct.

 

in imaging, where the sensor is a grid of pixels, the same concept applies. if your grid is coarse, you could take an accurate picture of maybe a smooth wall where there's not a lot of texture. if the grid is too fine, you've tried to capture more information than is really there on that smooth wall. the idea is that your telescope can only resolve features of a certain size (based on its aperture) so you want to match the sensor's pixel grid to the telescope's capability. furthermore the atmosphere smears out details, so that's where the "seeing" comes in. if the atmosphere above you is very stable, then the seeing is good, and your telescope can perform more closely to its theoretical capability. generally speaking the atmosphere is the limiting factor here, but if you are working with a very short focal length the optics could still be the bottleneck.

 

people measure seeing in angles, so you hear "the seeing is x arcseconds" meaning the sky is smearing the perfect point-source image of a star into a smudge that's x arcseconds wide. when you divide your sensor's FOV (in arcseconds) by the number of pixels across, you get an image scale in arcseconds per pixel. if this number is much less than your seeing, you're oversampling the sky.
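In code, that rule of thumb looks like the sketch below; the field of view, pixel count, and seeing values are made up, and the factor of 3 standing in for "much less than" is borrowed from the discussion further down the thread:

```python
# Image scale from the description above: sensor FOV divided by pixels across.
fov_arcsec = 3600.0       # field of view across the sensor, e.g. 1 degree
pixels_across = 8000
seeing_arcsec = 2.5       # how wide the sky smears a point source

image_scale = fov_arcsec / pixels_across     # arcsec per pixel
print(f"{image_scale:.2f} arcsec/pixel")     # 0.45
if image_scale < seeing_arcsec / 3:          # "much less than" the seeing
    print("oversampling the sky")
```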

 

rob


  • Tim, psandelle, bilgebay and 15 others like this

#3 jhayes_tucson

    Fly Me to the Moon
  • Posts: 7091
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 19 March 2016 - 12:18 PM

Good job Rob!

John


  • starbob1 likes this

#4 AndrewXnn

    Mariner 2
  • Posts: 291
  • Joined: 29 Aug 2015
  • Loc: Mexico, NY

Posted 19 March 2016 - 12:19 PM

Take the size of the pixel in micrometers, multiply by 206, and then divide by the focal length of the optical system in mm.  This will give you arc-sec/pixel.

 

If the value is <1, then the system is oversampling and stars will appear bloated.

If the value is >2, then the system is undersampling and stars will appear to be blocks.

 

For example, take a camera with 8.5 um pixels.  A 1000 mm focal length will give you 1.75 arc-sec/pixel of sampling.  That's about right.

 

However, using the same camera at a 2000 mm focal length will give you 0.88 arc-sec/pixel.  That's oversampling and will result in stars being bloated.
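A minimal sketch of the rule above; the constant 206 and the 1-2 arc-sec/pixel thresholds are from this post, while the function name is mine:

```python
# Image scale in arc-seconds per pixel (206265 arcsec/radian, rounded to 206).
def image_scale(pixel_um: float, focal_length_mm: float) -> float:
    return 206.0 * pixel_um / focal_length_mm

for fl_mm in (1000, 2000):
    scale = image_scale(8.5, fl_mm)          # the 8.5 um example pixel above
    if scale < 1:
        verdict = "oversampling (bloated stars)"
    elif scale > 2:
        verdict = "undersampling (blocky stars)"
    else:
        verdict = "about right"
    print(f"{fl_mm} mm: {scale:.2f} arcsec/pixel -> {verdict}")
```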


  • m1618 and Hawkdl2 like this

#5 Rick J

    Cosmos
  • In Memoriam
  • Posts: 8376
  • Joined: 01 Mar 2008
  • Loc: Mantrap Lake, MN

Posted 19 March 2016 - 12:37 PM

To me there's a lot wrong with that article, including too much emphasis on f ratio and sampling.  It all depends on your purpose.  Many an APOD has been taken with severe undersampling, for instance.  While oversampling can increase sub exposure time due to read noise issues, it doesn't increase total exposure time (longer subs do put pressure on inadequate mounts to track long enough).  I routinely oversample because sometimes seeing suddenly improves, and if I'd matched the camera and scope for typical seeing I'd have lost the high detail a night of unusually good seeing would provide.  I can always downsample the final image and end up where I'd have been if I'd taken it there in the first place.  The only penalty is longer sub exposure times.

 

The article keeps making a big deal about faster scopes being better, especially if your camera has low sensitivity.  Odd, but I use one of the least sensitive chips out there (50% max) and run at f/10, yet need hours LESS exposure time than many, directly contradicting the article.  It isn't the f ratio itself that controls exposure time but your aperture and image scale.  An 8" f/10 scope with a 9 micron pixel camera of say 70% QE is exactly the same as an 8" f/5 using a 4.5 micron camera at 70% QE.  One isn't 4 times faster than the other at all.  The f/5 scope is only faster if you give up the resolution and use that same 9 micron camera, which in turn is undersampling the image.  If the wide field this gives is your goal, then this is GOOD; if resolution of a small galaxy is your goal, it is BAD.  So it comes back to what your purpose is.  That's something mostly lost in that article, though addressed in passing a few times.

 

A good illustration of this is a photo Stan Moore (author of CCDStack) took of NGC 206 in M31 at 8.2" aperture and 10 minutes at the same image scale, but at f/12.4 and f/3.9.  Here's the link.  The thing is, an 8.2" mirror collects the same number of photons no matter what its f ratio.  It's what you do with them after they are collected that makes the difference.

 

All systems are a maze of compromises.  You have to decide what your goals are, then decide what compromises you can make to achieve those goals.  I wanted high resolution.  My typical skies are about 2.5" seeing.  Testing cameras, I found I needed a pixel about 1/3 of that size (not 1/2 as I had so often read), or about 0.8", to capture my typical seeing.  But it can get down to 1.5" on rare nights and I wanted to be ready for that, so I set my sampling size at 0.5".  That in turn meant 14" of aperture to keep exposure times reasonably short.  I've been doing astrophotography since 1955 and post over in the CCD Imaging and Processing forum, usually about one image every other day.
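Working those numbers backwards gives a feel for why the sampling goal pushed the aperture up. This is a sketch under an assumed 9 micron pixel (the camera's pixel size isn't stated in the post):

```python
# Focal length needed for a target image scale, inverting the 206 rule above.
def focal_length_for_scale(pixel_um: float, scale_arcsec_per_px: float) -> float:
    return 206.0 * pixel_um / scale_arcsec_per_px

typical_seeing = 2.5                 # arcsec, his typical skies
target_scale = typical_seeing / 3    # his ~1/3-of-seeing finding -> ~0.8"/px
print(focal_length_for_scale(9.0, target_scale))  # ~2224 mm for typical nights
print(focal_length_for_scale(9.0, 0.5))           # ~3708 mm for rare 1.5" nights
```

With the assumed 9 micron pixel, a 14" scope at roughly f/10 lands near that second focal length, which is consistent with the 0.5" sampling choice.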

 

Here's a 70 minute shot of NGC 4565 taken using only 40 minutes of luminance data (color data adds color but doesn't increase exposure time or signal to noise ratio; it's the latter that's important).  It was taken at f/10 and severely oversampled by most definitions.  It was taken at 0.5" per pixel and is displayed at 0.67", as that's what best matched that night's seeing of about 2" of arc.

 

Rick

Attached Thumbnails

  • NGC4565L4X10RGB1X10X3R1-CROP150.JPG

  • starbob1, psandelle, George Simon and 13 others like this

#6 pfile

    Fly Me to the Moon
  • Posts: 5129
  • Joined: 14 Jun 2009

Posted 19 March 2016 - 12:40 PM

thx john, if we can see far it is because we stand on claude shannon's shoulders (to remix an aphorism :) )

 

rob

 


  • rainycityastro likes this

#7 David Ault

    Gemini
  • Posts: 3424
  • Joined: 25 Sep 2010
  • Loc: Georgetown, TX

Posted 19 March 2016 - 12:41 PM

Rob did an excellent job of describing what sampling is.  I'll add a couple other comments though.

 

Depending on how severely you are undersampling, the majority of the star's energy distribution (called the point spread function, or PSF) may fit inside a single pixel.  This will give you 'square stars' for a lot of the fainter stars.  Usually wide field exposures have this effect, but in general the small stars are lost in the greater field of view, so it does not detract from the image.  Undersampling can be improved some by techniques like dithering/drizzling, but at the cost of making the signal noisier as well as requiring significantly more memory and disk space for the image data.

 

Oversampling isn't necessarily a problem either.  You will end up with images where stars cover several pixels.  Some prefer the appearance of images like this.  It can also make processing an image a bit easier.

 

The entire system is important, not just the focal ratio of the scope.  Ultimately the environmental conditions (seeing, tracking, etc.), aperture, focal length, pixel size and camera characteristics all combine to determine how quickly you will achieve a given signal to noise ratio (SNR - a measure of how noisy your image will look), how well resolved the image will be and what your field of view is.

 

There's a lot of math behind this sort of analysis, but it isn't absolutely necessary to understand it all, although I encourage people to try.  What is really important is what equipment you already have and what objects you are interested in capturing.  If your goal is to capture the smallest structures possible for a given area, then I would tend towards oversampling.  If you want to capture wide field objects, don't worry about the sampling; just target the field of view that captures what you want.

 

EDIT: Rick makes some excellent points as well.  His comment about sampling at 1/3rd the seeing is derived from the fact that we are not sampling a one-dimensional structure.  We have to take the diagonal between pixels into account.  Since we are typically using square pixels, the ratio of the diagonal to one of the square edges is SQRT(2), or 1.414...  Since sampling theory requires twice the sampling of the highest frequency element, this really becomes 2.828 times, or very close to 1/3rd of the seeing (or the diffraction of your optics, whichever is the limiter).
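Spelling that arithmetic out as a trivial sketch (the 2.5" seeing value is just an example):

```python
# Nyquist's factor of 2 per axis times the sqrt(2) pixel diagonal -> ~3x rule.
import math

samples_across_seeing = 2.0 * math.sqrt(2)   # 2.828..., i.e. "about 1/3"
print(samples_across_seeing)

seeing = 2.5                                  # arcsec, example value
print(seeing / samples_across_seeing)         # ~0.88 arcsec/pixel target
```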

 

Regards,

David


Edited by David Ault, 19 March 2016 - 12:48 PM.

  • Footbag, pfile, jdupton and 3 others like this

#8 bobzeq25

    Hubble
  • Posts: 16795
  • Joined: 27 Oct 2014

Posted 19 March 2016 - 01:56 PM

Good stuff above.  I'll note that there is an aspect of personal values that creeps in.  Some may prefer (since it's not likely your sampling will be perfect) to err on the oversampling side, some on the under.

 

People who have very high standards for star roundness (or who "pixel peep", i.e., magnify the image and look at individual stars) will prefer to err on the side of oversampling.  But others will prefer to err on the side of undersampling, which gives a bit better signal to noise ratio and, arguably, a better image sometimes.  It can be debated whether the seeing to sampling ratio is 1 to 2 or 1 to 3 or something in between.  The theoretical arguments often seen can oversimplify the totality of the situation, which is, as usual in AP, very complex.

 

It also is somewhat target sensitive.  A small galaxy or planetary nebula can benefit from pushing sampling below 1"/pixel, while big diffuse targets like nebulae can look fine at 2, 3, or even 4.

 

The bottom line is that hard and fast rules should not be followed slavishly.  1-2 is usually good.  But you'll need to decide for yourself.  Things like binning or focal reducers give you flexibility.

 

Like many things in AP, it's possible to overemphasize theory, and lose track of the important thing, the image.


  • psandelle likes this

#9 bobzeq25

    Hubble
  • Posts: 16795
  • Joined: 27 Oct 2014

Posted 19 March 2016 - 02:00 PM

To me there's a lot wrong with that article including too much emphasis on f ratio and sampling.

I agree that the article oversimplifies some stuff, and there is personal opinion involved.

 

It still is the best introduction for a beginner to cameras I've ever seen.  Most experienced imagers have gone past it; most prospective imagers would find it very valuable.



#10 Jon Rista

    ISS
  • Posts: 23586
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 March 2016 - 02:33 PM

I found an excellent article on this subject almost two years ago, from Stan Moore:

 

http://www.stanmoore.../pixel_size.htm

 

The thing in this article that most caught my eye was this:

 

There is a long-standing controversy in amateur circles as to the minimum sample that preserves resolution.  The Nyquist criterion of 2 is often cited and applied as critical sample = 2*FWHM.  But that Nyquist criterion is specific to the minimum sample necessary to capture and reconstruct an audio sine wave.  The more general solution of the Nyquist criterion is the width (standard deviation) of the function, which for Gaussian is FWHM = 2.355 pixels.  But this criterion is measured across a single axis. To measure resolution across the diagonal of square pixels it is necessary to multiply that value by sqrt(2), which yields a critical sampling frequency of FWHM = 3.33 pixels.

 

 

So many discussions about sampling only consider a single axis, as though we were discussing 1-dimensional audio signals. I think it is critical that we not forget that with imaging, we are working with 2-dimensional signals, or spatial frequencies. You need more than 2x2 pixels to decently resolve a star. The Nyquist limit in two dimensions for spatial frequencies is 3.3x3.3 pixels, which, given the integral nature of pixels, means you really want to resolve each star with 4x4 pixels. Sampling beyond 4x4 pixels continues to improve the accuracy of the star:

 

[attached image: the same star rendered at increasing sampling rates]


  • happylimpet and HockeyGuy like this

#11 jhayes_tucson

    Fly Me to the Moon
  • Posts: 7091
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 19 March 2016 - 02:45 PM

This is digging a lot deeper than what the OP asked about and there are some good points here; however, I want to clarify a couple of things.  Remember that sampling theory applies to a band-limited signal.  The real band-limit in an optical system is not determined by the size of an Airy disk.  It is determined by the optical transfer function, where max spatial freq = 1/(lambda*F/#).  I agree that sampling across an Airy disk at 3x will certainly give a better-looking round image than a sampling rate of 2x, but sampling at, say, 10x will do nothing other than reduce signal.   Rick's statement that "An 8" f/10 scope with a 9 micron pixel camera of say 70% QE is exactly the same as an 8" f/5 using a 4.5 micron camera at 70% QE" is totally true.  That is because the irradiance of the signal in the focal plane varies with the inverse square of the focal ratio.  Just remember that if you use that same 4.5 micron sensor to over-sample an image on that 8", F/10 scope, you'll reduce the signal by a factor of four (1/4) compared to the signal on the F/5 scope, and that means longer exposures to reach the same ADC count.  So, I want to clarify that signal strength does trade against sampling rate.  The better the sampling, the lower the signal.
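Here is a minimal sketch of that equivalence for an extended source, in relative units with equal QE assumed throughout: the photons landing on one pixel scale with the aperture area times the patch of sky the pixel sees.

```python
# Relative flux per pixel for an extended source: aperture area x pixel sky area.
def relative_signal_per_pixel(aperture_mm, focal_mm, pixel_um):
    scale = 206.265 * pixel_um / focal_mm    # arcsec per pixel
    return aperture_mm**2 * scale**2         # arbitrary (relative) units

a = relative_signal_per_pixel(203, 2030, 9.0)   # 8" f/10 with 9 um pixels
b = relative_signal_per_pixel(203, 1015, 4.5)   # 8" f/5 with 4.5 um pixels
c = relative_signal_per_pixel(203, 2030, 4.5)   # 4.5 um pixels on the f/10 scope
print(a / b)   # 1.0  -> Rick's two systems really are identical per pixel
print(c / a)   # 0.25 -> oversampling the f/10 scope costs 4x in signal
```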

 

For a point source, the irradiance of the Airy pattern scales as the square of the ratio of the diameter to the focal ratio.  Going from 6" to 12" in diameter at the same focal ratio will increase the irradiance by four times.  Going from a 6", F/8 to a 12", F/4 will increase the peak irradiance by 16 times.  If you have the luxury of precisely matching your sensor size to whatever scope you want to use, you can indeed get the same signal from each configuration.  However, if you are like most of us with a single camera, it is important to understand that for a stellar image, the irradiance will vary with the aperture and the focal ratio.

 

When it comes to an extended source (like a nebula), aperture makes absolutely no difference:  F/# is the only parameter that counts for determining the irradiance in the image plane.  Yep, that means that your 80 mm Apo will produce the same image irradiance as an 8 m scope at the same F/# for an extended source.  The big difference, of course, will be in image scale, but that doesn't change the underlying fact that signal strength always trades against sampling rate.

 

 

John


Edited by jhayes_tucson, 19 March 2016 - 02:49 PM.

  • psandelle, David Ault and HockeyGuy like this

#12 Rick J

    Cosmos
  • In Memoriam
  • Posts: 8376
  • Joined: 01 Mar 2008
  • Loc: Mantrap Lake, MN

Posted 19 March 2016 - 02:50 PM

Oddly, Astronomy Magazine ran a similar article the same month on selecting a planetary camera.  I thought it did a much better job.

 

Some of the S&T article's issues were due to space limitations forcing oversimplification.  Still, he pushed the fast scope without ever mentioning you could be giving up a lot of potential resolution.  That's not a problem for wide field work, which is a good starting place for beginners, but at least that should have been fixed.  Judging from emails we've exchanged when he was imaging editor, Sean Walker, who used to edit such articles, would have insisted on it, but unfortunately he no longer edits those.  Also, since that was published I've gotten several emails accusing me of lying about my exposure times and citing the article as "proof".  That tells me it missed the mark badly with some folk.

 

He did note that you don't save imaging time with OSC cameras as many believe.  I was happy to see that mentioned; he got right a myth that so many miss.  Go OSC to save money and simplify processing (if you're not too picky, that is), but it won't save imaging time for the same signal to noise ratio and resolution.  Many mono imagers are after a very high signal to noise ratio and put in a ton of time to achieve that.  Most OSC users are after fast images and think these long times are due to using a mono camera, when they'd actually need even more time with OSC to get the same signal to noise ratio some mono imagers are after.

 

Rick



#13 Jon Rista

    ISS
  • Posts: 23586
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 March 2016 - 03:51 PM

John, you are absolutely right that increasing sampling reduces signal. It is a tradeoff. I am not advocating sampling at 10x; however, when I sample at 2x or 1x or 1/2x, I always have clipped stars. Sampling at 3-5x, I no longer have clipped stars. Of course, I am also using a scope with an 8" (203mm) aperture when I sample that much, so I'm gathering more light to start with, which tends to balance things out a bit. I get better sampling, no clipping, but I haven't lost nearly as much signal as if I wasn't using a larger aperture.



#14 freestar8n

    Vendor - MetaGuide
  • Vendors
  • Posts: 8742
  • Joined: 12 Oct 2007

Posted 19 March 2016 - 05:54 PM

There was an article in S&T (April Issue) about choosing a camera and it touched upon the topic of "sampling."
Could someone here elaborate on under and over sampling? An explanation or example of the formulas used, as well as of "seeing conditions," would be welcome.


I haven't seen the article but I have seen many web discussions of how this stuff works - and as usual it involves theorems and simulations - while avoiding the real world aspects of imaging that dominate the true tradeoffs of small vs. large pixels.

The Nyquist theorem is often applied in situations like these - and it really has no place except as a rough guide.

The tradeoffs of large vs. small pixels in astro-imaging can be stated simply: If you care mainly about resolution in small objects, use small pixels. If you care about capturing a wide field scene - don't worry about pixel size and just make sure the objects fit the field for the given OTA and sensor size.

Actual images consist of many frames that have been stacked and slightly shifted after alignment - which makes it much more complicated than simple Nyquist arguments. And after stacking the images are processed - and over-sampling will have a benefit in the size and appearance of stars.

In addition, as you over-sample more, you are able to focus better and align the stacks better. This means that the true fwhm of stars in your final image will get smaller as you sub-sample more - and there is no sudden point where it stops. It is asymptotic.

It is also hard to talk about the impact on "SNR" because the pixels are changing size. If two sensors have the same size and sensitivity - they will record exactly the same amount of signal regardless of pixel size - but the impact of read noise will be different. If you expose long enough - read noise won't matter either way.

I think the main thing in choosing a camera is to know what kind of imaging you want to do. If you are looking for high res of small galaxies - get a smaller sensor with small pixels. If you want wide field then don't worry as much about resolution because the images won't be shown in a way to reveal fine detail anyway.

Frank
  • WesC likes this

#15 jhayes_tucson

    Fly Me to the Moon
  • Posts: 7091
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 19 March 2016 - 05:56 PM

John, you are absolutely right that increasing sampling reduces signal. It is a tradeoff. I am not advocating sampling at 10x; however, when I sample at 2x or 1x or 1/2x, I always have clipped stars. Sampling at 3-5x, I no longer have clipped stars. Of course, I am also using a scope with an 8" (203mm) aperture when I sample that much, so I'm gathering more light to start with, which tends to balance things out a bit. I get better sampling, no clipping, but I haven't lost nearly as much signal as if I wasn't using a larger aperture.

 

 

Jon,

Just remember that while you aren't clipping the bright stars, you are also not getting as much signal in the faint areas, and that's usually what counts.  Just to be clear, I'm not disagreeing with 3x sampling across the Airy disk (or better yet the seeing disk for scopes >~10").  I picked 10x as an extreme example to illustrate the point.  I've noticed that there are folks who believe that using a camera with sub-4 micron pixels on a large (>10") F/10 system is a good thing for resolution, and that's not always the case.  Not only is the resolution not enhanced, but the signal goes down quite a bit.

 

On my C14, the efl is about 3850 mm, which gives a 1 arc-sec disk of about 19 microns and, with PSF smear, a spot size of somewhere around 30 microns (probably a bit less).  That's what allows a 9 micron pixel to provide pretty good sampling with large, slow telescopes.  An 8", F/5 telescope only has a 10 micron 1 arc-second disk with a seeing disk of about 15 microns, so a 5 micron pixel will be a better match.  If you use that same 5 micron sensor on the C14, you'll gather signal at almost 1/4 the rate of the F/5 scope (for an extended source) and it will be way oversampled.
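The conversion being used here is just focal length times the angle in radians; a minimal sketch (the function name is mine):

```python
# Linear size at the focal plane of a 1-arcsec angle (206265 arcsec per radian).
def microns_per_arcsec(focal_length_mm: float) -> float:
    return focal_length_mm * 1000.0 / 206265.0

print(microns_per_arcsec(3850))          # ~18.7 um -> the "about 19 microns" above
print(9.0 / microns_per_arcsec(3850))    # a 9 um pixel on the C14 samples ~0.48"/px
```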

 

John



#16 freestar8n

    Vendor - MetaGuide
  • Vendors
  • Posts: 8742
  • Joined: 12 Oct 2007

Posted 19 March 2016 - 06:40 PM

I don't understand why the Airy disk keeps coming up in discussions of deep sky imaging. For most situations except very small refractors, the star spots are much larger than the Airy pattern. It is also somewhat misleading to talk of a single fwhm for "seeing." In the same skies one system may get 3" fwhm and another will get 1.5". The fwhm you get is a moving target and depends on seeing, optics, guiding, focus, and pixel size.

If you have a given setup that gives a certain fwhm and is reasonably well sampled - and IF you are aiming for highest resolution in the stacked images - I would not go smaller than perhaps 1/6 of that fwhm - but I would definitely go smaller than 1/3. And if you don't care much about resolution but just want to capture faint detail in wide shots quickly - I would allow undersampling to happen as long as the stars don't look odd and blocky.

But if you are sub-sampling for max detail, read noise may be more of a factor and you will need longer sub-exposures and total exposures for the image to look nice. But if your pixel size is limiting the final resolution - which can happen even if they are "Nyquist" size - you will lose resolution you could otherwise have obtained and no amount of exposure will recover it.

Frank
  • bobzeq25 likes this

#17 jhayes_tucson

    Fly Me to the Moon
  • Posts: 7091
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 19 March 2016 - 08:39 PM

Agreed, though it should be pointed out that dithering with drizzle stacking can improve spatial resolution with a critically (or under-) sampled sensor.

 

John



#18 Jon Rista

    ISS
  • Posts: 23586
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 March 2016 - 09:00 PM

 

John, you are absolutely right that increasing sampling reduces signal. It is a tradeoff. I am not advocating sampling at 10x; however, when I sample at 2x or 1x or 1/2x, I always have clipped stars. Sampling at 3-5x, I no longer have clipped stars. Of course, I am also using a scope with an 8" (203mm) aperture when I sample that much, so I'm gathering more light to start with, which tends to balance things out a bit. I get better sampling, no clipping, but I haven't lost nearly as much signal as if I wasn't using a larger aperture.

 

 

Jon,

Just remember that while you aren't clipping the bright stars, you are also not getting as much signal in the faint areas, and that's usually what counts.  Just to be clear, I'm not disagreeing with 3x sampling across the Airy disk (or better yet the seeing disk for scopes >~10").  I picked 10x as an extreme example to illustrate the point.  I've noticed that there are folks who believe that using a camera with sub-4 micron pixels on a large (>10") F/10 system is a good thing for resolution, and that's not always the case.  Not only is the resolution not enhanced, but the signal goes down quite a bit.

 

On my C14, the efl is about 3850 mm, which gives a 1 arc-sec disk of about 19 microns and, with PSF smear, a spot size of somewhere around 30 microns (probably a bit less).  That's what allows a 9 micron pixel to provide pretty good sampling with large, slow telescopes.  An 8", F/5 telescope only has a 10 micron 1 arc-second disk with a seeing disk of about 15 microns, so a 5 micron pixel will be a better match.  If you use that same 5 micron sensor on the C14, you'll gather signal at almost 1/4 the rate of the F/5 scope (for an extended source) and it will be way oversampled.

 

John

 

 

I understand the need for large pixels at very long focal lengths. I agree that using small pixels (<5 microns) with an f/10 system is NOT going to produce useful results with reasonable exposure times. 

 

I dunno about not getting faint signal with what I like to call "well sampled" optics, which is around that 3-5x sampling range. I tend to reduce ISO and increase exposure time when I start imaging at a higher resolution. The last time I imaged with my 8" RC, I was using 20 minute ISO 400 subs, and I had plenty of faint signal:

 

[attached image: 14 hours of integration on the 8" RC with an unmodded 5D III]

 

This is with an unmodded 5D III. (Sorry for the field structure...this was when I only had the TSOAG9, which left me with severe field structure issues, and I eventually gave up trying to fix it all.) This is actually 14 hours of integration; I think these were unfiltered, so the extra noise from skyfog was pretty high. I was actually overexposed here, definitely skyfog limited; the brighter stars clipped, but most of the other stars did not. In this case, my seeing was probably around 4" or so, and my pixel size is 6.25 microns, so I was sampled at around 5x here. This is with an f/8 system. I was not using tiny pixels, but neither was I using 9 micron pixels...and I honestly don't have a problem with the sampling I was getting. Plenty of signal in the "faint stuff"...which would in particular be Ha for the unmodded 5D III.


Edited by Jon Rista, 19 March 2016 - 09:01 PM.


#19 jhayes_tucson

    Fly Me to the Moon
  • Posts: 7091
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 19 March 2016 - 09:04 PM

Jon,

Understood.  I said, "not as much" and I wasn't trying to imply that you won't get faint signal.  It's all relative...

John



#20 Jon Rista

    ISS
  • Posts: 23586
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 March 2016 - 10:02 PM

Sure, I understand. Tradeoffs. If you want really strong signal in a very short time, you either need a massive and excessively fast scope (14" Hyperstar), or you need to trade off resolution for SNR. 

 

All I was saying is, optimal sampling (sampling at Nyquist in two dimensions) would require 3.3x oversampling, rather than the more commonly quoted 2x oversampling (which applies to 1-dimensional signals), and that you can continue to realize improvements in sampling and resolution going to 4 pixel or 5 pixel sampling. Depending on what you're imaging, sampling at 4x may not be a huge issue; if you can get enough faint signal in a manageable exposure time, it shouldn't matter.



#21 freestar8n

    Vendor - MetaGuide
  • Vendors
  • Posts: 8742
  • Joined: 12 Oct 2007

Posted 19 March 2016 - 10:11 PM

I wouldn't use the term "optimal" for Nyquist sampling - because it isn't optimal either in terms of SNR or resolution. Larger pixels will decrease the impact of read noise - and smaller pixels will increase resolution but with diminishing returns as the pixels get smaller and smaller. There is no magic optimum to aim for. It's a continuous spectrum of tradeoffs and the best choice will depend on the goals and preferences of the imager.

And one thing to remember is that if you are getting 3" fwhm with 1" pixels and you decide to go to 0.5" pixels, your new images may have 2.5" fwhm due to the indirect benefits of smaller pixels on resolution.

I have about 0.57" pixels and my fwhm's are getting below 1.5" - so I will remove the reducer and go back to f/10 for certain objects - and I expect an improvement in resolution. But the field will be smaller and it won't be suitable for large, faint objects.

Frank

#22 leemr

    Ranger 4
  • Posts: 372
  • Joined: 21 Mar 2014

Posted 19 March 2016 - 10:19 PM

Dumb question. Let's say I have a particular kit which nets me images with stars that have a FWHM of 2.2". Obviously this is a combination of seeing, guide errors, sampling, optics, etc., but it means that my seeing is at least that good.

 

Assuming the "3x" rule is correct (for the sake of discussion only), does that mean that I should be sampling at < 2.2 / 3 = 0.73"/px if I want to be optimally sampled?



#23 Jon Rista

    ISS
  • Posts: 23586
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 19 March 2016 - 10:28 PM

I wouldn't use the term "optimal" for Nyquist sampling - because it isn't optimal either in terms of SNR or resolution. Larger pixels will decrease the impact of read noise - and smaller pixels will increase resolution but with diminishing returns as the pixels get smaller and smaller. There is no magic optimum to aim for. It's a continuous spectrum of tradeoffs and the best choice will depend on the goals and preferences of the imager.

And one thing to remember is that if you are getting 3" fwhm with 1" pixels and you decide to go to 0.5" pixels, your new images may have 2.5" fwhm due to the indirect benefits of smaller pixels on resolution.

I have about 0.57" pixels and my fwhm's are getting below 1.5" - so I will remove the reducer and go back to f/10 for certain objects - and I expect an improvement in resolution. But the field will be smaller and it won't be suitable for large, faint objects.

Frank

 

Optimal in terms of reproducing the structure of the original object, without going overboard on sampling and losing too much SNR. Below 3.3x sampling, you're not going to be sampling enough to actually reproduce structure at the limit of your seeing.

 

Ignoring the ability to actually resolve the detail that exists at a reasonable level, I agree that the ultimate goals of the imager trump sampling. I image at 2.14"/px most of the time, because I like the big huge fields packed with interesting things more than I care about resolving all the details my skies allow, and I rarely have clear skies long enough to mosaic at a higher resolution.



#24 freestar8n

    Vendor - MetaGuide
  • Vendors
  • Posts: 8742
  • Joined: 12 Oct 2007

Posted 20 March 2016 - 01:06 AM

Dumb question. Let's say I have a particular kit which nets me images with stars that have a FWHM of 2.2". Obviously this is a combination of seeing, guide errors, sampling, optics, etc., but it means that my seeing is at least that good.
 
Assuming the "3x" rule is correct (for the sake of discussion only), does that mean that I should be sampling at < 2.2 / 3 = 0.73"/px if I want to be optimally sampled?


That's a pretty good summary because it's all based on the fwhm you can actually achieve with your equipment at your site. Of course - it assumes you have already been imaging with a ccd and you are considering changing it.

But I wouldn't use the word "optimal" - I'd just say it's "about right" if you want to get most of the resolution without sub-sampling too much and requiring longer sub exposures due to read noise, for example.

If you know you can get 2.2" with 1" pixels - you might do even better with 0.5" pixels. And if you are doing wide field imaging you may be happy with 4" fwhm and you can go ahead and use 2" pixels.

The thing about Nyquist is that it's a theorem and it makes a very strong claim. In certain sampling situations, if you sample at a certain frequency, you will recover the original signal exactly - with no loss at all. Sampling finer than that will gain absolutely nothing. It is already perfect. But real situations don't meet the requirements of the Nyquist theorem for it to apply - so instead it amounts to: Resolution increases with smaller pixels - but after a while it doesn't improve much. There is no sampling that has a magic or optimal benefit.
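That strong claim can be demonstrated numerically. In the sketch below (not from the thread; all values are illustrative), a signal containing only frequencies below half the sample rate is rebuilt between the samples by sinc interpolation, and the leftover error comes only from truncating the infinite sum:

```python
# Shannon/Nyquist reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n).
import numpy as np

fs = 100.0                                 # sample rate, Hz (Nyquist limit: 50 Hz)
n = np.arange(2000)                        # 20 seconds of samples
samples = np.sin(2*np.pi*13*n/fs) + 0.5*np.sin(2*np.pi*37*n/fs)  # both < 50 Hz

t = np.linspace(9.0, 11.0, 401)            # fine grid mid-signal, away from edges
recon = np.array([np.sum(samples * np.sinc(fs*ti - n)) for ti in t])
truth = np.sin(2*np.pi*13*t) + 0.5*np.sin(2*np.pi*37*t)
print(np.max(np.abs(recon - truth)))       # tiny; nonzero only due to truncation
```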

Frank
  • leemr likes this

#25 leemr

    Ranger 4
  • Posts: 372
  • Joined: 21 Mar 2014

Posted 20 March 2016 - 01:26 AM

Thanks Frank, that helps a lot. My main interest is galaxies and I'm currently sampling at 1.11"/px with an Esprit 120 and an SX Trius 674. So far the best I've measured is ~1.8" FWHM, as measured by PixInsight, with mid to low 2's being average.

 

Sounds like I'm hardware limited. Given the 120mm aperture I'll be diffraction limited (Dawes) at about 0.96", so if I wanted to go beyond that I'd be up for a new scope, possibly a new camera. And that would mean a huge outlay, because I'm a refractor fanboy and have enough problems with gear without throwing mirrors into the mix.

 

Cheers,

Lee

 

Edit: In other words, I really need to start drizzling (I already dither).


Edited by leemr, 20 March 2016 - 01:48 AM.


