
Optimum Camera Pixel Size vs. Focal Ratio


#1 v3ngence

v3ngence

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 04 Jul 2016

Posted 03 December 2019 - 02:58 AM

I was trying to figure out the ideal pixel size for a given telescope, where the pixel resolution scale = the maximum resolving power of the telescope, and I noticed something rather interesting:

 

The optimized pixel size depends solely on the focal ratio (f/#) of a telescope!

 

I found that for a given pixel size there exists a fastest possible f/#, and conversely, for a given focal ratio there is a minimum pixel size.

 

[attached image: chart of optimized pixel size vs. focal ratio]

 

For example, my f/6 refractor won't be able to show any more detail if the pixel size is below 3.37um.

And my Nikon D5300 with 3.9um pixels really isn't capturing all the detail with anything faster than f/7!

 

Here is my worksheet; it uses the common Dawes' limit formulas, but let me know if anyone sees any errors:

Attached Files
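(For anyone who wants to sanity-check the worksheet without the spreadsheet, here is a minimal Python sketch of the same calculation. It assumes the 116/D arc-second form of Dawes' limit and the 206.265 plate-scale constant used later in this thread; the function names are just illustrative.)

```python
# Critical sampling: pixel scale (arcsec/px) equals the telescope's
# Dawes resolving power (116 / aperture_mm arcsec).
# From 206.265 * pixel_um / focal_length_mm = 116 / aperture_mm and
# f_ratio = focal_length / aperture:
#   pixel_um = f_ratio * 116 / 206.265   (smallest useful pixel)
#   f_ratio  = pixel_um * 206.265 / 116  (fastest useful f-ratio)

def min_pixel_um(f_ratio: float) -> float:
    """Smallest pixel (um) that still adds detail at this f-ratio."""
    return f_ratio * 116 / 206.265

def max_f_ratio(pixel_um: float) -> float:
    """Fastest f-ratio a given pixel size can fully exploit."""
    return pixel_um * 206.265 / 116

print(min_pixel_um(6.0))  # ~3.37 um for an f/6 refractor
print(max_f_ratio(3.9))   # ~6.9, i.e. about f/7 for 3.9 um pixels
```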


Edited by v3ngence, 03 December 2019 - 03:01 AM.

  • bugbit, ks__observer and Louisv28 like this

#2 james7ca

james7ca

    Cosmos

  • *****
  • Posts: 7,620
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 03 December 2019 - 06:28 AM

For so-called critical sampling as used for planetary imaging the generally accepted "rule" that relates pixel size to f-ratio is that for a mono camera you want to use an f-ratio that is equal to five times the camera pixel size as measured in microns. So, if you have a camera with a pixel size of 3.37um you'd want to use an f-ratio of 5 x 3.37 ≈ f/17. Furthermore, if you are using a one-shot-color camera you probably want to use an even bigger multiplier, which I've often suggested might be about 7.5X.

 

That said, you generally don't want to image DSOs at those kinds of image scales since your seeing conditions are going to limit what you can actually resolve. Thus for DSO imaging people usually try to estimate their typical seeing conditions (which might be around 2 arc seconds) and then take one half to one third of that for their image scale. In this case, you go by the focal length, not the f-ratio, since focal length and pixel size determine your image scale.

 

In any case, there is no law that says you have to stay exactly at these recommendations, but if you deviate too far from the above then you are most likely not getting all of the resolution that is possible for your setup, or you are penalizing yourself by requiring unnecessarily long exposure times (to compensate for the slower f-ratio). For DSOs you also want to sample sufficiently so that your stars are recorded with at least two pixels (well, you can go below that, but you don't want to approach one-pixel stars, as one pixel can't make a round-looking star).
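(Both rules of thumb from this post, sketched in Python; the 5x and 7.5x multipliers and the one-half to one-third of seeing target are taken from the text above, while the 700 mm focal length in the example is just an illustrative stand-in.)

```python
# Planetary critical sampling: f-ratio ~ 5 x pixel size (um) for mono,
# ~ 7.5 x for a one-shot-color camera.
def planetary_f_ratio(pixel_um: float, osc: bool = False) -> float:
    return (7.5 if osc else 5.0) * pixel_um

# DSO imaging goes by image scale (arcsec/px), set by focal length and
# pixel size; aim for roughly 1/2 to 1/3 of your typical seeing.
def image_scale_arcsec(pixel_um: float, focal_length_mm: float) -> float:
    return 206.265 * pixel_um / focal_length_mm

print(planetary_f_ratio(3.37))        # ~16.9, i.e. about f/17
print(image_scale_arcsec(3.37, 700))  # ~0.99 "/px, suits ~2" seeing
```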


  • DSO_Viewer, Gene3 and v3ngence like this

#3 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 23,859
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 03 December 2019 - 03:46 PM

James' answer is excellent. 

 

The thing about "optimal" is when it comes to DSO, unlike SSO, it often depends a lot on what you want to do. If you want the best resolution possible for a given amount of seeing, then the optimal pixel size in angular terms (arcseconds) will be around 1/3rd the seeing conditions in arcseconds. However, aperture does play a role in how the signal gets distributed by seeing, and to get the best resolution possible, you need to pair the right pixel size with the right aperture, which tends to be somewhere between 8-10" unless your seeing is usually excellent (in which case, you could use a larger aperture scope).

 

But high-resolution imaging is not the only kind of imaging. You can image DSOs at any scale, from tiny objects that are highly magnified, to medium-sized objects, to large objects, to giant complex fields full of many objects and stars or parts of the Milky Way. There are also different expectations and goals in terms of how fast you can image something. Depending on your goals, you may find that a larger sensor is much more important than the size of the pixel, or that big pixels are desirable because they accumulate more signal in a given time. Some of the best images ever made were taken with 9-micron pixels on ultra-short, very fast f/3-f/4 telescopes...the depth of signal you can get with such equipment is unparalleled (to the point where even very high amounts of read noise simply do not matter).

 

So, optimal...may be in the eye of the imager. 


  • Gene3 and v3ngence like this

#4 jhayes_tucson

jhayes_tucson

    Fly Me to the Moon

  • *****
  • Posts: 7,226
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 03 December 2019 - 04:39 PM

I was trying to figure out the ideal pixel size for a given telescope, where the pixel resolution scale = the maximum resolving power of the telescope, and I noticed something rather interesting:

 

The optimized pixel size depends solely on the focal ratio (f/#) of a telescope!

 

I found that for a given pixel size there exists a fastest possible f/#, and conversely, for a given focal ratio there is a minimum pixel size.

 

[attached image: chart of optimized pixel size vs. focal ratio]

 

For example, my f/6 refractor won't be able to show any more detail if the pixel size is below 3.37um.

And my Nikon D5300 with 3.9um pixels really isn't capturing all the detail with anything faster than f/7!

 

Here is my worksheet; it uses the common Dawes' limit formulas, but let me know if anyone sees any errors:

 


 

I've emphasized this fact in dozens of posts over the years, so I'm delighted to see that you were able to discover it on your own.  Welcome to the world of optics...and good work!  Keep at it because there are a lot more things like this to discover.

 

 

John


  • v3ngence likes this

#5 DmitriNet

DmitriNet

    Vostok 1

  • -----
  • Posts: 140
  • Joined: 13 May 2017

Posted 03 December 2019 - 04:46 PM

All these nice min-max calculations assume that pixels are uniform.  In reality, oversampling lets you average over several pixels and get rid of cosmic rays, hot pixels, and non-uniform sensitivity.


  • v3ngence likes this

#6 Jared

Jared

    Fly Me to the Moon

  • *****
  • Posts: 6,114
  • Joined: 11 Oct 2005
  • Loc: Piedmont, California, U.S.

Posted 03 December 2019 - 06:22 PM

All these nice min-max calculations assume that pixels are uniform.  In reality, oversampling lets you average over several pixels and get rid of cosmic rays, hot pixels, and non-uniform sensitivity.

Cosmic rays, satellite trails, and beta decay in the optics are generally addressed by combining multiple images using some sort of outlier-rejection algorithm.  Likewise, hot pixels and non-uniformity are generally addressed by dithering multiple exposures.  No oversampling required with good data collection and reduction techniques.


Edited by Jared, 03 December 2019 - 06:24 PM.

  • v3ngence likes this

#7 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 8,983
  • Joined: 12 Oct 2007

Posted 04 December 2019 - 02:26 AM

The topic of "optimal" pixel size keeps coming up in CN - and I see the same tropes being repeated - but they ignore fairly obvious factors that make the situation quite different from typical "Nyquist" scenarios.

 

My recent thread is here:  https://www.cloudyni...-vs-big-pixels/

 

One main point is that deep sky imaging doesn't just involve capturing the sky on a sensor.  It involves an elaborate pipeline of star centroid measurement, alignment, interpolation and stacking - and errors are introduced *on the scale of the pixels*.  That means that if you have smaller pixels, in arc-seconds, you will have less associated error - in arc-seconds.  And that will translate to smaller FWHMs in the final stack - in arc-seconds.

 

Many other factors play a role *on the scale of the pixels*.

 

The other main point is so obvious that for some reason people can't even grasp it.  In Nyquist theory, you have a continuous, band-limited signal - and you sample it discretely at a certain minimum rate - and if you do that - it allows you to reconstruct the original *continuous* signal exactly by proper post-filtering of those samples.  But that isn't happening at all in deep sky imaging.  You just sample with pixels - and then you look at the pixels.  And due to alignment and interpolation - those pixels get blurred.  There is no recovery of the original continuous signal - you just look at those samples.

 

So - if you don't like looking at blocky stars with only a few pixels across - go finer and the result will look better.

 

But, more importantly, if you have very low signal and even Nyquist-sized pixels look noisy - go ahead and bin or smooth - and effectively use much larger pixels than "optimal."

 

There is no "optimum" at all.  With high signal - if you care about how stars look close up - use very small pixels.  If you find it is too noisy looking - bin or smooth them.  The dominant factors are sensor noise, and how the imager wants stars to look on the scale of pixels.

 

When CCDs had high read noise there was more of a natural drive to keep the pixels big.  But with much lower-noise sensors these days - the impressions people have from those times no longer apply.  You could record the exact x, y location of every single photon received and make a nice image at whatever level of pixel size you want in the final presentation - which corresponds to imaging with infinitesimally small pixels.  No problem.

 

If you have a sensor with no noise at all, the optimum pixel size would simply be as small as possible.  If the sensor has some noise but the noise is very small - as most CMOS sensors these days are - then you likely don't need to worry about pixels being too small.  You can always bin or smooth based on how much time you have spent on the object - and how much you want to stretch it to reveal faint stuff.

 

Frank


Edited by freestar8n, 04 December 2019 - 03:00 AM.

  • leviathan, Jon Rista, v3ngence and 6 others like this

#8 Allaboutastro

Allaboutastro

    Explorer 1

  • -----
  • Posts: 56
  • Joined: 10 Jan 2010
  • Loc: Grapevine, TX

Posted 04 December 2019 - 10:44 AM

Frank: 

 

Absolutely well-written.   It's refreshing to see this line of thinking become so commonplace...it used to be very hard to convince people that this was the case.    That said, while today's sensors make it easier to accomplish in practice, even back with our high read-noise CCDs it wasn't all that different...we just imaged to the sky limit to avoid read noise's impact on our images.   This was always pretty easy to do in practice.  Rather, back then, it was the dark current characteristics, defects, blooming, and amp glow (pattern non-uniformity) that caused the bigger headaches.   Plus, as you said, our choices were more limited, meaning that the better-performing cameras tended to be those that gave us bigger pixels than we wanted.

 

But man, I'm loving the new QHY600.   It's amazing how far we've come...and how close today's cameras are to the "ideal" cameras we used to try to emulate with our practices.


  • freestar8n and v3ngence like this

#9 pyrasanth

pyrasanth

    Surveyor 1

  • *****
  • Posts: 1,984
  • Joined: 08 Jan 2016

Posted 04 December 2019 - 01:05 PM

Are we getting a bit of a blunt stick with the RASA telescopes then? To get optimal sampling at f/2.2 we need a camera with 1.5 um pixels, which I have not seen; I think the smallest I've seen is 2.4 um. If you know of anything smaller, let me know!



#10 DmitriNet

DmitriNet

    Vostok 1

  • -----
  • Posts: 140
  • Joined: 13 May 2017

Posted 04 December 2019 - 01:16 PM

Cosmic rays, satellite trails, and beta decay in the optics are generally addressed by combining multiple images using some sort of outlier-rejection algorithm.  Likewise, hot pixels and non-uniformity are generally addressed by dithering multiple exposures.  No oversampling required with good data collection and reduction techniques.

This assumes the stars are unchanging.  Actually, more than half are variable. A single shot should be as accurate as possible, and oversampling is the way to achieve this.  Stacking is just a necessary evil when the signal is too weak.



#11 v3ngence

v3ngence

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 04 Jul 2016

Posted 04 December 2019 - 05:54 PM

Are we getting a bit of a blunt stick with the RASA telescopes then? To get optimal sampling at f/2.2 we need a camera with 1.5 um pixels, which I have not seen; I think the smallest I've seen is 2.4 um. If you know of anything smaller, let me know!

 

That's what I thought at first but there has been a lot of good discussion in this thread and I believe Frank has it right:

 

 

...errors are introduced *on the scale of the pixels*.  That means that if you have smaller pixels, in arc-seconds, you will have less associated error - in arc-seconds.  And that will translate to smaller FWHMs in the final stack - in arc-seconds.

 

...So - if you don't like looking at blocky stars with only a few pixels across - go finer and the result will look better.

 

Frank

 

My calculation was for the "critical sampling" point where the pixel resolution scale = the maximum resolving power of the telescope, but that is also the point where a star = 1 pixel, which is no good for DSOs, so the f/2 RASA should still be great for imaging!


Edited by v3ngence, 04 December 2019 - 05:56 PM.

  • freestar8n likes this

#12 Stamos

Stamos

    Vostok 1

  • -----
  • Posts: 155
  • Joined: 30 Oct 2015
  • Loc: Athens, Greece

Posted 05 December 2019 - 07:25 AM

Just to make sure I got it right, does this mean that for a given pixel size the resolution depends solely on the focal ratio?

For example, for my 183 cameras (2.4μm pixels) there isn't any benefit if I choose to go for the 11-inch RASA instead of the 8-inch one...?

I'm sure I'm missing something here...


  • v3ngence likes this

#13 v3ngence

v3ngence

    Sputnik

  • -----
  • topic starter
  • Posts: 48
  • Joined: 04 Jul 2016

Posted 05 December 2019 - 08:01 PM

Just to make sure I got it right, does this mean that for a given pixel size the resolution depends solely on the focal ratio?

For example, for my 183 cameras (2.4μm pixels) there isn't any benefit if I choose to go for the 11-inch RASA instead of the 8-inch one...?

I'm sure I'm missing something here...

 

Not the actual arcsecond resolution, but the point where the resolution of the scope based on its aperture matches the pixel resolution based on the focal length. I'm pretty sure that if you had absolutely perfect guiding, this is the point where a star would be 1 pixel. But in the real world I think you can get away with it due to normal guiding errors.

 

For your 2.4 um pixels the critical sampling point would be a scope with a focal ratio of 4.3 (f-ratio = pixel size in um * 206.265 / 116).

Scopes with a faster f-ratio can resolve more detail than the pixels can use, so stars would be small, approaching 1 pixel; scopes with a slower f-ratio deliver less detail than the pixel scale, so the stars would be larger and made of multiple pixels no matter how good the seeing or guiding is.
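(As a quick check of that formula with the numbers above, assuming nothing beyond the arithmetic in this post:)

```python
# Critical f-ratio for a 2.4 um pixel, per the formula above.
print(2.4 * 206.265 / 116)  # ~4.27, i.e. roughly f/4.3
```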


  • Stamos likes this

#14 james7ca

james7ca

    Cosmos

  • *****
  • Posts: 7,620
  • Joined: 21 May 2011
  • Loc: San Diego, CA

Posted 06 December 2019 - 01:52 AM

Just to make sure I got it right, does this mean that for a given pixel size the resolution depends solely on the focal ratio?

For example, for my 183 cameras (2.4μm pixels) there isn't any benefit if I choose to go for the 11-inch RASA instead of the 8-inch one...?

I'm sure I'm missing something here...

The f-ratio only determines the size of the Airy disk at the focal plane, while aperture determines the angular or arc-second resolution on the target itself. So, it really depends upon what you consider "resolution." However, I'd argue that in astrophotography the most commonly used definition for resolution would be the capability to produce a greater amount of detail in the target itself and that form of resolution scales directly with aperture (not f-ratio).

 

As for "critical sampling," I'd refer back to the rule of thumb I presented earlier, that being that you'd want to use an f-ratio that is about five times the size of the camera's pixels (with caveats, see my earlier post).

 

Here is a link to an online calculator where you can input values of aperture and/or f-ratio to see how those change the size of the Airy disk produced by an optical system:

 

  http://www.wilmslowa...rmulae.htm#Airy

 

Aperture determines the angular resolution while f-ratio determines the size of the Airy disk at the plane of focus (at the sensor itself).
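(A small Python illustration of that split. The 550 nm wavelength is an assumption for green light, and the 203 mm and 279 mm apertures are stand-ins for an 8" and an 11" scope, both taken at roughly f/2.2 since, as noted below, the two f-ratios are nearly the same.)

```python
# f-ratio sets the linear Airy disk diameter at the sensor:
#   d_um = 2.44 * wavelength_um * f_ratio
# Aperture sets the angular resolution on the sky (Dawes): 116 / D_mm.
WAVELENGTH_UM = 0.55  # assumed green light

def airy_diameter_um(f_ratio: float) -> float:
    return 2.44 * WAVELENGTH_UM * f_ratio

def dawes_limit_arcsec(aperture_mm: float) -> float:
    return 116 / aperture_mm

for aperture_mm in (203, 279):  # ~8" and ~11" apertures
    print(aperture_mm,
          round(airy_diameter_um(2.2), 2),            # same spot size
          round(dawes_limit_arcsec(aperture_mm), 2))  # finer angular res.
```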

 

In terms of either an 8" or 11" RASA, it's kind of complicated. However, at any given image scale a larger aperture will generally produce a "faster" imaging system. However, in terms of exposure for any given camera (pixel size) what really matters is the f-ratio (at least for extended objects, point sources like stars also respond to aperture).

 

So, if you use the same camera (pixel size) on each of those systems, then for extended objects you will see no difference in the exposure time needed to reach a given level of signal to noise (since the f-ratios are the same, or nearly so). However, the 11" RASA will produce a larger image scale since it has a longer focal length, which might mean greater detail if your technique and seeing conditions allow for that possibility.

 

On the flip side, if you could pair the 11" RASA with a camera that had larger pixels, such that both systems delivered the same image scale, then the larger-aperture scope would likely be the "faster" imaging system (since a larger pixel can gather more light/photons per unit of exposure time). Of course, if you were using the same camera on both, the 11" RASA would produce a greater image scale (as discussed earlier), and you could resample the image from the larger scope to match the image scale produced by the smaller scope and gain some slight improvement in signal to noise.

 

Thus, assuming that you are NOT seeing- or technique-limited, the 11" RASA should allow some image quality benefits. Now, would those benefits justify the differences in the cost and size of the competing systems? Well, that's kind of up to each individual person and pocketbook. That said, I think there are some system differences between the 8" RASA and the larger models, so you'd want to weigh those factors too.


Edited by james7ca, 06 December 2019 - 01:58 AM.

  • Stamos likes this

#15 bortle2

bortle2

    Vostok 1

  • -----
  • Posts: 116
  • Joined: 18 Sep 2019

Posted 06 December 2019 - 07:59 AM

My recent thread is here:  https://www.cloudyni...-vs-big-pixels/

Thanks for the link; I read it through (took some time...) and quite enjoyed it.

 

Even though you call it a "recent thread", quite a few things have changed since then; namely, there are now CMOS sensors with 16-bit ADCs, higher FWCs, etc. (not to mention outstanding QEs), so it would be more difficult to "attack" you on those grounds these days.

 

But the main issue was that some opponents simply didn't listen to exactly what you were saying... As a result, part of the conversation read like "a 5c coin is bigger than a 10c coin, so it's better!"

 

I expect these debates to die down as time passes, as they did in the photographic community a while ago; the last bastion will probably be the claim that the large pixels of CCD sensors possess some intangible, esoteric properties that can't be replicated with small-pixel CMOS sensors... Well, we saw that in the photographic community as well, and those claims, too, died down and then disappeared after the advent of the Sony 50/100MP MF sensors. Just give it some time...


  • freestar8n and Allaboutastro like this

#16 Allaboutastro

Allaboutastro

    Explorer 1

  • -----
  • Posts: 56
  • Joined: 10 Jan 2010
  • Loc: Grapevine, TX

Posted 06 December 2019 - 11:36 AM

At some point, it's not going to matter anymore.   There will never be a reason to buy the larger-pixel chip when there's a smaller-pixel chip that does the job just as well.   The versatility of smaller pixels is that you get the maximum resolution when you require it, yet you can bin pixels when you don't.   Cameras are so good now.

 

I mentioned in a recent thread, as the owner of both a FLI PL-16803 and a QHY 600, that there's really no reason for me to choose the 16803 camera anymore.   While the FLI has more real estate, the increase in QE with the Sony chip means that I can pretty much mosaic my field in the same amount of time that the CCD sensor covers it with a single frame.   And it's far more versatile, since the smaller pixels yield more high-res options with a wider variety of instruments.

 

As such, not only are we seeing the demise of CCD chips in our hobby at the hands of new BSI CMOS sensors, but at some point we won't need a chart to tell us "optimal sampling"...we'll just buy a sensor of the size we can afford with the smallest pixels on offer.  In this way, that camera can be used with our C-14 or our ED80, taking full advantage of the focal lengths of those instruments, all with a single camera.

 

EDIT:  There is a disadvantage to smaller pixels...they make the file sizes ridiculously large.   While it's an acceptable tradeoff for what they offer, you had better consider building a new PC if you hope to process your images in a reasonable amount of time.  :)


Edited by Allaboutastro, 06 December 2019 - 12:06 PM.

  • leviathan, v3ngence and Louisv28 like this

#17 ks__observer

ks__observer

    Apollo

  • *****
  • Posts: 1,025
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 06 December 2019 - 10:03 PM

Agree above re binning:

Small pixels are most versatile.

 

Airy Disk Size:

For DSOs you will reach the theoretical AD size with smaller apertures.

For smaller apertures, the f-ratio will control the AD size, as opposed to seeing.

People say that below 100mm, f-ratio dominates over seeing.

My SV70 has a number of pixels across my FWHM very close to theoretical -- seeing has little effect.


Edited by ks__observer, 07 December 2019 - 02:04 AM.


#18 jhayes_tucson

jhayes_tucson

    Fly Me to the Moon

  • *****
  • Posts: 7,226
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 07 December 2019 - 12:48 AM

At some point, it's not going to matter anymore.   There will never be a reason to buy the larger-pixel chip when there's a smaller-pixel chip that does the job just as well.   The versatility of smaller pixels is that you get the maximum resolution when you require it, yet you can bin pixels when you don't.   Cameras are so good now.

 

I mentioned in a recent thread, as the owner of both a FLI PL-16803 and a QHY 600, that there's really no reason for me to choose the 16803 camera anymore.   While the FLI has more real estate, the increase in QE with the Sony chip means that I can pretty much mosaic my field in the same amount of time that the CCD sensor covers it with a single frame.   And it's far more versatile, since the smaller pixels yield more high-res options with a wider variety of instruments.

 

As such, not only are we seeing the demise of CCD chips in our hobby at the hands of new BSI CMOS sensors, but at some point we won't need a chart to tell us "optimal sampling"...we'll just buy a sensor of the size we can afford with the smallest pixels on offer.  In this way, that camera can be used with our C-14 or our ED80, taking full advantage of the focal lengths of those instruments, all with a single camera.

 

EDIT:  There is a disadvantage to smaller pixels...they make the file sizes ridiculously large.   While it's an acceptable tradeoff for what they offer, you had better consider building a new PC if you hope to process your images in a reasonable amount of time.  :)

 

Unfortunately, there are some screwball statements about sampling floating around in this thread that I'd like to try to straighten out.  Every optical system acts like a bandpass filter.  The optical transfer function is what describes that filter, and the real part of that function is the MTF.  For an ideal circular aperture, the maximum spatial frequency transmitted by the telescope in the focal plane is given by 1/(lambda*F/#).  In a pure mathematical sense, sampling the image at a rate greater than 4.88 sample points across the Airy diameter will produce no additional information; 4.88 samples across the Airy disk diameter is the maximum rate needed to perfectly reconstruct the signal passed by the optics.

Now, in the real world, we don't sample using ideal mathematical delta sampling functions, so that answer isn't quite perfect.  In the real world, we sample with rectangular pixels and, contrary to what some might have you believe, there actually are ways to model that process exactly using physical optics.  Mathematically, it involves first smearing the image by convolving it with a rectangle function the same size as a pixel before it is sampled by a two-dimensional comb sampling function.  That smearing process makes the system no longer bandwidth limited, but it also (and more importantly) smears the high-frequency information a little bit, which makes the image a tiny bit more blurry.

It might seem like there is no end to recovering more information from the image by using smaller pixels, but there is a serious penalty: smaller pixels intercept less of the signal, which in turn increases the noise in the image at a rate that makes it difficult to recover any additional detail--even out to the theoretical Nyquist limit.
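(A quick numerical check of those statements in Python, with an assumed wavelength of 550 nm; the F/10.8 value matches the system used for the chart discussed next.)

```python
# Ideal circular aperture: the cutoff frequency in the focal plane is
# 1/(lambda * F#), so the Nyquist sample spacing is lambda * F# / 2.
# The Airy diameter is 2.44 * lambda * F#, which works out to exactly
# 4.88 samples across the Airy diameter.
LAMBDA_UM = 0.55  # assumed mid-visual wavelength

def airy_diameter_um(f_num: float) -> float:
    return 2.44 * LAMBDA_UM * f_num

def nyquist_pixel_um(f_num: float) -> float:
    return LAMBDA_UM * f_num / 2

f_num = 10.8
print(airy_diameter_um(f_num))                            # ~14.5 um
print(nyquist_pixel_um(f_num))                            # ~2.97 um
print(airy_diameter_um(f_num) / nyquist_pixel_um(f_num))  # 4.88
```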

 

That's how it all works under perfect conditions but when you mix in the effects of seeing on long exposure imaging, things get even worse.  The attached chart shows the effect on the MTF under various seeing conditions for an F/10.8 system.  As you can see, the effect of seeing is to act as a low-pass filter, effectively rolling off the response of the system to high spatial frequencies.  In the example that I computed here, it shows the bandpass limit when sampling 2-3 square pixels across the Airy diameter.  For the most common "good-seeing" conditions in the range of 1.5" - 2.0", sampling at even just two pixels across the Airy disk is going to have a very minor effect on picking up more image detail.  Clearly under exceptional conditions of 0.5" seeing, even 3 pixels across the Airy disk might be giving up high spatial frequency information, which always translates into perceived image sharpness.  The point is that for most common conditions worse than about 1" seeing, sampling at a rate higher than about 3 pixels across the Airy disk is simply decreasing SNR without making the image any sharper.  And for most of us under 2" skies, 2 pixels across the Airy disk is about as far as we can go.

 

The jump in the performance of your CMOS camera relative to the 16803 sensor may be mostly due to the higher responsivity of the Sony sensor.  The higher responsivity makes up for the smaller pixels, so you see a similar signal with higher sampling, which is great.  However, there actually is a point where making the pixels smaller gains nothing in detail and merely increases noise.  That's why it's worth the effort to avoid sampling beyond the point where you gain nothing in image detail.  That's what we call "optimizing the pixel size."  Use pixels small enough that you are picking up maximum detail considering your current conditions, and you'll get the most detail without sacrificing SNR.

 

John

Attached Thumbnails

  • Seeing MTF Curve.jpg

Edited by jhayes_tucson, 07 December 2019 - 12:50 AM.

  • james7ca, Jon Rista, Der_Pit and 1 other like this

#19 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 8,983
  • Joined: 12 Oct 2007

Posted 07 December 2019 - 02:19 AM

Thanks for the link; I read it through (took some time...) and quite enjoyed it.

 

Even though you call it a "recent thread", quite a few things have changed since then; namely, there are now CMOS sensors with 16-bit ADCs, higher FWCs, etc. (not to mention outstanding QEs), so it would be more difficult to "attack" you on those grounds these days.

 

But the main issue was that some opponents simply didn't listen to exactly what you were saying... As a result, part of the conversation read like "a 5c coin is bigger than a 10c coin, so it's better!"

 

I expect these debates to die down as time passes, as they did in the photographic community a while ago; the last bastion will probably be the claim that the large pixels of CCD sensors possess some intangible, esoteric properties that can't be replicated with small-pixel CMOS sensors... Well, we saw that in the photographic community as well, and those claims, too, died down and then disappeared after the advent of the Sony 50/100MP MF sensors. Just give it some time...

Thanks for the comments.  Yes, a lot of misconceptions take some time to clear up on CN - but despite some forward progress mixed with backward steps - overall I have seen a general trend toward improved understanding on many topics.

 

In the case of the misapplication of Nyquist and MTF stuff - a key problem is that people just don't read the fine print of the theory - and they don't realize how different imaging is - for the purpose of actually looking at an image with your eyes at different scales - vs. a quantitative application where, for example, you are measuring the split of a double star based on a model of the data.

 

But I can't fathom why people can't grasp that nothing about Nyquist says you should not oversample - and I provide engineering examples in the thread where oversampling is beneficial for many reasons.  Smaller pixels with smaller full wells don't matter at all when you oversample, because each pixel collects less light - and if the effective read noise ends up lower, they will be superior in every way once you resample as you choose for the final presentation.  Nyquist says that both critical sampling and oversampling will work perfectly well - as long as you properly filter the final result and make measurements at discrete points.  But no one is doing either of those things in astro imaging anyway - so I have no idea why anyone feels Nyquist applies in any useful way.

 

It's like applying the Pythagorean theorem to an arbitrary triangle - and saying "Pythagoras says..."  Are all conditions of the theorem met?

 

Nyquist is at best a minimum *representation* rate for the final image - rather than optimal - and you are fine greatly oversampling during acquisition.  And if the signal is low you may prefer to smooth it even further and undersample.  And if you want nicely sampled stars, for smaller final fwhm and less blocky profiles - use smaller pixels so the end result looks better in the final representation.   That is a personal preference and Nyquist's opinion is not relevant.

 

And none of this really requires thought or calculation.  Just go ahead and sample finely - process the image - and then bin or smooth as desired for the final result.  It is WYSIWYG - and small pixels give you more options than large pixels.  Better control over processing - etc. etc.

 

The photographic community seems to have a good grasp of MTF and also an appreciation for the benefits of oversampling in dslr's - so I hope astro-imagers can catch up.  I am optimistic that they will since a lot of this stuff is common sense.

 

Frank


  • SteveInNZ likes this

#20 jhayes_tucson

jhayes_tucson

    Fly Me to the Moon

  • *****
  • Posts: 7,226
  • Joined: 26 Aug 2012
  • Loc: Bend, OR

Posted 07 December 2019 - 12:48 PM

Frank is correct that the Nyquist theorem says nothing to discourage oversampling--and I have never seen that claim made here on CN.  It is merely a theorem showing that it is possible to perfectly recover a band-limited waveform by discretely sampling at a rate twice the maximum frequency content.  It is perfectly fine to oversample--even though in the real world, there are some penalties that grow as you sample further and further beyond the Nyquist limit.

 

I have a different view of this stuff than Frank.  Understanding Nyquist is a good starting point for understanding how to optimize a system, and that's why it's useful to combine an understanding of physical optics with sampling theory when analyzing sampled imaging systems.  This is one of the things that optical engineers do!  To use Frank's analogy:  understanding right-angled triangles is a really good starting point for understanding arbitrary triangles!

 

It may be true that some folks want more highly sampled star images, but I'm not one who gives that notion much thought--mainly because I want to stretch my data to better emphasize weak signals.  The important thing is the SNR in the dark regions of an image, and that's what determines how much you can stretch the data.  (Remember that I'm talking about photon noise here.)  When you go beyond the Nyquist limit, you reach a point of steeply diminished returns on image detail, and at that point you are simply decreasing SNR--and that shows up most prominently at the lower end of the signal, where the most interesting detail often lies.  You can certainly exceed the Nyquist sampling rate, but you aren't going to pick up any more image detail and you'll just be decreasing signal and increasing noise.  Yes, you can bin a sensor with really small pixels to improve SNR, but then you are being pushed back toward the "sweet spot" for sampling given your local seeing conditions.

 

I should point out that with an F/10 system, the Airy disk diameter will be about 13 microns, so if you ignore seeing, the Nyquist sample spacing is only about 2.7 microns.  That's as small as or smaller than anything I think anyone is currently using.  As I showed above, if you sample between 6.5 and 4.3 microns, you'll lose almost nothing in detail under 1.5"-2" seeing conditions and gain a factor of 5.7 to 2.5 in SNR over the 2.7 micron case.
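(Those 5.7x and 2.5x factors follow from pixel area; a quick check in Python, using the rounded 2.7 micron spacing, which is why the first value comes out a touch high:)

```python
# Signal per pixel scales with pixel area, so relative to the ~2.7 um
# Nyquist pixel (and with fixed per-pixel noise, SNR scales the same way):
for pixel_um in (6.5, 4.3):
    print(pixel_um, round((pixel_um / 2.7) ** 2, 1))  # ~5.8 and ~2.5
```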

 

In the photographic world, DSLRs don't benefit from small pixels any more than they do in the AP world.  There's just more light, and the design criteria are different.  For one thing, manufacturers are simply building cameras that are designed to work with very fast lenses.  For example, a diffraction-limited F/1.2 lens has an Airy disk of about 1.6 microns, and no current sensor can properly sample such a lens!  The biggest driver for CMOS was originally the need for video.  Clearly all of those requirements have driven the technology to a very high level, which is fantastic; but I hope that no one goes away believing that the physics of commercial photography is any different than for low-light AP imaging systems.  The priorities might be different, but the physics is the same!

 

Common sense may tell you that using arbitrarily small pixels will always improve image detail, but that's simply not true.

 

 

John


  • SteveInNZ and Der_Pit like this

#21 freestar8n

freestar8n

    Vendor - MetaGuide

  • *****
  • Vendors
  • Posts: 8,983
  • Joined: 12 Oct 2007

Posted 07 December 2019 - 04:32 PM

Fortunately, I gather from other threads that people are coming around to the idea that the fineness with which you acquire data can be reduced later by binning or smoothing - and there is no need to be concerned about oversampling with small pixels as long as the noise in the final result is acceptable.  These days, the noise after binning with small-pixel CMOS may be much less than that of a larger-pixel CCD - in which case the smaller pixels are superior in every way.

 

In addition, there are obvious ways that I have described where the image is smoothed *on the scale of the pixels* - and therefore smaller pixels will indeed provide resolution gains beyond an idealized imaging system.  Planetary imagers striving for max detail have learned these benefits through years of experimentation - and are happy to ignore simple theoretical guidelines that don't apply.

 

These are just the obvious reasons smaller pixels should not be avoided.  There are many more I have described in other threads - along with references, worked examples - etc.

 

Frank


  • v3ngence likes this

#22 ks__observer

ks__observer

    Apollo

  • *****
  • Posts: 1,025
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 08 December 2019 - 05:33 AM

My SV70 has a number of pixels across my FWHM very close to theoretical -- seeing has little effect.

I take this back.

Last night I was analyzing some of my SV70 data again in PI -- I was not as close as I thought.



#23 ks__observer

ks__observer

    Apollo

  • *****
  • Posts: 1,025
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 08 December 2019 - 06:55 AM

I take this back.

Last night I was analyzing some of my SV70 data again in PI -- I was not as close as I thought.

I take back the take-back to some degree:

Just analyzed 3 nights of SV70 data with the PixInsight SubframeSelector -- calculating FWHM (pixels).

To calculate the theoretical value I used  http://www.wilmslowa...rmulae.htm#Airy

I use an ASI-1600 - 3.8um pixels.

I assumed FWHM = 0.58 * AD.

For the SV70 at f/4.8:

My actual spatial size divided by theoretical was: 1.25, 1.49, 1.59

Compared to my 8in f/3.9 Newt:

I usually get an FWHM of between 2.2 and 2.5 pixels.

Actual spatial size divided by theoretical: 2.64, 2.66

So clearly with small-aperture DSO work you are operating closer to the theoretical optical limits -- with less influence from seeing.

So the sampling rate (pixels across the FWHM) for a small aperture will be heavily controlled by the f-ratio rather than the seeing.

So those super-widefield, or even Milky Way, shots at 10+ arcsec/pixel are not really as undersampled as the arcsec-per-pixel number alone might lead you to believe.
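(The same arithmetic as a short Python sketch, using the 0.58 * Airy-diameter FWHM assumption stated above and an assumed wavelength of 550 nm:)

```python
# Theoretical FWHM in pixels: FWHM = 0.58 * Airy diameter, with the
# linear Airy diameter at the sensor = 2.44 * lambda * f_ratio.
LAMBDA_UM = 0.55  # assumed wavelength

def theoretical_fwhm_px(f_ratio: float, pixel_um: float) -> float:
    airy_um = 2.44 * LAMBDA_UM * f_ratio
    return 0.58 * airy_um / pixel_um

print(theoretical_fwhm_px(4.8, 3.8))  # SV70 at f/4.8, ASI1600: ~0.98 px
print(theoretical_fwhm_px(3.9, 3.8))  # 8" f/3.9 Newt: ~0.80 px
```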



