
Why does the f-num = 5x pixel size apply for colour cameras?

12 replies to this topic

#1 Tulloch

Tulloch

Skylab

• topic starter
• Posts: 4,189
• Joined: 02 Mar 2019
• Loc: Melbourne, Australia

Posted 05 April 2020 - 03:41 AM

Seems like it's the season for self-isolation, sitting at home and thinking about the status quo of various stuff.

Looking at the excellent images recently posted here by Darren at a focal ratio that was 7x the pixel size of his ASI290MC camera made me wonder, why should the 5x pixel size "rule" hold for colour cameras?

I have always been a little, let's say, uncomfortable with the maths used to arrive at this rule of thumb. The working below is probably the easiest-to-understand version of it, and it relies on the seemingly arbitrary assumption that the Full Width at Half Maximum (FWHM) of the Airy disk needs to span around 2.5-3 pixels to completely resolve between peaks. This leads to the well-accepted value for the f-number of around 5x the pixel size of the camera, at least for green light.

But what about blue light, with a wavelength of around 460 nm? For blue, the formula recommends an f-number of around 6x the pixel size.

What about a colour sensor with an RGGB Bayer matrix? The red and blue "pixels" are actually separated by 2x the physical pixel size, and even the green pixels are separated by sqrt(2) times the pixel size, leading to an f-number of around 7x the pixel size, not 5x.
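For reference, the rule-of-thumb arithmetic can be sketched in a few lines of Python. This is a rough sketch only: it assumes the Airy-pattern FWHM is about 1.02 × wavelength × f-number and that ~3 pixels are wanted across that FWHM, so the exact numbers shift with those assumptions.

```python
# Sketch of the "f-number = k x pixel size" rule of thumb.
# Assumes FWHM ~ 1.02 * wavelength * F and ~3 pixels across the FWHM.

def f_ratio_multiplier(wavelength_nm=550.0, pixels_per_fwhm=3.0, pitch_factor=1.0):
    """Return k such that the recommended f-number is k * (pixel size in microns).

    pitch_factor is the effective pixel spacing relative to the physical pixel:
    1 for mono, sqrt(2) for the Bayer greens, 2 for the Bayer reds/blues.
    """
    wavelength_um = wavelength_nm / 1000.0
    return pixels_per_fwhm * pitch_factor / (1.02 * wavelength_um)

print(round(f_ratio_multiplier(), 1))                      # green, mono: 5.3
print(round(f_ratio_multiplier(wavelength_nm=460.0), 1))   # blue, mono: 6.4
print(round(f_ratio_multiplier(pitch_factor=2**0.5), 1))   # Bayer greens: 7.6
print(round(f_ratio_multiplier(pitch_factor=2.0), 1))      # Bayer reds/blues: 10.7
```

Taking the Bayer pitch at face value pushes the multiplier well past 5x, which is exactly the tension the question raises.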

Am I missing something here? Or am I sitting at home being bored and trying to find something to do?

Thanks,

Andrew

Attached Thumbnails

Edited by Tulloch, 05 April 2020 - 08:22 PM.

• EuropaWill and Lacaille like this

#2 Tom Glenn

Tom Glenn

Soyuz

• Posts: 3,786
• Joined: 07 Feb 2018
• Loc: San Diego, CA

Posted 05 April 2020 - 04:04 AM

You're not missing anything, Andrew.  Like all "rules of thumb", there are many assumptions.  The way I see it, there is nothing special about 2.5-3 pixels resolving the FWHM of the Airy disk.  This is a minimum.  You could extend this to 4-5 pixels, except that photons aren't free, and when you extend your focal length the image gets so dim that SNR plummets.  Extra scale helps deconvolution, but unfortunately all of this is worthless if the underlying SNR is very poor, which it often will be.  Slightly undersampling, even at 4x-5x the pixel size for the f-ratio, can sometimes outperform higher ratios; some people in some locations can take advantage of outstanding conditions to go higher.  For most people, in most locations, the gamble is never worth it.  But exceptions abound.

• roelb, Lacaille and Tulloch like this

#3 Tulloch

Tulloch

Skylab

• topic starter
• Posts: 4,189
• Joined: 02 Mar 2019
• Loc: Melbourne, Australia

Posted 05 April 2020 - 05:34 AM

You're not missing anything, Andrew.  Like all "rules of thumb", there are many assumptions.  The way I see it, there is nothing special about 2.5-3 pixels resolving the FWHM of the Airy disk.  This is a minimum.  You could extend this to 4-5 pixels, except that photons aren't free, and when you extend your focal length the image gets so dim that SNR plummets.  Extra scale helps deconvolution, but unfortunately all of this is worthless if the underlying SNR is very poor, which it often will be.  Slightly undersampling, even at 4x-5x the pixel size for the f-ratio, can sometimes outperform higher ratios; some people in some locations can take advantage of outstanding conditions to go higher.  For most people, in most locations, the gamble is never worth it.  But exceptions abound.

Thanks Tom, I understand that, but the formula was derived with no consideration given to the sensitivity of the camera involved. Consider a purely theoretical camera with 5, 10 or even 100x the sensitivity of the current crop, so that SNR was not a consideration: would we still stipulate that the focal ratio should be 5x the pixel size?

Now, obviously, the current cameras are running at quantum efficiencies pushing 80% so the point is moot, but just a theoretical exercise...

#4 DMach

DMach

Surveyor 1

• Posts: 1,650
• Joined: 21 Nov 2017
• Loc: The most light-polluted country in the world :(

Posted 05 April 2020 - 05:54 AM

My understanding: you can push this, but for most cases the extra focal length will be "wasted" as the level of fine detail will ultimately be limited by the seeing. Therefore you're sacrificing signal-to-noise (as Tom states) for no gain in resolution.

It's also worth noting (as I've freely admitted before) that I had no idea what I was doing when I purchased the ASI290MC, so I chose based on specs and claims like "perfect for planetary imaging" (largely based on max frame rate). If I had my time again, I'd have chosen the ASI224 lol.

I've seen excellent examples on both sides of the fence, for example:

• Avani Soares, who uses the ASI290 (2.9um pixels) with a 2x PowerMate (6.9x ratio).

• Niall MacNeill, who uses the ASI174 (5.86um pixels) with a 2.5x PowerMate (4.3x ratio).

My conclusion: seeing matters more than the choice of camera lol.

It may also help in my case that, as I live near the equator, I can (more often than not) image targets whilst high in the sky.

• ToxMan likes this

#5 MalVeauX

MalVeauX

Voyager 1

• Posts: 10,461
• Joined: 25 Feb 2016
• Loc: Florida

Posted 05 April 2020 - 07:57 AM

Hrm,

Doesn't this concept basically come down to a star on a pixel needing more pixels around it to differentiate it from a separate signal, given the blur of the Airy disc (seeing assumed not to be the limit)? Then modify it a bit to allow for sampling the shorter wavelengths (blue), which sample at longer focal ratios (despite the reds then being pushed into over-sampling). But even then, the idea is that this is the minimum needed to differentiate the data, not the ideal: around 3x is the minimum. Ideally you want a bit more, but because around 3 is the minimum, many use the assumption that for critical sampling you start with pixel spacing around 3 times the resolution of the scope. That's the minimum limit to record the resolution, and there's really no maximum other than the limit of the scope's resolution, assuming perfect seeing. So you calculate the minimum (around 3 times the scope's resolution) and start there; from there you balance signal-to-noise, seeing conditions, etc., to see how much more pixel spacing you can reasonably put on that signal. As cameras get better and more sensitive, you'd be able to record more potential angular resolution, up to the limit of the Airy disc. In reality, seeing will not be perfect; it, not the Airy disc, will be the limit.

I think?

Very best,

#6 Tulloch

Tulloch

Skylab

• topic starter
• Posts: 4,189
• Joined: 02 Mar 2019
• Loc: Melbourne, Australia

Posted 05 April 2020 - 04:31 PM

Thanks for your replies, I really appreciate the discussion.

I understand the need to set a reasonable limit on focal ratio, with the provisos on seeing, aperture etc., but the key point I was trying to make was that the calculation attached above assumed a mono camera, where all the pixels are identical and the pixel size equals the pixel pitch. In a colour camera this is not the case: the red and blue pixels are separated by twice the pixel size, and the greens by sqrt(2) times the pixel size.

This was noted in the original calculation, where the author stated that for colour cameras and excellent seeing, the value of 5x pixel size should be (slightly) increased. I guess my question (or discussion point) is "by how much?"

Andrew

#7 DMach

DMach

Surveyor 1

• Posts: 1,650
• Joined: 21 Nov 2017
• Loc: The most light-polluted country in the world :(

Posted 05 April 2020 - 11:29 PM

At this point, I suspect we'll divert to a discussion on the relative merits of OSC vs. monochrome cameras lol.

My take/understanding: the separation of coloured pixels across the OSC sensor doesn't change the fact that you'd be limiting signal-to-noise on a per pixel basis, so Tom's comments regarding diminishing returns would still apply and most likely outweigh any benefit.

In my mind's eye, stacking and (strangely enough) seeing actually help us here: effectively, unless we have perfect seeing (and tracking) we'll get "dithering" of the signal across the sensor.

• Tulloch likes this

#8 Tom Glenn

Tom Glenn

Soyuz

• Posts: 3,786
• Joined: 07 Feb 2018
• Loc: San Diego, CA

Posted 06 April 2020 - 03:14 AM

• HarveyDeckAstro likes this

#9 Tom Glenn

Tom Glenn

Soyuz

• Posts: 3,786
• Joined: 07 Feb 2018
• Loc: San Diego, CA

Posted 06 April 2020 - 03:20 AM

And just to clarify the above, when I was arbitrarily mentioning a 25-35ms range for some of these longer f-ratios, I had Saturn in mind there, and less so Jupiter.  Near opposition, Mars gets so bright that you can actually get a good exposure even when using fairly extreme sampling.  Whether this helps in any way is debatable though.
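The exposure cost behind those numbers can be sketched directly: surface brightness on the sensor falls as 1/F², so each step up in sampling multiplies the exposure (or gain) needed for the same per-frame signal. A minimal sketch, using the 5x and 7x multipliers discussed in this thread as the example:

```python
# Image surface brightness on the sensor scales as 1/F^2, so the exposure
# needed to hold per-pixel signal constant scales as (F2/F1)^2.
def exposure_scale(f1, f2):
    return (f2 / f1) ** 2

print(round(exposure_scale(5.0, 7.0), 2))   # 1.96: ~2x more exposure from 5x to 7x
print(exposure_scale(5.0, 10.0))            # 4.0: 4x more at double the f-ratio
```

That factor-of-two hit is why the same f-ratio that works on bright Mars near opposition can be marginal on Saturn.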

#10 Kokatha man

Kokatha man

James Webb Space Telescope

• Posts: 16,381
• Joined: 13 Sep 2009
• Loc: "cooker-ta man" downunda...

Posted 06 April 2020 - 03:57 AM

...7 years ago in spectacular seeing we captured this image  https://momilika.net...htopMimbot.png

Remove the %C2%A0 suffix added in the address bar if a "404" appears...

It was at Sedan in the Murray Mallee: granted this was a mono camera but at the time I employed a variable amplification barlow I had constructed that kept very tight mechanical rigidity/collimation by means of 2 opposing compression-band clamps.

This allowed us to capture at significantly different focal lengths in the same session, where the seeing was of course extremely steady, & I took the opportunity to really pull the camera back and ramp up the image scale. We did likewise on another night near that date at Langhorne Creek, nearer to home: I would need to go through my archived HDs to find the different focal-length examples from both Sedan & L. Creek for those nights, but I'll post them if I can.....the nub of this is that there were no benefits to the greater image-scale captures; fine detail on Saturn's disk gave us a very good gauge of resolution at the different scales...

As I said earlier in Tom's thread, we've carried out a lot of practical tests (my particular forte) & generally, once we see some consistency in our results (meaning finding advantages - or not), we tend to stick to that practice...but of course everyone might not find similar outcomes!

I do have some similar images from the ASI224MC when we first got one from Sam & will hunt around for them also if I can...but it's the nights at Sedan & L.C. that left an indelible impression for us about the lack of benefits for really extending the f/l too much beyond that (roughly) 5x rule of thumb...

• eros312, DMach, Tom Glenn and 1 other like this

#11 Tom Glenn

Tom Glenn

Soyuz

• Posts: 3,786
• Joined: 07 Feb 2018
• Loc: San Diego, CA

Posted 06 April 2020 - 04:14 AM

I always appreciate Darryl's practical evidence, as in the end, the "proof is in the pudding", so to speak.  For every person that claims you need to be at 7x or higher f-ratio, there are others who produce good results on the lower side.  Just thinking about my own images, I typically image the Moon with the ASI183, which has 2.4um pixels.  This would imply that you want an f/12 system (or higher), but I image with my C9.25 at f/10.  Granted this camera is a mono camera, but still, I'm at only 4.2x the pixel size for my f-ratio.  However, the images I have obtained under good conditions have resolution that IMO appears to "outperform" the scope aperture, as well as the sampling theory.  For example, if you blow up the following image and count the craterlets in Plato, there's not a lot of evidence that the "short" focal ratio hurt me here.

https://www.cloudyni...tember-30-2018/

The thread linked above also includes some information about the seeing conditions, including an example of an individual frame, as well as the quality graph.
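The sampling in that setup can be checked with the usual diffraction formulas. A rough sketch: the 235 mm aperture for the C9.25, green light at 550 nm, and the 1.02·λ/D FWHM approximation are all assumptions here, not figures from the post.

```python
# Compare a scope's diffraction-limited FWHM to the camera's image scale.
ARCSEC_PER_RAD = 206265.0

def airy_fwhm_arcsec(aperture_mm, wavelength_nm=550.0):
    """Diffraction-limited FWHM of a star image, ~1.02 * lambda / D."""
    return 1.02 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * ARCSEC_PER_RAD

def image_scale_arcsec_per_px(pixel_um, focal_length_mm):
    """Plate scale in arcsec per pixel: 206.265 * pixel (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_length_mm

# Assumed C9.25-like setup: 235 mm aperture at f/10 (2350 mm), ASI183 2.4 um pixels.
fwhm = airy_fwhm_arcsec(235.0)
scale = image_scale_arcsec_per_px(2.4, 2350.0)
print(round(fwhm / scale, 1))  # ~2.3 pixels per FWHM at f/10
```

Roughly 2.3 pixels per FWHM sits a little under the ~3-pixel criterion, consistent with calling f/10 on 2.4 um pixels slight undersampling that can still perform well.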

Edited by Tom Glenn, 06 April 2020 - 04:17 AM.

#12 Tulloch

Tulloch

Skylab

• topic starter
• Posts: 4,189
• Joined: 02 Mar 2019
• Loc: Melbourne, Australia

Posted 06 April 2020 - 05:32 AM

Thanks Tom and Darryl, I understand that it's probably an unanswerable question; it's just one of the things I've wondered about while sitting in the dark.

When I was imaging with my Canon 700D, which took images at 30 fps (or 1/33 sec exposure time), I found I could get good results with my 6" SCT (at least of similar quality to the ASI224MC results taken at the same time), but only when the seeing was good and the amount of turbulence was low. As the wind picked up, the DSLR was less reliable and produced images that were not as sharp.

I appreciate the value of the real-world examples you are both showing here, and I understand how the 5x pixel size equation was determined. No problems with that. My original question was really asking why colour cameras, with larger gaps between the different coloured pixels, get the same rule. There's got to be some sort of interpolation going on between pixels, which will surely reduce the ability of the camera to resolve between peaks? As Darren said, this might turn into a mono vs colour sensor argument, but I'm just interested in the theory.

Andrew

#13 aeroman4907

aeroman4907

Surveyor 1

• Posts: 1,543
• Joined: 23 Nov 2017

Posted 06 April 2020 - 08:15 AM

Hi Andrew,

I work with the color version of Tom's setup, the QHY 183C, on a smaller scope - the 8" EdgeHD.  I myself have asked somewhat similar questions before, albeit more along the lines of what the potential resolution of color sensors is.  I also concur with Darryl's "proof is in the pudding" concept.  While I'm not able to test directly against a mono sensor, I can make some observations about the performance of my color sensor.  My setup has an effective focal ratio of f/10.5, resulting in a multiplier of 4.375.  From a performance standpoint, I believe the color sensor compares very favorably with a comparable mono sensor.  I obviously don't use a filter when imaging the moon, but with 2x the green pixels, it appears to lean towards the performance of a green-filter image.  The image below, around Ptolemaeus, is presented in B&W and was taken in some of the best seeing conditions I have experienced (solidly good seeing, but definitely not excellent).  Craters are resolved down to 1 km, which is the practical limit of my 8" scope.  One interesting thing of note is that deconvolution increases sharpness but also increases the size of features.  In the image below, a 1 km crater should theoretically be displayed as 2.5 pixels across at the native capture scale.  Of course it can't display half a pixel, so one might think the crater would be displayed as 3 pixels across.  That is not the result I have found either.  In general, the smallest lunar craters I can detect in my images always have two light pixels adjacent to two dark pixels.  That means a crater that is 1 km across is being processed and displayed as if it were 1.6 km in diameter.  Obviously this multiplying effect only occurs on the smaller craters, but it's interesting nonetheless.

One might look at this as lending credence to imaging at a higher sampling ratio, as you have suggested, so that resolvable craters would not be only 2.5 pixels across.  To get the 1 km craters to cover three pixels would require a 1.2x barlow in my setup (yielding a 5.25 multiplier), and getting them to cover four pixels would require a 1.6x barlow (yielding a 7x multiplier).  I have a strong suspicion that even with the 1.6x barlow, deconvolution would still increase the apparent size of the craters somewhat.  Like others have said above, I think there would be a negative tradeoff with SNR.  And while an image produced with the 1.6x barlow would present lunar craters more closely to their actual size, I believe the image wouldn't look sharp at that scale.  I would have to resize it down to look sharper - likely to the equivalent of the 1.2x barlow, with 1 km craters covering 3 pixels.  As I primarily focus on creating whole-moon mosaics, a barlow also introduces concerns about getting consistent seeing over a longer period of time, plus additional hard drive space on my portable SSD.  Sometimes I have felt I would like to increase my capture scale further, but that is just to pixel-peep on a monitor.  I like to do large prints (mostly of my landscape photography), and I have started to do some prints of my lunar images.  My experience has been that one can push printing to 150 ppi and have it look acceptably sharp when viewed as close as 2 feet.  The whole lunar mosaic that the attached Ptolemaeus image is taken from would print out at 58.6" high with very minimal background above or below the moon.  So from a printing standpoint, increasing the scale isn't really all that beneficial.
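The barlow arithmetic above follows from pixel coverage scaling linearly with focal length. A tiny sketch, taking the 2.5-pixel native coverage and 4.375 multiplier from the post:

```python
# Pixel coverage of a fixed angular feature scales linearly with focal length,
# so the barlow power needed is simply target coverage / native coverage.
def barlow_for_coverage(native_px, target_px):
    return target_px / native_px

native_px = 2.5      # pixels across a 1 km crater at the native f/10.5 scale
native_mult = 4.375  # native f-ratio divided by pixel size, from the post

for target in (3.0, 4.0):
    b = barlow_for_coverage(native_px, target)
    print(b, round(native_mult * b, 2))  # 1.2 -> 5.25x overall, 1.6 -> 7.0x overall
```

The same linear scaling is why "resize the 1.6x result down to the 1.2x scale" ends up roughly equivalent to capturing at 1.2x in the first place, SNR aside.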

• Tulloch likes this
