I was ignoring thermal noise, transmission, central obstruction, and small differences in FOV and pixel scale for simplicity.
I agree with everything you say, except that you don't take into account the size of the pixels: a much bigger pixel in one setup (collecting light from the same "small piece of sky" in both setups) should be capable of much more light gathering - is that correct?
If that is correct, and with everything else equal (FOV, resolution, pixel scale), then the advantage for the HAC125 due to aperture ((125/80)^2 = 2.44x light gathering) should be compensated and overturned by such a difference in pixel size ((7.52/2.9)^2 = 6.72x).
Yes, if you are just concerned about the pixel... a giant pixel is like subsampling an image after capture: you trade off pixel-level resolving power for pixel-level SNR. But this does not increase the SNR of the target: that is dictated by the total number of target photons you collected, which is the same in either case.
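To illustrate (a toy simulation of mine, not anyone's actual data): simulate a flat patch of sky with shot noise only, then 2x2-bin it after capture. The per-pixel SNR roughly doubles, but the total photon count, and therefore the SNR of the whole patch, is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat sky patch: ~100 photons/pixel on a fine 512x512 grid.
fine = rng.poisson(100, size=(512, 512)).astype(float)

# "Giant pixel" version: sum each 2x2 block after capture.
binned = fine.reshape(256, 2, 256, 2).sum(axis=(1, 3))

snr_fine = fine.mean() / fine.std()        # per-pixel SNR, small pixels (~10)
snr_binned = binned.mean() / binned.std()  # per-pixel SNR, big pixels (~20)

print(snr_fine, snr_binned)
print(fine.sum() == binned.sum())  # same total target photons either way
```

The binned pixels look cleaner individually, but no photon was gained or lost, so nothing about the target's SNR changed.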
As another poster stated in a different forum: "The reduced pixel level SNR associated with a higher resolution sensor is simply because the light/scene generated SNR is divided between more photosites"
You can do the same thing by standing farther away from a displayed image: you lower resolving power, but the smaller number of pixels your eye receives and is able to resolve will have lower noise.
Excuse me, sir, but everything I said is mathematically correct.
I don't think I said anyone did the math wrong. It wasn't the math; it was the emphasis on the SNR of a pixel versus the SNR of your target using a normalized metric. When we do AP, are we trying for the best pixel or the best image of the target?
If it's the best-SNR pixels, then yes, you are correct, it's all about the pixel. Big pixels give better SNR... of the pixel.
But I think the goal is the best SNR of the target. In that case, if you capture more photons, even if they're spread across more pixels due to a much finer pixel scale, that can still render a better SNR of the entire target, even though the individual pixels have lower SNR because they are smaller "bites" of the image. As long as you aren't read-noise or thermal-noise limited, it doesn't matter if you divide the image into 100 pixels, 10,000 pixels, 1,000,000 pixels, or you just perfectly represent each captured photon as a "pixel": these all have the same image SNR even though you've subdivided it differently.
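The 100 / 10,000 / 1,000,000-pixel point can be checked with simple shot-noise arithmetic (a sketch with names of my choosing; shot noise only, no read or thermal noise):

```python
import math

def per_pixel_snr(n_total, k):
    # Split n_total target photons evenly across k pixels.
    n_pix = n_total / k
    return n_pix / math.sqrt(n_pix)   # shot-noise SNR of one pixel = sqrt(n_pix)

def target_snr(n_total, k):
    # Sum all k pixels back together: signal = n_total, and the k independent
    # shot-noise terms add in quadrature to sqrt(k * n_pix) = sqrt(n_total).
    # The subdivision k cancels out entirely.
    return n_total / math.sqrt(n_total)

N = 1_000_000
for k in (100, 10_000, 1_000_000):
    print(k, per_pixel_snr(N, k), target_snr(N, k))
```

The per-pixel SNR drops from 100 to 10 to 1 as you subdivide finer, while the target SNR stays at 1000 regardless.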
If the quality of an AP target result is the end goal, then I think any SNR discussion across systems should either normalize to the same pixel scale or normalize to the same sized patch of sky (i.e., the target).
To that end, your first answer was the simplest and most direct one:
"Since the pixel scales are almost the same, the largest difference will be in the aperture diameter of the two imaging systems. The larger aperture will produce a brighter image by a factor of (125/80)^2, so it will also have a higher SNR."
And to expand upon this first answer, you can even say this if the pixel scales are different, because the larger aperture (even if it has a smaller pixel scale) will still have a higher target SNR despite a lower pixel-scale SNR. To prove this, you can just subsample the image, after it is captured and stacked, down to the pixel scale of the smaller-aperture scope and... ta dah... you are back to the ratio of the apertures representing the SNR ratio, per your comment in blue.
Again, I was trying to switch the emphasis from pixel-level SNR to target-level SNR.
Just look at two full-frame cameras with equal sensor size: the ASI6200MC with 3.76um pixels and the ASI2400MC with 5.94um pixels. I don't see experienced astrophotographers talking about the ASI2400MC producing higher-SNR results of a target. They know that both capture the same number of photons from the target; one just splits them among more photosites. People shouldn't worry too much about the SNR of each photosite.
This is why people generally don't recommend binning, even if you are somewhat oversampled, as long as you aren't drowning in data or stacking time. You can always downsample the image later if you desire. Capture-time pixel SNR isn't a factor in that consideration.
Again, any SNR discussion across systems should either normalize to the same pixel scale or normalize to the same sized patch of sky (i.e., the target). Pixel-level SNR discussions become meaningful when you are looking at the impact of pixel-level SNR drivers like read noise and thermal noise.
Edited by smiller, 22 April 2025 - 04:56 PM.