When EdgeHD first came out it caused a lot of commotion - but one complaint I remember in particular was that it had such a large corrected field that you would need to spend a ton on a camera (at the time) to make use of it. This amused me because for some reason people want costs to be matched across components - rather than looking at the cost of the overall system needed to achieve a certain goal. If one of the parts in the system is "wasted" on the others - that is perfectly fine with me, as long as that part happens to be very inexpensive and the other components dominate the budget.
At the same time, it would *not* make sense to spend a ton on something that is limited by other components of the system.
I find a related scenario with the low-cost, high-QE CMOS cameras available these days. There is a sense that their small pixels are "bad" because they aren't "matched" to the optics in some optimal sense. But it is easy to show this argument makes no sense at all - because to do a proper comparison you need to look at how the noise terms behave. And you need to keep in mind that you can always bin the pixels in software to make a new camera with 2x or 3x the pixel size.
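To make the binning point concrete, here is a minimal sketch (with made-up signal and read-noise numbers, just for illustration) of what happens when you software-bin 2x2 small pixels: the signal sums directly, while the read noise of the combined "super pixel" only grows as the square root of the number of pixels summed:

```python
import math

# Assumed numbers, purely for illustration
read_noise = 2.0   # e- read noise per small pixel
signal = 100.0     # e- of signal landing on each small pixel

# 2x2 software bin: sum 4 pixels into one effective larger pixel
n_pixels = 4
binned_signal = n_pixels * signal                    # signal adds linearly
binned_read_noise = read_noise * math.sqrt(n_pixels) # read noise adds in quadrature

print(binned_signal)      # 400.0 - same light a 2x-larger pixel would collect
print(binned_read_noise)  # 4.0   - only 2x the per-pixel read noise
```

So the binned small-pixel camera behaves like a larger-pixel camera whose read noise is only modestly higher than a single small pixel's - which is the key to the comparison below.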
Much of this concern about "oversampling" and the noise penalty it supposedly brings dates, I think, from the early CCD days when read noise was much higher. So here I am comparing an ASI1600 with a KAF-16803: 3.75um pixels with 2e read noise vs. 9um pixels with 18e (the latter from http://www.astrosurf...oise/result.htm ). You can quibble over the exact noise terms - but the point is that they do matter in terms of realized performance. You can't just say "bigger pixels collect more light" and leave it at that. And here I am even ignoring the QE advantage of the 1600 (0.66 vs. 0.44).
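Here is a rough sketch of that comparison using the numbers above. It sums enough small pixels to cover the same area as one big pixel, then compares SNR on a faint signal (shot noise plus read noise only - dark current and the QE difference are ignored, and the signal level is an assumed value for illustration):

```python
import math

# Sensor parameters quoted in the post
small_pix, small_rn = 3.75, 2.0   # um, e- (ASI1600)
big_pix, big_rn = 9.0, 18.0       # um, e- (KAF-16803)

# Number of small pixels needed to cover one big pixel's area
n = (big_pix / small_pix) ** 2    # about 5.76

# Effective read noise of the small pixels summed over that same area
eff_rn = small_rn * math.sqrt(n)  # 4.8e - far below the 18e of the big pixel

# Assumed faint signal: 100 e- collected over one big-pixel area
signal = 100.0

# SNR with shot noise + read noise (QE assumed equal here)
snr_big = signal / math.sqrt(signal + big_rn ** 2)
snr_small = signal / math.sqrt(signal + eff_rn ** 2)

print(round(eff_rn, 1))    # 4.8
print(snr_small > snr_big) # True - the binned small pixels win on read noise
```

Even after giving up their resolution advantage by summing to the same area, the low-read-noise small pixels come out ahead on this faint signal - which is why "bigger pixels collect more light" isn't the whole story.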
For the comparison I take a 2000mm focal length system and image a "ridge" of light - like a ripple in a rug - and look at cross-sections through it as I vary brightness and fwhm. By looking at a ridge I keep everything 1D so I can plot it. This is a perfectly reasonable scenario: imaging a faint tendril of nebulosity.
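For reference, here is how the two pixel sizes sample that setup. This is just the standard plate-scale formula applied to the focal length and fwhm quoted above, showing how many pixels each camera puts across a 2" feature:

```python
# Values from the post
fl_mm = 2000.0       # focal length in mm
fwhm_arcsec = 2.0    # seeing-limited feature width

def plate_scale(pixel_um, fl_mm):
    """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / fl_mm

for pix in (3.75, 9.0):
    scale = plate_scale(pix, fl_mm)
    # roughly 0.387"/px and 5.2 px across the fwhm for the 3.75um pixels,
    # vs. roughly 0.928"/px and 2.2 px across for the 9um pixels
    print(pix, round(scale, 3), round(fwhm_arcsec / scale, 2))
```

So at this focal length the small pixels sample the 2" ridge with about 5 pixels across its width, while the big pixels get barely 2 - which matters for how well the pixel plot can track the feature shape.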
Here is one example - with a fairly bright feature and 2" fwhm:
In that graph you can see there is plenty of light, so the first plot is very smooth and follows the underlying curve very well. If you are just making images and looking at the pixels - and you aren't smoothing - then a good figure of merit is how well the square-edged plot of the pixel values tracks the actual feature shape landing on the sensor. In this case the first plot easily wins, and for this feature the other plots are worse in that regard.
Edited by freestar8n, 19 March 2019 - 05:28 AM.