avarakin: "As far as I know, there is no single reason for binning with CMOS".
I am very familiar with CCD binning, where it increases pixel size, decreases resolution, and increases dynamic range. I was intrigued when the QHY600 BSI CMOS camera hyped binning and its dramatically increased full well, but there didn't seem to be any on-chip way for the sensor to do that short of making four successive reads, at which point the read noise/dynamic range argument goes out the window. And that is exactly what they do. So I was with you: CMOS binning is useless (except for reducing the amount of image data that has to be downloaded).
But I got involved in a discussion in this forum on etendue recently. At the time, imaging etendue (extending the formal optical calculation to include a QE factor) was defined as:
Imaging_Etendue = Aperture^2 * Image_Scale^2 * QE
Others and I pointed out a simple alternate arrangement that is perhaps better suited to AP comparisons:
Imaging_Etendue = 424.36 * QE * Pixel_Size^2 / Focal_Ratio^2
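To make the rearrangement concrete, here's a quick Python sketch (mine, with illustrative numbers, not from the spreadsheet). It assumes aperture in cm, focal length in mm, and pixel size in microns; with those units the constant 424.36 is just 20.6^2, a rounding of (206.265 / 10)^2, so the two forms agree to within that rounding:

```python
def etendue_original(aperture_cm, image_scale, qe):
    # Imaging_Etendue = Aperture^2 * Image_Scale^2 * QE
    # (aperture in cm, image scale in arcsec/pixel)
    return aperture_cm**2 * image_scale**2 * qe

def etendue_rearranged(pixel_um, focal_ratio, qe):
    # Imaging_Etendue = 424.36 * QE * Pixel_Size^2 / Focal_Ratio^2
    return 424.36 * qe * pixel_um**2 / focal_ratio**2

# Illustrative system: 100 mm f/5 (FL = 500 mm), 3.76 um pixels, QE = 0.8
fl_mm, fr, pixel_um, qe = 500.0, 5.0, 3.76, 0.8
aperture_cm = fl_mm / fr / 10.0
image_scale = 206.265 * pixel_um / fl_mm   # arcsec per pixel

print(etendue_original(aperture_cm, image_scale, qe))  # ~192.5
print(etendue_rearranged(pixel_um, fr, qe))            # ~192.0 (constant rounding)
```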
I edited the OP's very helpful spreadsheet to add the rearranged version, here:
And here is his thread:
Imaging etendue is a way to assess how quickly photoelectrons accumulate; in other words, it's the system's imaging speed. It allows comparisons of how quickly extended objects such as nebulae can be imaged.
Modern CMOS sensors, especially BSI ones, can have read noise five to ten times lower than CCDs, so the added read noise of three extra reads (for 2 x 2 binning) isn't especially material; the four read-noise contributions add in quadrature, so the binned pixel's read noise only doubles. Meanwhile, that larger pixel will accumulate photoelectrons at 4X the rate of an unbinned pixel. I'm pretty sure this increases imaging etendue (the effective pixel size doubles, so Pixel_Size^2 quadruples) and makes the system faster at imaging extended objects. And this would be the case even if the "binning" were performed in post-processing.
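To spell out that arithmetic with a toy example (illustrative numbers, not any particular sensor):

```python
import math

read_noise = 1.5   # e- RMS per read, plausible for a modern BSI CMOS
signal = 10.0      # e- collected per unbinned pixel in one exposure

# Summing a 2 x 2 block: signal adds linearly, read noise adds in quadrature.
binned_signal = 4 * signal                      # 4x the photoelectrons
binned_read_noise = math.sqrt(4) * read_noise   # only 2x the read noise

print(signal / read_noise)                 # unbinned signal-to-read-noise ~6.7
print(binned_signal / binned_read_noise)   # binned ~13.3, a 2x improvement

# Via the rearranged formula: the effective pixel size doubles, so
# Pixel_Size^2 (and hence imaging etendue) quadruples.
pixel_um, fr, qe = 3.76, 5.0, 0.8
unbinned = 424.36 * qe * pixel_um**2 / fr**2
binned = 424.36 * qe * (2 * pixel_um)**2 / fr**2
print(binned / unbinned)                   # 4.0
```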
Am I right, or am I missing something?
All the best,