Shouldn't it be by pixel area?
Curtis, your question gives me the opportunity to address another very common misconception on these forums.
The sensitivity of a sensor is determined by QE and read noise. I am using the mV / micron measure I derive above as a proxy for QE; combined with read noise, it is a much more accurate measure of sensitivity than pixel size alone. The role of pixel size is a little different, as I explain below.
Pixel size itself is not the real measure of a sensor's sensitivity; rather, it is the lever that sensor manufacturers use to achieve a certain dynamic range and SNR. Basically, a low-QE, high-read-noise sensor requires larger pixels to deliver the same SNR and dynamic range in the same exposure time as a sensor with high QE and low read noise.
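To make the lever concrete, here is a minimal sketch of per-pixel SNR under shot noise plus read noise only (sky and dark noise ignored). The flux, QE, read-noise, and pixel-size numbers are illustrative assumptions, not measurements from any specific camera:

```python
import math

def snr(flux_ph_um2_s, qe, pixel_um, read_noise_e, exposure_s):
    """Per-pixel SNR with shot noise + read noise only (illustrative model)."""
    signal_e = flux_ph_um2_s * qe * pixel_um**2 * exposure_s  # electrons collected
    return signal_e / math.sqrt(signal_e + read_noise_e**2)   # shot + read noise in quadrature

# Assumed old-style CCD: QE 0.50, 10 e- read noise -> needs a 9 um pixel
ccd = snr(0.05, 0.50, 9.0, 10.0, 60)
# Assumed modern CMOS: QE 0.80, 1.5 e- read noise -> a ~5.4 um pixel already matches
cmos = snr(0.05, 0.80, 5.4, 1.5, 60)
```

With these assumed numbers the two sensors land at essentially the same per-pixel SNR even though the CMOS pixel is much smaller: the low-QE, high-read-noise design has to buy its SNR with pixel area.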
The idea that pixel size = sensitivity was a rule of thumb that made sense for CCDs, where QE was low and read noise was typically very high: larger photosites (i.e. sensor pixels) were required to collect enough photons to overcome read noise, so the only way to increase "sensitivity" was a larger pixel. The significant drawback was that you needed larger and larger scopes to image smaller objects, due to the poor spatial resolution of sensors with very large pixels. It was an arms race, and the winners were the astro vendors. You still need larger scopes to collect more light, but now with my C8 I can image detail in Arp galaxies that simply would not have been possible a few years ago.
With the advent of high-QE, very-low-read-noise CMOS, you can have significantly smaller pixels and still achieve the same SNR and dynamic range. This provides much better spatial resolution (sampling) and much shorter exposures. It also allows imaging at slower (longer) f-ratios without a significant loss of SNR.
At this point a smart guy will ask me... well, larger pixels should still improve SNR, right? Yes, that is correct, but with the very low read noise of CMOS you can bin pixels in software and still achieve essentially the same SNR. The photosite no longer needs to be sized to overcome read noise, so you don't need physically large pixels.
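The read-noise penalty of software binning is easy to sketch. Summing an n x n block in software incurs n*n reads, which add in quadrature, whereas one physically large pixel incurs a single read. The signal and read-noise values below are assumptions chosen to show the effect:

```python
import math

def snr_big_pixel(signal_e, read_noise_e):
    # One physically large pixel: one read of the whole signal
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

def snr_software_bin(signal_e, read_noise_e, n=2):
    # Same total signal split over n*n small pixels, summed in software:
    # read noise is incurred n*n times and adds in quadrature
    return signal_e / math.sqrt(signal_e + (n * n) * read_noise_e**2)

signal = 100.0  # electrons over the binned area (assumed)
# High read noise (old CCD, 10 e-): 2x2 software binning costs real SNR
ccd_penalty = snr_big_pixel(signal, 10.0) / snr_software_bin(signal, 10.0)
# Very low read noise (modern CMOS, 1 e-): the penalty nearly vanishes
cmos_penalty = snr_big_pixel(signal, 1.0) / snr_software_bin(signal, 1.0)
```

With 10 e- read noise the big pixel beats the software bin by over 50%; at 1 e- the gap shrinks to about 1%, which is why small CMOS pixels plus software binning work so well.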
A case in point: the 290 mono can easily match or exceed my Lodestar mono in sensitivity, with better spatial resolution. One has 2.9 micron pixels and the other 8.2 x 8.4 micron pixels - more than 8 times larger by area!
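The area comparison is easy to check from the pixel sizes quoted above:

```python
lodestar_area = 8.2 * 8.4   # square microns, Lodestar mono pixel
small_area = 2.9 * 2.9      # square microns, 290 mono pixel
ratio = lodestar_area / small_area  # just over 8x larger by area
```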
For this reason, my ideal astro sensor would have 1 micron pixels, <0.5e read noise, and 90%+ QE. Then, depending on the target and focal ratio, I can decide what sampling I want using software binning. In fact, this is an underappreciated paradigm shift: as an imager I no longer need to decide on sampling at acquisition time - I can make that decision in post-processing. I can also use the extra spatial resolution as an input to deconvolution algorithms to exceed the detail possible with today's sensors.
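Choosing the sampling in post really is just a block sum over the raw frame. Here is a minimal NumPy sketch of that kind of software binning (the function name and the toy 4x4 frame are my own illustration, not any particular package's API):

```python
import numpy as np

def software_bin(img, n):
    """Sum n x n blocks of pixels, i.e. choose the sampling after acquisition."""
    h, w = img.shape
    h, w = h - h % n, w - w % n                 # crop to a multiple of n
    return img[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 raw frame
binned = software_bin(frame, 2)                   # 2x2 binned, shape (2, 2)
```

Because binning is a sum, the total signal is preserved; you trade spatial resolution for per-pixel SNR after the fact, and you can keep the unbinned data around for deconvolution.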
Edited by Astrojedi, 01 September 2017 - 05:12 PM.