Andrew, thanks for the kind words and comments. You may be one of the few who reads through these ramblings in detail. Your post about color, and some of the responses it contained, was partly the instigator for this analysis. I don't think I'm revolutionizing anything here; rather, I think it's a revelation of how much spare time I've had during the COVID-19 lockdown. I've been editing raw lunar images assigned to sRGB and Adobe RGB for several years, with no negative impact on the final outcome. What puzzled me all along, however, was that it always seemed as though I had to make extreme adjustments just to recreate a natural view of the Moon. In hindsight, it's perfectly obvious: the images had no gamma applied to them in the color profile, so they were not displayed properly on a gamma 2.2 monitor. This explains the strong tonal curves required. The first curve simply had to undo the effects of the display gamma, and only then could any additional adjustments be made according to taste.
As for whether this could apply to planetary imaging, regarding detecting faint moons: all of the data should still be there no matter what color space you are working in, unless you have done something weird in Photoshop. So I'm not sure of any improvements there. This mostly has to do with my desire to better understand what happens to the raw data from the moment it leaves the sensor to when you view it on screen.

For images designed to be viewed in sRGB (such as the JPEGs produced by every camera), the color profile embeds gamma data into the file, such that when your monitor displays it with a gamma of 2.2, it looks normal. This is a carryover from CRT days. A CRT has a natural built-in gamma of about 2.2 because of the non-linear relationship between the input and output of the electron gun that excites the phosphors in the screen. If the input signal weren't gamma encoded, it would look very dark, and very different from the version the camera recorded. So the gamma encoding and decoding steps neutralize each other. LCD screens are inherently linear and don't require gamma, but the convention has carried over for compatibility. (Gamma-encoding the input also has benefits related to maximizing the display potential of 8-bit monitors, but that's a separate topic.) When you open a linear image with no color profile (and no gamma) in sRGB space, your monitor expects gamma-encoded data and still applies its output gamma, so the image displays dark. This is equivalent to taking a normal JPEG, opening it in Photoshop, and applying a Levels adjustment with the gamma set to 0.45, which is 1/2.2. That recreates the effect we see.
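If it helps to see the arithmetic, here's a quick Python sketch of the round trip. I'm simplifying the sRGB curve to a pure 2.2 power law (the real sRGB transfer function is piecewise, with a short linear toe), but the cancellation works the same way:

```python
# Sketch of the gamma round trip, simplified to a pure 2.2 power law.
# Real sRGB uses a piecewise curve, but the principle is identical.
# Values are normalized to the 0.0-1.0 range.

def encode(linear, gamma=2.2):
    """File side: gamma-encode linear sensor data (exponent 1/2.2 = 0.45)."""
    return linear ** (1.0 / gamma)

def decode(encoded, gamma=2.2):
    """Display side: the monitor applies its output gamma of 2.2."""
    return encoded ** gamma

linear = 0.18  # middle gray, 18% reflectance

shown_encoded = decode(encode(linear))  # encoded file viewed on the monitor
shown_raw = decode(linear)              # linear data sent straight to the monitor

print(round(shown_encoded, 3))  # 0.18  -- encoding and display gamma cancel out
print(round(shown_raw, 3))      # 0.023 -- unencoded data displays far too dark
```

That last number is the whole story: middle gray sent to the display without gamma encoding lands at roughly 2% brightness, which is exactly the "abnormally dark" raw image we've all seen.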
So this clears up why raw images from astronomy cameras display abnormally dark when opened in Photoshop (this is especially obvious in lunar images, because of the dynamic range). In reality, they are not dark at all... they are simply being altered by the display. This has implications for histogram settings during imaging. The most obvious is that if you are trying to prevent pixel saturation in the raw image (which we are), then even a 50% histogram is only 1 stop under saturation, because the provided histogram is linear. So a 50% histogram is actually not low, and if you applied the correct gamma to compensate for your monitor, the raw image would not look dark; in fact, it would likely appear washed out. Even a 25% histogram is only 2 stops below saturation. That said, for lunar imaging I still stand by my 75% histogram recommendation, but even a 50% recording would generate a very fine result. And you are correct: there is a huge amount of information buried at the bottom of a linear histogram; you just have to know how to retrieve it. A neutral gray card has 18% reflectance, which reads as 46/255 in a linear histogram. That means anything below middle gray is recorded below 18% in your raw histogram. That's a ton of data at the very bottom.
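The stops arithmetic above is easy to verify yourself; each halving of the linear histogram level is one stop, so the stop count is just a base-2 log:

```python
import math

def stops_below_saturation(histogram_fraction):
    """How many stops a linear histogram level sits below full saturation."""
    return math.log2(1.0 / histogram_fraction)

print(stops_below_saturation(0.50))  # 1.0 stop
print(stops_below_saturation(0.25))  # 2.0 stops
print(stops_below_saturation(0.75))  # ~0.42 stops of headroom

# An 18% gray card in an 8-bit linear histogram:
print(round(0.18 * 255))  # 46
```

So the 75% recommendation leaves well under half a stop of headroom, and everything darker than a gray card is squeezed into the bottom 46 of 255 linear levels.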
As for your question about planetary captures in 8 bits at higher gain, everything still applies. I have looked at a few of my 8-bit lunar captures, and I was able to pull Earthshine detail from a "normal" exposure, so even 8 bits isn't prohibitive here, although there was slightly more noise. When stacking frames, the available detail is limited by SNR, that is, your ability to differentiate true details above the noise floor. When seeing is variable, stacked 8-bit files have equivalent effective bit depth to stacked 12-bit files. Usually, the background sky is above 0 in the raw image anyway, so what you are really measuring is the SNR above the noise floor, rather than the dynamic range of the sensor itself.
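The standard shot-noise argument behind this: averaging N frames cuts random noise by a factor of the square root of N, which is worth log2(sqrt(N)) extra bits of effective depth (this assumes the frame-to-frame noise is large enough to dither the 8-bit quantization, which is essentially always true for high-gain planetary video). A quick sketch of the numbers:

```python
import math

def bits_gained(n_frames):
    """Averaging N frames reduces random noise by sqrt(N),
    which is worth log2(sqrt(N)) = 0.5 * log2(N) effective bits."""
    return 0.5 * math.log2(n_frames)

# Why stacked 8-bit captures can rival deeper single frames:
print(8 + bits_gained(256))   # 12.0 effective bits from 256 stacked 8-bit frames
print(8 + bits_gained(1024))  # 13.0 effective bits from 1024 frames
```

With typical planetary stacks running into the hundreds or thousands of frames, the 8-bit quantization stops being the limiting factor well before seeing does.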
Another goal I had was to gain insight into the processing that is done within a consumer camera, such as my Nikon, and a raw editor such as Adobe Camera Raw (ACR). I was interested to learn that on top of the 2.2 gamma transformation, there is an additional tone curve applied to the raw image by ACR (before you even see the image in the editor). That tone curve is below, and you can download a file corresponding to it from the RawTherapee user guide.
The linear 1:1 slope is hard to see in the image, but you can appreciate that this tone curve broadly increases the brightness of the image, especially in midtones and highlights, and it dips below the 1:1 line in the deepest shadows, which adds some contrast. I found this interesting, because I had empirically found that when editing a raw lunar image in sRGB space in Photoshop, the following curve is required to restore a nearly natural view.
My curve is much more aggressive, especially in the shadows, but this now makes perfect sense: it is effectively combining the initial gamma transformation with the additional tone curve normally supplied by the Nikon profile settings. After using RawTherapee to get a glimpse under the hood of how ACR processes a raw photograph, I wondered if I could create a profile to do the same to lunar images taken with my ASI183mm. Indeed, you can, but the result is not appreciably different from the strong tone curve in Photoshop. If you attempt to use the ACR default tone curve with a 2.2 gamma in RawTherapee, you will find it to be too much, because those settings were defined to work with "normal" exposures from the Nikon, whereas the raw images from my ASI183mm more closely approximate a strong "expose to the right" technique. Therefore, I had to reduce the gamma in RawTherapee to 1.3 while keeping the same default ACR tone curve.

You can see the results below. The image on the left is the raw linear file assigned to sRGB. The image in the middle was processed in Photoshop with nothing other than the tone curve shown above. And the image on the right was processed in RawTherapee using a combination of 1.3 gamma, the ACR default tone curve, and a slight change in the black levels. This can all be saved as a profile setting and applied to other images, where it works pretty well as a default processing step (but so does the Photoshop version). The Photoshop version has slightly brighter highlights, and the RawTherapee version has slightly better shadows; these could be equalized very easily in Photoshop.
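For anyone who wants to experiment with this two-step scheme outside a raw editor, it's simple to sketch in Python: a mild gamma lift followed by a piecewise-linear tone curve. Note that the control points below are invented for illustration; they are not the actual ACR default curve, which you'd pull from the RawTherapee file mentioned above.

```python
# Sketch of the two-step transform: gamma 1.3 lift, then a tone curve
# applied as linear interpolation between (input, output) control points.
# These control points are made up for illustration -- NOT the ACR default.
import bisect

def apply_gamma(x, gamma=1.3):
    """Mild gamma lift, in place of the usual 2.2 (pixel values 0.0-1.0)."""
    return x ** (1.0 / gamma)

def apply_curve(x, points):
    """Piecewise-linear tone curve through sorted (input, output) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))          # clamp to a valid segment
    t = (x - xs[i]) / (xs[i + 1] - xs[i])    # position within the segment
    return ys[i] + t * (ys[i + 1] - ys[i])

# Hypothetical S-ish curve: dips slightly in shadows, lifts midtones/highlights.
curve = [(0.0, 0.0), (0.25, 0.20), (0.5, 0.62), (0.75, 0.88), (1.0, 1.0)]

v = apply_curve(apply_gamma(0.18), curve)
print(round(v, 3))  # 0.229 -- middle gray after the gamma lift and curve
```

Applied per pixel (per channel, or to luminance only), this reproduces the structure of the RawTherapee profile: one fixed formula plus one fixed lookup, nothing image-specific.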
The main thing I like about these explicitly defined processing schemes is that they pass the sniff test for image manipulation. In scientific journals, you cannot manually (and subjectively) edit an image for publication. Processing is limited to steps applied evenly across the entire image that can be defined by simple mathematical formulas or protocols, easily reproduced by anyone who downloads your source code or profile setting. In general, no layer masks or manual selections are allowed. This is very different from some of the techniques you see on this forum, although those are perfectly acceptable for photographic art.