Yes, of course it's true that the CCMs (colour correction matrices) are only approximations and are typically derived from calibrating a colour chart lit with known illuminants. But it's incorrect to imply that it doesn't work well for emissive light sources. Consider that terrestrial photographers are quite comfortable with capturing city nightscapes with their wide range of pretty light sources - no D65 illuminant there! The same argument applies to images of the cosmos.
Also, there is no need for data stretching to affect colour - that's the point of colour preserving stretches such as arcsinh stretch. But to be displayed correctly the data need the relevant colour space gamma applied for the profile attached to the data. Linear data needs a linear profile (which Photoshop and Affinity Photo do automatically behind the scenes in 32-bit mode) whilst sRGB and AdobeRGB need their respective gamma curves applied. This is where the "traditional" astro-processing workflow typically goes wrong - it displays linear data with a (non-linear) sRGB profile implicitly attached. This is one of the reasons that a crazy amount of stretching is typically required to make the image look at all reasonable.
Mark
Lots to unpack there, Mark, which is why I said color is an incredibly deep rabbit hole.
But it's incorrect to imply that it doesn't work well for emissive light sources.
All light is, of course, emitted. What I'm getting at is that a Macbeth chart will not reflect any wavelength that is not contained in the illuminant's power spectrum. For example, if that power spectrum consists of just one or two peaks (say H-alpha at 656nm and H-beta at 486nm), the chart will simply not reflect anything else (in other words, the reflected light would be bi-chromatic), as the radiant energy is confined to those two peaks.
Consider that terrestrial photographers are quite comfortable with capturing city nightscapes with their wide range of pretty light sources - no D65 illuminant there! The same argument applies to images of the cosmos.
Indeed, night shots with different sorts of lighting in them (say fluorescent, LED and incandescent) look very odd (though pretty) and nothing like what the human eye would perceive.
It's not that you can't capture city nightscapes; it's just that the colors are off if you color correct them with the wrong white point, or with a matrix that does not accurately match the illuminant (hence, at the very least, the handful of lighting presets on most consumer cameras, which twist the white balance so that the D65 illuminant-based matrix behaves like a black body radiator at different temperatures). You will find that most basic nightscape photography tutorials start with setting a custom white balance to compensate for the yellower power spectrum, which is also what our eyes/brain would do.
Also, there is no need for data stretching to affect colour
As you very likely know, color is made up of hue, saturation and brightness. You cannot have color without brightness; it is an integral part of how we perceive color. Affect brightness (by stretching) and you still affect how a color is perceived, even if you keep the RGB ratios constant. That is, this has nothing to do with color-ratio-preserving stretches (see the "brown" video I linked to earlier). Color ratio preservation is neat and useful, but it does not take perceived color change into account. It would be more useful to do this in a color space that is more psychovisually uniform and mappable, such as CIELAB, rather than in RGB space. This is in fact what StarTools does, but even that, of course, has its limits.
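To make that concrete, here is a minimal sketch (plain NumPy, my own toy example, not StarTools' or anyone else's actual implementation) of a color-ratio-preserving arcsinh stretch: the per-pixel gain is derived from luminance only and applied equally to R, G and B, so the ratios survive - yet because the brightness changes, the perceived color can still shift.

```python
import numpy as np

def arcsinh_stretch_ratio_preserving(rgb, strength=100.0):
    """Color-ratio-preserving arcsinh stretch on linear RGB data.

    The stretch factor is computed from a luminance proxy only and then
    applied identically to all three channels, so the R:G:B ratios are
    untouched. Perceived color can still change, because perception
    depends on brightness as well.
    """
    # Simple luminance proxy (Rec.709 weights on linear data)
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    lum = np.maximum(lum, 1e-12)                       # avoid division by zero
    stretched = np.arcsinh(strength * lum) / np.arcsinh(strength)
    gain = stretched / lum                             # one gain per pixel
    return rgb * gain[..., None]                       # same gain for R, G and B
```

Feed it a dim pixel and a bright pixel with identical R:G:B ratios and the ratios come out identical - but the two pixels will not be perceived as the same color, which is exactly why a perceptually more uniform space like CIELAB is the better place to manage this.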
But to be displayed correctly the data need the relevant colour space gamma applied for the profile attached to the data
Unless I am misunderstanding you, RAW RGB tristimulus camera data does not come with a color space or profile; it is colorspace-less (or, looking at it differently, "camera-specific"). It merely consists of digital representations of successful photon-to-electron conversions for red-, (typically 2x) green- and blue-filtered pixels.
There is no stretch applied when converting camera space RGB to XYZ. A white balance is often applied to the RAW RGB, but that is a linear multiply (i.e. a 3x3 matrix with just one value per channel on the diagonal and the rest set to 0), not a non-linear stretch. Once in XYZ space, the matrix is applied. Only when the XYZ values are converted to the target color space is a stretch applied.
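In other words, white balance amounts to nothing more than this (the gains below are made-up illustrative numbers, not real camera data):

```python
import numpy as np

# White balance as a purely linear, per-channel multiply:
# a 3x3 matrix with one gain per channel on the diagonal, zeros elsewhere.
wb = np.diag([2.1, 1.0, 1.6])               # illustrative R, G, B gains

raw_rgb = np.array([0.12, 0.34, 0.08])      # one camera-space RAW pixel
balanced = wb @ raw_rgb                     # still linear - no stretch involved
```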
Just so we're hopefully on the same page, this is what happens in an "ideal" RAW converter (as implemented by dcraw):
Camera space RAW RGB -> white balance -> XYZ -> camera matrix -> corrected XYZ -> target color space
so for sRGB, that would be:
Camera space RAW RGB -> white balance -> XYZ -> camera matrix -> corrected XYZ -> rgb (linear) -> gamma correction -> RGB (non-linear)
That is, the only time a stretch is (or should be) applied is at the last step, when the target color space demands it.
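As a sketch of that last step (assuming the data coming out of the matrix is already linear rgb with sRGB primaries; the piecewise curve below is just the standard sRGB transfer function, nothing dcraw-specific):

```python
import numpy as np

def srgb_encode(linear_rgb):
    """Apply the sRGB transfer ("gamma") curve - the only non-linear step
    in the pipeline, and only because the target color space demands it."""
    linear_rgb = np.clip(linear_rgb, 0.0, 1.0)
    return np.where(linear_rgb <= 0.0031308,
                    12.92 * linear_rgb,
                    1.055 * np.power(linear_rgb, 1.0 / 2.4) - 0.055)
```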
All that is, unless you are talking about an optional/proprietary extra tone mapping step "for flair" - but that obviously makes the data no longer linear in a predictable way and voids it for use by algorithms like deconvolution, gradient removal, etc. (i.e. it is not universally reversible by a simple color space conversion, nor does dcraw, for example, apply such a step).
A great "little" slide deck by M.S. Brown from the National University of Singapore that brings it all together is here. It is so incredibly comprehensive, yet so simple, that if I ever have the pleasure of meeting Mr. Brown, his drinks will be on me.
Clear skies!