This may sound a foolish suggestion, as we all know that a raw file captures more information than a Jpeg. Yes, but....
If the full exposure is made up of many (say more than 20) short exposures, then the problems with Jpegs are greatly reduced. If there is noise in the output of the sensor (and there will be), the effect of averaging many frames increases the effective bit depth.
Consider the adjacent 8-bit values 220 and 221, and let's say the real value of a pixel level is 220.3 (requiring more than 8 bits to digitize). If there were no noise in the pixel output, one would always get the value 220 - just 8 bits. If, however, there is noise in the output of the sensor, the values saved in the Jpeg file will spread - certainly over 220 and 221, and most likely 219 and 222 as well. The noise in the signal produces this scatter. BUT, as the real value is nearer to 220, the average of many values will be closer to 220 than to 221, and probably very close to 220.3. The effect is that the averaged result (held in 16-bit or wider accumulators within Deep Sky Stacker or Sequator) has more effective bits, and the result in terms of bit depth may not be significantly worse than if raw data were used.
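The effect described above is easy to demonstrate with a small simulation. This is only an illustrative sketch: the true pixel level of 220.3 comes from the example above, while the noise level (about one 8-bit step) and the frame count are assumed values chosen to show the principle.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

true_level = 220.3   # real pixel level, sitting between 8-bit codes 220 and 221
noise_sigma = 1.0    # assumed sensor noise, roughly one 8-bit step (1 LSB)
n_frames = 1000      # number of short exposures being averaged

samples = []
for _ in range(n_frames):
    # each frame: true level plus random sensor noise...
    noisy = true_level + random.gauss(0.0, noise_sigma)
    # ...then quantised to an 8-bit integer, as in a Jpeg frame
    samples.append(max(0, min(255, round(noisy))))

average = sum(samples) / n_frames
print(average)  # lands close to 220.3, not stuck at 220
```

With no noise, every frame would quantise to 220 and the average could never recover the .3; with noise of about 1 LSB, the scatter between 220 and neighbouring codes lets the average converge on the true sub-bit value.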
Averaging many Jpeg frames also appears to reduce the artifacts that the Jpeg compression produces; otherwise I cannot see how I can get such good results using just Jpeg frames.
I have taken 50 x 25-second frames of the Hyades and Pleiades clusters, in both Jpeg and raw, using my Sony A5000 mirrorless camera and a superb Zeiss 45mm prime lens, and processed them three ways: first using just the Jpeg frames, then the raw frames directly, and finally using Tiffs derived from the raw frames with Adobe Lightroom. The stacked results had the light pollution removed and were identically stretched in Adobe Photoshop.
I can only say that the Jpeg result looked far more appealing and, in particular, showed up the nebulosity surrounding the cluster far better.
I believe that in producing the Jpeg files from the raw data off the sensor, many cameras carry out some stretching of the data, and that this might have helped produce the better result. It could be that applying some stretching to each Tiff file derived from the raw files might achieve a better result from the raw data.
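For anyone wanting to try the per-frame stretching idea, here is a minimal sketch of the sort of curve one might apply to each raw-derived frame before stacking. The asinh stretch is a common choice in astroimaging because it lifts faint detail strongly while leaving bright stars almost untouched; the `strength` value is an assumed, illustrative setting, not a recommendation, and applying it to real Tiff files would of course need an image library on top of this.

```python
import math

def asinh_stretch(value, strength=10.0, max_level=65535.0):
    """Map a linear pixel value (0..max_level, e.g. a 16-bit Tiff level)
    through an asinh curve, returning a value back in 0..max_level."""
    x = value / max_level
    return max_level * math.asinh(strength * x) / math.asinh(strength)

# Faint levels are lifted far more than bright ones:
print(asinh_stretch(1000))   # faint nebulosity - brightened several times over
print(asinh_stretch(60000))  # bright star - changed only slightly
```

Because the curve is applied identically to every frame, the frames still average together consistently; the point is simply to give the faint signal more of the available levels before any further quantisation, much as the in-camera Jpeg processing appears to do.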
One should always take both Jpeg and raw data when astroimaging if only to be able to quickly scan through all the frames and perhaps eliminate those that have suffered the passage of a plane or satellite.
My suggestion is that one should also process the Jpeg frames - you might be surprised.
Ian (I know it appears that I am a newbie, but I rarely post on forums and have written two books on astroimaging.)