I agree with WadeH237 (mostly). Some call this effect "stride". With a 12-bit camera, the stride is 16, as WadeH237 mentions; for a 14-bit camera it is 4 (in 16-bit FITS format). If you expand the histogram of your 12-bit sub frame, you will see pixel counts only at values of 16*n-1, where "n" is an integer (or roughly only at values near multiples of 16).
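Here is a minimal sketch of what that looks like, simulating a 12-bit sensor whose values get packed into the 16-bit range by a simple left shift (an assumption; whether the occupied values land at 16*n or 16*n-1 depends on how the camera fills the low bits, but the spacing of 16 is the point):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 12-bit raw values (0..4095), e.g. sky background near 100 ADU.
raw12 = rng.poisson(lam=100, size=100_000).clip(0, 4095)

# Assume the camera/driver packs them into 16 bits by a left shift
# (multiply by 16), so only every 16th output value can occur.
sub16 = raw12.astype(np.uint16) * 16

counts = np.bincount(sub16, minlength=65536)
occupied = np.nonzero(counts)[0]
print(np.diff(occupied))   # every gap is a multiple of 16 -- the "stride"
```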
The reason this goes away at integration is that, at any given pixel location, integration averages the various 16*n-1 values. For example, if a pixel location over 20 sub frames has 7 subs at a value of 15 (16*1-1) and 13 subs at a value of 31 (16*2-1), the integrated value at that pixel is 25.4 [(7*15+13*31)/20] for a floating-point integration, and probably 25 for an integer integration. Extend the thought experiment over all the combinations of 15, 31, 47... at a given pixel location, and over all the pixels in an image, and you can see how the "stride" issue does not significantly affect the integrated result given enough subs (and floating-point integration helps further).
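A quick numpy check of that worked example, in case anyone wants to play with the numbers:

```python
import numpy as np

# 7 subs at 15 (16*1-1) and 13 subs at 31 (16*2-1) at one pixel location.
stack = np.array([15] * 7 + [31] * 13, dtype=np.float64)

mean_fp = stack.mean()          # floating-point integration
mean_int = int(round(mean_fp))  # what an integer result would store

print(mean_fp)    # 25.4
print(mean_int)   # 25
```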
Incidentally, simply calibrating a sub frame with flats and darks (which are themselves integrated, so their values are not stride-quantized) will also remove or at least reduce the "stride".
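A sketch of why (the array values here are made up for illustration): the master dark and flat are floating-point averages, so the calibrated result no longer lands on the 16-ADU grid.

```python
import numpy as np

sub = np.array([15.0, 31.0, 31.0, 47.0])         # stride-quantized raw sub
master_dark = np.array([10.2, 10.4, 9.9, 10.1])  # float average of many darks
master_flat = np.array([0.98, 1.01, 1.00, 0.99]) # normalized float flat

calibrated = (sub - master_dark) / master_flat
print(calibrated)   # values fall off the 16-ADU grid
```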