Just to clarify for any readers out there, so you don't have to piece this together from numerous articles:
the resolution benefit of processing multiple images comes from two concepts:
#1 eliminating atmospheric distortion, which increases clarity (this requires bright subjects and fast exposures, usually video frames);
#2 a different way to increase resolution: taking advantage of sub-pixel movement between images.
This second one comes from two parts:
A. In acquisition, "dithering" needs to happen (either intentionally or accidentally). Dithering occurs when many images of the same subject are acquired with slight variations in the position of the sensor relative to the subject, so that different sensor pixels capture slightly different parts of the image each time.
B. In processing, "drizzling" is used: the images are stacked with sub-pixel precision, and the sub-pixel variances between them give rise to reliable sub-pixel detail, i.e. super-resolution.
Note that if a completely static subject's light falls consistently on the same sensor pixels throughout the acquisition of many subs, no dithering has happened, and without such spatial variations between the subs, no super-resolution results can be had.
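To make the dithering + drizzling idea concrete, here is a minimal NumPy sketch. Everything in it is illustrative (the scene, the 2x super-resolution factor, the 50 subs), and the "drop" placement is simplified to nearest-neighbour deposition; real drizzle implementations shrink each drop and split its flux by fractional overlap with the output pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

SCALE = 2  # super-resolution factor (fine pixels per sensor pixel)

# Hypothetical "true" scene on a fine grid: a bar one fine pixel thick,
# finer than a single sensor pixel.
truth = np.zeros((40, 40))
truth[19, 10:30] = 1.0

def capture(dy, dx):
    """Simulate one sub: shift the scene by (dy, dx) sensor-pixel units
    (the 'dithering'), then bin SCALE x SCALE fine pixels into one
    sensor pixel."""
    sy = int(round(dy * SCALE))  # shift expressed in fine pixels
    sx = int(round(dx * SCALE))
    shifted = np.roll(np.roll(truth, sy, axis=0), sx, axis=1)
    h, w = truth.shape
    return shifted.reshape(h // SCALE, SCALE, w // SCALE, SCALE).mean(axis=(1, 3))

# Dithered acquisition: a random sub-pixel offset for every sub.
shifts = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(50)]
subs = [capture(dy, dx) for dy, dx in shifts]

# Drizzle: undo each sub's known shift and deposit its pixel values onto
# the fine output grid, accumulating flux and weights separately.
acc = np.zeros_like(truth)
wts = np.zeros_like(truth)
for (dy, dx), sub in zip(shifts, subs):
    oy, ox = int(round(dy * SCALE)), int(round(dx * SCALE))
    for i in range(sub.shape[0]):
        for j in range(sub.shape[1]):
            y = (i * SCALE - oy) % truth.shape[0]
            x = (j * SCALE - ox) % truth.shape[1]
            acc[y, x] += sub[i, j]
            wts[y, x] += 1.0
drizzled = np.divide(acc, wts, out=np.zeros_like(acc), where=wts > 0)

# Without dithering every sub is identical, every drop lands on the same
# fine pixels, and stacking adds no new spatial information.
static = capture(0.0, 0.0)
```

Because each sub lands on the fine grid at a different known offset, the stack samples the scene between the original sensor-pixel centers, which is exactly the information a single undithered stack can never provide.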