I would start with a key point that should clarify these things and keep the terminology consistent with technical literature on these topics: The only difference between noise and signal is what you are trying to measure in a given situation - it has nothing at all to do with randomness. What is signal in one context is noise in another. The terms are used with this flexibility for good reason - just as big and small change meaning depending on context. The only place I see an insistence that noise must be "random" is in amateur astro writings - including some amateur books. This is completely at odds with the way the terms are used in scientific and engineering contexts - particularly in imaging sensors.
With that clarified, it is easier to talk about one of the main terms involved in calibration - and that is "pattern noise." This is noise that may look random or structured across the sensor - but it repeats, at least approximately, from one frame to the next. Since it repeats, you can estimate what it is and subtract it from each frame. So - this is a form of noise in each frame - but it has a repeating pattern - and after subtraction you have less noise.
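To make that concrete, here is a small numpy sketch. All the numbers are made up for illustration - a 64x64 sensor with a fixed offset pattern and Gaussian read noise. The pattern is estimated by averaging many frames (the random part averages away), then subtracted, and the remaining frame-to-frame scatter drops to roughly the read noise alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor: a fixed per-pixel offset pattern plus random read noise.
pattern = rng.normal(100.0, 20.0, size=(64, 64))   # repeats in every frame
read_sigma = 5.0

def raw_frame():
    return pattern + rng.normal(0.0, read_sigma, size=pattern.shape)

# Estimate the pattern by averaging many frames; the random part averages away.
estimate = np.mean([raw_frame() for _ in range(400)], axis=0)

noisy = raw_frame()
calibrated = noisy - estimate

# The uncalibrated frame carries the ~20-unit pattern variation;
# the calibrated frame is left with roughly the 5-unit read noise.
print(np.std(noisy), np.std(calibrated))
```

The image became less noisy even though nothing about it is less "random" - the repeating part was simply characterized and removed.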
If you insist that noise must be "random" - then there is no way to describe what just happened - a noisy image suddenly became less noisy. The flexible usage handles this perfectly well, and it captures a key motivation of the calibration process - to reduce noise by subtracting pattern noise.
A given pixel has several things that characterize it. It has a fixed offset value, and on top of that it has a random read noise with some sigma injected once when it is read, and it has a certain amount of dark current that accumulates over time. In addition, it has some particular sensitivity to light that determines how efficiently it converts photons to electrons. Each pixel will have its own values of these parameters - and the goal of calibration is to make them all behave the same way by removing the variation between pixels.
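As a rough sketch of that per-pixel model - every parameter value here is invented purely for illustration - a pixel's output is its fixed offset, plus dark current accumulating with exposure time, plus photons converted at that pixel's own sensitivity, plus a read-noise draw injected once at readout:

```python
import numpy as np

rng = np.random.default_rng(1)

def pixel_signal(photons, exposure_s, offset, dark_rate, gain, read_sigma=3.0):
    """Illustrative per-pixel model (made-up units):
    offset      - fixed value for this pixel
    dark_rate   - dark current accumulated per second
    gain        - this pixel's sensitivity to light
    read_sigma  - random read noise injected once per readout"""
    dark = dark_rate * exposure_s
    light = gain * photons
    read_noise = rng.normal(0.0, read_sigma)
    return offset + dark + light + read_noise

# Two pixels with different parameters report the same scene differently;
# calibration aims to remove exactly this pixel-to-pixel variation.
a = pixel_signal(1000, 60, offset=500, dark_rate=0.1, gain=0.95)
b = pixel_signal(1000, 60, offset=520, dark_rate=0.3, gain=1.05)
print(a, b)
```

Bias subtraction targets the offset, dark subtraction the accumulated dark current, and flat fielding the sensitivity term - leaving only the random read noise, which no calibration can remove.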
Separately, there is the concept of shot noise - which is present any time you have discrete objects arriving at a roughly steady rate - whether it's photons or rain drops. If on average you collect 100 of these per second, and you actually count the number arriving in each second, it won't always be 100. It will be 91, 108, 97, 115 - etc. There will be a mean rate of 100, and a standard deviation of sqrt(100) = 10. There is no avoiding this; it comes from the randomness of the arrivals themselves. The goal of imaging is to measure that average rate of arrival - but each image will have this random component in it. Since it is undesirable - it is noise. Shot noise.
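You can check the sqrt(100) = 10 claim directly - discrete arrivals at a steady mean rate follow Poisson statistics, whose standard deviation is the square root of the mean:

```python
import numpy as np

rng = np.random.default_rng(2)

# 100,000 one-second "exposures" collecting arrivals at a mean rate of 100/s.
counts = rng.poisson(lam=100, size=100_000)

print(counts.mean())   # close to 100
print(counts.std())    # close to sqrt(100) = 10
```

No amount of equipment quality changes this - the scatter is a property of the arriving photons, not of the camera.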
How does random noise add? In quadrature. If you have meter sticks that have +/- 1 cm length error, and you put two of them end to end - the total length will not be 2m +/- 2cm - it will be 2m +/- sqrt(2)cm. And if you add 100 of them, the result will be 100m +/- 10cm. The random noise terms add with a square root.
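The meter-stick example is easy to simulate - lay many sticks end to end, each with an independent 1 cm (1-sigma) error, and the total error grows as the square root of the count, not linearly:

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 meter sticks, each 100 cm long with an independent 1 cm (1-sigma) error.
trials = 20_000
sticks = rng.normal(loc=100.0, scale=1.0, size=(trials, 100))  # lengths in cm
totals = sticks.sum(axis=1)

print(totals.mean())  # ~10000 cm = 100 m
print(totals.std())   # ~sqrt(100) * 1 cm = 10 cm, not 100 cm
```

That sqrt behavior - errors adding in quadrature - is exactly why stacking works, as described next.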
There are two main aspects of imaging. One is to subtract noise terms that you can characterize - and thereby actually reduce the noise. That amounts to creating an image of the pattern noise and subtracting it. The other is to stack many images together. This does not reduce the noise - the summed noise actually increases - but the random noise grows only as the square root of the number of frames, while the signal grows linearly. Both signal and noise increase - but the signal rises above the noise because it accumulates without the sqrt() - so the final SNR improves.
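Here is a sketch of the stacking arithmetic, with invented numbers: each frame carries 50 units of signal on sigma = 10 of random noise (single-frame SNR of 5). Summing 64 frames grows the signal 64x but the noise only sqrt(64) = 8x, so the SNR of the stack is about 5 * 8 = 40:

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up numbers: 50 units of signal and sigma=10 of random noise per frame.
signal_per_frame = 50.0
noise_sigma = 10.0
n_frames = 64

# Simulate many realizations of one pixel across a 64-frame stack.
frames = signal_per_frame + rng.normal(0.0, noise_sigma, size=(20_000, n_frames))
stack = frames.sum(axis=1)

snr_single = signal_per_frame / noise_sigma   # 5 in a single frame
snr_stack = stack.mean() / stack.std()        # signal grew 64x, noise only 8x

print(snr_single, snr_stack)
```

Both the total signal and the total noise went up - only their ratio improved, and it improved by sqrt(64).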
Pattern noise is present in the bias, darks, and flats - so you want to remove it from the sub exposures as well as you can. Then you can stack images that are as identical as possible, and contain mostly the unavoidable random noise - shot noise plus read noise.
That's an intro summary that I hope clarifies things.
Edited by freestar8n, 12 December 2014 - 08:25 AM.