Here is my 1st cut at a list of AP Concepts and Principles that I have learned in my 1st year of doing AP (sorry about the long list; it is much more readable in Word):
1) For imaging, I am concerned mainly with Field of View (FoV) for composition, image scale for detail of small targets, and Signal to Noise Ratio (SNR).
2) Deep Sky Objects (DSO) have a wide range of sizes, and no one setup can fit all of these targets. These range from Very Small: planetary nebulas (M57, 1'x1'); Small: most galaxies (M51, 11'x6'); Medium: most nebulas (M42, 90'x60'); Large: M31 Andromeda (180'x40'), North American + Pelican Nebulas (120'x120'), Cygnus Loop (180'x180'). By contrast, the Sun and full Moon are ~30' in diameter. Planets are far smaller, but much brighter.
3) Not all targets are visible from every location; visibility depends on the viewer’s latitude, the time of year (based on the DSO’s Right Ascension, RA, and the viewer’s longitude), horizon obstructions (trees, buildings, mountains), and sky darkness (Bortle scale).
4) It is best to image DSOs when they have reached an altitude of >30deg, in order to minimize the amount of atmosphere the light has to penetrate. Best viewing is when the DSO is around the meridian, as it will be at its highest point in the sky.
5) A particular DSO’s altitude at the meridian is determined by its Declination (Dec) compared to the viewer’s Latitude: Meridian Altitude = 90 - |Latitude - Dec|, all in degrees (equivalently, IF(Dec>LAT, 90-Dec+LAT, Celestial Equator + Dec), where the Celestial Equator’s altitude is 90 minus the viewer’s Latitude). The Meridian Altitude will be at the Zenith (directly overhead) if the DSO’s Dec equals the viewer’s Latitude.
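The rule in item 5 can be sketched in a few lines of Python (the M51 declination and the 38°N latitude are just example numbers):

```python
def meridian_altitude(dec_deg, lat_deg):
    """Altitude (degrees) of a DSO as it crosses the meridian:
    90 - |Latitude - Dec|; equals 90 (zenith) when Dec matches Latitude."""
    return 90.0 - abs(lat_deg - dec_deg)

# Example: M51 (Dec ~ +47.2 deg) seen from latitude 38 deg N
print(round(meridian_altitude(47.2, 38.0), 1))  # -> 80.8
print(meridian_altitude(38.0, 38.0))            # -> 90.0 (zenith)
```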
6) FoV is determined by the combination of the scope's focal length (FL) and the sensor's physical size. You can get a larger FoV with a shorter FL and/or a larger sensor.
7) Image Scale (arcsec/px) is determined by the combination of the scope's FL and the sensor's pixel size. You can get finer image scale (smaller number, more resolution) by a longer FL and/or smaller pixel size.
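Items 6 and 7 boil down to two standard formulas (206.265 converts a µm/mm ratio to arcsec; 3437.75 is arcmin per radian). The pixel size, sensor width, and focal length below are hypothetical examples:

```python
def image_scale(pixel_um, fl_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / FL (mm)."""
    return 206.265 * pixel_um / fl_mm

def fov_arcmin(sensor_mm, fl_mm):
    """Field of view along one sensor dimension, in arcminutes."""
    return 3437.75 * sensor_mm / fl_mm

# Hypothetical setup: 3.76 um pixels, 23.5 mm sensor width, 530 mm FL
print(round(image_scale(3.76, 530), 2))  # arcsec/px
print(round(fov_arcmin(23.5, 530), 1))   # arcmin
```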
8) There is no advantage to a finer image scale than determined by Nyquist Critical Sampling, as it yields no extra achievable resolution. Nyquist Critical Sampling is the rate (frequency no higher than, or arcsec/px no smaller than) at which ALL of the embedded information (the filtered signal) can be reconstructed.
9) Nyquist Critical Sampling is a fraction of the size of the finest detail (or a multiple of the highest frequency component) of the filtered signal. I refer to this divisor of size, or multiplier of frequency, as the Nyquist Factor. Estimates of this Nyquist Factor are between 3 and 4 for 2-D images, and 2 for 1-D waveforms.
10) An estimate of the filtered signal's finest detail is the smallest measured FWHM of any star. The true signal is filtered (or blurred) by seeing conditions, the resolving power of the telescope, and the tracking ability of the mount.
11) Critical Sampling ~ SQRT(Seeing^2 + Dawes^2+ ...) / Nyquist Factor. (I am not sure whether to add the mounts error in quadrature into the prior equation, or to treat the equation as the limit with a theoretically perfect mount and degrade the estimate, since seeing conditions would also show up in the measurement of the guiding accuracy. This could also be the reason for the Nyquist Factor range.)
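A rough sketch of the critical-sampling estimate in item 11, leaving out the mount term as discussed; the 2.5 arcsec seeing and 80 mm aperture are hypothetical numbers:

```python
import math

def dawes_limit(aperture_mm):
    """Dawes Limit in arcsec: 116 / aperture diameter in mm."""
    return 116.0 / aperture_mm

def critical_sampling(seeing_arcsec, aperture_mm, nyquist_factor=3.0):
    """Rough target image scale (arcsec/px): combined blur over the
    Nyquist Factor (3-4 for 2-D images per item 9)."""
    blur = math.sqrt(seeing_arcsec ** 2 + dawes_limit(aperture_mm) ** 2)
    return blur / nyquist_factor

# Hypothetical: 2.5 arcsec seeing with an 80 mm refractor
print(round(critical_sampling(2.5, 80.0), 2))  # target arcsec/px
```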
12) Seeing is a measure of blurring caused by turbulence in the Earth's atmosphere, measured in arcsec. This is what causes stars to “twinkle”, the way the image of a coin on the bottom of a swimming pool jiggles with the ripples on the water surface.
13) Dawes Limit is a measure of blurring due to wave diffraction at the edge of the telescope’s aperture, measured in arcsec. The larger the diameter of the scope, the sharper the image and the lower the Dawes Limit. Dawes Limit (arcsec) = 116 / Aperture Diameter (mm).
14) A reflecting telescope’s central obstruction (CO) will blur the image by more than estimated from its aperture alone, as the CO (and supporting vanes) moves some of the light intensity out of the central Airy disk peak and into the surrounding rings, making stars larger. The supporting vanes of Newtonian telescopes also move some of the light intensity into diffraction spikes.
15) I do not worry about undersampling (within reason) when imaging larger targets, as my limiting factor is fitting the target in the FoV of the sensor.
16) SNR = Signal / SQRT(Signal + Skyfog + DC + RN^2), with all terms in electrons.
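A minimal sketch of the single-sub SNR formula in item 16, assuming all quantities are already in electrons (the numbers below are made up for illustration):

```python
import math

def snr(signal_e, skyfog_e, dark_e, read_noise_e):
    """SNR of a single sub, all terms in electrons:
    signal / sqrt(signal + skyfog + dark current + RN^2)."""
    return signal_e / math.sqrt(signal_e + skyfog_e + dark_e + read_noise_e ** 2)

# Hypothetical sub: 100 e- target signal, 400 e- skyfog, 5 e- DC, 3 e- RN
print(round(snr(100, 400, 5, 3), 2))  # -> 4.41
```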
17) Skyfog is sky brightness caused by Light Pollution or the Moon’s illumination.
18) Skyfog is a large percentage of the noise; it produces the peak in the histogram and sets the median pixel value. Much better SNR is achieved in darker skies (low Bortle scale), and good results can be attained at much shorter total integration times than in urban environments (high Bortle scale).
19) Dark Current (DC) is thermal signal that accumulates the longer the exposure and grows rapidly as the camera sensor gets hot. This is why cooled cameras are essential for DSO photography. Non-cooled cameras will require many more, but shorter, exposures to keep this DC in check. Cooling is not necessary for the short exposures typical of solar, lunar, and planetary photography.
20) Read Noise (RN) is the electronic noise introduced each time the camera reads the image off the sensor (through the amplifier and ADC). This is composed of random variability and fixed pattern noise.
21) Shot Noise (SN) is the statistical variability in the actual flow of photons from the target. Photons arrive in quantized packets, but this flow is not constant; the longer the total integration time, the closer the captured signal converges to the true signal. SN shows up as speckles in the darker parts of the image, instead of the true smooth background.
22) Quantization Error is caused by differences between the number of the ADC’s intensity levels (e.g. 12-bit has 4,096; 14-bit has 16,384; 16-bit has 65,536) and the Full Well Capacity (FWC) of the sensor. This can cause differing photon counts to convert into the same output number.
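One way to see the quantization trade-off: the e-/ADU gain needed to map a hypothetical 50,000 e- full well onto each ADC range. Anything above 1 e-/ADU means several distinct photon counts collapse into one output level:

```python
def gain_e_per_adu(full_well_e, adc_bits):
    """e-/ADU needed to map the full well onto the ADC's output range.
    Above 1 e-/ADU, multiple photon counts share one output number."""
    return full_well_e / (2 ** adc_bits)

# Hypothetical sensor with a 50,000 e- full well capacity
print(round(gain_e_per_adu(50000, 12), 2))  # -> 12.21 (12-bit: coarse)
print(round(gain_e_per_adu(50000, 16), 2))  # -> 0.76 (16-bit: sub-electron)
```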
23) SNR of a single DSO sub is very low, with signal just above the noise. This is why many subs are needed: signal accumulates linearly with the number of subs, while the random noise sources add in quadrature and so grow only as a SQRT. RN is the exception in that a full dose of it is added with every sub, so splitting a fixed total integration time into more, shorter subs accumulates more RN.
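A sketch of how stacking behaves per item 23 (the per-sub electron counts are hypothetical): total signal grows as n while total noise grows as sqrt(n), so SNR doubles for every 4x increase in sub count:

```python
import math

def stack_snr(n_subs, signal_e, skyfog_e, dark_e, read_noise_e):
    """SNR of a stack of n identical subs (per-sub values in electrons).
    Signal adds linearly; noise sources add in quadrature, with a full
    RN^2 contribution incurred once per sub."""
    total_signal = n_subs * signal_e
    total_noise = math.sqrt(n_subs * (signal_e + skyfog_e + dark_e + read_noise_e ** 2))
    return total_signal / total_noise

# Hypothetical sub: 10 e- signal, 50 e- skyfog, 2 e- DC, 3 e- RN
for n in (1, 4, 16, 64):
    print(n, round(stack_snr(n, 10, 50, 2, 3), 2))  # SNR doubles per 4x subs
```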
24) F/ratio is the scope’s FL divided by its aperture diameter. Faster f/ratio scopes (lower f/) are beneficial as they give each pixel more light per unit of time. Fast f/ratio scopes have wide objective lenses (or mirrors) compared to the scope’s length, and are expensive and heavy.
25) SNR is determined by the total integration time, f/ratio and the transmission efficiency of the telescope and image train, and the QE of the camera sensor.
26) Transmission efficiency will be determined by the number of lens surfaces, quality of the lenses, quality of the mirrors, quality of the lens/mirror coatings, and degree of central obstruction. Generally, telescopes will have higher transmission efficiency than camera lenses.
27) SNR increases with the SQRT(total integration time). This is achieved through the combination of the length of the individual sub exposures and the number of subs. SNR improves geometrically: going from 1 sub to 2 gives the same improvement factor as going from 2 to 4, 4 to 8, …, or 100 to 200.
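The doubling pattern in item 27, shown directly: every doubling of total integration time buys the same sqrt(2) improvement factor, whether it is the first doubling or the hundredth sub:

```python
import math

# Each doubling of sub count (or total time) improves SNR by sqrt(2) ~ 1.414
for n1, n2 in ((1, 2), (2, 4), (4, 8), (100, 200)):
    print(f"{n1} -> {n2} subs: x{round(math.sqrt(n2 / n1), 3)}")
```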
28) Aperture affects the SNR through its impact on the f/ratio.
29) F/ratio "speed" can be achieved by a larger aperture (with the same FL) or shorter FL (with the same aperture), or a combination of both.
30) A “slow” f/ratio scope (high f/) will need longer total integration time than a “faster” scope.
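Items 28-30 imply a simple scaling: for the same per-pixel exposure of an extended object, the required time scales with the square of the f/ratio. A sketch (the f/7-to-f/4.9 pairing, as with a hypothetical 0.7x reducer, is just an example):

```python
def exposure_ratio(f_old, f_new):
    """Relative exposure time needed at f_new vs f_old for the same
    per-pixel signal on an extended object: (f_new / f_old) ** 2."""
    return (f_new / f_old) ** 2

# Hypothetical: an f/7 scope with a 0.7x reducer becomes f/4.9
print(round(exposure_ratio(7.0, 4.9), 2))  # -> 0.49, about half the time
```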
31) The minimum desired sub length is one in which all of the sensor’s pixels have values > 0 (no black clipping).
32) The minimum desired sub length is driven by the camera’s read noise (RN). Cameras with higher RN need longer subs in order to minimize the accumulation of RN in the entire stack, with the desire to “swamp the read noise”.
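One common rule of thumb for “swamping the read noise” in item 32 is to expose until the accumulated sky signal per pixel is roughly 10x RN^2; the sky rates below (bright site vs dark site) are made-up illustrative values:

```python
def min_sub_length(read_noise_e, sky_e_per_sec, swamp_factor=10.0):
    """Shortest sub (seconds) such that skyfog signal per pixel reaches
    swamp_factor * RN^2 -- one common 'swamp the read noise' rule of thumb."""
    return swamp_factor * read_noise_e ** 2 / sky_e_per_sec

# Hypothetical 3 e- RN camera under a bright sky vs a dark sky
print(min_sub_length(3.0, 2.0))   # -> 45.0 s under heavy light pollution
print(min_sub_length(3.0, 0.2))   # roughly 450 s under a dark sky
```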
33) The maximum desired sub length is driven by the scope’s f/ratio, the camera’s Full Well Capacity (FWC), the brightness of the target, the darkness of the sky, and the tracking/guiding ability of the mount. The longest sub lengths are possible with: darker skies, dimmer targets, high f/ratio scopes, cameras with a large FWC, and high quality mounts that are properly adjusted and polar aligned.
34) The ideal exposure time takes advantage of the sensor’s full dynamic range, with all pixels having values > 0 (no black clipping) and zero (or very few) pixels saturated at the maximum value (no white clipping). Viewing the histogram, the curve starts just after the left edge, peaks (skyfog) in the left portion (1/4 to 1/3 of the way across is ideal), and trails off rapidly before the right edge.
35) CCD cameras generally have higher RN than CMOS cameras, and a typical CCD’s FWC is higher than a CMOS camera’s. This leads to exposure-length rules of thumb that are significantly longer for CCD cameras than for CMOS cameras.
36) Dark calibration frames, with exactly matched camera gain (or ISO) and exposure length, characterize the camera’s DC. Many dark frames are captured (with the lens cap on) and averaged. The average dark frame is subtracted from the averaged Light frame (actual photo of target) to remove the effect of DC.
37) Bias calibration frames have very short exposure times and characterize the camera’s RN. Many bias frames are captured (with the lens cap on) and averaged. The average bias frame is subtracted from the averaged Light frame (actual photo of target) to remove the fixed bias offset (the random component of RN cannot be subtracted away). Bias frames (or Dark Flats) are also subtracted from Flat frames.
38) Flat calibration frames characterize the optical distortions in the image train. Many flat frames are captured (with an evenly illuminated target) and averaged. The averaged Light frame (actual photo of target) is divided by the normalized average flat frame to remove the effects of these distortions, which include vignetting (darkening at the edges/corners), smudges on the lens/filters, and dust on the sensor.
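The calibration arithmetic in items 36-38 (darks subtract, flats divide out) can be sketched on toy 3-pixel “frames”; the numbers are invented, with 50% vignetting in the last pixel:

```python
def calibrate(light, dark, flat, flat_dark):
    """Per-pixel calibration sketch: (light - dark) / normalized flat.
    The flat is dark-subtracted, then normalized so the correction is 1
    at its brightest (unvignetted) point. Frames are flat lists of pixels."""
    flat_corr = [f - fd for f, fd in zip(flat, flat_dark)]
    peak = max(flat_corr)
    return [(l - d) * peak / fc for l, d, fc in zip(light, dark, flat_corr)]

# Hypothetical 3-pixel frame: uniform sky, 50% vignetting in the last pixel
print(calibrate([110, 110, 60], [10, 10, 10], [210, 210, 110], [10, 10, 10]))
# all three pixels recover the same value once the flat is divided out
```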
39) Larger apertures will increase the exposure of point sources but not of extended objects, assuming the same f/ratio.
40) Guiding/tracking accuracy is reduced by: heavier loads, long moment arm loads, large surface area loads (wind resistance), imbalanced loads, fine image scales, and mechanical flexure.
41) There are no meaningful advantages to binning on the camera for CMOS, other than smaller files and faster downloads (not an issue with DSOs). Any resolution reduction can be done in software by various means, with the advantage of being able to use varying fractions of the full resolution. If you bin at the camera level, the incremental resolution is lost forever.
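A minimal sketch of one software alternative to on-camera binning: 2x2 block averaging after capture, which can be applied at any stage and at any factor while the full-resolution data is kept:

```python
def software_bin2x2(img):
    """Average 2x2 blocks of a 2-D list of pixel values -- software
    binning that halves resolution after capture instead of at the camera."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# Toy 4x4 "image" binned down to 2x2
print(software_bin2x2([[1, 3, 5, 7],
                       [1, 3, 5, 7],
                       [2, 4, 6, 8],
                       [2, 4, 6, 8]]))
```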
Edited by SilverLitz, 18 January 2020 - 12:59 PM.