I have read several threads about subexposure length, calibration, read noise, gain, etc. but I am still confused / at a loss.
So far I have been "playing" in digital astrophotography since the end of January of this year.
Let's start with the current gear:
Mount: Sky-Watcher NEQ6 Pro
Camera: Nikon D5300 - astromodified, with UV/IR cut filter installed
Lenses: kit zoom lenses 18-105mm, 70-300mm, prime 50mm
I currently image from my front garden, in a Bortle 5 zone, with an estimated sky quality of 19.64 mag/arcsec², according to the Clear Outside app.
Soon™, I will hopefully have my new and improved, serious™ setup: an 80mm f/6 apo triplet, 0.8x flattener/reducer, 60mm f/4 guide scope, ZWO ASI 224MC guide camera, and an Optolong L-Pro 2" light pollution filter.
So, after six months of playing, winging the settings (1/3 to 1/2 of the back-of-camera histogram) and imaging with awful kit zoom lenses (good for daylight photography, horrible for astrophotography), I feel it is time to take my astrophotography to the next level, in preparation for the new setup. My current best™ image is a wide field stretching from the Crescent Nebula to the Elephant Trunk Nebula, taken with the 50mm stopped down to f/8, because I finally wanted pinpoint stars across the field; I never could achieve that with the 70-300mm.
That said, I tried using PixInsight to measure my camera's sensor specs with the BasicCCDParameters script. This requires 2 bias frames, 2 dark frames (one 10x longer than the other), and 2 flat frames. I took all six frames at ISO 200 since, from my understanding, this is the best ISO for the D5300.
For the bias frames, I used the fastest shutter speed available, 1/4000s. For the darks, I shot one at 60s and the other at 600s (front covered, eyepiece covered). The problems started with the flats: I could never figure out how long they need to be exposed. Some people say the exposure should put the histogram at roughly 50% when viewed on the back-of-camera display, so I tried that using my PC screen with a white background. For good measure, I also took a completely saturated 5s exposure, to establish the full-scale value and compare it in PixInsight's "Statistics" tool. In "Statistics" (16-bit scale), the 5-second exposure showed a mean, median, minimum, and maximum of 16383 (as expected, since the D5300 records 14-bit raws), but the "50% back-of-camera histogram" flat showed a median value of only 1818 ADU, about 11% of full scale. According to another resource on taking flats - Tutorial on how to take proper flats with DSLR - the correct mean/median ADU should be half of full scale as measured in "Statistics", so in my case about 8200 or thereabouts. That, however, corresponds to a back-of-camera histogram of 75-80%, and even then I could only achieve a median value of 6453 ADU, still falling a little short (39% of full scale).
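To convince myself why the two methods disagree so badly, I put together a quick Python sketch. It assumes a simple 2.2 display gamma as a stand-in for Nikon's actual JPEG tone curve (which is steeper, so the real histogram sits even further right than this predicts) - treat it as a toy model, not a calibration:

```python
# Rough sketch of why the back-of-camera histogram is misleading for flats.
# The camera display applies a tone curve (approximated here as a plain 2.2
# gamma -- Nikon's real JPEG curve differs), while PixInsight's Statistics
# reports linear sensor ADU.

FULL_SCALE_ADU = 16383  # 14-bit maximum of the D5300

def displayed_fraction(linear_adu, gamma=2.2):
    """Approximate position on the back-of-camera histogram (0..1)
    for a given linear ADU value."""
    return (linear_adu / FULL_SCALE_ADU) ** (1.0 / gamma)

# My two flats, as measured in PixInsight Statistics:
for adu in (1818, 6453):
    print(f"{adu:5d} ADU -> linear {adu / FULL_SCALE_ADU:.0%}, "
          f"displayed ~{displayed_fraction(adu):.0%}")
```

Even this crude model shows a flat at only 11% of linear full scale landing well past a third of the displayed histogram, which would explain why "50% on the back of the camera" gives such a low ADU in "Statistics".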
So, which method is the correct one for determining correct flat exposure for (my) DSLR?
Anyway, here are the results from BasicCCDParameters, using the longer exposed flats:
So, there are four columns: one for R, one for G, and one for B, plus a fourth, which does not appear to be the average of the three - is it for the luminance channel? Which of these do I use for later calculations?
Assuming the 4th column, I have:
- Gain = 0.913 e-/ADU (so, almost unity gain, as I expected from the D5300 being ISO-less at ISO 200)
- Read noise = 2.594 e-
- Dark current = 0.029 e-/sec
- Full well capacity = 14951.6 e- (if I divide this by the gain, I get 16376 ADU, which is close to 16383, minus rounding errors - so is this expected/correct?)
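As a quick sanity check on that last point (just my own arithmetic on the BasicCCDParameters output, nothing more):

```python
# Sanity check of the BasicCCDParameters 4th-column values:
# full well in electrons divided by gain should give the ADC ceiling.
gain = 0.913           # e-/ADU
full_well_e = 14951.6  # e-

max_adu = full_well_e / gain
print(f"Full well in ADU: {max_adu:.0f}")  # ~16376, vs the 14-bit max of 16383
```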
Now, onto the other questions. What do I do with these numbers to determine the best™ subexposure length for my sky conditions / telescope / DSLR combination?
I have read in many places that the goal is to swamp the read noise by some factor, which, depending on the source, can be anywhere from 5×RN or 10×RN to 3×RN² or 10×RN².
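For what it's worth, the diminishing returns of higher swamp factors become obvious if you work out how much read noise inflates the total noise once the sky background dominates. A rough sketch, using my own measured read noise (the only "formula" here is noise adding in quadrature, with shot noise of a signal S being √S electrons):

```python
# How much extra noise does read noise add once the sky background signal
# reaches Swamp * RN^2 electrons? Noise sources add in quadrature.
import math

RN = 2.594  # my measured read noise in e- (from BasicCCDParameters)

for swamp in (3, 5, 10):
    sky_signal = swamp * RN**2              # background signal, e-
    sky_noise = math.sqrt(sky_signal)       # shot noise of the sky, e-
    total = math.sqrt(sky_noise**2 + RN**2) # sky shot noise + read noise
    penalty = total / sky_noise - 1.0
    print(f"swamp {swamp:2d}x: total noise only {penalty:.1%} above sky-limited")
```

So at 3×RN² the read noise still costs about 15% extra noise, at 10×RN² only about 5% - which seems to match the usual advice, if I am reading it right.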
Quoting Jon Rista - I Need a Primer on Read Noise Calc (ASI1600) - we have:
I would use a slightly different formula. I have a mostly-written article on this, at some point my work will slow down and I'll be able to finish it. Anyway, to account for the conflicting needs of swamping read noise vs. not clipping stars, I advocate getting your signal to somewhere between 3xRN^2 and 10xRN^2. Ideally, the highest background signal you can get is of course better, but you want to balance that against clipped stars. So first off, you will want to calculate two levels, which would be your thresholds. The basic formula is:
DN = (Nread^2 * Swamp / Gain + Offset) * (2^16/2^Bits)
DN = required background signal in 16-bit DN
Nread = read noise in e-
Swamp = swamping factor
Gain = camera gain in e-/ADU
Offset = bias offset in ADU
Bits = ADC bit depth
This formula should work for any camera, not just the ASI1600. So, to calculate your absolute lower limit, the "never go below this" threshold for background sky, use a Swamp factor of 3 (the minimum I recommend going, period, even if you are clipping stars):
DNmin = (1.13e-^2 * 3 / 0.15e-/ADU + 50) * 16 = 1209
To calculate the "ideal" background level, the one you want if you can achieve it, but would forego if you start clipping too many stars or by too much, is a Swamp factor of 10 (this is NOT a maximum, although going much beyond this has diminishing returns in terms of final SNR):
DNideal = (1.13e-^2 * 10 / 0.15e-/ADU + 50) * 16 = 2162
I suspect swamping by 10x at Gain 300 is probably going to be more difficult here. You could see what you would require at Gain 200:
DNideal = (1.3e-^2 * 10 / 0.483e-/ADU + 50) * 16 = 1360
And at Gain 139:
DNideal = (1.55e-^2 * 10 / 1e-/ADU + 50) * 16 = 1185
Now, since the gain is changing here, even though the required ADU count is LOWER at the lower gain settings, you will actually need LONGER subs to get those levels. Up to you to determine where your exposure length threshold is, and factor that into which gain you choose to use.
Now, I don't want to debate the best swamp factor to use; I just would like to know how to use the numbers from my camera. I don't have an offset, so what do I use in the formula? Also, the formula gives me DNmin and DNideal for a particular range of swamp factors, but then what do I do with them? When I am actually taking my lights, should I play with different exposure lengths until the measured background level equals, at a minimum, DNmin - or, better, DNideal? And how/where do I measure this? In an out-of-the-box RAW file, loaded into PixInsight and measured with "Statistics" (maybe after defining a small preview over the background sky)? Or does it have to be a calibrated frame? And if so, calibrated with what - only a dark, only a bias, only a flat, or all of the above?
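Here is my attempt at plugging my own measured numbers into the formula quoted above. Note that I am using 0 as a placeholder for the offset, since I don't know the D5300's real bias offset - I'd appreciate someone checking whether this is even the right way to use the formula:

```python
# Plugging my measured camera values into the quoted formula:
# DN = (Nread^2 * Swamp / Gain + Offset) * (2^16 / 2^Bits)
def required_dn(read_noise, swamp, gain, offset=0, bits=14):
    """Background level, in 16-bit DN, needed to swamp read noise.
    offset=0 is a placeholder -- I don't know my camera's bias offset."""
    return (read_noise**2 * swamp / gain + offset) * (2**16 / 2**bits)

RN, GAIN = 2.594, 0.913  # from BasicCCDParameters, 4th column

dn_min = required_dn(RN, 3, GAIN)     # "never go below this"
dn_ideal = required_dn(RN, 10, GAIN)  # "aim for this if stars allow"
print(f"DNmin   ~ {dn_min:.0f}")
print(f"DNideal ~ {dn_ideal:.0f}")
```

If my arithmetic is right, these come out to roughly 88 and 295 DN on the 16-bit scale - which seems surprisingly low to me, hence my questions about where and on what kind of frame I should actually be measuring the background.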
A few more questions, and then I think I am done for now: what should I normally calibrate my light frames with, before starting my post-processing workflow? So far, I have only calibrated with flats (when I could figure out what exposure to take them at). Darks seem to make things worse, since I have no control over sensor temperature. Is calibration with bias advised for (my) DSLR? If so, just a master bias, or a superbias? Should I take flat-darks?
Thanks in advance to anyone who will shed some light and help me optimize my capturing / calibrating settings.
EDIT: Changed the title, adding "Help Me", so it would seem like a request for help - which it is - rather than a statement (which would seem like I am the one giving a workflow).
Edited by endlessky, 10 August 2020 - 08:14 AM.