
Help Me Optimize My D5300 Capturing and Calibrating Workflow

astrophotography dslr
76 replies to this topic

#1 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 10 August 2020 - 06:35 AM

I have read several threads about subexposure length, calibration, read noise, gain, etc. but I am still confused / at a loss.

 

So far I have been "playing" in digital astrophotography since the end of January of this year.

 

Let's start with the current gear:

 

Mount: Sky-Watcher NEQ6 Pro

Camera: Nikon D5300 - astromodified, with UV/IR cut filter installed

Lenses: kit zoom lenses 18-105mm, 70-300mm, prime 50mm

 

I currently image from my front garden, in a Bortle 5 zone. Estimated sky quality: 19.64 magnitude, as per Clear Outside app.

 

Soon™, I will hopefully have my new and improved, serious™ setup: 80mm f/6 apo triplet, 0.8x flattener/reducer, 60mm f/4 guide-scope, ZWO ASI 224MC guide-camera, Optolong L-Pro 2" light pollution filter.

 

So, after 6 months of playing, winging the settings (1/3 to 1/2 of the back-of-camera histogram) and imaging with awful kit zoom lenses (good for daylight photography, horrible for astrophotography), my current best™ image is a wide field running from the Crescent Nebula to the Elephant Trunk Nebula, taken with the 50mm (stopped down to f/8, because I finally wanted pinpoint stars across the field - I never could get that with the 70-300mm). In preparation for the new setup, I want to take my astrophotography to the next level.

 

That said, I tried using PixInsight to measure my camera sensor specs, using the script BasicCCDParameters. This requires 2 bias frames, 2 dark frames (one 10x longer than the other), 2 flat frames. I took all 6 frames at 200 ISO, since, from my understanding, this is the best ISO for the D5300.

 

For bias, I used the fastest shutter speed available, 1/4000s. For darks, I shot one at 60s and the other at 600s (front covered, eyepiece covered). The problems started with the flats: I could never figure out how long they need to be exposed. Some people say the histogram should be at roughly 50%, when viewed on the back-of-camera display. So I tried this using my PC screen with a white background. For good measure, I also took a 5s exposure, to see the full well capacity and compare it with PixInsight's "Statistics". However, in "Statistics" the 5-second exposure, viewed at 16 bits, showed a mean, median, minimum and maximum of 16383 (as expected, since the D5300 shoots at 14 bits), while the back-of-camera half-histogram flat showed a median value of only 1818 (about 11% of full well). According to another resource for taking flats - Tutorial on how to take proper flats with DSLR - the correct mean / median ADU needs to be half of the full well, when viewed in "Statistics", so in my case about 8200 or thereabouts. This, however, corresponds to a back-of-camera histogram of 75-80%, and even then I could only achieve a median value of 6453, so still falling a little bit short (39% of full well).

 

So, which method is the correct one for determining correct flat exposure for (my) DSLR?

 

Anyway, here are the results from BasicCCDParameters, using the longer exposed flats:

 

[Attached image: BasicCCDParameters - Results.jpg]

 

So, there are 4 columns, one for R, one for G and one for B, plus a 4th one, which appears not to be the average, so is it for the luminance channel? Which of these do I use, for later calculations?

 

Assuming the 4th column, I have

 

- Gain = 0.913 e-/ADU (so, almost unity gain, as I expected, from the D5300 being iso-less at 200 ISO)

- Read noise = 2.594 e-

- Dark current = 0.029 e/sec

- Full well capacity = 14951.6 e (if I divide this by the gain, I get 16376 ADU, which is close to 16383, minus rounding errors, so this is expected/correct?)

 

Now, onto the other questions. What do I do with these numbers to determine the best™ subexposure length for my sky conditions/telescope/DSLR combination?

 

I read in many places that the goal is to swamp the read noise by a factor that, according to the source, can be anywhere from 5*RN or 10*RN to 3*RN^2 or 10*RN^2.

 

Quoting Jon Rista - I Need a Primer on Read Noise Calc (ASI1600) - we have:

 

 

 

I would use a slightly different formula. I have a mostly-written article on this, at some point my work will slow down and I'll be able to finish it. Anyway, to account for the conflicting needs to swamping read noise vs. not clipping stars, I advocate getting your signal to somewhere between 3xRN^2 and 10xRN^2. Ideally, the highest background signal you can get is of course better, but you want to balance that against clipped stars. So first off, you will want to calculate two levels, which would be your threshold. The basic formula is:

 

DN = (Nread^2 * Swamp / Gain + Offset) * (2^16/2^Bits)

 

Where:

 

DN = required background signal in 16-bit DN.

Nread = read noise in e-

Swamp = swamping factor

Gain = camera gain in e-/ADU

Offset = bias offset in ADU

Bits = ADC bit depth

 

This formula should work for any camera, not just the ASI1600. So, to calculate your absolute lower limit, the "never go below this" threshold for background sky, use a Swamp factor of 3 (the minimum I recommend going, period, even if  you are clipping stars):

 

DNmin = (1.13e-^2 * 3 / 0.15e-/ADU + 50) * 16 = 1209

 

To calculate the "ideal" background level, the one you want if you can achieve it, but would forego if you start clipping too many stars or by too much, is a Swamp factor of 10 (this is NOT a maximum, although going much beyond this has diminishing returns in terms of final SNR):

 

DNideal = (1.13e-^2 * 10 / 0.15e-/ADU + 50) * 16 = 2162

 

I suspect swamping by 10x at Gain 300 is probably going to be more difficult here. You could see what you would require at Gain 200:

 

DNideal = (1.3e-^2 * 10 / 0.483e-/ADU + 50) * 16 = 1360

 

And at Gain 139:

 

DNideal = (1.55e-^2 * 10 / 1e-/ADU + 50) * 16 = 1185

 

Now, since the gain is changing here, even though the required ADU count is LOWER at the lower gain settings, you will actually need LONGER subs to get those levels. Up to you to determine where your exposure length threshold is, and factor that into which gain you choose to use.

Now, I am not trying to determine the best swamp factor to use; I just would like to know how to use the numbers from my camera. I don't have an offset, so what do I use in the formula? Also, this formula gives me the DNmin and DNideal for a particular range of swamp factors. But then, what do I do with that? When I am actually taking my lights, should I play with different exposures until the measured background level is equal to - at a minimum - DNmin or - better - DNideal? But how/where do I measure this? In an out-of-the-box RAW file, loaded in PixInsight and measured with "Statistics" (maybe after defining a small preview on the background sky)? Or does it have to be a calibrated frame? And if so, calibrated with what? Only dark? Only bias? Only flat? All of the above?
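As a concrete sketch of the quoted formula (a minimal Python version, not from the original post): the camera values are the ones measured with BasicCCDParameters above, and the 150 ADU offset is simply the 14-bit equivalent of the 600 ADU (16-bit) bias level quoted later in the thread - substitute your own measured master-bias level.

# Minimal sketch of the background-threshold formula quoted above.
# The 150 ADU offset is an assumed bias level; measure your own master bias.

def required_background_dn(read_noise_e, swamp, gain_e_per_adu, offset_adu, adc_bits):
    """Background sky level (16-bit DN) needed to swamp read noise by swamp * RN^2."""
    return (read_noise_e**2 * swamp / gain_e_per_adu + offset_adu) * (2**16 / 2**adc_bits)

read_noise = 2.594   # e-
gain = 0.913         # e-/ADU
offset = 150         # ADU, 14-bit (assumed bias level)
bits = 14

for swamp in (3, 10):
    dn = required_background_dn(read_noise, swamp, gain, offset, bits)
    print(f"swamp {swamp}x RN^2 -> background ~{dn:.0f} DN (16-bit scale)")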

 

A few more questions, and then I think I am done for now: what should I normally calibrate my light frames with, before starting my post-processing workflow? So far, I have only calibrated with flats (when I could figure out what exposure to take them at). Darks seem to make things worse, as I have no control over temperature. Is calibration with bias advised for (my) DSLR? If so, just a master bias or a superbias? Should I do flat-darks?

 

Thanks in advance to anyone who will shed some light and help me optimize my capturing / calibrating settings.

 

Matteo

 

EDIT: Changed the title, adding "Help Me", so it would seem like a request for help - which it is - rather than a statement (which would seem like I am the one giving a workflow).


Edited by endlessky, 10 August 2020 - 08:14 AM.


#2 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 21,109
  • Joined: 27 Oct 2014

Posted 11 August 2020 - 10:27 AM

Subexposure.  As you can see, there are various opinions.  The crucial thing to know is that, once subexposure is in the ballpark, it ceases to be very important.  That role passes to total imaging time.  The most important thing is to shoot more subs.  <smile>  That's what gets you less noise and more detail (from more total photons collected), not subexposure time.   Subexposure becomes a tweak.

 

A 1/3 histogram (stretched, as seen on the back of the camera), perhaps a bit less with this low-read-noise camera, is "good enough".

 

My refinement comes from Chris Woodhouse's fine "The Astrophotography Manual".  It uses a basic range of (linear) average value (or skyfog peak; which one you use does not matter) = 5-10X read noise squared, bias (= offset) corrected.

 

Your read noise is about 3, squared is about 8, 5-10 X would be 40-80 electrons, about 50-100 ADU (14 bit) at ISO 200.  200-400 ADU (16 bit).  ISO 200 is chosen to maximize dynamic range without getting into Nikon strangeness at ISO 100.  Bias is 600 ADU (16 bit).  You need maybe 800-1000 ADU (16 bit) for a light.   I generally go for 1000.  But it's not very important.

 

And then shoot more subs.  <smile>

 

Woodhouse's book explains in more detail, and talks about exceptions and how to deal with them.  Recommended.

 

Flats.  Here the crucial thing is to stay well away from the edges of the range, which can be non-linear.  People translate that into 50%, but getting it exact buys you nothing.  Full well is 64000 ADU (16 bit).  So go for 32000 (16 bit).  The usual problem (rare, and only with a lot of vignetting) is corners that are too dark; a little higher is better than a little lower.
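If you want to check a flat's actual level outside the camera's gamma-stretched histogram, here is a minimal sketch (assuming Python with the rawpy and numpy packages installed; "flat_001.NEF" is a placeholder filename):

# Sketch: measure a flat's level in native 14-bit ADU, independent of the camera's
# back-of-camera histogram. Assumes rawpy + numpy; the filename is a placeholder.
import numpy as np
import rawpy

with rawpy.imread("flat_001.NEF") as raw:
    cfa = raw.raw_image_visible.astype(np.float64)  # undebayered CFA data, 14-bit ADU
    mean14, median14 = cfa.mean(), np.median(cfa)

print(f"mean:   {mean14:7.1f} ADU (14-bit) = {mean14 * 4:8.1f} on the 16-bit scale")
print(f"median: {median14:7.1f} ADU (14-bit) = {median14 * 4:8.1f} on the 16-bit scale")
# Aim for roughly half of full scale: ~8000 ADU (14-bit), i.e. ~32000 (16-bit), per the advice above.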

 

Bottom line.  Do bias (I do 100, since they're so easy) and flats (I do 30).  Maybe darks.  It's easy (PixInsight encourages it) to get ridiculously over complicated here.  For the lights use ISO 200, go for 1/3 back of camera histogram (or a little less) or 800-1000 ADU (16 bit).  Go for 32000 ADU (16 bit) for flats.

 

And shoot more subs.

 

Other important points.

 

Flats don't work well without bias.  It's the math: flats are applied as a multiplicative correction (you divide by the flat rather than subtract it), so the bias pedestal has to be removed first.

 

Few cameras need dark flats, and the 5300 is not one of them.  They can work, but bias frames are far easier.  You just use the shortest exposure.  Watch out for light leaks.

 

Note that your darks are statistically indistinguishable from bias.  The 5300 has low thermal noise.  It's easy with an uncooled camera for darks to do more harm than good.  With my 5500 I only shot darks on warm summer nights, and the results were inconclusive.

 

Bottom line.  Get the basics right.  You need bias and flats.  Darks, maybe.  800-1000 ADU (16bit) for the lights.  32000 ADU for the flats.  Don't get lost in the details.  They're just not important here.

 

Shoot more subs.


Edited by bobzeq25, 11 August 2020 - 10:43 AM.

  • limeyx and endlessky like this

#3 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 11 August 2020 - 11:47 AM

Thank you, Bob! I have already read all that 4 or 5 times and I am still trying to digest it. I'll get back to you and try to break the remaining doubts I might have down into smaller pieces.



#4 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 11 August 2020 - 12:53 PM

Subexposure.  As you can see, there are various opinions.  The crucial thing to know is that, once subexposure is in the ballpark, it ceases to be very important.  That role passes to total imaging time.  The most important thing is to shoot more subs.  <smile>  That's what gets you less noise and more detail (from more total photons collected), not subexposure time.   Subexposure becomes a tweak.

I agree. More total integration time is the most important thing. I just want to know - with numbers - if I am in the ballpark. Unfortunately, I have an engineering mindset and I just have to optimize everything. Well, a true engineer would forego that last 10% of performance if achieving it cost far more than getting the first 90% did. But since playing with numbers is free, I might as well. Equipment aside, better use of the available imaging time is going to produce a better result. So, if I have a finite number of hours to dedicate to one particular session, the only way I can improve is to break those hours down into optimal subframes.

 

It uses a basic range of (linear) average value (or skyfog peak, which one you use does not matter) = 5-10X read noise squared.   Bias (=offset) corrected.

 

Your read noise is about 3, squared is about 8, 5-10 X would be 40-80 electrons, about 50-100 ADU (14 bit) at ISO 200.  200-400 ADU (16 bit).  ISO 200 is chosen to maximize dynamic range without getting into Nikon strangeness at ISO 100.  Bias is 600 ADU (16 bit).  You need maybe 800-1000 ADU (16 bit) for a light.   I generally go for 1000.  But it's not very important.

So, the bias is what Jon calls offset in the post I quoted?

 

If that's the case, let me plug in the numbers (using 5 and 10, instead of 3 and 10, to match your example) - not to check your math, just to see if I am understanding the process and getting the same results.

 

DN = (Nread^2 * Swamp / Gain + Offset) * (2^16/2^Bits)

 

- Gain = 0.913 e-/ADU 

- Read noise = 2.594 e-

- Bias = Offset = 600 ADU (16 bits) = 150 ADU (14 bits)

 

DN_5 = (2.594^2 * 5 / 0.913 + 150) * (2^16 / 2^14) = 747

DN_10 = (2.594^2 * 10 / 0.913 + 150) * (2^16 / 2^14) = 894

 

If I round the read noise to 3 and the gain to 1, I get almost exactly your numbers (780 and 960), so I think I got it (correct me if I didn't, please). So, as you said, my uncalibrated light should read a mean between 800 and 1000. The same light, calibrated with bias, should be 600 lower, so between 200 and 400, right?

 

Woodhouse's book explains in more detail, and talks about exceptions and how to deal with them.  Recommended.

I am reading it. I saw you quote it so many times that I went ahead and got it a while ago.

 

Flats.  Here the crucial thing is to stay well away from the edges, which can be non-linear.  People translate that into 50%, but getting it exact buys you nothing.  Full well is 64000 ADU (16 bit).  So go for 32000 (16 bit).  The usual problem (rare, only with a lot of vignetting) is too dark corners, a little higher is better than a little lower.

Here's where the problems arise and I get lost. If I open a 5-second, overexposed, maxed-out flat in PixInsight and check it with "Statistics", even if I choose 16 bit I get 16383 as the result for mean, median, minimum and maximum (because the camera output is 14 bit). If I expose for 50% on the back of the camera and check it with "Statistics" at 16 bits, I get only about 2000, which is roughly 1/8 of 16383. To get 50% of 16383 (about 8000, which would be x4 = 32000 in 16 bit), the back-of-camera histogram has to be pushed almost all the way to the right (I'd say it's at 75-80%). So, just to know what to do: do I trust the back of the camera, or do I trust PixInsight and go for a true half?

 

Flats don't work well without bias.  Math, resulting from the fact that you multiply by flats, not subtract.

 

Few cameras need dark flats, and the 5300 is not one of them.  They can work, but bias are far easier.  You just use the shortest exposure.  Watch out for light leaks.

 

Note that your darks are statistically indistinguishable from bias.  The 5300 has low thermal noise.  It's easy with an uncooled camera for darks to do more harm than good.  With my 5500 I only shot darks on warm summer nights, and the results were inconclusive.

Ok, I have routinely been doing flats because of dust spots. Possibly the reason they have not been working "out of the box" is that I didn't use bias (I have had to correct the master flat using PixelMath, adding or subtracting an offset, because they were either overcorrecting or undercorrecting - now I see why...). Glad I don't have to do dark-flats. I noticed, too, that my darks have very similar ADU to the bias, even the 600s one, done in my PC room on a hot summer day, with indoor temperature close to 30°C! So I am glad I haven't bothered with darks, and I will continue to skip them. But I will definitely add bias to my workflow. Bias or superbias? I seem to remember that superbias doesn't work well for DSLRs?

 

Bottom line.  Get the basics right.  You need bias and flats.  Darks, maybe.  800-1000 ADU (16bit) for the lights.  32000 ADU for the flats.  Don't get lost in the details.  They're just not important here.

Thank you again. You have been extremely helpful!

 

The most important thing is to shoot more subs.  <smile>

And then shoot more subs.  <smile>

And shoot more subs.

Shoot more subs.

Will definitely do! SMS for the win! grin.gif

 

EDIT: forgot to add an example. My best™ photo so far: 3 hours total integration time, with the 50mm stopped down to f/8, 4-minute subexposures, Crescent Nebula to Elephant Trunk Nebula and everything in between. I just measured the Statistics of some subframes, and guess what? Mean ADU around 750-820. So, even when I didn't know what to do, the 1/4 to 1/3 histogram saved my behind! lol.gif


Edited by endlessky, 11 August 2020 - 01:00 PM.

  • bobzeq25 likes this

#5 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 21,109
  • Joined: 27 Oct 2014

Posted 11 August 2020 - 01:14 PM

Superbias is a tweak.  Some like it, some don't.  I'd leave it aside for now.  Shooting 100 bias is pretty good.

 

Just realize that how you break your total imaging time into pieces is a tradeoff, and not all that important.  Get your exposure too short, with too many subs, and you'll have some more read noise, which will blur out the dimmest detail somewhat.  Too long, with too few subs, and you'll lose some dynamic range; star color won't be as good.

 

Just a tradeoff between dim detail and dynamic range, and one that might well be managed differently for different targets.  For things like globular clusters I'll go more toward 5 X RN^2, since dim detail is less of an issue than star color.  For nebulae, more toward 10X.

 

One reason why people disagree about the "optimum" subexposure.  The concept is inherently flawed.


Edited by bobzeq25, 11 August 2020 - 01:16 PM.

  • endlessky likes this

#6 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 11 August 2020 - 01:38 PM

If it wasn't complicated, we wouldn't be doing it, right?! lol.gif

 

Jokes aside, I get what you are saying. It is a compromise between pulling out faint detail and not saturating stars. I guess the best of both worlds would be to image the star field separately, with a lower swamp factor. Oh, if only - clear - nights could be infinite and everlasting, right?


Edited by endlessky, 11 August 2020 - 01:39 PM.


#7 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 11 August 2020 - 02:44 PM

As for the four columns: the D5300 uses a sensor with a Bayer matrix, so in a 2x2 pixel block there are two green-filtered pixels, one red pixel and one blue pixel. The order is RGGB, so C0=R, C1=G1, C2=G2, C3=B. 
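To make that mapping concrete, here is a small sketch (assuming the RGGB layout described above, and numpy as the only dependency) that splits a raw CFA mosaic into the four planes corresponding to the script's C0..C3 columns:

# Sketch: split an undebayered RGGB mosaic into the four CFA planes matching the
# C0..C3 columns reported by BasicCCDParameters (C0=R, C1=G1, C2=G2, C3=B).
# `mosaic` can be any 2D numpy array holding raw CFA data.
import numpy as np

def split_rggb(mosaic):
    r  = mosaic[0::2, 0::2]  # C0: red
    g1 = mosaic[0::2, 1::2]  # C1: first green
    g2 = mosaic[1::2, 0::2]  # C2: second green
    b  = mosaic[1::2, 1::2]  # C3: blue
    return r, g1, g2, b

# Example with a synthetic frame:
mosaic = np.random.randint(0, 16384, size=(400, 600))
for name, plane in zip(("R", "G1", "G2", "B"), split_rggb(mosaic)):
    print(name, float(np.median(plane)))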


Edited by fmeschia, 11 August 2020 - 02:45 PM.

  • bobzeq25 and endlessky like this

#8 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 11 August 2020 - 02:54 PM

As for the four columns: the D5300 uses a sensor with a Bayer matrix, so in a 2x2 pixel block there are two green-filtered pixels, one red pixel and one blue pixel. The order is RGGB, so C0=R, C1=G1, C2=G2, C3=B. 

Thank you very much for this! So the column headers are actually misleading, since they are marked R, G, B and -. That also explains why the two central columns have quite similar values. So, I guess I should take an average of the 4 columns for my calculations - since when I measure the ADU from the lights, it will be on the Bayered (undebayered) data.



#9 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 11 August 2020 - 02:56 PM

If you’re interested, I have created a nomogram (analog calculator with pencil and paper) to estimate the minimum subexposure time based on a number of parameters. You can find the nomogram and details here: https://www.cloudyni...oise/?p=9355427

 

Or just ask here if you have questions.


  • bobzeq25 and endlessky like this

#10 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 11 August 2020 - 02:58 PM

Thank you, taking a look right now - after I understand what a nomogram is, that is!

 

On a completely unrelated topic, by any chance, judging by your name, are you Italian / of Italian origins?



#11 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 11 August 2020 - 02:59 PM

Yes I am Italian (from Asti), emigrated to the US a few years ago.



#12 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 11 August 2020 - 03:05 PM

What a small world!



#13 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 11 August 2020 - 03:23 PM

Indeed it is!



#14 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 21,109
  • Joined: 27 Oct 2014

Posted 11 August 2020 - 04:27 PM

If it wasn't complicated, we wouldn't be doing it, right?! lol.gif

 

Jokes aside, I get what you are saying. It is a compromise between pulling out faint detail and not saturating stars. I guess the best of both worlds would be to image the star field separately, with a lower swamp factor. Oh, if only - clear - nights could be infinite and everlasting, right?

Yes, that would be good.  The processing would be "interesting".  <smile>  Woodhouse talks about methods to do that.

 

Data acquisition is no big deal.  Don't think in terms of one session.  Think about a project, where you gather data over multiple nights, using platesolving to get pointed in the same place, so you don't have to trim much of the edges.  That's been how I've done some of my very best images, since the old man doesn't stay up till dawn.  <smile>

 

Some people have been known to add data together from multiple years.


  • limeyx likes this

#15 asanmax

asanmax

    Vendor - DSLR Modifications

  • *****
  • Vendors
  • Posts: 431
  • Joined: 17 Sep 2018
  • Loc: Vancouver BC

Posted 12 August 2020 - 11:37 AM

 

Bottom line.  Do bias (I do 100, since they're so easy) and flats (I do 30).  Maybe darks.  It's easy (PixInsight encourages it) to get ridiculously over complicated here.  For the lights use ISO 200, go for 1/3 back of camera histogram (or a little less) or 800-1000 ADU (16 bit).  Go for 32000 ADU (16 bit) for flats.

 

 

I use the D5300 as one of my imaging cameras and the ISO number of 200 that you mentioned got me thinking of it.

I've always been imaging at ISO 800 and noticed that noise is not a big problem in post processing and stretching.

Wondering if you could share some examples of single and stacked images at ISO 200 and higher to compare. I would really appreciate it.

I just need to understand the benefits of going as low as ISO 200.

Thanks!



#16 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 12 August 2020 - 11:54 AM

I use the D5300 as one of my imaging cameras and the ISO number of 200 that you mentioned got me thinking of it.

I've always been imaging at ISO 800 and noticed that noise is not a big problem in post processing and stretching.

Wondering if you could share some examples of single and stacked images at ISO 200 and higher to compare. I would really appreciate it.

I just need to understand the benefits of going as low as ISO 200.

Thanks!

By going to ISO 200 you trade a bit of read noise (2.7 e- at ISO 200 vs. 2.2 e- at ISO 800) for a big gain in dynamic range (2 stops, i.e. 4x well capacity).
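As a rough back-of-envelope illustration of that trade (illustrative numbers only, assuming the usable well is ADC-limited and scales inversely with the 4x gain difference between the two ISOs):

# Back-of-envelope dynamic-range comparison between ISO 200 and ISO 800 on a 14-bit DSLR.
# Read-noise values are the ones quoted above; the effective well depths are assumptions.
import math

settings = {
    "ISO 200": {"read_noise_e": 2.7, "well_e": 15000},
    "ISO 800": {"read_noise_e": 2.2, "well_e": 15000 / 4},  # 2 stops less headroom
}

for iso, s in settings.items():
    dr_stops = math.log2(s["well_e"] / s["read_noise_e"])
    print(f"{iso}: ~{dr_stops:.1f} stops of dynamic range")
# ISO 200 comes out roughly 1.7-2 stops ahead, at the price of ~0.5 e- extra read noise.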



#17 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 21,109
  • Joined: 27 Oct 2014

Posted 12 August 2020 - 11:57 AM

I use the D5300 as one of my imaging cameras and the ISO number of 200 that you mentioned got me thinking of it.

I've always been imaging at ISO 800 and noticed that noise is not a big problem in post processing and stretching.

Wondering if you could share some examples of single and stacked images at ISO 200 and higher to compare. I would really appreciate it.

I just need to understand the benefits of going as low as ISO 200.

Thanks!

(+1 to the above, which came in while I was posting.  It's saying the same thing in a few words)

 

This is _really_ simple.  It's not so much about noise as it is about dynamic range, the difference you can record between dim things and bright things.

 

When you increase ISO you increase the gain of an internal amplifier.  In approximate but illustrative numbers, at ISO 200 one photon coming in will create one electron output.  At ISO 800 one photon creates 4 electrons.

 

But the number of electrons is limited to a max.  At ISO 800 you run into the max sooner.  You've reduced the dynamic range, and stars will become saturated or clipped sooner.  When that happens they lose color; once you've maxed out R, G, and B, everything turns pure white.  Also, the bright stuff will all look the same - you'll lose detail in bright areas.

 

All this is explained in more detail, with pictures, in this excellent book.

 

https://www.amazon.c.../dp/0999470906/

 

A possible wrinkle is you do need to expose your subs longer, and, if your mount is marginal, that could cause issues.  One reason why a good mount is so important.  But one hour of total imaging time still is one hour of total imaging time, your image will have the same signal to noise ratio.

 

Here's a website which says the same thing somewhat differently, and recommends 200 for a D5300/5500/5600.

 

https://dslr-astroph...trophotography/


Edited by bobzeq25, 12 August 2020 - 11:59 AM.

  • asanmax likes this

#18 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 12 August 2020 - 12:13 PM

When you increase ISO you increase the gain of an internal amplifier.  In approximate but illustrative numbers, at ISO 200 one photon coming in will create one electron output.  At ISO 800 one photon creates 4 electrons.

 

But the number of electrons is limited to a max.  At ISO 800 you run into the max sooner.  You've reduced the dynamic range, and stars will become saturated or clipped sooner.  When that happens they lose color; once you've maxed out R, G, and B, everything turns pure white.  Also, the bright stuff will all look the same - you'll lose detail in bright areas.

 

Not to nitpick, but the number of electrons produced by one photon interaction event is not determined by the ISO setting but only by the laws of physics, and it is always at most 1 in our detectors (and less than 1 – the QE figure – as an average). The ISO setting determines how much the analog electrical signal read from each pixel is amplified before being converted into a digital number. So the bottleneck and limiting factor is actually the input swing of the A/D converter (I was incorrect myself talking about “well capacity” – the capacity of the well associated with each photosite doesn’t change, it’s the converter downstream of it that restricts how much of that capacity will be usable).

 

I apologize to bobzeq25 for being pedantic on this, because the gist of his explanation is correct. I cared about setting the record straight on the number of electrons only because the read noise figure is usually quoted in electrons, so it’s important to understand that the number of photon-generated electrons (which we need to consider if we want to “swamp” read noise) can’t possibly be changed by anything we can do as users.


Edited by fmeschia, 12 August 2020 - 12:22 PM.

  • asanmax likes this

#19 asanmax

asanmax

    Vendor - DSLR Modifications

  • *****
  • Vendors
  • Posts: 431
  • Joined: 17 Sep 2018
  • Loc: Vancouver BC

Posted 12 August 2020 - 12:43 PM

Thanks bobzeq25 and fmeschia. Well explained.

 

I have another question though. When I image at ISO 800, I have to do 6 to 8 minute exposures to get a good histogram at 1/3, from Bortle 8 with a CLS-CCD filter.

Now, if I drop the ISO to 200, I would have to increase the exposure time dramatically, to the point where it's no longer acceptable: I can't really do more than 12-minute exposures, since a slight guiding issue may cause too many bad subs.

Any suggestions?



#20 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 12 August 2020 - 12:53 PM

If you’re interested, I have created a nomogram (analog calculator with pencil and paper) to estimate the minimum subexposure time based on a number of parameters. You can find the nomogram and details here: https://www.cloudyni...oise/?p=9355427

 

Or just ask here if you have questions.

Hi Francesco, thank you for your chart. I took a look at it and I now understand how it works. Unfortunately, I have no idea how to calculate the Delta-lambda / lambda term. My D5300 is astromodified: I removed the original filter and replaced it with a UV/IR cut filter (manually cut from a 2" round filter). If it helps for determining the value of that missing term, this is the filter in question: PrimaLuceLab 77mm.

 

I would like to ask you another question: your chart is for read noise to be less than (or equal to) 5% of the total noise.

 

How does this compare to the swamping factors?

 

What I mean is that with your chart I get an exposure value in seconds, starting from some known values, the first one being the sky brightness in magnitudes per square arcsec (something I can only determine by trusting the Clear Outside app, as I don't have a sky quality meter). Instead, using the "swamp factor method", I get - according to the swamp factor chosen - a range of minimum and maximum ADU values that I can immediately check as the images are downloaded by my acquisition software. Using KStars/EKOS, I can do a test exposure and, when the image is displayed, the program shows the mean and median values in ADU under the histogram (I just need to check whether these values are exactly the same as the ones shown in "Statistics" in PixInsight, and if they are, I am good to go). So, when I do the test exposure, I read the values, and if I am off from the "optimal" 800-1000 range (5*RN^2 or 10*RN^2), I know immediately whether I need to raise or lower the exposure, and I keep taking test shots until I get it good enough. So, what does a 5% residual noise correspond to, in terms of swamp factor chosen?

 

Thanks again,

Matteo



#21 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 21,109
  • Joined: 27 Oct 2014

Posted 12 August 2020 - 12:54 PM

Thanks bobzeq25 and fmeschia. Well explained.

 

I have another question though. When I image at ISO 800, I have to do 6 to 8 minute exposures to get a good histogram at 1/3, from Bortle 8 with a CLS-CCD filter.

Now, if I drop the ISO to 200, I would have to increase the exposure time dramatically, to the point where it's no longer acceptable: I can't really do more than 12-minute exposures, since a slight guiding issue may cause too many bad subs.

Any suggestions?

Yep.  Use 800.

 

Better yet, ditch the CLS <smile>.  One gathers dust on my shelf.  Bortle 7.

 

But that's another discussion.  Use 800.


  • asanmax likes this

#22 asanmax

asanmax

    Vendor - DSLR Modifications

  • *****
  • Vendors
  • Posts: 431
  • Joined: 17 Sep 2018
  • Loc: Vancouver BC

Posted 12 August 2020 - 01:15 PM

Yep.  Use 800.

 

Better yet, ditch the CLS <smile>.  One gathers dust on my shelf.  Bortle 7.

 

But that's another discussion.  Use 800.

Thanks, I will probably try going as low as ISO 200, with 10-12 min exposures, just to see what it looks like.



#23 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 12 August 2020 - 01:31 PM

Hi Francesco, thank you for your chart. I took a look at it and I now understand how it works. Unfortunately, I have no idea how to calculate the Delta-lambda / lambda term. My D5300 is astromodified: I removed the original filter and replaced it with a UV/IR cut filter (manually cut from a 2" round filter). If it helps for determining the value of that missing term, this is the filter in question: PrimaLuceLab 77mm.

That’s an educated guess. I couldn’t find any information about the precise spectral sensitivity of the D5300, but I found several examples for other Nikon DSLRs (D5100, D300, etc.). I also found transmission charts for the OEM UV-IR cut filter used in several Nikon DSLRs, and for the LifePixel UV-IR cut that my modified camera uses. From all of this, I integrated numerically the values of the ∆λ/λ factor for the three colors of my H-alpha modified DSLR:

 

R 0.15

G 0.21

B 0.16

 

Since one wants to swamp noise even in the weakest channel, I use 0.15 as my “safe” ∆λ/λ factor.

All these are ballpark figures: I have no guarantee (or ways to verify) that my curves are accurate, nor that the spectral distribution of the sky is uniform (actually, it certainly isn’t). But they take you in the vicinity of where you want to be.

 

How does this compare to the swamping factors?

What I mean is that with your chart I get an exposure value in seconds, starting from some known values, the first one being the sky brightness in magnitudes per square arcsec (something I can only determine by trusting the Clear Outside app, as I don't have a sky quality meter). Instead, using the "swamp factor method", I get - according to the swamp factor chosen - a range of minimum and maximum ADU values that I can immediately check as the images are downloaded by my acquisition software. Using KStars/EKOS, I can do a test exposure and, when the image is displayed, the program shows the mean and median values in ADU under the histogram (I just need to check whether these values are exactly the same as the ones shown in "Statistics" in PixInsight, and if they are, I am good to go). So, when I do the test exposure, I read the values, and if I am off from the "optimal" 800-1000 range (5*RN^2 or 10*RN^2), I know immediately whether I need to raise or lower the exposure, and I keep taking test shots until I get it good enough. So, what does a 5% residual noise correspond to, in terms of swamp factor chosen?

My 5% figure corresponds to sky signal (in electrons) > 10*RN^2 (9.76 to be exact).
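For anyone wondering where 9.76 comes from: requiring read noise to inflate the total noise by no more than 5% over the sky-noise-only case gives the factor directly, as this quick check (a sketch of the algebra, not from the original post) shows:

# Why "read noise adds no more than 5% to total noise" works out to ~9.76 * RN^2:
#   sqrt(sky_e + RN^2) <= 1.05 * sqrt(sky_e)  =>  sky_e >= RN^2 / (1.05**2 - 1)
swamp = 1 / (1.05**2 - 1)
print(f"required sky signal: {swamp:.2f} * RN^2")  # ~9.76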


Edited by fmeschia, 12 August 2020 - 01:33 PM.


#24 endlessky

endlessky

    Messenger

  • -----
  • topic starter
  • Posts: 457
  • Joined: 24 May 2020
  • Loc: Padova, Italy

Posted 12 August 2020 - 02:24 PM

That’s an educated guess. I couldn’t find any information about the precise spectral sensitivity of the D5300, but I found several examples for other Nikon DSLRs (D5100, D300, etc.). I also found transmission charts for the OEM UV-IR cut filter used in several Nikon DSLRs, and for the LifePixel UV-IR cut that my modified camera uses. From all of this, I integrated numerically the values of the ∆λ/λ factor for the three colors of my H-alpha modified DSLR:

 

R 0.15

G 0.21

B 0.16

 

Since one wants to swamp noise even in the weakest channel, I use 0.15 as my “safe” ∆λ/λ factor.

All these are ballpark figures: I have no guarantee (or ways to verify) that my curves are accurate, nor that the spectral distribution of the sky is uniform (actually, it certainly isn’t). But they take you in the vicinity of where you want to be.

 

My 5% figure corresponds to sky signal (in electrons) > 10*RN^2 (9.76 to be exact).

Ok, thanks for the values. I tried with the estimated sky quality of 19.64 given by the Clear Outside app for my zone.

Other values: f/8 for f-ratio, 0.15 for ∆λ/λ, 3.9 micron for pixel size. That gives me 0.9 photons/s; then I reach 55% quantum efficiency (from the photonstophotos website), cross the e-/s line at 0.49, go to my read noise value of 2.6 (average of the 4 columns from my first post) and finally reach an exposure time of 2.2 minutes, or 2min 12s.
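The last step of that reading can be sanity-checked in a couple of lines, using the ~9.76 swamp factor quoted above and the values from this paragraph (a sketch, not part of the nomogram itself):

# Sanity check of the nomogram's endpoint: minimum sub length so the sky signal
# reaches ~9.76 * RN^2 (the 5%-extra-noise criterion mentioned above).
sky_rate = 0.49        # e-/s per pixel, from the nomogram reading above
read_noise = 2.6       # e-, average of the four CFA channels
swamp = 9.76

t_min = swamp * read_noise**2 / sky_rate
print(f"minimum sub length: {t_min:.0f} s (~{t_min / 60:.1f} min)")  # ~135 s, ~2.2 min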

 

The mean / median ADU value measured from a 4min exposure that I took that night with the 50mm at f/8 gives me 750-820, which should correspond to a 5*RN^2 swamp factor from the formulas in my other posts. So, there's probably something wrong either in the initial magnitude value or in some other value that I wrongly assumed.

 

Anyway, I'll try again in my next usable session to measure some mean ADU values at different exposures, maybe in different areas of the sky, to see how much they vary for a given exposure time.

 

Am I right in assuming that if I start imaging when an object is about 30° above the horizon and the exposure needed for 1000 ADU is t = x seconds, the exposure needed to obtain the same ADU as the object rises higher (and moves into areas of the sky less influenced by light pollution) will be t > x seconds? It would make sense, since in darker skies the required exposure for the same swamp factor is usually a lot longer than the exposure needed in more light-polluted skies.
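For a rough sense of scale (example numbers only, not a measurement from this thread), the required exposure for the same background ADU scales as 10^(0.4 * Δm) with sky brightness in mag/arcsec²:

# Rough scaling of required exposure with sky brightness: for the same background
# ADU, t2/t1 ~ 10**(0.4 * (m2 - m1)), with m in mag/arcsec^2 (darker sky = larger m).
# The 0.5 mag difference below is only an example, not a measured value.
def exposure_scale(delta_mag):
    return 10 ** (0.4 * delta_mag)

print(f"a sky 0.5 mag/arcsec^2 darker needs ~{exposure_scale(0.5):.2f}x the exposure")  # ~1.58x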

 

If that's the case, since I am close to my rig while I image (no remote possibility, yet), it might be worth adjusting the exposure throughout the session to account for this "darker sky effect".

 

EDIT: corrected a word - the sentence above used to say "since I am close to my right".


Edited by endlessky, 13 August 2020 - 03:09 AM.


#25 fmeschia

fmeschia

    Apollo

  • *****
  • Posts: 1,454
  • Joined: 20 May 2016
  • Loc: Mountain View, CA

Posted 12 August 2020 - 02:43 PM

The mean / median ADU value measured from a 4min exposure that I took that night with the 50mm at f/8 gives me 750-820, which should correspond to a 5*RN^2 swamp factor from the formulas in my other posts. So, there's probably something wrong either in the initial magnitude value or in some other value that I wrongly assumed.

Are those ADU values downstream of calibration, or are those from the raw files?

 

 

 

Am I right in assuming that if I start imaging when an object is about 30° above the horizon and the exposure needed for 1000 ADU is t = x seconds, the exposure needed to obtain the same ADU as the object rises higher (and moves into areas of the sky less influenced by light pollution) will be t > x seconds? It would make sense, since in darker skies the required exposure for the same swamp factor is usually a lot longer than the exposure needed in more light-polluted skies.

If that's the case, since I am close to my right while I image (no remote possibility, yet), it might be worth adjusting the exposure throughout the session to account for this "darker sky effect".

Probably, but that’s a level of precision that may be moot in front of the inherent accuracies in estimating ∆λ/λ, sky radiance, etc.

You just need to be in the ballpark.



