
Can we talk seriously about flat/dark/bias frames?


#26 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 16 January 2015 - 12:52 PM

 

Darks can just be taken with the camera itself. Lens cap on, preferably in a very dark room, or covered by something to block out any potential stray light. Personally, at least during winter, I actually stick my 5D III or 7D in the fridge or freezer, after normalizing the temperature closer to the real ambient I had when imaging, and let it rip. ;P It's the only way I can get similar temperatures, and when the fridge/freezer door is closed, the lights go off and it's nice and dark in there. I would let the camera normalize in temperature for about 30 minutes, then do a warmup cycle to get the sensor temperature normalized (take a few dark frames that you'll just throw away)...then take your actual darks.

Haha, you should see the look on my Wife's face, when she comes home and finds a 30' USB cable running from the fridge into my office. "No honey, dinner will have to wait for awhile. The light in the fridge will ruin everything". On a serious note, thanks for all the info, Jon and others. I did not know that Bias and Flats were related. I ASSumed bias frames were only associated with darks, as they are frequently mentioned in the same breath. Good info! Looks like another thread to be pinned.

Regards, Kyle

 

 

LOL. Yeah, just wait till you're running darks for multiple cameras at once! :p (I tried that once...BYE seemed fine with it at first...then it decided to start barfing all over the place.) 

 

Anyway, I am actually working on some of my own master darks again. I'm doing -6°C, 0°C, 6°C and maybe 12°C. I am going to be taking 10x20m darks for each, which will be integrated into the four masters. I'm choosing 20 minutes because with my current equipment, I will never be taking subs longer than that. I'm choosing to do only 10x, because with the darks being longer than any of the subs they will be calibrating, they will always have to be scaled down, which should suppress the random read noise much like stacking.

 

My primary goal with doing this, for now at least, is to use one of these master darks for cosmetic correction in PixInsight. I may or may not calibrate the lights with them...I guess it would depend on whether I'm seeing amplifier glow. I do have some glow, but I am generally not seeing it until I get to really high ISOs...maybe that's just because of the cold, and the overall thermal contribution is so low that it takes ISO 6400 and up to really make it visible. Anyway, cosmetic correction can help with removal of hot pixels. I'm spending a good deal of time at my dark site on dithering. The other night I spent about 50 minutes on the dithering process alone, which could have been better spent getting more lights (as much as 12 additional 240s lights, to be exact, a non-trivial number, and a non-trivial increase in total photons captured.) I am thinking I may start dithering every 3 frames (something BYE can do), so that I can still get some of the benefit...but use cosmetic correction on top of that to help deal with the hot pixels. 

 

Anyway, I thought I'd detail another option for you, the cosmetic correction path. All that really does is use a scaled master dark to identify the hot and cold pixels in the frame, and algorithmically correct them rather than correct them via dark subtraction. 
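To illustrate, here is roughly what that looks like in numpy. This is my own sketch of the idea, not PixInsight's actual CosmeticCorrection code; the sigma threshold and the 3x3 neighbor median are assumptions:

```python
import numpy as np

def neighbor_median(img):
    """Median of each pixel's 8 neighbors (edges padded by replication)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    shifts = [p[dy:dy + h, dx:dx + w]
              for dy in (0, 1, 2) for dx in (0, 1, 2)
              if not (dy == 1 and dx == 1)]
    return np.median(np.stack(shifts), axis=0)

def cosmetic_correct(light, master_dark, k=5.0):
    """Flag pixels that are abnormally bright in the (scaled) master dark,
    then replace them in the light with the median of their neighbors."""
    med = np.median(master_dark)
    sigma = 1.4826 * np.median(np.abs(master_dark - med))  # robust sigma via MAD
    hot = master_dark > med + k * sigma
    fixed = light.astype(float).copy()
    fixed[hot] = neighbor_median(fixed)[hot]
    return fixed
```

The key point is that the master dark is only used as a map of defective pixels; no dark signal is ever subtracted from the light.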



#27 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 16 January 2015 - 01:00 PM

I've got some default flats which I use if I'm too lazy to make some out in the field. These can be really handy and take care of a majority of vignetting. It does nothing for dust motes, of course. Now should the "generic" flat have a dust mote in it, then you're actually adding a dust mote to your image. So, be sure any generic flats you have are totally free of dust/debris.

Mark

 

I've ended up adding motes to my images at times. Before I started cleaning my sensor and filters and everything else off before each imaging session, I had these mobile dust motes. They would move around during my imaging sessions. I'd take flats, and the motes would be in the flats, but they would have moved again since taking my lights. I'd end up with multiple pitch black dust motes, then on top of that after calibration with flats I'd have these little glowing blobs (because when an extra dust mote is divided out, it lightens the area, sometimes by a LOT if the dust mote cast a dark shadow.) It was really annoying, especially when one of those would end up right on top of my DSO (especially with galaxies). 

 

So, yes, best to keep your motes immobile while imaging and taking flats. This could be a reason to not move your scope, tighten a t-shirt over the aperture, and take your flats before you bring anything in (if you regularly bring the scope in each night). Moving the scope might cause the motes to move, which would just end up being problematic. 

 

IMO, the best thing for me has simply been being diligent about cleaning. I don't know if it is the shutter, or the mirror, or what, but my motes...the ones that aren't actually things stuck on the sensor (which are thoroughly fixed in place and extremely faint, so never a problem)...seem to move around. So eliminating my motes with a blast of air (or the light touch of one of my LensPen brushes to any optics/filters) before each session leaves me with just the vignetting to deal with. 



#28 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 01:22 AM

I thought I would share the various calibration frames, to put some visuals behind the words and theory. First, biases. Below I have a single bias frame, the non-normalized & winsorized sigma clipped 100-frame master bias, and the superbias I generated from the master:

 

wT0RmHb.jpg

 

rgHGOKA.jpg

 

v6AIShe.jpg

 

A superbias is an artificial bias, based on your master bias, that has had the random noise component removed, leaving behind the fixed bias signal. You can see, compared to the superbias, how much random noise my 100-frame master bias still had. The strong column structure and non-uniform brightness across the frame are much easier to recognize in the superbias. I have not used a superbias before, however now that I'm trying to mosaic some of my Orion images, I'm finding that both the bias signal and the dark current signal are going to need to be subtracted, as I can see the effects of both, such as the non-uniformity of the bias signal, in my lights and my final integrations (particularly the band at the bottom and the glow in the corner of the superbias.) 
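PixInsight's Superbias tool uses a multiscale technique, but the core idea (keep the column-to-column pattern, discard per-pixel random noise) can be approximated very crudely with a per-column median. A conceptual sketch only, not the actual algorithm:

```python
import numpy as np

def crude_superbias(master_bias):
    """Collapse each column to its median, then broadcast back to a
    full frame: the fixed column pattern survives, while the residual
    random noise (which varies down each column) is discarded."""
    col = np.median(master_bias, axis=0)          # one value per column
    return np.broadcast_to(col, master_bias.shape).copy()
```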


  • Ohan Smit and newman like this

#29 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 01:35 AM

Here is a single dark frame, a 10x master dark frame created from 20-minute subs, and a calibrated master dark frame:

 

qULwjgC.jpg

 

hfZGE4o.jpg

 

JZECTyW.jpg

 

The difference with the calibrated dark should be quite clear. The vertical banding of the bias signal has been removed, and the dark has dropped in median level. The nature of the amplifier glow at the right edge is better revealed in the calibrated dark. Now, if you were calibrating lights without flats, you could simply subtract an uncalibrated master dark from each light, and that would be sufficient to fully calibrate the lights. The master dark includes the bias signal, so calibrating lights with just a master dark will take care of both the thermal signal as well as the bias signal. That would only work if you had ideally matched the temperature of your dark frames to the temperature of your light frames, such that dark scaling was not required. 

 

If you do plan to use dark scaling, then you would want to first calibrate everything by subtracting the bias, then scale the master dark, then subtract the calibrated and scaled master dark from the lights. Similarly, if you are going to calibrate with flats, you want to subtract the bias from the master flat before dividing it into the lights.
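Put together, the calibration order described above can be sketched like this (my own variable names and flat normalization; a sketch of the arithmetic, not PixInsight's ImageCalibration):

```python
import numpy as np

def calibrate_light(light, master_bias, master_dark, master_flat,
                    t_light, t_dark):
    """Subtract bias from everything, scale the bias-subtracted dark by
    the exposure-time ratio, subtract it, then divide out the flat."""
    dark_scale = t_light / t_dark                   # e.g. 240 / 1200 = 0.2
    thermal = (master_dark - master_bias) * dark_scale
    flat = master_flat - master_bias                # skip if flat is pre-calibrated
    flat = flat / np.mean(flat)                     # normalize so levels are preserved
    return (light - master_bias - thermal) / flat
```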

 


  • Ohan Smit, newman and ks__observer like this

#30 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 02:06 AM

Here is a single flat frame, calibrated with master bias:

 

3TKwbY6.jpg

 

The signal strength here is very high, near the upper limits of my 5D III. I have done an exaggerated stretch on this, even though it's a flat (and has tons of signal) to enhance the field artifacts. The image is a little noisy, which is the reason we stack flats as well. Here is a 20x stack of the flat frames (also exaggerated):

 

RziwHTC.jpg

 

You should be able to see some faint dust motes in there as well. These are my sensor's fixed motes...they don't ever seem to change, unlike the particulate that sometimes gets in there. The actual master flat looks like this:

 

YZ4bSlu.jpg

 

Now, this flat was generated from calibrated frames. All that needs to be done to apply it is to divide each light by the master flat. 


  • Ohan Smit, newman and JonnnyFive like this

#31 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 03:08 AM

Finally, an example of what it looks like to calibrate with these master calibration frames. First off, a single original light frame:

 

lizRjDx.jpg

 

This frame is airglow limited, or at least mostly so, from a dark site of ~21.4mag/sq". I know it's mostly airglow limited because of the brownish background. As I start to become more light pollution limited, the background becomes grayer instead of brown. So this is a very good light frame, taken under almost the darkest skies I think I could find within 100 miles. Now, calibrated by subtracting the superbias:

 

faUet8j.jpg

 

You are probably not going to see any difference. I did measure the background sky level in both the original light and the post-bias calibrated light. The difference in median red level was 61 ADU, which was dead-on for the median of the superbias itself (63 ADU red level). Mathematically the calibration with the superbias could be detected, but not so much with the naked eye (even on a stretched image). Now, the above image was calibrated by subtracting the bias-subtracted master dark:

 

D6Sd1Ws.jpg

 

Again, not much of a difference. You should be able to find the hot pixels in the previous frame...and note their absence in this frame. Additionally, the amp glow was clearly removed (although possibly overdone...there is an amp-glow shaped dark spot there now, which doesn't look quite correct to me, given how much I've been working with data from this region lately.) Finally, here is the bias and dark subtracted image after having the master flat divided out (new unlinked screen stretch on this one, since flat calibration lightens the image):

 

F3eAbb4.jpg

 

Huge difference. The entire vignetted cast of the previous image is gone, and the field, which previously appeared largely unpopulated, actually shows some dust and nebular structure. Bias and darks didn't change much, mainly small-scale cosmetic stuff. Flats made a HUGE difference. Hence the reason I say, if you calibrate, calibrate with at least flats. 

 

One last version. An aggressive stretch:

 

Voj05v7.jpg

 

This reveals two things. First, it shows how much structure there really was in this region, and how much even a single frame captured. More importantly, it shows that the amp glow subtraction was indeed overdone. I did most of the calibration manually via pixel math, and my guess, given how much the amp glow decimated that one area, is that I simply forgot to scale the master dark. That is easy enough to do, if you're interested in how it works. Simply divide the exposure time of your light by the exposure time of the dark, in my case 240/1200. That gives me a scale factor of 0.2. Multiply the master dark by that scale factor before subtracting it from the light frames:

 

Q0Racfs.jpg

 

Then subtract the dark, divide the flat, and you should end up with the proper calibration:

 

GsG8X12.jpg


Edited by Jon Rista, 17 January 2015 - 03:39 PM.

  • Ohan Smit, newman, Merk and 2 others like this

#32 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 04:00 AM

Wow this is the best thread I have seen on this topic.  Sadly I have another question regarding dark flats.  If I have my camera set to take a sequence of LRGB, and I take a dark after each of the lights in the sequence, how is the flat dark different than a dark?  I am under the impression that I should take my light flats at the same exposure, temperature, focus; through each filter, so what am I missing?  I assume something basic, but it seems to me that if my flats are directly related to my lights and my darks are too, then I am not sure of the value of dark flats.  Does this make sense or am I that far off in wonderland?

 

You're a bit off in wonderland. ;) Flats are all about the field, not the noise. As such, matching exposure or temperature doesn't really matter. What does matter is that the imaging train remains identical, that the orientation of all the parts in the imaging train (including the camera) remains identical, and that focus remains identical. Flats should be bright enough that they function properly. Dim flats will probably over-correct. 

 

I am guessing that if you have been taking flats at the same duration as your lights, you must have been taking them at night, against the starry sky? That's probably not going to help you much. Flats should be illuminated, and evenly so. Personally I take my flats by pointing my lens and camera at a flat white, evenly illuminated wall in my house. I use whatever exposure is necessary to get a bright flat, and temperature does not matter. The one thing that may matter is ISO...I always take my flats at the same ISO, although even that may not be necessary. The key is even illumination. You could use a light box or illuminated panel; those will probably give you the best results. You can also take a thin white t-shirt, stretch it taut over the aperture, point at a bright, evenly toned (and preferably fairly neutrally toned) area of twilight sky, and take flats that way. Whatever you do, try to get your average pixel level fairly high. At least 2/3rds illumination, but be careful not to clip any pixels. Clipped pixels will result in improper calibration. Bright, but not clipped. 
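A quick sanity check on a candidate flat, along the lines of "bright but not clipped" (the 2/3 fill and clipping thresholds here are my own rough choices, not a standard):

```python
import numpy as np

def flat_ok(flat, full_scale=65535, min_fill=2 / 3, clip_margin=0.98):
    """True if the flat's mean is above ~2/3 of full scale and no
    pixel is close enough to full scale to be clipping."""
    bright = flat.mean() >= min_fill * full_scale
    clipped = (flat >= clip_margin * full_scale).any()
    return bright and not clipped
```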

 

See above for visual examples of what a flat should look like, and how it affects the lights.


  • newman likes this

#33 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 431
  • Joined: 05 Jan 2014

Posted 17 January 2015 - 12:34 PM

Jon. This is totally awesome. This is by far the best explanation on this subject I've found. We should put this up on a website somewhere as a proper essay. Do you have one?



#34 newman

newman

    Vostok 1

  • -----
  • Posts: 148
  • Joined: 06 Dec 2010
  • Loc: Virginia

Posted 17 January 2015 - 12:42 PM

 

Wow this is the best thread I have seen on this topic.  Sadly I have another question regarding dark flats.  If I have my camera set to take a sequence of LRGB, and I take a dark after each of the lights in the sequence, how is the flat dark different than a dark?  I am under the impression that I should take my light flats at the same exposure, temperature, focus; through each filter, so what am I missing?  I assume something basic, but it seems to me that if my flats are directly related to my lights and my darks are too than I am not sure of the value of dark flats.  Does this make sense or am I that far off in wonderland?

 

You're a bit off in wonderland. ;) Flats are all about the field, not the noise. As such, matching exposure or temperature doesn't really matter. What does matter is that the imaging train remains identical, that the orientation of all the parts in the imaging train (including the camera) remains identical, and that focus remains identical. Flats should be bright enough that they function properly. Dim flats will probably over-correct. 

 

I am guessing that if you have been taking flats at the same duration as your lights, you must have been taking them at night, against the starry sky? That's probably not going to help you much. Flats should be illuminated, and evenly so. Personally I take my flats by pointing my lens and camera at a flat white, evenly illuminated wall in my house. I use whatever exposure is necessary to get a bright flat, and temperature does not matter. The one thing that may matter is ISO...I always take my flats at the same ISO, although even that may not be necessary. The key is even illumination. You could use a light box or illuminated panel; those will probably give you the best results. You can also take a thin white t-shirt, stretch it taut over the aperture, point at a bright, evenly toned (and preferably fairly neutrally toned) area of twilight sky, and take flats that way. Whatever you do, try to get your average pixel level fairly high. At least 2/3rds illumination, but be careful not to clip any pixels. Clipped pixels will result in improper calibration. Bright, but not clipped. 

 

See above for visual examples of what a flat should look like, and how it affects the lights.

 

 

 

Jon,

 

Thanks for the explanation on light flats: no need to have the same exposure time as the lights themselves, just the right amount of illumination.  That helps a lot.  The visuals you posted of the frames above, with the walk-through explanation, were incredibly useful.  Your wealth of knowledge is impressive.  I am spending quite a bit of time in wonderland as I research this whole endeavor...but that's fine by me, I think it's all fascinating.  You and several others like footbag, josh smith, david aulte, to name just a few, have been incredibly helpful.  The whole CN community is invaluable to a newbie like me.  Can't wait to see the mosaic you are working on when it's complete.


Edited by newman, 17 January 2015 - 12:49 PM.


#35 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 01:04 PM

Jon. This is totally awesome. This is by far the best explanation on this subject I've found. We should put this up on a website somewhere as a proper essay. Do you have one?

 

I'm building a personal astro site. It's taking a while..juggling that with astro, astro processing, regular work, side work, and all the standard duties of life....but I'll be sure to include a blog section and put this on there. :)


  • whwang likes this

#36 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13129
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 17 January 2015 - 01:56 PM

Jon:

 

Excellent series of images showing the effects of the various frames!  

 

There's one more thing that might help.  The effect of the flats is huge and obvious.  The effect of the darks and bias frames is less so, especially in the unstretched originals.  I think the need for those would be more clear after your "aggressive stretch".  Would it be possible to just apply the flats, and then apply the same stretch as you did in your final calibrated image, without the darks or bias?  Comparing that to your image above would show how much they affected the final outcome.

 

-Dan



#37 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 02:25 PM

@Dan: I can try, once I get a free minute here. The subtraction of the bias is never really going to be visible...it's got an average level around 63 ADU, while the image has an average level between 3000-4000 ADU (16-bit). That is a huge difference. The subtraction of the bias isn't really meant to improve the image data, either. Its primary purpose is to baseline everything, so that you CAN do things like dark scaling. The effect of the darks should be more apparent. I may have to re-save my images as PNG, so the hot pixels in the bias-only calibrated image can be seen, and so the effect of the dark on those hot pixels is clearer. JPEG compression kind of killed that. 

 

One other thing. Subtraction with the dark will probably actually make things noisier, since outside of the hot pixels and amp glow, you're subtracting random from random, which tends to increase the StdDev. If you don't have much in the way of hot pixels, and don't have amp glow, it really is better to just use dithering rather than darks because of that.
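The "subtracting random from random" effect is easy to demonstrate with synthetic numbers (a single uncombined dark for clarity; stacking many darks shrinks, but never eliminates, the added term):

```python
import math
import numpy as np

rng = np.random.default_rng(42)
light_rn = rng.normal(0.0, 3.0, 100_000)   # read noise in a light frame
dark_rn = rng.normal(0.0, 3.0, 100_000)    # independent read noise in a dark

after = light_rn - dark_rn
# independent noises add in quadrature: std grows from 3 to about 3*sqrt(2)
print(after.std(), 3.0 * math.sqrt(2))
```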



#38 Tonk

Tonk

    Cosmos

  • *****
  • Posts: 8828
  • Joined: 19 Aug 2004
  • Loc: Leeds, UK, 54N

Posted 17 January 2015 - 03:19 PM

One other thing. Subtraction with the dark will probably actually make things noisier, since outside of the hot pixels and amp glow, your subtracting random from random, which tends to increase the StdDev. If you don't have much in the way of hot pixels, and don't have amp glow, it really is better to just use dithering rather than darks because of that.


Or create a master dark with a very low random noise component. Those that favour vast dark libraries could attempt this approach - I've done it in the past (before dithering turned up) with master darks made from ~100 individual darks and it works very well.



PS - you need to fix this back in post #31

Then subtract the dark, divide the light, and you should end up with the proper calibration:


Edited by Tonk, 17 January 2015 - 03:22 PM.


#39 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 03:45 PM

True, you could take LOTS of dark frames for your masters. I think that, if you're taking long darks to start with for the purposes of dark scaling, that ends up becoming impractical. I'm currently taking 20 minute darks. If I wanted 100 of them, that is around 35 hours of exposure time, at least (factoring in the need to download subs, the total time to complete taking 100 20-minute subs could be many hours longer than that). I'm having a difficult enough time maintaining consistent exposures with my DSLR in the freezer...dark frame temps are ranging from 2C to 6C, which is a bit more of a swing than I wanted. It's different if you have a regulated CCD, which certainly makes it a lot easier.

 

There is another issue with using huge numbers of subs. At least with PixInsight, generating a master bias out of 100 frames, for example, is extremely memory heavy. The ImageIntegration process has to load all of the frames into memory. I've been using 25-30 gigs of memory the last couple of days integrating biases...I have 16 gigs of physical. This has basically given me a reason to finally break down and upgrade to 32gigs of much, much faster memory...because it takes FOR-EVER to integrate these things. It kills my system at the same time...beyond about 50 frames, the system spends an exorbitant amount of time paging data out to swap (which is on a 300mb/s SSD even!!). So, there are practicalities involved in stacking LOTS of subs (particularly if they are RAWs...PI seems to handle FITs files a bit more efficiently...and maybe it handles XISF even better, so one might have the option of pre-converting RAWs to FITs before doing huge integrations.) 

 

If you're scaling your master dark down enough for calibration, that also helps suppress random noise. That could be an argument for taking 10-15x very long darks, say 30, 45, or 60 minutes depending on how long your subs usually are...if you're doing NB, you might want to go with 60 minute or even longer darks. Then you would always be scaling down a fair amount, and random read noise (which is generally constant on readout) should suppress quite nicely.


Edited by Jon Rista, 17 January 2015 - 03:52 PM.


#40 Tonk

Tonk

    Cosmos

  • *****
  • Posts: 8828
  • Joined: 19 Aug 2004
  • Loc: Leeds, UK, 54N

Posted 17 January 2015 - 03:56 PM

I'm currently taking 20 minute darks. If I wanted 100 of them, ...


Is it not a function of total integration time and not number of subs???



Thinking back a bit, I was making large-stack master darks back in 2005, when I was doing it the hard way with a very hot-pixel-challenged Canon 10D. Frankly, I wonder now how many of us doing very early digital astrophotography stuck at it - the 10D was only Canon's 3rd DSLR and the first that was actually practical for getting reasonable astro pics (the 2 earlier models were the D30 and D60 - both far too noisy for astro - I used a borrowed D60 for a while, urrgh).

Edited by Tonk, 17 January 2015 - 04:02 PM.


#41 Tonk

Tonk

    Cosmos

  • *****
  • Posts: 8828
  • Joined: 19 Aug 2004
  • Loc: Leeds, UK, 54N

Posted 17 January 2015 - 04:09 PM

The ImageIntegration process has to load all of the frames into memory.


yeah - PI has poor memory usage algorithms! they really should look at the Photoshop plug-in API manual to see how image memory segmenting should be done for a much lower memory footprint. Even lowly DSS does a better job memory wise. Heck Images Plus was processing my mega stacks on vintage Dell desktops 10 years ago without getting anywhere near running out of memory (however processing times are another story!).

Edited by Tonk, 17 January 2015 - 04:11 PM.


#42 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 04:20 PM

I'm not sure how total integration time applies to darks. With lights, combining subs averages down the read noise and also strengthens the signal. So, when you stretch, it's as if you had taken one long exposure.

 

Outside of dark scaling, which is a bit of a sidebar...darks are just darks. You aren't really stretching them. My 20 minute darks have gobs of random noise in them. Stacking 10, 15, 20 of them doesn't really change that fact much. Given the testing done by Vicent Peris (he literally stacked up to 2000 bias frames), I think to really have an impact on random noise in darks, it's more a matter of sub count than total integration time...but, I could be wrong about that. I think, at least given all the experimentation I've done over the last couple of days, that taking a few very long darks then scaling them down (which is maybe the same thing as taking hundreds of very short darks and scaling them up? Bit skeptical...but I haven't tried it yet...) is a very effective means of reducing the random noise in the master dark relative to the lights. 



#43 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 04:25 PM

 

The ImageIntegration process has to load all of the frames into memory.


yeah - PI has poor memory usage algorithms! they really should look at the Photoshop plug-in API manual to see how image memory segmenting should be done for a much lower memory footprint. Even lowly DSS does a better job memory wise. Heck Images Plus was processing my mega stacks on vintage Dell desktops 10 years ago without getting anywhere near running out of memory (however processing times are another story!).

 

 

Yeah, memory management seems a bit brute force in PI's Integration module.

 

DSS sometimes manages memory well, but it has more than a few problems of its own. It's 32-bit, and that seems to severely limit it. For example, I can only 2x drizzle very small regions of my images, and I cannot 3x drizzle anything. I have to create file lists, close DSS, then open it again, load the file list, and immediately integrate to avoid memory problems when I'm integrating more than around 50 lights or creating a new master bias. If I don't do these things, and often on seemingly random occasions, I get the dreaded "blank error dialog", which according to a rather obscure page on DSS' site means there was a memory issue (why the guy can't actually put words into that dialog so it's not just a gigantic unknown is beyond me...  :confused:).

 

DSS memory issues were part of the reason I moved to PI. PI may be a bit brutish, but I've never seen it fail. Even if it needs 40 gigs of memory (it did once, when I was generating a new master bias, new master flat, and integrating 206 light frames on a Pleiades image), it still succeeds. :p



#44 whwang

whwang

    Mercury-Atlas

  • *****
  • Posts: 2622
  • Joined: 20 Mar 2013

Posted 17 January 2015 - 04:42 PM

I'm not sure how total integration time applies to darks. With lights, combining subs averages down the read noise and also strengthens the signal. So, when you stretch, it's as if you had taken one long exposure.

 

Hi Jon,

 

Just like photo electrons, dark electrons follow Poisson statistics, i.e., the error (in number of electrons) is the square root of the total number of electrons.  We can stack multiple shorter exposures, or take one very long exposure.  When the total integration time is the same in both cases, the result will be the same, if readout noise is not important.  When readout noise comes into the equation, one long exposure will work better than several short exposures.  Everything in the dark world works the same as in the light world.

 

One thing we need to realize is that, for the same length of total exposure, averaging multiple exposures (dark or light, doesn't matter) does not decrease readout noise.  The effect is the opposite!  It increases the contribution of readout noise.  If readout noise is what we really care, we should make as few readouts as possible.

 

Think about it: if we take one long exposure, we have 1 unit of signal and 1 unit of readout noise.  Here, the S/N is 1.  If we divide the exposure time into N sections and take N exposures, the strength of the signal in each of them decreases to 1/N, but the readout noise in each of them is still 1 unit.  In the averaged image of the N exposures, readout noise decreases to 1/sqrt(N).  As a result, the S/N is (1/N) divided by (1/sqrt(N)), which is 1/sqrt(N), worse than the 1-readout case.
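This argument can be checked numerically (units arbitrary; only read noise is counted, as in the argument above):

```python
import math

def snr_after_split(n, total_signal=1.0, read_noise=0.1):
    """S/N of the average of n exposures that together collect
    total_signal, counting only read noise: each sub carries
    total_signal/n, and averaging shrinks read noise by sqrt(n)."""
    per_sub_signal = total_signal / n
    averaged_read_noise = read_noise / math.sqrt(n)
    return per_sub_signal / averaged_read_noise

# splitting one exposure into 16 costs a factor of 1/sqrt(16) = 1/4 in S/N
ratio = snr_after_split(16) / snr_after_split(1)
```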

 

Therefore, if the sensor and dark behavior is linear enough to allow for dark scaling, a good thing to do is to take longer dark exposures and then combine and scale them.  Astronomers sometimes use this trick to minimize the impact of readout noise when they take dark exposures.

 

Cheers,

Wei-Hao



#45 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 05:34 PM

@Wei-Hao: Aye, that is my understanding (and hence my skepticism in #43). I think I said read noise in my post; I meant photon shot noise. Apologies. (I rarely sleep because of chronic insomnia, and I'm on a LONG stint of not sleeping right now (five or six days without sleep I think...it's all blurring together, I'm sick, pounding headache, yadda yadda...at least I have astrophotography to keep me busy at night.)) I think Tonk is referring to random noise in the dark current signal, which is a valid point, but I would be more concerned about the read noise. 

 

I should have run the math before (it doesn't really lie), but I'm still taking calibration subs, doing integrations, and working on a web site project. O_o I totally agree with the math though, which, in more explicit terms for SNR, is actually this (for those interested):

SNR = (S * n)/SQRT(n * (S + R^2))

If we take 20 darks with 200 ADU and 200 darks with 20 ADU, at 3e- RN the math works out to be:

(200 * 20)/SQRT(20 * (200 + 3^2)) = 4000/SQRT(4180) = 61.87:1 (Read noise contributes 180 ADU to noise generating signal)
(20 * 200)/SQRT(200 * (20 + 3^2)) = 4000/SQRT(5800) = 52.52:1 (Read noise contributes 1800 ADU to noise generating signal!!!!)

To Tonk's point, dark current does accumulate over time, and it has its own noise. The formula is then:

SNR = (S * n) / SQRT(n * (S + D + R^2))

If we have 0.02e-/s dark current, and our 200ADU subs are 20 minutes and our 20ADU subs are 2 minutes, then we end up with:

(200 * 20)/SQRT(20 * (200 + 24 + 3^2)) = 4000/SQRT(4660) = 58.60:1 (Read noise and dark current contribute 660 ADU to noise generating signal)
(20 * 200)/SQRT(200 * (20 + 2.4 + 3^2)) = 4000/SQRT(6280) = 50.48:1 (Read noise and dark current contribute 2280 ADU to noise generating signal!!!!)

Despite the higher dark current accumulation, we're still well ahead of the curve by taking longer darks and stacking.
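The arithmetic above can be reproduced with a short Python sketch (the function name `dark_stack_snr` is my own; the post mixes e- and ADU, so I'm assuming a gain of 1 so the two are interchangeable):

```python
import math

def dark_stack_snr(signal_adu, n_frames, read_noise=3.0, dark_adu=0.0):
    """SNR of n_frames stacked frames, per the formula in the post:
    SNR = (S * n) / sqrt(n * (S + D + R^2))
    where S is per-frame signal, D per-frame dark current, R read noise."""
    total_signal = signal_adu * n_frames
    total_noise = math.sqrt(n_frames * (signal_adu + dark_adu + read_noise**2))
    return total_signal / total_noise

# Shot-noise only: 20 x 200 ADU frames vs 200 x 20 ADU frames
print(round(dark_stack_snr(200, 20), 2))                 # ~61.87
print(round(dark_stack_snr(20, 200), 2))                 # ~52.52

# Adding 0.02 e-/s dark current: 20-min frames (24 ADU) vs 2-min frames (2.4 ADU)
print(round(dark_stack_snr(200, 20, dark_adu=24.0), 2))  # ~58.60
print(round(dark_stack_snr(20, 200, dark_adu=2.4), 2))   # ~50.48
```

Same total signal (4000 ADU) in every case; the only difference is how many read-noise penalties you pay, which is the whole argument for fewer, longer darks.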

 

(And sorry for repeating you Wei-Hao...I think it helps if people SEE the math work out in a practical example, especially if they are not math savvy. ;))


  • newman likes this

#46 FiremanDan

FiremanDan

    Aurora

  • *****
  • Posts: 4686
  • Joined: 11 Apr 2014
  • Loc: Virginia

Posted 17 January 2015 - 08:11 PM

Great info! I haven't really bothered with bias. That will change as of today. Two questions. I capture with nebulosity but I don't think I can capture at the fastest camera settings. I guess I could shoot RAW and convert to FITs. 
Second question is I set up in different spots all the time. Usually in pretty remote spots. What is the best way to shoot flats? I have been leaving my t-ring attached, go home and put my scope in front of a screen with a white blank page. So this should keep the optical path more or less the same. Is there a trick that I can do while I am out in the field. I have heard of shooting the sky for a flat, and I did that the other day following an imaging session. But it seems anytime I do set up before stars start to come out there are thin cloud bands. 
I'd really like to step up my calibration frames, but it's still very new to me. 



#47 mostlyemptyspace

mostlyemptyspace

    Messenger

  • *****
  • topic starter
  • Posts: 431
  • Joined: 05 Jan 2014

Posted 17 January 2015 - 08:32 PM

Great info! I haven't really bothered with bias. That will change as of today. Two questions. I capture with nebulosity but I don't think I can capture at the fastest camera settings. I guess I could shoot RAW and convert to FITs. 
Second question is I set up in different spots all the time. Usually in pretty remote spots. What is the best way to shoot flats? I have been leaving my t-ring attached, go home and put my scope in front of a screen with a white blank page. So this should keep the optical path more or less the same. Is there a trick that I can do while I am out in the field. I have heard of shooting the sky for a flat, and I did that the other day following an imaging session. But it seems anytime I do set up before stars start to come out there are thin cloud bands. 
I'd really like to step up my calibration frames, but it's still very new to me. 

 

Since it's important not to change the optical train or even rotate anything between flats and lights, you gotta do it on the spot. With my C8 I just put the white T-shirt over the dew shield, opened a blank Notepad.exe on my laptop and held it up to the T-shirt while the images were shooting. In AV mode, the exposures are maybe 0.5", so you can get all your flats in maybe a minute.

 

When just using my DSLR and EF lens, since the optical train is the same, I can make flats anywhere. So at home I just put a Kleenex over the lens, then opened up a white screen (like one of those flashlight apps) and held it over the lens. I just made sure it was at the same focal length and focus, so I just went outside, focused on a star manually, and left it there.



#48 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22887
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 17 January 2015 - 10:41 PM

So long as the optical train doesn't change, you can always take the scope off the mount, point it at a white wall illuminated by bright white light, and take your flats that way. The orientation of the components in the imaging train cannot change....the orientation of the imaging train to whatever is external to the scope CAN change. So, if you don't want to stick around at your dark site holding up a laptop to the scope, you don't have to.

 

One thing to note...broad spectrum lighting. You want to use a light that emits light across the spectrum, preferably as evenly as possible. An LCD screen is probably not going to be a broad enough spectrum to get the best results, and LED screens are definitely not.


  • Tonk likes this

#49 FiremanDan

FiremanDan

    Aurora

  • *****
  • Posts: 4686
  • Joined: 11 Apr 2014
  • Loc: Virginia

Posted 18 January 2015 - 07:38 AM

Hmmm. What about a fluorescent light? I have one in my kitchen. On my C6N the current focuser has threads on the outside for the t-ring, then internal threads for the 1.25 inch adapter for eyepieces. So when I use the eyepiece I never have to remove the T-ring. It's threaded down pretty tight. So.... even if I remove the DSLR it SHOULD end up back in the same spot. I am going to take a ton of bias frames today I think. 

Also a question about MASTER flats/darks/bias. I use Nebulosity. It seems that the program makes the masters for you? You go to batch process, select your flats, darks, bias, and lights. Then in the lights field you select which of darks, flats, bias frames you want. Then it processes for awhile, you have a new list of files usually with the prefix PPROC_*filename* Am I missing a step to create master calibration frames? 



#50 newman

newman

    Vostok 1

  • -----
  • Posts: 148
  • Joined: 06 Dec 2010
  • Loc: Virginia

Posted 18 January 2015 - 09:19 AM

Jon, others...

 

What are your thoughts on the flat field generators that are out there, like Geoptik and others?  Obviously more money than a computer screen and a t-shirt, but with its uniform light and the ability to use it on your scope so that you don't need to move anything at all, and possibly create motions that shift the dust motes...could it make a difference?  I have seen them run for about $200 or so.  Just not sure if it's money well spent.  

Thanks.



