
Imaging settings and best practice

Tags: astrophotography, beginner, imaging
24 replies to this topic

#1 thelosttrek (topic starter, San Diego, CA)

Posted 23 May 2020 - 04:53 PM

Hey all,

 

I imaged my first target with all new equipment and of course using all of this stuff for the first time is leading to a lot of questions.

 

My current setup

Mount: Sky-Watcher EQ6-R Pro

Imaging Camera: ZWO ASI2600MC Pro Color

Guide Camera: ZWO ASI120MM-S Mono

Telescope: Stellarvue SVX080T-3SV

Guide Scope: Stellarvue F050G

Power System: Pegasus Astro UPBV2

Control Unit: NUC

 

Software

Image Acquisition: APT

Guiding: PHD2

 

My target for the evening was M51, the Whirlpool Galaxy. I set my camera to the following settings not because I thought they were best, but because I'm experimenting and don't know the optimal settings yet.

 

Camera Settings

Gain: 100

Offset: 50

USB Limit: 40

Cooling: I didn't realize you have to set this in APT, so I was not cooling.

Sensor temp: 76-77 F

 

I'm still learning PixInsight, so I quickly ran the BatchPreprocessing script and stretched with the ScreenTransferFunction to get an idea of what my data looked like. For the most part, I ran everything at the defaults.

 

Calibration Frames

Lights: 16 x 300s (75-78 F)

Darks: 10 x 300s (roughly same temp as lights 75-78 F)

Flats: 20 x 1/70s (77F)

Bias: 20 x 1/4s (77F)
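
For reference, here is roughly the arithmetic these frames feed into, whether BatchPreprocessing or another stacker does it. A minimal numpy sketch, assuming simple mean-combined masters and ignoring the rejection, scaling, and pedestal details real tools add:

    import numpy as np

    def make_master(frames):
        # Mean-combine same-sized frames into a master; real stackers
        # use median or sigma-clipped rejection instead of a plain mean.
        return np.mean(np.stack(frames), axis=0)

    def calibrate(light, master_dark, master_flat, master_bias):
        # The dark (matched in exposure and temperature) removes both
        # dark current and the bias offset from the light.
        light_cal = light - master_dark
        # The flat must have its own offset removed (bias or dark-flats),
        # then be normalized so its mean is 1.0 ...
        flat = master_flat - master_bias
        flat = flat / np.mean(flat)
        # ... because the light is *divided* by it, not subtracted.
        return light_cal / flat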

 

With all of that said, I have some rather odd results.

 

Here's my first stretched image with channels linked in ScreenTransferFunction. Can someone explain all the red? Is that light pollution?

[Image: initial process, channels linked]
 
Here's that same image stretched with channels unlinked in ScreenTransferFunction. My initial thought is that this image is full of noise.
[Image: initial process, channels unlinked]

 

Here's a close-up showing all of that noise.

[Image: close-up of sensor noise]
 
I noticed a lot less noise before stacking. Here's an example of a single light frame. While the noise is there, it looks a lot less "mottled."
[Image: single light frame]
 

Here's a single dark frame stretched for reference.

[Image: single dark frame]
 

Here's a single bias frame stretched for reference.

[Image: single bias frame]
 

Here's a single flat frame stretched for reference.

[Image: single flat frame, close-up]

 

The first thing I noticed about my flat frame is that it looks nothing like my flats from a DSLR. My DSLR flats are gray/white and very light in color; this flat is very dark and full of mottled noise. Could this be causing the poor quality in my image? What settings do you usually use for dedicated cooled cameras?

 

I'd like to experiment with all of this for my next imaging session and would appreciate any advice you might have. Overall, I'm looking for direction on why I'm getting so much noise and how to reduce it. Is gain causing this? Is it similar to increasing ISO? What should my camera settings be for cooling temperature, gain, and offset? Do my bias and dark frames look OK? How about the overall stacking process? Does some of the poor image quality come from the BatchPreprocessing script? I attempted to use DeepSkyStacker, but my results were a challenge to stretch in Photoshop. I'm a little lost, but I do feel like I'm getting closer. Any thoughts on how I can improve my next imaging session?



#2 SoDaKAstroNut (Black Hills, South Dakota)

Posted 23 May 2020 - 05:06 PM

You can skip bias for OSC.

 

Gain increase will add noise to your subs; use it judiciously.

 

The flat is underexposed - you should see white/light gray center, dust motes, and vignetting in corners/edges.


Edited by SoDaKAstroNut, 23 May 2020 - 05:09 PM.


#3 jdupton (Central Texas, USA)

Posted 23 May 2020 - 05:14 PM

Burt (thelosttrek),

 

   I have two quick observations.

 

   The first is that the Red color in your first (linked STF) image is not due to anything you did in processing. It comes from having a weak Red content in your Flat Light source. It is of no consequence and will be naturally removed as you move on through your processing. It will be removed if you do any of the following processes while still linear -- Background Neutralization, Dynamic Background Extraction, or Color Calibration.

 

   I see an oddity in your Flat frame. In addition to looking strange for a Flat frame, it is in color. The Flat frame should be in raw FITS format when it is used. Maybe the flats were captured differently from the Lights, Darks, and Bias. You should check that.

 

 

John


Edited by jdupton, 23 May 2020 - 05:15 PM.


#4 bobzeq25

Posted 23 May 2020 - 05:20 PM

Most importantly, you're not doing badly, at all.

 

IMPORTANT.  Do _not_ omit bias; an alternative is dark flats.  Without one of those, flat correction does not work properly.  The underlying reason is that flats are at a very different level, and you divide by them, not subtract.

 

You'll see a lot of strange ideas on the Internet.  Short posts are not the best tools for understanding.  The antidote is good books.  I highly recommend this.

 

https://www.amazon.c.../dp/0999470906/

 

I agree with jdupton.  Flats should not be "color" (debayered).  It can help to do the calibration process in PI manually before using BPP, so you'll know what's going on.  Another book, _extremely_ useful.  Shows the workflow, explains why.

 

https://www.amazon.c...5/dp/3319976885

 

One shot color cameras have some noise.  Uncooled OSC have more.  80 minutes is not a lot of total imaging time, which is your main tool for reducing noise.  Later, you'll be trying out "dithering", another tool.  No need for the complication right now.

 

It's hard to tell anything from Cloudy Nights jpgs.  If you upload one of each type of frame direct from the camera (unstretched, untouched <smile>) to something like Dropbox and PM me, I'll take a look at them.

 

Below is a stack from my site, outside a big city.  Ignore the fact that your background is red and mine is blue; it's completely irrelevant.  That's light pollution.  Then the stack after one pass of AutomaticBackgroundExtractor (ABE).  You _must_ change the default correction of "none" to subtraction.

 

Magic.  <grin>  Unlinking the STF is a bandaid, which does nothing at all to your data.  ABE actually reduces the effects of light pollution (some).

 

While you're getting your feet on the ground, it can be useful to image something simple, like a star cluster.  Easier to diagnose issues.  There will be issues.  <smile>

 

[Image: ABE example, before]

 

[Image: ABE example, after]


Edited by bobzeq25, 23 May 2020 - 05:32 PM.


#5 WadeH237 (Snohomish, WA)

Posted 23 May 2020 - 05:54 PM

Here's my first stretched image with channels linked in ScreenTransferFunction. Can someone explain all the red? Is that light pollution?

Strong color casts in the background, like your red or Bob's blue, are not in any way related to light pollution - unless that's what your sky looks like to the naked eye.

 

I also disagree with John that it's a flat fielding artifact.  I would bet that it's happening when the images are integrated.  As part of the integration, PixInsight will need to normalize all of your images so that their background, signal and noise levels are comparable.  What I see happen fairly often is that one of the color channels normalizes to a different background level than the others, and it takes very little difference to have a dramatic effect.  Here are the possible cases:

 

  • If the blue background is a little higher, you get a strong blue cast.
  • If the red background is a little higher, you get a strong red cast.
  • If the green background is a little higher, you get a strong green cast.
  • If the blue background is a little weaker, you get a strong yellow cast.
  • If the red background is a little weaker, you get a strong cyan cast.
  • If the green background is a little weaker, you get a strong magenta cast.

 

You can see this for yourself by splitting the channels into red, green and blue and then running the Statistics process on each of them.  Look at the median pixel value for the image and compare them.  You'll see how they map to the color casts that I described above.
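
If you would rather not open three Statistics windows, the same diagnostic takes a few lines of numpy. A sketch assuming the integration is saved as an RGB FITS with channels on the first axis and that astropy is available; the filename is just a placeholder:

    import numpy as np
    from astropy.io import fits

    # Placeholder filename; color FITS images typically load as (3, H, W).
    rgb = fits.getdata("integration_rgb.fits").astype(np.float64)
    for name, channel in zip("RGB", rgb):
        print(f"{name} median: {np.median(channel):.6f}")

Whichever channel prints the highest median is the one driving the cast, per the list above.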

 

The difference between linking and unlinking the channels in the STF autostretch is whether STF applies an identical stretch to all channels (where you see the color cast), or whether it independently calculates what it thinks is the right stretch for each channel individually (where you will see little or no color cast).
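
To make that concrete, here is a simplified sketch of linked versus unlinked stretching. It is not PixInsight's exact autostretch (which also clips shadows based on the MAD), but it uses the same midtones transfer function; "linked" just means all channels share one midtones value:

    import numpy as np

    def mtf(m, x):
        # Midtones transfer function: maps 0 -> 0, 1 -> 1, and m -> 0.5.
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def solve_midtones(median, target=0.25):
        # Choose m so that mtf(m, median) == target.
        return median * (1.0 - target) / (target + median - 2.0 * target * median)

    def autostretch(rgb, target=0.25, linked=True):
        # rgb: (3, H, W) linear data in [0, 1].
        medians = np.array([np.median(c) for c in rgb])
        if linked:
            # One shared stretch: per-channel background offsets
            # survive as a visible color cast.
            medians[:] = medians.mean()
        return np.stack([mtf(solve_midtones(med, target), c)
                         for med, c in zip(medians, rgb)])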

 

If you want to prevent these casts, it is possible to configure ImageIntegration to not do the normalization, but this is not a good idea because it reduces the ability of integration to detect outlier (noisy) pixels.  The color cast is a product of the math used to integrate the images and is completely harmless.  It can be fixed with ABE, DBE, ColorCalibration or even manually with PixelMath.

 

True light pollution will appear as large gradients and subtle color casts, but most often it just acts like fog, destroying contrast in the faint details while letting high-contrast details (like stars) show up fine.



#6 nimitz69 (a barrier island 18 miles south of Cocoa Beach)

Posted 23 May 2020 - 05:54 PM

In addition to everything else suggested, I doubt you really need 5 min subs unless you are imaging from a very dark site. I would suspect that 2-3 min subs are more than sufficient to swamp read noise given your sky conditions, equipment and target.

Total integration time trumps everything, so that's what you want to max out. 80 mins is not a lot of time. Depending on your LP you'll want 4-6 hrs minimum; 6-10 hrs would be even better.
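
The square root behind that advice: stacking N equal subs improves background SNR roughly as sqrt(N), so total time is what buys smoothness. A toy comparison in Python:

    import math

    sub_minutes = 5
    for total_hours in (80 / 60, 4.0, 6.0, 10.0):
        n = total_hours * 60 / sub_minutes
        print(f"{total_hours:4.1f} h -> {math.sqrt(n):4.1f}x the SNR of one sub")
    # 80 min is ~4x a single sub; 10 h is ~11x.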

#7 jdupton (Central Texas, USA)

Posted 23 May 2020 - 07:02 PM

Wade & Burt,

 

I also disagree with John that it's a flat fielding artifact.  I would bet that it's happening when the images are integrated.  As part of the integration, PixInsight will need to normalize all of your images so that their background, signal and noise levels are comparable.  What I see happen fairly often is that one of the color channels normalizes to a different background level than the others, and it takes very little difference to have a dramatic effect.  Here are the possible cases:

   This is very easy to test.

 

   Take one frame after calibration and DeBayer it. Look for the color cast. If it is not in the single frame, then Wade is correct and it may be an artifact of Image Integration. However, if you take that single DeBayered frame and find the color cast, it came from Image Calibration and not integration (since it has not yet been integrated).

 

   Prior to Calibration and DeBayering, the Light frame image and the Flat frame image each have a certain mean. You can look at this for a raw uncalibrated frame with the Statistics process. Look at the mean of all raw pixels. Do the same for a flat. The Flat frame histogram will likely show multiple peaks. In Burt's example, I would expect the Red channel to be the lower ADU peak. We cannot see which colors correspond to which peaks until after DeBayering but it is worth just looking at the histograms and statistics for each frame type.
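
You can pull those per-color statistics straight out of an undebayered frame with array slicing. A sketch assuming an RGGB Bayer pattern (the ASI2600 is usually reported as RGGB, but check your driver):

    import numpy as np

    def cfa_channel_means(raw):
        # raw: 2-D undebayered mosaic, assumed RGGB:
        #   even rows: R G R G ...    odd rows: G B G B ...
        return {
            "R":  raw[0::2, 0::2].mean(),
            "G1": raw[0::2, 1::2].mean(),
            "G2": raw[1::2, 0::2].mean(),
            "B":  raw[1::2, 1::2].mean(),
        }

On a flat shot with a red-weak panel, R would come out well below G1/G2/B -- the separate histogram peaks described above.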

 

   As an explanatory example, if we assume some flat has a Mean of 30000 ADU with peaks at 33000 ADU for Blue and Green and 24000 ADU for Red, the weaker channel, then we can see what happens in Image Calibration with the flat. The Light frame will have the three color channels adjusted by division. (This all happens before DeBayering.)

  • Red CFA Pixels = Light / (24000 / 30000) = Light / 0.8 = Light * 1.25
  • Green CFA Pixels = Light / (33000 / 30000) = Light / 1.1 = Light * 0.91
  • Blue CFA Pixels = Light / (33000 / 30000) = Light / 1.1 = Light * 0.91

   As can be seen, if the Light started out neutral gray in color (all pixels in the background having roughly equal ADU values), then the calibrated Red pixels become brighter and the Green and blue Pixels become Darker in the calibrated Light when compared to the uncalibrated original Light. When we DeBayer, the Calibrated Light then shows the Red overall cast due to the Flat Calibration with the weaker Red channel. The cast comes from the spectral makeup of the Flat lighting source and not the sky or other source.

 

   This is always true. Weaker channels in the Flat will induce that color cast into the Calibrated, DeBayered Light while stronger channels in the Flat become weaker in the Calibrated, DeBayered Light. This all goes away once you have done your gradient removal and color calibration.
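
A quick numeric check of that arithmetic, using the ADU values from the example above and a hypothetical neutral-gray background of 1000 ADU in every CFA channel:

    # Flat CFA levels from the example: Red weak, Green/Blue stronger.
    flat = {"R": 24000.0, "G": 33000.0, "B": 33000.0}
    flat_mean = 30000.0
    background = 1000.0  # neutral gray before calibration

    for color, level in flat.items():
        print(color, round(background / (level / flat_mean)))
    # R 1250, G 909, B 909: the red background ends up ~37% brighter
    # than green/blue, which debayers into exactly the red cast seen
    # in the linked-STF stretch.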

 

 

John


Edited by jdupton, 23 May 2020 - 07:22 PM.


#8 WadeH237 (Snohomish, WA)

Posted 23 May 2020 - 07:33 PM

Take one frame after calibration and DeBayer it. Look for the color cast. If it is not in the single frame, then Wade is correct and it may be an artifact of Image Integration. However, if you take that single DeBayered frame and find the color cast, it came from Image Calibration and not integration (since it has not yet been integrated).

This is a good test to distinguish between the two cases.

 

Since I rarely shoot with a one-shot-color camera, I don't normally work with a master flat that has a Bayer matrix (which is what causes the multiple histogram peaks for color cameras).  I do frequently see the distinctive cast that is either a primary color or the complementary of a primary color in my own integrated images (taken with a mono camera and filters).

 

I keep meaning to pick up an OSC camera, so that I can be much more familiar with them and their issues, since so many people are using them...



#9 jdupton (Central Texas, USA)

Posted 23 May 2020 - 07:44 PM

Wade,

 

   Shooting with OSC cameras may be a bad habit to get into. <wink>  I have both camera types and find myself shooting more with the OSC this past year. I find processing OSC more complex (partially because of side effects like the color casts from calibration), but processing Mono is more tedious and repetitive. I'm mostly lazy and find OSC easier to capture but more work later in processing, while Mono is harder for me to capture but easier to process. As with everything in this hobby, it is a trade-off.

 

   Bottom line, if you don't have a strong reason to do more OSC work, you might not want to start.

 

 

John



#10 WadeH237 (Snohomish, WA)

Posted 23 May 2020 - 08:42 PM

Bottom line, if you don't have a strong reason to do more OSC work, you might not want to start.

Oh, I am satisfied with my current workflow with the mono camera and full set of filters.

 

My sole interest in getting a one-shot-color camera is because I like to learn stuff and I like to help people.  So many people are using them, and making threads like this one, that I want to build up some first hand experience and learn the issues in depth.



#11 jdupton (Central Texas, USA)

Posted 23 May 2020 - 09:34 PM

Burt,

 

   Back to your original posting and the images. I see nothing glaringly wrong with any of them other than the odd Flat frame appearance. It is not too unusual for the appearance of noise in the integration to be different than what you see in individual frames.

 

   The seemingly increased or clumpier noise in the close-up (the third image) may only look worse than the single frame (the fourth image) after it. They differ in scale, and that makes it harder to really judge. Also, if the integrated frame has had gradients removed using either ABE or DBE, then the noise will almost always look greater simply because you have removed some additional sky offset and light pollution.

 

   As the background is darkened for an image, the STF stretch becomes greater which makes the noise much more apparent even though it has not changed in magnitude. (Think about looking at 10 ADU of noise on top of a 500 ADU background. If you reduce the background to 200 ADU and the noise stays the same, it will stand out much more readily in the new image.)
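
In numbers, a toy version of that effect (assuming a screen stretch that brings the background up to the same display level in both cases):

    noise = 10.0  # ADU of noise, identical in both cases
    for background in (500.0, 200.0):
        # A stretch that puts the background at a fixed screen level must
        # scale small values by ~1/background, so apparent graininess
        # tracks the noise-to-background ratio.
        print(f"background {background:.0f} ADU: "
              f"{100.0 * noise / background:.1f}% relative fluctuation")
    # 2.0% vs 5.0%: same noise, darker background, much more visible grain.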

 

   This effect can be very striking when you compare a raw image to an integration. Even though the integration will have a higher overall SNR, the background will be much more noticeable when compared to the raw image with the camera's offset still included when they are independently stretched using STF.

 

   In Post #4, Bob (@bobzeq25) offered to look at your data if you can upload sample frames to DropBox or a similar file sharing site. Take him up on it if you can. A quick look at the data will tell him more than the reduced images posted here.

 

 

John



#12 thelosttrek (topic starter, San Diego, CA)

Posted 24 May 2020 - 04:59 AM

Hey all,

 

Thanks so much for all the great replies. I really do appreciate it. There's a lot of good information to absorb so you'll have to excuse me if I'm not following on all the details. A lot of what you said will be fully digested, researched, and brought back to the table once I can speak to it better. In the meantime, I've attempted to address individual feedback below.

 

 

You can skip bias for OSC.

 

Gain increase will add noise to your subs; use it judiciously.

 

The flat is underexposed - you should see white/light gray center, dust motes, and vignetting in corners/edges.

I keep hearing others say to skip bias but to add flat darks. Would you agree with that statement? I was also able to get in contact with someone from ZWO, and they mentioned that Gain 100 is the sweet spot, so to speak. I guess the 2600 has a drop in read noise at 100 rather than an increase. Good to know, I guess.

 

Burt (thelosttrek),

 

   I have two quick observations.

 

   The first is that the Red color in your first (linked STF) image is not due to anything you did in processing. It comes from having a weak Red content in your Flat Light source. It is of no consequence and will be naturally removed as you move on through your processing. It will be removed if you do any of the following processes while still linear -- Background Neutralization, Dynamic Background Extraction, or Color Calibration.

 

   I see an oddity in your Flat frame. In addition to looking strange for a Flat frame, it is in color. The Flat frame should be in raw FITS format when it is used. Maybe the flats were captured differently from the Lights, Darks, and Bias. You should check that.

 

 

John

Hey John, I'm not sure what it means to have weak red content in my flat light source. I'm using the Pegasus Flatmaster, which I assume has a good balance of light. As for my flat frame, I took another set of images that look a lot better. I was underexposing quite a bit before. See my new flat below.

 

Most importantly, you're not doing badly, at all.

 

IMPORTANT.  Do _not_ omit bias; an alternative is dark flats.  Without one of those, flat correction does not work properly.  The underlying reason is that flats are at a very different level, and you divide by them, not subtract.

 

You'll see a lot of strange ideas on the Internet.  Short posts are not the best tools for understanding.  The antidote is good books.  I highly recommend this.

 

https://www.amazon.c.../dp/0999470906/

 

I agree with jdupton.  Flats should not be "color" (debayered).  It can help to do the calibration process in PI manually before using BPP, so you'll know what's going on.  Another book, _extremely_ useful.  Shows the workflow, explains why.

 

https://www.amazon.c...5/dp/3319976885

 

One shot color cameras have some noise.  Uncooled OSC have more.  80 minutes is not a lot of total imaging time, which is your main tool for reducing noise.  Later, you'll be trying out "dithering", another tool.  No need for the complication right now.

 

It's hard to tell anything from Cloudy Nights jpgs.  If you upload one of each type of frame direct from the camera (unstretched, untouched <smile>) to something like Dropbox and PM me, I'll take a look at them.

 

Below is a stack from my site, outside a big city.  Ignore the fact that your background is red and mine is blue; it's completely irrelevant.  That's light pollution.  Then the stack after one pass of AutomaticBackgroundExtractor (ABE).  You _must_ change the default correction of "none" to subtraction.

 

Magic.  <grin>  Unlinking the STF is a bandaid, which does nothing at all to your data.  ABE actually reduces the effects of light pollution (some).

 

While you're getting your feet on the ground, it can be useful to image something simple, like a star cluster.  Easier to diagnose issues.  There will be issues.  <smile>

 

[Image: ABE example, before]

 

[Image: ABE example, after]

Thank you, Bob. I appreciate all the information. Ironically, I already have the books The Deep Sky Imaging Primer, Astrophotography, The Astrophotography Manual, and Inside PixInsight. I still have to read all of them. :)

 

I agree 80 mins is not a lot. I just wanted to get my feet wet for the first time, and 80 mins was all I could get before wanting to test out my data. I will definitely be collecting a lot more time in the future.

 

I sent you a PM and appreciate your help there.

 

Strong color casts in the background, like your red or Bob's blue, are not in any way related to light pollution - unless that's what your sky looks like to the naked eye.

 

I also disagree with John that it's a flat fielding artifact.  I would bet that it's happening when the images are integrated.  As part of the integration, PixInsight will need to normalize all of your images so that their background, signal and noise levels are comparable.  What I see happen fairly often is that one of the color channels normalizes to a different background level than the others, and it takes very little difference to have a dramatic effect.  Here are the possible cases:

 

  • If the blue background is a little higher, you get a strong blue cast.
  • If the red background is a little higher, you get a strong red cast.
  • If the green background is a little higher, you get a strong green cast.
  • If the blue background is a little weaker, you get a strong yellow cast.
  • If the red background is a little weaker, you get a strong cyan cast.
  • If the green background is a little weaker, you get a strong magenta cast.

 

You can see this for yourself by splitting the channels into red, green and blue and then running the Statistics process on each of them.  Look at the median pixel value for the image and compare them.  You'll see how they map to the color casts that I described above.

 

The difference between linking and unlinking the channels in the STF autostretch is whether STF applies an identical stretch to all channels (where you see the color cast), or whether it independently calculates what it thinks is the right stretch for each channel individually (where you will see little or no color cast).

 

If you want to prevent these casts, it is possible to configure ImageIntegration to not do the normalization, but this is not a good idea because it reduces the ability of integration to detect outlier (noisy) pixels.  The color cast is a product of the math used to integrate the images and is completely harmless.  It can be fixed with ABE, DBE, ColorCalibration or even manually with PixelMath.

 

True light pollution will appear as large gradients and subtle color casts, but most often it just acts like fog, destroying contrast in the faint details while letting high-contrast details (like stars) show up fine.

Thanks for the feedback, Wade. I'll have to dive a little deeper into the different channels. After my second attempt at processing with the new flat, I can definitely see the gradient from light pollution. See my image below.

 

In addition to everything else suggested, I doubt you really need 5 min subs unless you are imaging from a very dark site. I would suspect that 2-3 min subs are more than sufficient to swamp read noise given your sky conditions, equipment and target.

Total integration time trumps everything, so that's what you want to max out. 80 mins is not a lot of time. Depending on your LP you'll want 4-6 hrs minimum; 6-10 hrs would be even better.

Thanks for this advice. I will definitely look at shorter exposure times. I was reading an article about the effects that exposure time has on something called dark current, and the increase in offset the longer you expose. I don't fully understand that yet, but visually, I could see the impact that longer exposures might have.

 

Wade & Burt,

 

   This is very easy to test.

 

   Take one frame after calibration and DeBayer it. Look for the color cast. If it is not in the single frame, then Wade is correct and it may be an artifact of Image Integration. However, if you take that single DeBayered frame and find the color cast, it came from Image Calibration and not integration (since it has not yet been integrated).

 

   Prior to Calibration and DeBayering, the Light frame image and the Flat frame image each have a certain mean. You can look at this for a raw uncalibrated frame with the Statistics process. Look at the mean of all raw pixels. Do the same for a flat. The Flat frame histogram will likely show multiple peaks. In Burt's example, I would expect the Red channel to be the lower ADU peak. We cannot see which colors correspond to which peaks until after DeBayering but it is worth just looking at the histograms and statistics for each frame type.

 

   As an explanatory example, if we assume some flat has a Mean of 30000 ADU with peaks at 33000 ADU for Blue and Green and 24000 ADU for Red, the weaker channel, then we can see what happens in Image Calibration with the flat. The Light frame will have the three color channels adjusted by division. (This all happens before DeBayering.)

  • Red CFA Pixels = Light / (24000 / 30000) = Light / 0.8 = Light * 1.25
  • Green CFA Pixels = Light / (33000 / 30000) = Light / 1.1 = Light * 0.91
  • Blue CFA Pixels = Light / (33000 / 30000) = Light / 1.1 = Light * 0.91

   As can be seen, if the Light started out neutral gray in color (all pixels in the background having roughly equal ADU values), then the calibrated Red pixels become brighter and the Green and blue Pixels become Darker in the calibrated Light when compared to the uncalibrated original Light. When we DeBayer, the Calibrated Light then shows the Red overall cast due to the Flat Calibration with the weaker Red channel. The cast comes from the spectral makeup of the Flat lighting source and not the sky or other source.

 

   This is always true. Weaker channels in the Flat will induce that color cast into the Calibrated, DeBayered Light while stronger channels in the Flat become weaker in the Calibrated, DeBayered Light. This all goes away once you have done your gradient removal and color calibration.

 

 

John

Thanks again, John. There are a lot of numbers and definitions I'm not totally familiar with here, and I'll need some time to digest them. However, I keep seeing the term ADU. What does that mean? Also, I did debayer a single frame, and it was green, not red. I'll note that I didn't calibrate it, as I'm still somewhat new to PixInsight. I'll try and run a calibration on it.

 

This is a good test to distinguish between the two cases.

 

Since I rarely shoot with a one-shot-color camera, I don't normally work with a master flat that has a Bayer matrix (which is what causes the multiple histogram peaks for color cameras).  I do frequently see the distinctive cast that is either a primary color or the complementary of a primary color in my own integrated images (taken with a mono camera and filters).

 

I keep meaning to pick up an OSC camera, so that I can be much more familiar with them and their issues, since so many people are using them...

So here is the result of the single image uncalibrated. It turned green.

[Image: single light frame, debayered]

 

Burt,

 

   Back to your original posting and the images. I see nothing glaringly wrong with any of them other than the odd Flat frame appearance. It is not too unusual for the appearance of noise in the integration to be different than what you see in individual frames.

 

   The seemingly increased or clumpier noise in the close-up (the third image) may only look worse than the single frame (the fourth image) after it. They differ in scale, and that makes it harder to really judge. Also, if the integrated frame has had gradients removed using either ABE or DBE, then the noise will almost always look greater simply because you have removed some additional sky offset and light pollution.

 

   As the background is darkened for an image, the STF stretch becomes greater which makes the noise much more apparent even though it has not changed in magnitude. (Think about looking at 10 ADU of noise on top of a 500 ADU background. If you reduce the background to 200 ADU and the noise stays the same, it will stand out much more readily in the new image.)

 

   This effect can be very striking when you compare a raw image to an integration. Even though the integration will have a higher overall SNR, the background will be much more noticeable when compared to the raw image with the camera's offset still included when they are independently stretched using STF.

 

   In Post #4, Bob (@bobzeq25) offered to look at your data if you can upload sample frames to DropBox or a similar file sharing site. Take him up on it if you can. A quick look at the data will tell him more than the reduced images posted here.

 

 

John

Thanks, John. I think I need to run through a processing session manually instead of running the batch process. It's really difficult to speak to all the comments without becoming more familiar with PixInsight. I will definitely be digging into that app.

 

Here's my updated flat image. The exposure time is what APT suggested when using its Flat Aid tool. I still think it looks a little dark. Thoughts?

[Image: single flat frame, v2]
 
Here's a newly processed image using batch processing and the new flat frames. It's looking a little better, but the noise and light pollution in the lower left are noticeable.
[Image: second process, channels unlinked]

 

Thank you, all!



#13 bobzeq25

Posted 24 May 2020 - 12:37 PM

Not quoting the whole thing, but you'll figure it out.

 

People argue pointlessly about bias versus dark flats.  One is not magically superior universally.  Which you use depends on the camera.  Find someone with your camera who can tell you which to use.  If you can't, dark flats work on most cameras; they're just a little more complicated than bias.

 

That light pollution gradient is removed with ABE and/or DBE.  Like many things in PI, it's a skill to be learned, and an important one.  Sometimes using both tools in sequence, or one applied twice, works better.  For a start.

 

Use ABE.  Be sure to change the correction from "none" to subtraction.  Changing the interpolation factor is a tweak; often the default "4" is not quite the best, and "1" may be a better place to start.

 

DBE can be better.  Like many things in PI, better comes only with acquiring the skill to tweak the settings.  ABE can often be as effective or more, and is _way_ easier.

 

I was taught how to use DBE by a serious PI expert.  It took him maybe half an hour to explain the ins and outs, then some hours of practice by me.  It can take me a while to optimize it on specific data.

 

Ignore the background color.  And the discussion about it by experienced imagers here.  <grin>  ABE will take all colors out equally, and automatically.  No adjustment in settings needed for that.


Edited by bobzeq25, 24 May 2020 - 12:43 PM.


#14 thelosttrek (topic starter, San Diego, CA)

Posted 25 May 2020 - 02:36 PM

Thanks for that bit of info, Bob. I should have responded earlier to your comment about dithering: I did apply dithering to these shots. I believe I used the default settings that APT brings in from PHD2.

 

A couple of follow-up questions about calibration frames in general. Now that I'm using a cooled camera, is there a need for me to take new darks after each session? It would seem that, since I can control the temperature and exposure time, a single dark library would work for other sessions. If I'm understanding other posts about darks, this might be common practice until you notice a change in images. Can you comment on that?

 

Assuming the above is correct, would a library of 20 darks be sufficient?

 

The same question applies to flats. Can I continue to use the same flats assuming I don't see any new changes to dust on the sensor? 

 

I did find someone with my camera who recommended using dark flats. So it looks like I would take those at the same exposure length as the flat frames, with the scope covered, correct?

 

Final question about flats. I'm using APT Flat Aid to help with exposure time for my flats. It asks for an ADU number, which I was told by a ZWO person should be between 24-28k, or, looking at the histogram, about halfway up the graph. I'm trying to understand the correlation between ADU and flats. I've had some people tell me to set my ADU as low as 8000. I understand ADU as a unit of measurement after the photon signal is converted through the analog-to-digital converter, but how this number works with calibration is confusing.



#15 schmeah (Morristown, NJ)

Posted 25 May 2020 - 03:05 PM
Yes, dark libraries can be used over and over again. I have used the same set of darks for over six months. And so long as you don't change your optical configuration or rotate your camera (even in the slightest), you can similarly reuse your flats. Some retake flats almost every session. I never saw the need for this, as shifting dust is more a theoretical than a practical concern. I've used a set of flats for several months, until I swap scopes typically.

Flats are best taken with the peak ADU somewhere between 40 and 50% of your max value, so the target will differ from camera to camera. It doesn't need to be perfect. I usually shoot for around 27K for my CCD, but being above or below by 5K has never affected my results. I wouldn't worry about the ADU/analog/digital question. You really don't need to understand it. I would explain it, but I have no clue. Time is better spent on practical applications. There is more than enough to learn in this hobby without worrying about the theory behind it.
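
A quick way to sanity-check a single test flat against that 40-50% guideline before shooting a whole set. A sketch assuming a 16-bit camera and astropy for FITS reading; the filename is a placeholder:

    import numpy as np
    from astropy.io import fits

    MAX_ADU = 65535  # 16-bit full scale

    flat = fits.getdata("flat_test.fits").astype(np.float64)  # placeholder
    median = np.median(flat)
    print(f"median {median:.0f} ADU = {100.0 * median / MAX_ADU:.0f}% of full scale")
    if 0.40 * MAX_ADU <= median <= 0.50 * MAX_ADU:
        print("within the 40-50% sweet spot")
    else:
        print("adjust panel brightness or exposure and try again")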

 

Derek

 




#16 thelosttrek (topic starter, San Diego, CA)

Posted 25 May 2020 - 03:48 PM

Thanks for the reply, Derek. The spec sheet for the 2600 states a 50ke full well. Would that be my peak value? If so, I assume the 40-50% you mention is of the full well size, which would be roughly 25K for me. Does that sound correct?

 

I definitely don't want to dive into the science behind all of it; I really just want to understand what the number means when I look at my histogram. I don't know how familiar you are with APT, but the Flat Aid shows this. The histogram shows 3 curves, which I assume are RGB? What am I looking for in that curve, and what adjustments do I make to the ADU value to see that histogram view correctly?

 

[Image: APT histogram]


#17 bobzeq25

Posted 25 May 2020 - 05:36 PM

Thanks for the reply, Derek. The spec sheet for the 2600 states a 50ke full well. Would that be my peak value? If so, I assume the 40-50% you mention is of the full well size, which would be roughly 25K for me. Does that sound correct?

 

I definitely don't want to dive into the science behind all of it; I really just want to understand what the number means when I look at my histogram. I don't know how familiar you are with APT, but the Flat Aid shows this. The histogram shows 3 curves, which I assume are RGB? What am I looking for in that curve, and what adjustments do I make to the ADU value to see that histogram view correctly?

 

The idea is to stay away from the edges of the histogram, where the sensor can be non-linear in response.  What you've posted (usual for a one-shot color camera, which records 3 channels) is likely "good enough"; I'd expose a bit more to move the left-hand peak a bit farther from the edge.

 

I just use the histogram.  I also reuse bias and darks, but retake flats, since I frequently change out things like cameras, or rotate the camera for more pleasing framing.


Edited by bobzeq25, 25 May 2020 - 05:38 PM.


#18 thelosttrek (topic starter, San Diego, CA)

Posted 25 May 2020 - 06:28 PM

Hey Bob, if I introduce a light pollution filter, would I take all of my calibration frames with that filter in place? I'm looking at the Optolong L-eNhance Light Pollution Dual-Bandpass to use with my OSC.


Edited by thelosttrek, 25 May 2020 - 06:29 PM.


#19 bobzeq25

Posted 25 May 2020 - 06:35 PM

Hey Bob, if I introduce a light pollution filter, would I take all of my calibration frames with that filter in place? I'm looking at the Optolong L-eNhance Light Pollution Dual-Bandpass to use with my OSC.

You take your flats with it.  Bias (or flat darks) and darks have the optics covered, so they don't care.

 

Note that that filter is basically designed to be used on emission nebulae (which includes planetary nebulae) only.  It may (I think it's likely) cause more harm than good if used on things like galaxies.  It's a complicated subject; here are two good threads.

 

https://www.cloudyni...-the-answer-is/

 

https://www.cloudyni...-be-a-bad-idea/

 

Here's a good article about the filter.  Note that all the targets are emission nebulae.  That's not by chance.  <smile>

 

https://astrobackyar...enhance-filter/


Edited by bobzeq25, 25 May 2020 - 06:41 PM.


#20 thelosttrek (topic starter, San Diego, CA)

Posted 25 May 2020 - 07:24 PM

 

Note that that filter is basically designed to be used on emission nebulae (which includes planetary nebulae) only.  It may (I think it's likely) cause more harm than good if used on things like galaxies.  It's a complicated subject; here are two good threads.

Thank you for mentioning that. I'll look into the different filter types for specific targets. 



#21 bobzeq25

Posted 25 May 2020 - 08:25 PM

Thank you for mentioning that. I'll look into the different filter types for specific targets. 

Good to do that research.  I think you'll find it hard to find one that works at all well on galaxies or clusters.  They emit light over the entire spectrum, and so-called "light pollution filters" just whack out parts of that.  Since emission nebulae mostly emit in certain parts of the spectrum, the filters can work well on those.  Galaxies and clusters - not so much.  <smile>  Details in the threads I referenced in #19.



#22 schmeah (Morristown, NJ)

Posted 25 May 2020 - 10:31 PM

Thanks for the reply, Derek. The spec sheet for the 2600 states a 50ke full well. Would that be my peak value? If so, I assume the 40-50% you mention is of the full well size, which would be roughly 25K for me. Does that sound correct?

 

I definitely don't want to dive into the science behind all of it; I really just want to understand what the number means when I look at my histogram. I don't know how familiar you are with APT, but the Flat Aid shows this. The histogram shows 3 curves, which I assume are RGB? What am I looking for in that curve, and what adjustments do I make to the ADU value to see that histogram view correctly?

 

No, your full well capacity is the amount of charge a pixel can hold before saturating. This is not the same as your maximum ADU value, which is 65535 for 16-bit cameras like mine. I think yours is 16 bit as well, so somewhere in the 25-32K ADU range would be reasonable. I've used the APT flats wizard once or twice; it seemed to work well. I'm not familiar with the RGB histogram, and it's not clear why the peaks seem so discordant since you're shooting for similar ADU values. You may want to reduce the max and min exposures that APT will accept. You don't want to shoot 20-second flats; then you would need matching flat darks as opposed to just bias. Shooting very short exposures is fine (fractions of a second) so long as you don't have a mechanical shutter.
 

Derek



#23 bobzeq25

Posted 26 May 2020 - 04:45 PM

OK, I looked at your data and have some comments.  It would help if I had a clue about what your light pollution level is.  Look on this map, click on your approximate location, and tell me what your mag per arc sec squared is.  One decimal place is plenty.  I'm guessing 18.5-19.0?

 

https://www.lightpol...FFFFFTFFFFFFFFF

 

I also need to know if you're using the 80 at the native F6, or reduced to something faster.


Edited by bobzeq25, 26 May 2020 - 04:47 PM.


#24 thelosttrek (topic starter, San Diego, CA)

Posted 29 May 2020 - 12:17 AM

Thanks for the reply about the ADU value, Derek. It's easier to understand now that I know it's associated with the camera's 16-bit ADC.

 

Bob, to answer your questions: I'm using the SV80 at its native F6 with no reducer. My light pollution is based on the World Atlas 2015 overlay, which I assume is outdated, but here are the results.

 

SQM 19.78 mag./arc sec2
Brightness 1.32 mcd/m2
Artif. bright. 1150 μcd/m2
Ratio 6.73
Bortle class 5
Elevation 200 meters



#25 bobzeq25

Posted 30 May 2020 - 12:27 AM

Sorry for the delay.  Comments, most important to least.

 

You're doing extremely well for a beginner.

 

The flat (I take it the first one you did) you uploaded to Dropbox was darn near perfectly exposed.  The ADU (analog-to-digital units) ranged from 14000 to 47000.  The min/max ADU are 0/65000.  You're well away from the edges, which is what you want.  The average was 25000, "close enough" to the middle.

 

The light was overexposed.  You "got away" with that (mostly) because your camera has extraordinary dynamic range (13.5 "stops" at gain 100), and there are few bright stars in the frame.  But I'd certainly cut your exposure _at least_ to 120 seconds.

 

Bias corrected, the background was 1900 ADU (or 475 electrons), which is quite high for such a low read noise camera, and caused some stars to saturate.  Often that's unavoidable, but you'd like to limit it.  Once a star saturates it loses all color.  Highlights of galaxies or nebulae that saturate lose detail.

 

It's difficult for me to be exact about a recommendation when that camera has both unusually low read noise and high full well capacity; I'm not sure my usual rules of thumb apply.  I think 30-60 seconds would be optimal, and I'd guess more like 30.  I'd want to look at subs with more range of star brightness to check how many saturate, and be more precise.  But I'd want the corrected ADU to be no more than 500 with such a low read noise camera.
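
For what it's worth, the usual rule of thumb behind advice like this (and the video linked below) is to expose just long enough that sky-background shot noise swamps the read noise. A sketch with the camera numbers treated as assumptions (the ASI2600 at gain 100 is commonly quoted around 1.5 e- read noise and ~0.25 e-/ADU; check your own driver's values):

    read_noise_e = 1.5     # assumed e- RMS at gain 100
    gain_e_per_adu = 0.25  # assumed e-/ADU at gain 100
    swamp_factor = 10      # common rule; 3-5x is also used

    # Expose until sky background contributes ~10x the read-noise variance.
    target_sky_e = swamp_factor * read_noise_e ** 2      # 22.5 e-
    target_sky_adu = target_sky_e / gain_e_per_adu       # 90 ADU above bias

    # The 300 s subs measured ~1900 ADU (~475 e-) of sky after bias:
    sky_rate = 475.0 / 300.0                             # ~1.6 e-/s/pixel
    print(f"suggested sub length: {target_sky_e / sky_rate:.0f} s")
    # ~14 s by this crude rule, which is why 30-60 s is still a
    # conservative guess; longer subs mostly just saturate more stars.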

 

Check out this youtube about imaging with CMOS.  An expert explains why low read noise CMOS cameras should use shorter subs than many think.

 

https://www.youtube....h?v=3RH93UvP358

 

Your star full width at half maximum was about 3.2 arc sec, quite decent.  The eccentricity was 0.7; you'd like it closer to 0.5.  That's largely the result of field curvature due to no flattener; the stars toward the edges of the APS-C chip are pretty oval.

 

The noise is certainly higher because of no cooling.  Your lights are riddled with hot pixels, somewhat taken out by darks and stacking.  But starting from a better place is better.  The color cast has been explained; ABE will take it right out.  More work with ABE or DBE will reduce the light pollution gradient.

 

But again, an excellent start, and your first flat exposure was fine.  I might tweak it plus 25%, but it's a tweak.  The lights, however, are seriously overexposed.  Even so, the $2000 for that camera was money well spent.


Edited by bobzeq25, 30 May 2020 - 12:35 AM.


