Thanks so much for all the great replies. I really do appreciate it. There's a lot of good information to absorb, so you'll have to excuse me if I'm not following all the details. A lot of what you said will be fully digested, researched, and brought back to the table once I can speak to it better. In the meantime, I've attempted to address individual feedback below.
You can skip bias for OSC.
Increasing gain will add noise to your subs; use it judiciously.
The flat is underexposed - you should see white/light gray center, dust motes, and vignetting in corners/edges.
I keep hearing others say to skip bias but to add flat darks. Would you agree with that statement? I was also able to get in contact with someone from ZWO, and they mentioned that Gain 100 was the sweet spot, so to speak. Apparently the 2600 has a drop in read noise at Gain 100 rather than an increase. Good to know, I guess.
I have two quick observations.
The first is that the Red color in your first (un-linked STF) image is not due to anything you did in processing. It comes from having weak Red content in your Flat Light source. It is of no consequence and will be naturally removed as you move on through your processing. It will be removed if you do any of the following processes while still linear: Background Neutralization, Dynamic Background Extraction, or Color Calibration.
I see an oddity in your Flat frame. In addition to looking strange for a Flat frame, it is in color. The Flat frame should be in raw FITS format when it is used. Maybe the flats were captured differently from the Lights, Darks, and Bias. You should check that.
Hey John, I'm not sure what it means to have weak red content in my flat light source. I'm using the Pegasus Flatmaster, which I assume has a good balance of light. As for my flat frame, I took another set of images that look a lot better. I was underexposing quite a bit before. See my new flat below.
Most importantly, you're not doing badly, at all.
IMPORTANT. Do _not_ omit bias; an alternative is dark flats. Without one of those, flat correction does not work properly. The underlying reason is that flats are at a very different level, and you divide by them rather than subtract.
You'll see a lot of strange ideas on the Internet. Short posts are not the best tools for understanding. The antidote is good books. I highly recommend this.
I agree with jupton. Flats should not be "color" (debayered). It can help to do the calibration process in PI manually before using BPP. So you'll know what's going on. Another book, _extremely_ useful. Shows the workflow, explains why.
One shot color cameras have some noise. Uncooled OSC have more. 80 minutes is not a lot of total imaging time, which is your main tool for reducing noise. Later, you'll be trying out "dithering", another tool. No need for the complication right now.
It's hard to tell anything from Cloudy Night jpgs. If you upload one of each type of frame direct from the camera (unstretched, untouched <smile> ) to something like Dropbox and PM me, I'll take a look at them.
Below is a stack from my site, outside a big city. Ignore the fact that your background is red and mine blue; it's completely irrelevant. That's light pollution. Then the stack after one pass of Automatic Background Extraction (ABE). You _must_ change the default correction of "none" to subtraction.
Magic. <grin> Unlinking the STF is a bandaid, which does nothing at all to your data. ABE actually reduces the effects of light pollution (some).
While you're getting your feet on the ground, it can be useful to image something simple, like a star cluster. Easier to diagnose issues. There will be issues. <smile>
ABE example before.jpg
ABE example after.jpg
Thank you, Bob. I appreciate all the information. Ironically, I already have the books The Deep Sky Imaging Primer, Astrophotography, The Astrophotography Manual, and Inside PixInsight. I still have to read all of them.
I agree 80mins is not a lot. I just wanted to get my feet wet for the first time and 80mins was all I could get before wanting to test out my data. I will definitely be collecting a lot more time in the future.
I sent you a PM and appreciate your help there.
Strong color casts in the background, like your red or Bob's blue, are not in any way related to light pollution - unless that's what your sky looks like when you look at it with the naked eye.
I also disagree with John that it's a flat fielding artifact. I would bet that it's happening when the images are integrated. As part of the integration, PixInsight will need to normalize all of your images so that their background, signal and noise levels are comparable. What I see happen fairly often is that one of the color channels normalizes to a different background level than the others, and it takes very little difference to have a dramatic effect. Here are the possible cases:
- If the blue background is a little higher, you get a strong blue cast.
- If the red background is a little higher, you get a strong red cast.
- If the green background is a little higher, you get a strong green cast.
- If the blue background is a little weaker, you get a strong yellow cast.
- If the red background is a little weaker, you get a strong cyan cast.
- If the green background is a little weaker, you get a strong magenta cast.
You can see this for yourself by splitting the channels into red, green and blue and then running the Statistics process on each of them. Look at the median pixel value for the image and compare them. You'll see how they map to the color casts that I described above.
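The channel-median check described above can also be sketched outside PixInsight. Here's a toy NumPy illustration (the image array and its levels are entirely made up, not real data): if the red channel's median background is slightly higher than the others, you get exactly the red cast described above.

```python
import numpy as np

# Hypothetical debayered image: 3-channel float array in [0, 1],
# simulated with a slightly elevated red background level.
rng = np.random.default_rng(0)
h, w = 100, 100
img = np.empty((h, w, 3))
img[..., 0] = 0.12 + rng.normal(0, 0.005, (h, w))  # red background a bit higher
img[..., 1] = 0.10 + rng.normal(0, 0.005, (h, w))  # green
img[..., 2] = 0.10 + rng.normal(0, 0.005, (h, w))  # blue

# Same idea as splitting channels and running the Statistics process:
# compare the median pixel value of each channel.
for name, ch in zip(("R", "G", "B"), range(3)):
    print(name, "median:", round(float(np.median(img[..., ch])), 4))
```

With an identical (linked) stretch, that small median difference in red would dominate the background color; an unlinked stretch would hide it.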
The difference between linking and unlinking the channels in the STF autostretch is whether STF applies an identical stretch to all channels (where you see the color cast), or whether it independently calculates what it thinks is the right stretch for each channel individually (where you will see little or no color cast).
If you want to prevent these casts, it is possible to configure ImageIntegration to not do the normalization, but this is not a good idea because it reduces the ability of integration to detect outlier (noisy) pixels. The color cast is a product of the math used to integrate the images and is completely harmless. It can be fixed with ABE, DBE, ColorCalibration or even manually with PixelMath.
True light pollution will appear as large gradients, subtle color casts, but most often it just acts like fog and destroys contrast in the faint details while letting high contrast details (like stars) show up fine.
Thanks for the feedback, Wade. I'll have to dive a little deeper on looking at the different channels. After my second attempt at processing with the new flat I can definitely see the gradient from light pollution. See my image below.
In addition to everything else suggested, I doubt you really need 5 min subs unless you are imaging from a very dark site. I would suspect that 2-3 min subs are more than sufficient to swamp read noise given your sky conditions, equipment and target.
Total integration time trumps everything, so that's what you want to max out. 80 mins is not a lot of time. Depending on your LP, you'll want 4-6 hrs minimum; 6-10 hrs would be even better.
Thanks for this advice. I will definitely look at shorter exposure times. I was reading an article about the effects that exposure time has on something called dark current and an increase in offset the longer you expose. I don't fully understand that yet, but visually, I could see the impact that longer exposures might have.
Wade & Burt,
This is very easy to test.
Take one frame after calibration and DeBayer it. Look for the color cast. If it is not in the single frame, then Wade is correct and it may be an artifact of Image Integration. However, if you take that single DeBayered frame and find the color cast, it came from Image Calibration and not integration (since it has not yet been integrated).
Prior to Calibration and DeBayering, the Light frame image and the Flat frame image each have a certain mean. You can look at this for a raw uncalibrated frame with the Statistics process. Look at the mean of all raw pixels. Do the same for a flat. The Flat frame histogram will likely show multiple peaks. In Burt's example, I would expect the Red channel to be the lower ADU peak. We cannot see which colors correspond to which peaks until after DeBayering but it is worth just looking at the histograms and statistics for each frame type.
As an explanatory example, if we assume some flat has a Mean of 30000 ADU with peaks at 33000 ADU for Blue and Green and 24000 ADU for Red, the weaker channel, then we can see what happens in Image Calibration with the flat. The Light frame will have the three color channels adjusted by division. (This all happens before DeBayering.)
- Red CFA Pixels = Light / (24000 / 30000) = Light / 0.8 = Light * 1.25
- Green CFA Pixels = Light / (33000 / 30000) = Light / 1.1 = Light * 0.91
- Blue CFA Pixels = Light / (33000 / 30000) = Light / 1.1 = Light * 0.91
As can be seen, if the Light started out neutral gray in color (all pixels in the background having roughly equal ADU values), then the calibrated Red pixels become brighter and the Green and Blue pixels become darker in the calibrated Light when compared to the uncalibrated original. When we DeBayer, the calibrated Light then shows the overall Red cast due to the Flat calibration with the weaker Red channel. The cast comes from the spectral makeup of the Flat lighting source and not from the sky or any other source.
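The arithmetic above can be reproduced in a few lines. This is a toy sketch using the same assumed numbers (24000/33000 ADU peaks around a 30000 ADU mean, and a neutral 1000 ADU background; none of these are measured values):

```python
# Toy flat-division example mirroring the assumed ADU values above.
flat_mean = 30000.0
flat = {"R": 24000.0, "G": 33000.0, "B": 33000.0}  # per-channel flat peaks (ADU)
light = {"R": 1000.0, "G": 1000.0, "B": 1000.0}    # neutral gray background (ADU)

# Image Calibration divides each CFA pixel by its normalized flat value.
calibrated = {c: light[c] / (flat[c] / flat_mean) for c in light}
print(calibrated)  # R is boosted relative to G and B -> red cast after DeBayer
```

The weaker red flat channel gets the largest correction factor, so a neutral background comes out red, which matches the cast in the integrated image.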
This is always true. Weaker channels in the Flat will induce that color cast into the Calibrated, DeBayered Light while stronger channels in the Flat become weaker in the Calibrated, DeBayered Light. This all goes away once you have done your gradient removal and color calibration.
Thanks again, John. There are a lot of numbers and definitions I'm not totally familiar with here. I'll need some time to digest. However, I keep seeing the term ADU. What does that mean? Also, I did debayer a single frame, and it was green, not red. I'll note that I didn't calibrate it, as I'm still somewhat new to PixInsight. I'll try to run a calibration on it.
This is a good test to distinguish between the two cases.
Since I rarely shoot with a one-shot-color camera, I don't normally work with a master flat that has a Bayer matrix (which is what causes the multiple histogram peaks for color cameras). I do frequently see the distinctive cast that is either a primary color or the complementary of a primary color in my own integrated images (taken with a mono camera and filters).
I keep meaning to pick up an OSC camera, so that I can be much more familiar with them and their issues, since so many people are using them...
So here is the result of the single image uncalibrated. It turned green.
Back to your original posting and the images. I see nothing glaringly wrong with any of them other than the odd Flat frame appearance. It is not too unusual for the appearance of noise in the integration to be different than what you see in individual frames.
The seemingly increased or clumpier noise in the close-up (third image) may only look worse than the single frame (image 3) after it. They differ in scale and that makes it harder to really judge. Also, if the integrated frame has had gradients removed using either ABE or DBE, then the noise will almost always look greater simply because you have removed some additional sky offset and light pollution.
As the background is darkened for an image, the STF stretch becomes greater which makes the noise much more apparent even though it has not changed in magnitude. (Think about looking at 10 ADU of noise on top of a 500 ADU background. If you reduce the background to 200 ADU and the noise stays the same, it will stand out much more readily in the new image.)
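A quick back-of-the-envelope version of the numbers in that parenthetical (illustrative ADU values only):

```python
# Same 10 ADU of noise on two different background levels:
# the noise is unchanged, but its relative size grows as the background drops.
noise = 10.0
for background in (500.0, 200.0):
    print(f"background {background:.0f} ADU -> relative noise {noise / background:.1%}")
```

The absolute noise never changes; only its visibility against the stretched background does.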
This effect can be very striking when you compare a raw image to an integration. Even though the integration will have a higher overall SNR, the background will be much more noticeable when compared to the raw image with the camera's offset still included when they are independently stretched using STF.
In Post #4, Bob (@bobzeq25) offered to look at your data if you can upload sample frames to DropBox or a similar file sharing site. Take him up on it if you can. A quick look at the data will tell him more than the reduced images posted here.
Thanks, John. I think I need to run through a processing session manually instead of running the batch process. It's really difficult to speak to all the comments without becoming more familiar with PixInsight. I will definitely be digging into that app.
Here's my updated flat image. The exposure time is what APT suggested when using their Flat Aid tool. I still think it looks a little dark. Thoughts?
Here's a newly processed image using batch processing and the new flat frames. It's looking a little better, but the noise and light pollution in the lower left are noticeable.