
2600MC and 120 second subs. Too long?


#51 idclimber

idclimber

    Cosmos

  • *****
  • Posts: 7,806
  • Joined: 08 Apr 2016
  • Loc: McCall Idaho

Posted 21 September 2023 - 11:22 AM

Wade, I understand the logic, but I do suspect the longer integration time of gain 0 does have more data. I guess the question you are proposing is which option is better. My intuition tells me the following would be nearly equivalent.

 

Gain 0 - 240 subs at 120" (8 hrs) of luminance

Gain 100 - 480 subs at 60" (same 8 hrs of data)

 

Intuitively I do not believe that 2 hours of gain 100 will equal the 4 at gain 0. Happy to be proven wrong. 


  • Spaceman 56 likes this

#52 Andros246

Andros246

    Surveyor 1

  • -----
  • Posts: 1,775
  • Joined: 24 Oct 2022

Posted 21 September 2023 - 12:26 PM

Does anyone have a stack (same integration) with gain 0  vs gain 100?

 

I'd put money on them being pretty identical or marginally different.


Edited by Andros246, 21 September 2023 - 12:27 PM.

  • Spaceman 56 likes this

#53 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 12:36 PM

Wade, I understand the logic, but I do suspect the longer integration time of gain 0 does have more data. I guess the question you are proposing is which option is better. My intuition tells me the following would be nearly equivalent.

Gain 0 does have more data.  Since the gain 0 exposure is longer, the sensor has literally received more photons.

 

Where it gets confusing is that gain 0 also has more read noise.  This read noise is fixed, independent of exposure length.  And it wipes out the faintest details.  To pull the faintest details out of that read noise, you need to expose longer at gain 0.

 

When you use gain 100, you don't have to expose as long to get the faint details.  And the shorter exposures tend to reduce the tendency of bright stars to saturate.

 

When you do the math to calculate the dynamic range, it turns out that when you account for read noise, a shorter exposure at gain 100 is nearly an identical equivalent of a longer exposure at gain 0.  The pixel values in the longer exposure will be higher, but when you account for the additional read noise, you could do a division operation on the longer exposures to make the pixel values equivalent to the shorter exposure.  After doing that, the two exposures will be nearly the same in every way.  You would not be losing faint details in the division operation, only noise.

 

Gain 0 - 240 subs at 120" (8 hrs) of luminance
Gain 100 - 480 subs at 60" (same 8 hrs of data)
 
Intuitively I do not believe that 2 hours of gain 100 will equal the 4 at gain 0. Happy to be proven wrong.

I am not saying that 2 hours of gain 100 is equal to 4 hours at gain 0.  I've not bothered to calculate the equivalence.

 

I am saying that if you compare identical exposure times at gain 0 and gain 100, the gain 100 subs will have higher S/N.  That should be quite easy to prove to yourself through experimentation.

 

The rest of my argument is qualitative, not quantitative.  Honestly, keeping it qualitative is 100% me being lazy.  I have some other stuff that I need to get done today, so have not looked at the math.  Other than the math on the ZWO chart data, my math has been done with hypothetical numbers, chosen to make it simple.

 

If people would like, I can take a look at doing a quantitative analysis to determine the actual exposure equivalency when I get a chance.  It would likely not be today, though (unless I have a breakthrough in my other task for the day)...


  • idclimber and Spaceman 56 like this

#54 calypsob

calypsob

    Cosmos

  • *****
  • Posts: 9,205
  • Joined: 20 Apr 2013
  • Loc: Virginia

Posted 21 September 2023 - 12:38 PM

This is 100% spot on.

If you look at the chart on page 7 of the ASI2600 manual, they conveniently included a chart for dynamic range versus gain. The sensor in the ASI2600 (all variants, including both OSC and mono) has two readout modes, and they switch at gain 100. If you look at the dynamic range chart, you will see that the number of stops at gain 100 is only about 1/10th of a stop less than gain 0. I consider this to be essentially identical. But gain 100 captures that dynamic range with less exposure time. What's not to like about that?

Dynamic range is not just about the well depth.

You also have to look at read noise. Remember that pixel values in the read noise range are essentially useless. So if you have a camera with 16 bits of well depth, but 2 bits of read noise, you only have 14 bits of useful dynamic range. So instead of 65536 (2^16) useful pixel values, you get 16384 (2^14) useful pixel values.

Looking at gain 0 versus gain 100 with the ASI2600, you get these numbers:

Gain 0 looks like 50,000 FWC*, which is 15.6 bits (2^15.6), with read noise of about 3.5, which is 1.8 bits (2^1.8). That means that you have 13.8 bits of usable range (15.6 total, minus 1.8 bits of read noise).

Gain 100 looks like about 19,000 FWC, which is 14.2 bits, with read noise of about 1.5, which is 0.6 bits. That means that you have 13.6 bits of usable range.

So the dynamic range difference between gain 0 and gain 100 is only 0.2 bits. But gain 100 gets its result with shorter exposure time.

For me, the answer is clear with this camera. Gain 100 is very much the sweet spot.

* By the way, I hate the term full well capacity used in this context. FWC is a hardware property of the sensor. Changing gain has no effect whatsoever on the sensor's FWC. That ZWO uses it to reflect the output from the ADC is both technically incorrect and confusing. They should just refer to dynamic range.


But if read noise is swamped at gain 0 or 100 you have the full dynamic range, right? What happens to DR after you integrate 100 RN-swamped subs, dithered, calibrated, all cooled to 0°C? Is gain 0 or gain 100 superior at that point?

#55 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 12:48 PM

But if read noise is swamped at gain 0 or 100 you have the full dynamic range, right? What happens to DR after you integrate 100 RN-swamped subs, dithered, calibrated, all cooled to 0°C? Is gain 0 or gain 100 superior at that point?

You are over thinking this.

 

In your example, the only variable is the S/N in the individual subs.  If the S/N in the gain 100 stack is better than the S/N in the gain 0 stack, then the gain 100 stack must be better, right?

 

It would be possible to do the math, but including all the factors would be above my pay grade.  But if the number of subs is the same in both stacks, and they are both dithered the same way, and both integrated the same way, then these things literally cancel out of the math, leaving S/N as the variable.  This makes the above conclusion both simple and correct.

 

I am guessing that your point is that the difference in the final result may be quite minor.  If so, I agree completely!  My point is that if you have two options that deliver essentially the same result, but one of them uses a shorter total exposure time, why would you not opt for that one?

 

I think that the main point of confusion here is the thing that I have repeatedly implied, but not stated outright:  The choice of exposure time per sub and total integration time are all about signal-to-noise ratio, and nothing else.  This is assuming that your tracking/guiding is spot on and flexure is controlled.  The absolute value of your pixels is meaningless.  Only the S/N matters.  If you want confirmation of this, just look at PixInsight.  In PI, the maximum pixel value is 1 (yes, one), and the minimum is 0.  All of the image details are hidden in fractional pixel values.



#56 calypsob

calypsob

    Cosmos

  • *****
  • Posts: 9,205
  • Joined: 20 Apr 2013
  • Loc: Virginia

Posted 21 September 2023 - 01:01 PM

You are over thinking this.

In your example, the only variable is the S/N in the individual subs. If the S/N in the gain 100 stack is better than the S/N in the gain 0 stack, then the gain 100 stack must be better, right?

It would be possible to do the math, but including all the factors would be above my pay grade. But if the number of subs is the same in both stacks, and they are both dithered the same way, and both integrated the same way, then these things literally cancel out of the math, leaving S/N as the variable. This makes the above conclusion both simple and correct.

I am guessing that your point is that the difference in the final result may be quite minor. If so, I agree completely! My point is that if you have two options that deliver essentially the same result, but one of them uses a shorter total exposure time, why would you not opt for that one?


I left out a key detail. Let's assume you need a 900s sub at gain 0 to achieve optimal DR and SNR, but gain 100 requires a 420s sub to get the same SNR. Is the dynamic range actually greater in the 900s gain 0 sub? Of course my next question would be how the 900s DR compares to the DR in 2 x 480s gain 100 subs.

I'm just thinking out loud. I really don't intend on doing tests, but in my experience gain 0 data felt a lot different to process than gain 100 data. It was not cleaner, but there seemed to be more background signal. I'm not exactly sure why, but I have speculations.

Edited by calypsob, 21 September 2023 - 01:03 PM.


#57 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 01:31 PM

I left out a key detail. Let's assume you need a 900s sub at gain 0 to achieve optimal DR and SNR, but gain 100 requires a 420s sub to get the same SNR. Is the dynamic range actually greater in the 900s gain 0 sub? Of course my next question would be how the 900s DR compares to the DR in 2 x 480s gain 100 subs.

Dynamic range is independent of exposure time.  This is why ZWO can plot it in the chart in the manual.

 

Your first question is about dynamic range.  ZWO answers this directly in the chart that I keep referencing (page 7 in this link).  According to the chart, gain 0 has a dynamic range of 13.9 stops.  And according to the chart, gain 100 has a dynamic range of 13.8 stops.  So I would consider 13.9 stops at gain 0 and 13.8 stops at gain 100 to be the definitive and direct answer to your first question.

 

I did some math in post 48 to calculate the dynamic range at both gain levels.  My answers are a bit different from ZWO's.  I got 13.8 and 13.6 stops for gain 0 and 100, respectively.  ZWO's numbers are probably more accurate, since I had to interpolate values visually from the chart, and I rounded my estimates to exaggerate the difference between the two gain settings.

 

For your next question, you are also comparing dynamic range.  As such, this is not really a new question.  I'm trying to determine what you are really getting at with your second question, but I'm not sure I understand it well enough to give an answer.



#58 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 01:45 PM

I'm just thinking out loud. I really don't intend on doing tests, but in my experience gain 0 data felt a lot different to process than gain 100 data. It was not cleaner, but there seemed to be more background signal. I'm not exactly sure why, but I have speculations.

Gain 0 data does have more background signal than gain 100.  It actually has more signal at every level.  This is because, if the gain 0 exposures are longer, more photons hit the sensor.  That part is not in dispute.

 

Where it gets confusing is that we really shouldn't be caring at all about raw signal.  What we are interested in is signal-to-noise ratio (S/N).  To consider S/N, we have to look at both signal and noise.  The interesting conclusions all happen because the read noise at gain 100 is much lower than the read noise at gain 0.  When you account for this, it makes gain 0 and gain 100 very similar in performance, but gain 100 requiring less exposure time to get to the same S/N as gain 0.

 

I'll also point out that this comparison is very different from what we used to see with CCD cameras.  This whole conversation is due to the behavior of the sensor where it has two different modes, with the noisier mode operating from gain 0 to gain 99, and the lower noise mode operating at gain 100 and above.  You almost have to treat it as two different sensors.  The gain 0 sensor is a high read noise instrument with a big FWC.  The gain 100 sensor is a lower read noise instrument with a smaller FWC.  If we were cross-shopping these as two different sensors, most of us would pick the lower read noise one (at least those of us who look at the whole story; people who look at FWC without considering read noise might choose the higher FWC one, thinking - incorrectly - that they were getting better dynamic range).



#59 dciobota

dciobota

    Soyuz

  • *****
  • Posts: 3,742
  • Joined: 07 Aug 2007

Posted 21 September 2023 - 03:31 PM

I think a false statement has been made here, that a 4 min exposure at gain 100 contains more data than a 5 min exposure at gain 0.
By the very definition of dynamic range, that is not true unless you are severely underexposing at 5 minutes. If your data is well exposed at 5 minutes at gain 0, then 4 minutes at gain 100 (which multiplies signal btw, not photons) will end up with severely clipped data. Sure, your read noise is lower so some of the fainter signal will make it into the "clean" range, but you will have lost signal at the upper end. It all depends on the target.
So it's not generally a true statement that you will gather more data in less time at gain 100. You have to adjust your exposure time in each case to nominally cover your desired range.

And this brings me to another point: calculating your exposures based on clipping stars, and using some arbitrary number of clipped pixels to say, yeah, this is the sweet spot. That again may work with a number of targets, but fails where the dynamic range of the target is large.
Take M31, which has a huge dynamic range covering the core to the outer arms. If you just limit yourself to clipping a few pixels on bright stars, you'll manage to get the core well enough, but the outer arms will be at or below your read noise. What I do in those cases is let the core blow out a little in order to let the faint data rise above the noise floor, then maybe take a set just for the core, same as M42. For other faint objects I do the same: I let the stars blow out a bit in order to capture the fainter detail. What I concentrate on is each sub showing some of the faint details, or at least some hint of them.
If you don't, you will eventually pull the faint details out, but your total exposure time will end up being much longer. This is the key to swamping the noise, allowing you to capture that detail without spending a lot of time collecting data.
Case in point: there was a thread a few years ago showing what you can do with 1 second exposures on a tripod. Pretty darn impressive, until you see how much total exposure time was involved and the f-ratio used. That's because the individual subs contained very little data above the read noise floor.

Edited by dciobota, 21 September 2023 - 03:34 PM.


#60 idclimber

idclimber

    Cosmos

  • *****
  • Posts: 7,806
  • Joined: 08 Apr 2016
  • Loc: McCall Idaho

Posted 21 September 2023 - 04:14 PM

My brief testing of the clipping of the highlights shows the difference between gain 0 and 100 is very close to one full stop. So the 60" exposure goes up to about 120". If the difference was 4 to 5, I certainly would not have bothered.



#61 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 04:27 PM

I think a false statement has been made here, that a 4 min exposure at gain 100 contains more data than a 5 min exposure at gain 0.

I haven't seen where anyone is saying that.  Certainly, I have been very specific - a few times now - that longer exposures collect more data, regardless of the gain.  Period.

 

What I am trying to explain is that signal-to-noise is (much) more important than the total amount of signal without consideration of noise.

 

If you offer me two sets of data with the following characteristics:

  • Set 1 has long exposures with higher read noise.
  • Set 2 has 1/10th the total exposure length and 1/10th the total signal, but so much less noise that the S/N is better than the first set.

I would take the second set of data every time.  It will lead to a better result.  This is something that seems difficult for people to grasp.

 

By the very definition of dynamic range, that is not true unless you are severely underexposing at 5 minutes. If your data is well exposed at 5 minutes at gain 0, then 4 minutes at gain 100 (which multiplies signal btw, not photons) will end up with severely clipped data. Sure, your read noise is lower so some of the fainter signal will make it into the "clean" range, but you will have lost signal at the upper end. It all depends on the target.

I have a few comments here.

 

Let's get the technical one out of the way here.  You are correct that it does not multiply photons - but it also doesn't directly multiply signal.  What it does do is amplify the analog electron voltage stored in the pixel before sending it to the analog-to-digital converter (ADC), very much like an audio preamplifier raises microphone level signals to something more usable.  This does effectively increase the signal, but it also introduces its own noise.  It also introduces rounding errors when the ADC has to emit an integer value.  This second effect is called "quantization error", and its impact is generally included in the read noise spec for the camera.

 

So it's not generally a true statement that you will gather more data in less time at gain 100. You have to adjust your exposure time in each case to nominally cover your desired range.

You will never collect more data in less time by changing the gain setting.  The amount of data that you collect over time is dependent on the amount of light your scope puts on the sensor, the sensor's quantum efficiency, and the exposure time.  Adjusting the gain only controls how the camera converts that data to a digital value.

 

It turns out that adjusting the gain does have some practical implications for the S/N.  If you don't buy my statement above that S/N is more important than just the signal itself, then nothing I am saying makes sense, and I cannot help you.  But seriously, S/N is the important thing.  Understanding that is key to getting the most out of your camera.
 



#62 Spaceman 56

Spaceman 56

    Fly Me to the Moon

  • *****
  • topic starter
  • Posts: 6,387
  • Joined: 02 Jan 2022
  • Loc: New Zealand

Posted 21 September 2023 - 04:34 PM

all this technical stuff is good.

 

could some of you experts please confirm Zambia's finding, that the single sub at 120 seconds is not overexposed and clipping?

 

 https://drive.google...?usp=share_link

 

thanks Spaceman



#63 idclimber

idclimber

    Cosmos

  • *****
  • Posts: 7,806
  • Joined: 08 Apr 2016
  • Loc: McCall Idaho

Posted 21 September 2023 - 04:51 PM

I measure 748 pixels that are at the max ADU of 65535. I would drop the exposure to 90 or even 60. 



#64 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 05:01 PM

could some of you experts please confirm Zambia's finding, that the single sub at 120 seconds is not overexposed and clipping?

It looks totally fine to me.

 

When I look at it unstretched, it is completely black, except for a handful of about a dozen tiny spots.  These are saturated star cores.  They are small enough and few enough, that they will not detract from the finished images at all.

 

I ran the PixInsight Statistics process on it and here are the numbers:

 

Tarrantulla_120_secs_016
            K
count (%)   100.00000
count (px)  26091648
mean        686.3
median      662.0
avgDev      89.8
MAD         66.7
minimum     439.0
maximum     65535.0

 

The results are normalized to a 16-bit data space, so the value of a saturated pixel is 65535.  The median pixel value is 662.0, which is around 1% of the total range.  This will actually need to be brightened in the final image, since you want the final background to be somewhere around 10% or so.  The average pixel value is just 24 ADUs above the background, so most of the interesting details are very, very faint.

 

To get an actual count of saturated pixels, we can convert only the saturated pixels to black (and all others to white) with this PixelMath expression:

 

iif($T == 1, 0, 1)

 

Finally, we can use the HistogramTransformation tool to actually count the black (formerly saturated) pixels.  To do this, just move the shadow slider in HT just a bit to the right.  For this image, we see that there are 748 saturated pixels.

 

This exposure could have been much, much longer before I would consider it "over exposed".


  • Spaceman 56 likes this

#65 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 05:07 PM

I measure 748 pixels that are at the max ADU of 65535. I would drop the exposure to 90 or even 60. 

No!

 

All of the saturated pixels are in the cores of bright stars.  It is fine for these pixels to be saturated.  By the time you process the image, you won't even notice them (I actually had to wipe the dust off my monitor to even identify them visually).  The halos around these same bright stars contain far more unsaturated pixels.  There is plenty of information there to get even the brightest stars colored correctly.

 

If you see saturated pixels in the object of interest, then you have a problem.  In my analysis in post 64, I noted that the rest of the values, outside these star cores, are actually quite low.  This exposure could have been much, much longer before being even remotely over exposed.  If anything, it's under exposed.

 

I'd point out here that this image has 26,091,648 pixels.  Would you really consider 0.003% saturated pixels to be a problem?  I reject far more pixels than that when I do CosmeticCorrection...


  • limeyx, Oort Cloud, Zambiadarkskies and 1 other like this

#66 idclimber

idclimber

    Cosmos

  • *****
  • Posts: 7,806
  • Joined: 08 Apr 2016
  • Loc: McCall Idaho

Posted 21 September 2023 - 05:32 PM

No!

Are you suggesting that I would not lower the exposure? LOL! Ok. 



#67 dciobota

dciobota

    Soyuz

  • *****
  • Posts: 3,742
  • Joined: 07 Aug 2007

Posted 21 September 2023 - 05:35 PM

Actually S/N is only part of the equation. In practice, it's a combination of S/N and also dynamic range. The reason is very simple, and that is actual imaging time. Downloading and storing a sub is not zero time; it takes a finite amount of time. A camera with a higher dynamic range allows longer exposures to be taken, and therefore fewer subs to download.
Imagine a perfect zero read noise camera with just one bit of dynamic range. You can indeed set your exposure to anything you want, preferably lower than the highest photon rate at the core of bright stars, and capture every single photon from faint objects faithfully... but probably at the cost of millions of subs. While the total exposure time will be less than, say, an ASI2600, the actual capture time may well be on the order of years and take terabytes to store.
So while it's nice to play with theory, I always like to point out that practical aspects often need to be considered here as well.

Btw, I have an ASI2600MC and shoot 5 min subs at f/7, gain 100.

And to be pedantic and indeed more accurate, the process of amplification is actually a multiplication via junction effect in a transistor, so technically electron multiplication is correct, followed by digitization where the actual read noise is introduced. But now we're starting to split hairs.

Edited by dciobota, 21 September 2023 - 05:39 PM.


#68 Robert7980

Robert7980

    Soyuz

  • -----
  • Posts: 3,846
  • Joined: 20 Nov 2022
  • Loc: Western North Carolina

Posted 21 September 2023 - 05:49 PM

Wade, I understand the logic, but I do suspect the longer integration time of gain 0 does have more data. I guess the question you are proposing is which option is better. My intuition tells me the following would be nearly equivalent.

 

Gain 0 - 240 subs at 120" (8 hrs) of luminance

Gain 100 - 480 subs at 60" (same 8 hrs of data)

 

Intuitively I do not believe that 2 hours of gain 100 will equal the 4 at gain 0. Happy to be proven wrong. 

You’re assuming there is that much difference between gain 0 and 100; there isn’t. What it’s doing is scaling the pixel signal to the conversion stage differently.
 

It’s going to get complicated soon… I know you hate that so I’ll get it wrong for simplicity… 

There are many problems with this approach. One is that the wells aren’t full even at 120 seconds at gain 100, so they will be very underutilized at gain 0. You’re taking a hit in the read noise department as well, so you can’t really compare them 1 to 1…

 

Gain 0 was designed for daylight terrestrial photography and I don’t see any real reason to use it for AP; it’s easier to just lower the exposure times if necessary. Some extremely bright objects are the exception…

The photons (photo-electrons) collected by the pixel don’t care if you are at gain 0 or gain 100, so the data on the front end is the same. What matters is how effectively that data is matched up when it is converted to digital. It turns out that gain 100 is more effective at doing that: it digitizes the lower part of the signal with less noise and more separation (contrast)…

 

It’s the same as audio: you don’t want to record a soft signal with a noisier preamp and then amplify it later. You’d really rather have the signal larger, with less noise, at the time of recording… It works exactly like that… The sound is the same; it’s all about how cleanly you lay the track down. Noise and a small signal are the two things you don’t want…

 

I know it’s a rambling mess, but I decided to leave it all haha… 

 

I’m not sure why it keeps living on, but 4 hours of 30 second subs isn’t the same as 4 hours of 5 or 10 minute subs; it’s just not even close… This has been tested a lot and it never works out the way the book says… It’s a mistake that’s been repeated over and over and over… Short subs are fine, but they aren’t equivalent… Try shooting your 2600 for 300 seconds next time and have a look at the noise; you don’t need many of those…

 

30x300
(attached image)

72x120
(attached image)

 

EDIT— Having technical difficulties with the confuser on this post, need to reboot… It’s a mess that probably should be deleted, I’m going to just let it be so go easy.


Edited by Robert7980, 21 September 2023 - 05:55 PM.


#69 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 05:57 PM

Are you suggesting that I would not lower the exposure? LOL! Ok. 

Yes, absolutely.

 

Are you suggesting that lowering the exposure time will improve this image in any significant way?  If so, could you please explain why?  I have already explained why those 748 saturated pixels are not a problem.  And I have shown that the important data is mostly very, very faint.

 

And I will toss in a caveat:  I claimed a few posts above that the image might be under exposed.  I am playing with the numbers now, and I retract that.  I think that it's swamping read noise just fine.

 

If anything, that underscores how good this sensor is.  The range of acceptable exposure times appears to be quite large.  There are lots of right answers, and you have to try pretty hard to get problematic exposure times.  It works well for those folks that want short exposures to lessen the work of the mount, and it also works well for those who want a smaller number of longer exposures.

 

And it does all of this with just one gain setting.  Nice.


  • Robert7980 likes this

#70 Robert7980

Robert7980

    Soyuz

  • -----
  • Posts: 3,846
  • Joined: 20 Nov 2022
  • Loc: Western North Carolina

Posted 21 September 2023 - 06:21 PM

To clarify, here’s a test that’s easy…

 

Take a 30 second sub and stretch the heck out of it until you have a lot of noise… Betting the stars look like trash…

 

Now take a 5 minute sub and put just enough stretch on it that the stars start to show… Not so bad eh… 

 

That is how to see the difference in practice… The point is, a lot of it is in the post processing, not necessarily the data the camera is capturing… Not always the case of course, and none of the methods are wrong; it’s just that shorter subs are trying to fix a problem that isn’t there to begin with… There’s lots of reasons to go shorter, but overexposing the 2600 usually isn’t one of them…

 

I’m pretty convinced the root of this topic for most is over stretching the data more than overexposing it. I know because I do it myself… Half a dozen little white dots on the linear image of pure black isn’t really a problem when you consider just how weak that poor noisy nebula is. In order to get rid of those dots you’re going to have to kill your nebula or spend a whole bunch of nights going at it… What I do to fix the super bright stars is just take a few minutes of short exposures to get more of them under control and replace them later if needed… Mostly I just use the one set of data and throw my backup star shots out, as the longer subs are usually fine…

 

Next night out, collect ten 300 second subs, stack them, and post the stack, and let’s see if we can’t get it looking ok… I’ll bet everything I hold dear that 30 minutes of 5 minute subs in B1 will blow your mind…


Edited by Robert7980, 21 September 2023 - 06:23 PM.


#71 Drothgeb

Drothgeb

    Vanguard

  • *****
  • Posts: 2,091
  • Joined: 12 Jan 2022
  • Loc: Maryland

Posted 21 September 2023 - 06:35 PM

all this technical stuff is good.

 

could some of you experts please confirm Zambia's finding, that the single sub at 120 seconds is not overexposed and clipping?

 

 https://drive.google...?usp=share_link

 

thanks Spaceman

I checked your sub in Affinity and couldn’t find any saturated pixels. They can be hard to see if it’s just individual pixels. I then checked it in Siril: it had 6 out of 17,000 stars with saturated pixels. That’s certainly not a problem at all. I also checked your histogram; it shows you have room for more exposure time too. You are fine shooting 120 sec, and could go longer if you want. I’d probably go 180 sec.

 

A couple of months ago I was in a Bortle 2 area, 8500’ elevation, 11% humidity. Conditions similar to yours with crystal clear skies. I started out shooting 120sec with my 2600MC @ F4.5. Very few saturated pixels, I would have gone 180 sec, except it was really windy. Ended up dropping back to 60 sec, and still throwing half of them out.

 

Longer subs will yield better detail in the faintest areas. Consider taking long subs for the nebula, then shorter subs for the stars, and combining them in processing.


  • Spaceman 56 likes this

#72 bobzeq25

bobzeq25

    ISS

  • *****
  • Posts: 36,284
  • Joined: 27 Oct 2014

Posted 21 September 2023 - 07:38 PM

I am looking at RAW subs. They look very bright.

 

 

If your subs look bright, they're stretched.  Stretching alters brightness arbitrarily.  Stretch less, they'll be less bright.

 

Below is what an unstretched sub straight from the camera looks like.  ALWAYS.  It is dark, always.  Click on the picture to enlarge.

 

Bottom line.  The fact that the stretched subs look bright is MEANINGLESS.  Totally meaningless.  It simply depends on how much you're stretching.

 

What you need to know is - how many pixels are saturated?  I generally go for a few hundred.

 

unstretched light.jpg


Edited by bobzeq25, 21 September 2023 - 07:40 PM.

  • Spaceman 56 likes this

#73 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 07:46 PM

And I will toss in a caveat:  I claimed a few posts above that the image might be under exposed.  I am playing with the numbers now, and I retract that.  I think that it's swamping read noise just fine.  And it does all of this with just one gain setting.  Nice.

OK, so my lesson for the day is to not try and multitask when one of the tasks is building software and the other is math associated with a completely different topic.  It's way too easy to mess something up, get weird numbers, and completely confuse yourself...

 

I found some time to take a look at just this, so here are the actual numbers regarding swamping of read noise.  As it happens, I have the same camera as the one used for the image we're looking at, and I run it at the same gain setting (according to the FITS header).  I have already calculated the gain and read noise for my specific camera, so I can use those values along with numbers taken from the image.  Here is how it breaks down:

 

I like to use the background sky to assess noisiness of the image.  S/N in the bright object areas should always be good (if not, we need a different conversation), so it's not interesting for this analysis.  For the image in question, the median pixel value is 662.  This is an excellent proxy for the background sky.

 

Here are the measured numbers for my camera:

 

Gain: 0.252 e-/ADU

Read Noise: 1.522 e-

Read Noise: 6.040 ADU

 

Here are the numbers from the image.  I am converting to electrons, since that's the domain where the math applies:

 

Median: 662 ADU = 166.824 e-

 

We know that shot noise is sqrt(signal), so here is the shot noise from the image:

 

Shot Noise: sqrt(166.824) = 12.916 e-

 

If we compare the ratio of shot noise to read noise, we get:

 

Shot Noise to Read Noise: 12.916:1.522 = 8.486:1

 

So for this particular exposure, we are swamping the read noise by about 8.5:1.  It's not quite 10:1, but it's perfectly reasonable.


  • Spaceman 56 and Robert7980 like this

#74 WadeH237

WadeH237

    Voyager 1

  • *****
  • Moderators
  • Posts: 11,652
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 21 September 2023 - 07:57 PM

That is how to see the difference in practice… The point is, a lot of it is in the post processing, not necessarily the data the camera is capturing

I find that this confuses the issue.

When we are talking about reasonable exposure times for subs, we are very specifically talking about raw, uncalibrated and unprocessed subs.  Once you do any processing, all bets are off with regard to exposure.  At that point, we would be talking about processing, which is a wholly different subject. 
 

I’m pretty convinced the root of this topic for most is over stretching the data more than overexposing it. I know because I do it myself… Half a dozen little white dots on the linear image of pure black isn’t really a problem when you consider just how weak that poor noisy nebula is. In order to get rid of those dots you’re going to have to kill your nebula or spend a whole bunch of nights going at it… What I do to fix the super bright stars is just take a few minutes of short exposures to get more of them under control and replace them later if needed… Mostly I just use the one set of data and throw my backup star shots out, as the longer subs are usually fine…

I agree with this, and said as much in my very first contribution to this topic at post 34.

 

The rest of my argument is specifically around whether 120 seconds is too long for a sub from this camera.  And after the introduction of a sample image, all of my math and comments have been specifically in regard to that image.

 

The reason that I haven't just moved on, is that I think that there are some important concepts here that are frequently misunderstood and/or misapplied.  I am expecting that quite a few readers are hearing "Blah. Blah. Blah. Ginger. Blah. Blah. Blah." in my writing.  But I hope that it's helping to make things click for a few folks, and that it'll be here for future readers (or maybe some of the current readers, when they come back to it later).


  • Foc, Tkall, Spaceman 56 and 1 other like this

#75 unimatrix0

unimatrix0

    Cosmos

  • *****
  • Posts: 7,527
  • Joined: 03 Jan 2021

Posted 21 September 2023 - 08:11 PM

If your subs look bright, they're stretched.  Stretching alters brightness arbitrarily.  Stretch less, they'll be less bright.

 

I had a feeling that Spaceman was looking at an autostretched file. 


  • bobzeq25 and Spaceman 56 like this

