
Image Acquisition - Crazy Idea?

27 replies to this topic

#1 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 09 February 2019 - 03:41 AM

So it's well known that there is a fine line between over- and under-exposing, and your result benefits greatly the closer you are to it.

 - Go too long and you blow out galaxy cores, lose star color, etc.

 - Go too short and you miss the fine detail

 

Is it a crazy idea to perform two different exposure runs and combine the data?

 

What I mean is this:

1. do a full session using the calculated exposure time and collect data as normal

2. then do a second full session going longer than that to get additional detail

In session 2 you will obviously over-expose certain areas, but those areas can be masked out in processing and the data from set 1 used there instead.

 

Not sure if processing can do the proper 'blending'.
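A minimal numpy sketch of the mask-and-blend idea (assuming two calibrated, registered stacks already scaled to a common flux level in [0, 1]; the array names are hypothetical):

import numpy as np

def hdr_blend(short_stack, long_stack, threshold=0.9, feather=0.05):
    # Weight ramps from 0 (keep long-exposure data) to 1 (use the
    # short-exposure data) as the long stack approaches saturation.
    w = np.clip((long_stack - (threshold - feather)) / feather, 0.0, 1.0)
    return w * short_stack + (1.0 - w) * long_stack

# combined = hdr_blend(stack_short, stack_long)   # hypothetical stacks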

 

 


  • calypsob likes this

#2 happylimpet

happylimpet

    Soyuz

  • *****
  • Posts: 3542
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 09 February 2019 - 04:35 AM

People do this.  However, I think you overstate the downside of underexposing. Provided you don't take ridiculously short exposures, there's little penalty in terms of added read noise, and there are also advantages to more, shorter subs, such as better rejection of artifacts, probably better guiding, etc.


  • psandelle, andycknight, Jon Rista and 2 others like this

#3 andycknight

andycknight

    Viking 1

  • -----
  • Posts: 625
  • Joined: 13 Aug 2010
  • Loc: UK

Posted 09 February 2019 - 04:54 AM

Yes, in the photographic world, it's what they call HDR (high dynamic range) photography.

 

In astronomy it's often called stacking; a single photograph might consist of...

 

20x 1 minute exposures

20x 10 minute exposures

maybe 5x 1 minute darks

plus say 5x 10 minute darks

and a few flats

 

The data can then be processed into a single image, using free software like 'Deep Sky Stacker'.

 

http://deepskystacke...lish/index.html

 

Regards

 

Andy.


  • MikeMiller likes this

#4 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 09 February 2019 - 05:39 AM

happylimpet
- Agreed on the advantages of short subs.....but all the short subs in the world won't collect as much light (photons) as a long one will.
I can take 10 (or 100) x 3 min subs and never get the photon count of 1 x 30.  I understand the guiding, light pollution, and noise tradeoffs of short and long subs and tend to shoot shorter subs myself as a result.  But typically the over-exposing of features limits my exposure times...especially when using Hyperstar.

andycknight
- Yes, my initial post assumed each set (1 and 2) would be full series runs of multiple exposures and filters.  And I have imaged the same target across multiple nights, but always stuck with a single plan of exposure counts and durations...generally LRGB 3:1:1:1.  And when all data is collected I have processed using a typical workflow of calibrating, registering, stacking, etc.
What I am now considering though, is after capturing one complete series as described here, capturing another full series and combining the two of them.  I assume all the calibrations would need to be separate and then maybe I can register/ stack all of them together.  Not sure the right process to combine.....interested if anyone has experimented in this way.



#5 ks__observer

ks__observer

    Viking 1

  • *****
  • Posts: 860
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 09 February 2019 - 07:04 AM

happylimpet
- Agreed on the advantages of short subs.....but all the short subs in the world won't collect as much light (photons) as a long one will.
I can take 10 (or 100) x 3 min subs and never get the photon count of 1 x 30.

You will get statistically the same photon count.

But as Happy said, more read noise hits.



#6 happylimpet

happylimpet

    Soyuz

  • *****
  • Posts: 3542
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 09 February 2019 - 08:14 AM

happylimpet
- Agreed on the advantages of short subs.....but all the short subs in the world won't collect as much light (photons) as a long one will.
I can take 10 (or 100) x 3 min subs and never get the photon count of 1 x 30.  I understand the guiding, light pollution, and noise tradeoffs of short and long subs and tend to shoot shorter subs myself as a result.  But typically the over-exposing of features limits my exposure times...especially when using Hyperstar.

 

I'm afraid that's just wrong - you get exactly as many photons. They don't care what exposure times you're using when they rush into your telescope and get focused onto the sensor.


  • Jon Rista likes this

#7 WadeH237

WadeH237

    Skylab

  • *****
  • Posts: 4150
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 09 February 2019 - 09:04 AM

...but all the short subs in the world won't collect as much light (photons) as a long one will.

As has been pointed out, 100 x 1 second exposures will collect the same number of photons as 1 x 100 second exposure.

 

The difference is that you add read noise to the data 100 times in the first case, but only once in the second case.  Your camera's read noise has a very real impact on your ability to integrate many short exposures, with lower read noise being better.

 

If you had a camera with zero read noise, then you could take very short individual exposures and integrate them to get the same result as a single long exposure.  In fact, there are people doing this with something called an EMCCD camera, which amplifies the charge before readout to reduce read noise to an insignificant level.
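A quick numpy Monte Carlo sketch of that point, with hypothetical flux and read-noise numbers:

import numpy as np

rng = np.random.default_rng(0)
flux, read_noise, total_time = 5.0, 3.0, 100   # e-/s, e- RMS, s (hypothetical)

# One 100 s exposure: one dose of shot noise, one dose of read noise.
long_exp = rng.poisson(flux * total_time) + rng.normal(0.0, read_noise)

# 100 x 1 s exposures: same expected photon count, 100 doses of read noise.
shorts = rng.poisson(flux, size=100) + rng.normal(0.0, read_noise, size=100)

print(long_exp, shorts.sum())   # similar totals: ~500 e- either way
# Total noise: sqrt(500 + 3^2) ~ 22.6 e- vs sqrt(500 + 100*3^2) ~ 37.4 e-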

 

Check out this page for a real world example of what you can do with an EMCCD.



#8 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 09 February 2019 - 09:21 AM

Interesting, as this hasn't been my experience.

I *always* get more detail from fewer, longer exposures than from many short ones (total time being equal)

Must be something else at play here.



#9 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 09 February 2019 - 11:10 AM

So I was swirling on this a bit more during my morning coffee....

 

Let's say I take a 60 sec sub and a particular pixel near the center of a galaxy registers 60000 ADU (out of the 65535 available).

Taking a 30 sec sub would, I believe, register around 30k...or close to it with similar equipment/conditions.

 

Now assume we take enough subs of each to total the same overall time..........

Stacking the two 60s subs would average back 60k for that pixel (ignoring noise and other variables): 120k/2.

Stacking the four 30s subs would average back 30k for that same pixel: 120k/4.

What am I missing here?

Thanks



#10 ks__observer

ks__observer

    Viking 1

  • *****
  • Posts: 860
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 09 February 2019 - 11:12 AM

Interesting, as this hasn't been my experience.

I *always* get more detail from fewer, longer exposures than from many short ones (total time being equal)

Must be something else at play here.

I have seen others make the same observation -- that there seems to be a discrepancy between theory and practice -- more than just the additional read noise.

Hope we get an answer at some point ....



#11 MikeMiller

MikeMiller

    Viking 1

  • -----
  • Posts: 577
  • Joined: 22 Jul 2014
  • Loc: Pittsburgh, PA, USA

Posted 09 February 2019 - 11:23 AM

When you are using fast optics on a bright target, such as Hyperstar on M45 or M42, you almost need to do this. It's the best way to correctly expose the bright core while still getting the fainter nebulosity.

As with everything in astrophotography, the real answer is "it depends".

#12 WadeH237

WadeH237

    Skylab

  • *****
  • Posts: 4150
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 09 February 2019 - 11:47 AM

So I was swirling on this a bit more during my morning coffee....

 

Let's say I take a 60 sec sub and a particular pixel near the center of a galaxy registers 60000 ADU (out of the 65535 available).

Taking a 30 sec sub would, I believe, register around 30k...or close to it with similar equipment/conditions.

 

Now assume we take enough subs of each to total the same overall time..........

Stacking the two 60s subs would average back 60k for that pixel (ignoring noise and other variables): 120k/2.

Stacking the four 30s subs would average back 30k for that same pixel: 120k/4.

What am I missing here?

Thanks

You are missing two things:

 

  • You don't have to restrict yourself to integer math to manipulate the data.
  • The pixel values coming out of the camera don't matter.

 

As an example, I think that we can all agree that PixInsight is capable of exceptional processing of astro images.  Consider that it uses floating point math and stores all pixel values in the range of 0 to 1.  The absolute brightest pixel that it can manipulate has a value of 1.  Absolute black has a value of 0.  All of the dynamic range of an image in PixInsight is made of pixel values between these two extremes of 0 and 1.
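As a quick illustration of that 0-to-1 representation (a numpy sketch; the sample values are arbitrary):

import numpy as np

adu = np.array([0, 30000, 65535], dtype=np.uint16)   # raw 16-bit camera values
norm = adu.astype(np.float64) / 65535.0              # PixInsight-style [0, 1]
print(norm)   # [0.0, 0.4577..., 1.0]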

 

What really matters is the signal to noise ratio of the data.  You can render the final result as bright or as dim as you want, regardless of the pixel values.



#13 WadeH237

WadeH237

    Skylab

  • *****
  • Posts: 4150
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 09 February 2019 - 11:53 AM

Interesting, as this hasn't been my experience.

I *always* get more detail from fewer, longer exposures than from many short ones (total time being equal)

Must be something else at play here.

I don't know what your camera's characteristics are, what your data looks like, or how you process it.

 

I can say that I routinely see rich detail in my calibrated and integrated data that is completely invisible by eye in individual sub exposures.

 

Also, keep in mind that unless you have exotic equipment like I linked above, your read noise does not approach zero.  It also occurs to me that the page I linked to above talks about the gear, but does not link to any finished images.  Check out this link to see some of the photos that they've taken with this setup.



#14 jerahian

jerahian

    Mariner 2

  • *****
  • Posts: 249
  • Joined: 02 Aug 2018
  • Loc: Maine

Posted 09 February 2019 - 12:40 PM

The exposure length is largely dependent on the camera you are using.  Cameras with high read noise need longer exposures, whereas cameras with lower read noise can do much shorter exposures.  In fact, the latter see rapidly diminishing returns from longer exposure lengths, and are more susceptible to problems that come with long exposures, such as satellite and airplane trails, wind gusts, etc.  Modern sensors have gotten so good with regard to read noise and QE that you see a trend toward shorter exposure lengths.  For these, such as the 1600, the benefit is greatly reduced after about 4-5 minutes.



#15 happylimpet

happylimpet

    Soyuz

  • *****
  • Posts: 3542
  • Joined: 29 Sep 2013
  • Loc: Southampton, UK

Posted 09 February 2019 - 12:49 PM

So I was swirling on this a bit more during my morning coffee....

 

Let's say I take a 60 sec sub and a particular pixel near the center of a galaxy registers 60000 ADU (out of the 65535 available).

Taking a 30 sec sub would, I believe, register around 30k...or close to it with similar equipment/conditions.

 

Now assume we take enough subs of each to total the same overall time..........

Stacking the two 60s subs would average back 60k for that pixel (ignoring noise and other variables): 120k/2.

Stacking the four 30s subs would average back 30k for that same pixel: 120k/4.

What am I missing here?

Thanks

You're right that the AVERAGE will be less for each exposure, but the total number of photons counted will be the same, and this is what matters, not the average in each sub. If your stacking software provides the average, it will give lower numbers for each pixel, but the noise will be lower too. In software you can multiply an image by 2, or divide by 603, or whatever, and have exactly the same image (after a different screen stretch).

 

What matters is how many photons you have in each pixel, and the total noise in each pixel. With shorter subs you get the same number of photons, a lower average per sub (which doesn't matter a fig), and slightly more total read noise.
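A small numpy sketch of that rescaling point, with synthetic data (signal 1000, noise 30, both arbitrary):

import numpy as np

rng = np.random.default_rng(1)
pix = rng.normal(1000.0, 30.0, size=100_000)   # synthetic pixel samples

for scale in (1.0, 2.0, 1.0 / 603.0):
    s = pix * scale
    print(scale, s.mean() / s.std())   # SNR ~33.3 every time; scaling is free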



#16 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22896
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 09 February 2019 - 01:28 PM

Interesting, as this hasn't been my experience.

I *always* get more detail from fewer, longer exposures than from many short ones (total time being equal)

Must be something else at play here.

What camera are you using? This depends a lot on how much read noise the camera has. Some cameras just require longer exposures, while others allow and even require shorter exposures.

 

Noise is an interesting thing, in that it is at its core a description of random and unknown variations around an expected value. The interesting thing about random and unknown variations is that when you combine one random variation with another, the larger variation will dominate the overall variance from the expected measure. When we say we have noise of, say, 10e-, we are talking about a mean deviation from this expectation, but the actual variation could be larger or smaller. This deviation could also occur as a positive or a negative relative to the expectation. If we have another noise of, say, 2e-, this noise will also result in random, unknown deviations; however, these will combine with the larger noise term. We cannot simply add 2e- to 10e- and say we now have 12e- read noise, though. The deviations are random, unknown, and can be positive or negative. We might have a +10e- deviation combined with a -2e- deviation, giving us a total deviation of 8e-. Or we might have a +5e- deviation combined with a +2e- deviation for a total of 7e-, or even +10 and +2 for 12e-. Or we might have a -2e- deviation combined with a +2e- deviation, giving us a total deviation of ZERO! Noise terms add in quadrature, which accounts for this odd way in which noise terms combine.

 

If we combine a 10e- noise with a 2e- noise, we must first square these, then add them, then take the square root:

 

Ntotal = SQRT(10^2 + 2^2) = SQRT(100 + 4) = SQRT(104) = 10.2e-

 

By combining our 2e- noise term with our 10e- noise term, the "impact" of the 2e- noise has been greatly diminished!! In effect, it represents an actual impact to the total noise in our image of a measly 0.2e-!!
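That quadrature sum, as a short Python sketch using the numbers from this example:

import math

def combine_noise(*terms):
    # Independent noise terms (e- RMS) add in quadrature.
    return math.sqrt(sum(t ** 2 for t in terms))

print(combine_noise(10, 2))        # 10.198... e-
print(combine_noise(10, 2) - 10)   # ~0.2 e- of actual impact from the 2e- term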

 

Because of this interesting aspect of noise, we can conclude that there comes a point of diminishing returns on exposure. If we continue to expose to the point where one particular noise term...that being the total shot noise from the light entering the scope, in our background signal...dominates, then the other noise terms effectively lose their "impact"; they become effectively meaningless. We call this "swamping", and we usually refer to read noise as the term to be swamped. We target read noise because we get one "unit" of read noise in every single sub exposure; it is count-dependent. ALL other noise terms...object shot noise, skyfog shot noise, even dark current shot noise...are purely time-dependent. It matters not how many subs you have; for a given total integration...say 5 hours...these other noise terms will sum up the same whether you acquire a single 5 hour exposure or 18000 1 second exposures. In fact, you can combine all the shot noise terms, and if together they are sufficient to swamp read noise, then read noise is swamped. Dark current is usually trivial overall, so usually we just use the background sky signal to determine whether we have swamped read noise. And this swamping needs to be done on a sub-exposure basis...you need to expose enough to swamp read noise in EVERY SUB.

 

So, how long you MUST expose for depends very much on how much read noise you have. A commonly used "swamp factor" is 10xRN^2. This means you want the background sky signal to be 10x as large as the read noise squared. So, if you have a camera with, say, 10e- read noise, then you want your background sky signal to be 10x stronger than 10^2, which is 10x100, which is 1000e-! That is quite a lot of signal, and depending on your image scale and aperture, it may indeed take a very long time to expose enough signal to achieve this goal. On the other hand, if your read noise is just 2e-, then you need a background signal of 10x2^2, which is 10x4, which is just 40e-. Purely from a background signal standpoint, all else being equal, you would need exposures just 4% as long as those necessary for a 10e- read noise camera. It may be that you only need 5 minute exposures with 2e- read noise, but 125 minute exposures with 10e- read noise. Things get more complex once you start considering differences in pixel size...and even more complex when you start considering differences in aperture and focal length. Even with low read noise, imaging with a smallish aperture at f/10 might still require very long exposures... Conversely, imaging with a very big aperture at f/2 with high read noise might require very short exposures.
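The 10xRN^2 rule of thumb as a short Python sketch (the factor of 10 is the convention named above, not a universal constant):

def swamp_target(read_noise, factor=10):
    # Background sky signal (e-) needed to swamp a given read noise.
    return factor * read_noise ** 2

print(swamp_target(10))   # 1000 e- of sky signal needed per sub
print(swamp_target(2))    # 40 e- -- exposures ~4% as long, all else equal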

 

In the end, what matters is that you make the background sky shot noise a much larger noise term than the read noise. If you do that, then you will have exposed sufficiently to render read noise largely moot. Interesting thing about stacking...the stack will have the same relative difference as a single sub. Previously, I demonstrated that swamping 2e- read noise with 10e- shot noise reduced the impact of the read noise to a mere 0.2e-. This ratio would hold in a stack. If we stacked 100 subs, we would have: SQRT(100 * (10^2 + 2^2)) = 102e-. This noise is exactly 10x larger than in the single sub, and the relative impact of read noise is the same...2e- vs. 100e-, 0.2e- vs. 10e-, it's 0.02 either way.


  • Stelios, andycknight, zackyd and 3 others like this

#17 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22896
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 09 February 2019 - 01:44 PM

So I was swirling on this a bit more during my morning coffee....

 

Let's say I take a 60 sec sub and a particular pixel near the center of a galaxy registers 60000 ADU (out of the 65535 available).

Taking a 30 sec sub would, I believe, register around 30k...or close to it with similar equipment/conditions.

 

Now assume we take enough subs of each to total the same overall time..........

Stacking the two 60s subs would average back 60k for that pixel (ignoring noise and other variables): 120k/2.

Stacking the four 30s subs would average back 30k for that same pixel: 120k/4.

What am I missing here?

Thanks

The average here is a bit misleading. You need to account for noise as well as signal, so SNR. Assuming you have sufficiently swamped the read noise with both exposures (which would most assuredly be the case if the core of a galaxy reached 60000 ADU!!! That would be a MASSIVE signal...), then the two would have basically the same SNR. Let's just assume for the moment that we have a gain of, say, 0.1e-/ADU and a bias offset of zero (just for simplicity's sake). That would mean a 60000 ADU measure represents 6000e- signal. This is a HUGE signal, massive! Just to be clear. ;) So the shorter exposures would have 3000e- signal. Let's say the camera has 10e- read noise (very high read noise).

 

SNRl = (2*6000)/SQRT(2 * (6000 + 10^2)) = 12000/SQRT(12000 + 200) = 12000/SQRT(12200) = 12000/110.5 = 108.6:1

SNRs = (4*3000)/SQRT(4 * (3000 + 10^2)) = 12000/SQRT(12000 + 400) = 12000/SQRT(12400) = 12000/111.4 = 107.8:1

 

While you can see that stacking twice as many subs with half the signal each results in more total read noise, the difference in SNRs in the end is a measly 0.74%! Totally and utterly meaningless. ;) Now, let's get down to a more realistic example, with say 500e- background signal rather than 6000e-.

 

SNRl = (2*500)/SQRT(2 * (500 + 10^2)) = 1000/SQRT(1000 + 200) = 1000/SQRT(1200) = 1000/34.6 = 28.9:1

SNRs = (4*250)/SQRT(4 * (250 + 10^2)) = 1000/SQRT(1000 + 400) = 1000/SQRT(1400) = 1000/37.4 = 26.7:1

 

The impact of read noise here is a little more obvious...however, in the end, it still amounts to a difference of roughly 8% in SNR. I would still call that largely meaningless. ;) But, what if we assumed that the camera we used shorter exposures with also had less read noise? Let's say it had 7e- read noise, rather than 10e-:

 

SNRl = (2*500)/SQRT(2 * (500 + 10^2)) = 1000/SQRT(1000 + 200) = 1000/SQRT(1200) = 1000/34.6 = 28.9:1

SNRs = (4*250)/SQRT(4 * (250 + 7.07^2)) = 1000/SQRT(1000 + 200) = 1000/SQRT(1200) = 1000/34.6 = 28.9:1

 

Well...now our SNRs are exactly the same! Despite the fact that we used exposures twice as long with one of the cameras. Now, if you compared a single longer exposure to a single shorter exposure...then, no matter how you slice it, the longer exposure will look better. It simply has more signal! And even with higher read noise, its SNR is still going to be higher. But in effect you are comparing apples to oranges. A fair comparison can only be achieved if you normalize the signals. Stack two of your short subs and compare them to the single longer sub...and the idea that a longer exposure "always" reveals more detail will quickly shatter. Further, if you are not sufficiently swamping read noise with a camera that has high read noise, and are able to more than sufficiently swamp it with a camera that has lower read noise, then there is the potential for the lower read noise camera to outperform the higher read noise camera on an equal-integration basis. You might be able to swamp 10e- read noise by 3x with long exposures, but it becomes relatively trivial to swamp 1-2e- read noise by 8-12x with rather short exposures. If you stacked 5 hours of data from both cameras, the low read noise camera will outperform here...it'll be capable of picking up fainter details with higher SNR. The margin will not be huge, but there will be a margin, and it'll favor lower read noise.
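The SNR arithmetic above, wrapped in a small Python helper (shot noise plus read noise only, matching the simplifying assumptions of this post):

import math

def stack_snr(n_subs, signal_per_sub, read_noise):
    # SNR of a stack of n identical subs, counting shot and read noise only.
    total_signal = n_subs * signal_per_sub
    total_noise = math.sqrt(n_subs * (signal_per_sub + read_noise ** 2))
    return total_signal / total_noise

print(stack_snr(2, 6000, 10))    # ~108.6:1
print(stack_snr(4, 3000, 10))    # ~107.8:1
print(stack_snr(2, 500, 10))     # ~28.9:1
print(stack_snr(4, 250, 7.07))   # ~28.9:1 -- same SNR from half-length subs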

 

I think a mistake a lot of people make when comparing cameras is to compare single subs out of camera. This is about the most unrealistic comparison you can make, and is bound to skew your opinion of the results. Compare "like integrations", calibrated stacks of subs of the same total amount of exposure (or as close as you can get) from each imaging system, and THEN you'll get a clearer idea of how two systems compare.


  • psandelle and zackyd like this

#18 calypsob

calypsob

    Aurora

  • *****
  • Posts: 4500
  • Joined: 20 Apr 2013

Posted 10 February 2019 - 04:23 AM

So it's well known that there is a fine line between over- and under-exposing, and your result benefits greatly the closer you are to it.

 - Go too long and you blow out galaxy cores, lose star color, etc.

 - Go too short and you miss the fine detail

 

Is it a crazy idea to perform two different exposure runs and combine the data?

 

What I mean is this:

1. do a full session using the calculated exposure time and collect data as normal

2. then do a second full session going longer than that to get additional detail

In session 2 you will obviously over-expose certain areas, but those areas can be masked out in processing and the data from set 1 used there instead.

 

Not sure if processing can do the proper 'blending'.

It really does depend on the camera that you use to attempt this.  I usually see people doing it with DSLRs.

I can think of a few astrophotographers offhand who combine different exposure lengths, intentionally blowing highlights to get a stronger background signal and then shooting shorter subs to smoothly restore highlight details.

 

Examples

 

Mario Cogo http://www.galaxlux....ieldGallery.htm

 

Hisayoshi Kato "Hirocun" https://www.flickr.com/photos/hiroc/

He does some insanely long exposures; in some images he blends in 30 minute OSC color exposures.

 

Yuriy Toropin https://www.flickr.c...ith/8020365385/


Edited by calypsob, 10 February 2019 - 04:24 AM.


#19 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 12 February 2019 - 09:49 PM

Jon....thanks for the comprehensive reply

 

Ok...so I get that the S/N ratio is more important than just the signal alone. It makes sense that the actual pixel value independently means less than what is "IN" that pixel in terms of good (signal) and bad (noise) information.

 

I'm using an Atik 460EX on an EdgeHD 14 scope.  Sometimes I use Hyperstar and other times I shoot off the back.

 

It seems from your explanation that I should be considering a 250e- background value in my images (chip read noise = 5e-).  Does this translate to a background ADU of 2500?


Edited by jsowens, 12 February 2019 - 10:49 PM.


#20 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22896
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 12 February 2019 - 11:43 PM

Jon....thanks for the comprehensive reply

 

Ok...so I get that the S/N ratio is more important than just the signal alone. It makes sense that the actual pixel value independently means less than what is "IN" that pixel in terms of good (signal) and bad (noise) information.

 

I'm using an Atik 460EX on an EdgeHD 14 scope.  Sometimes I use Hyperstar and other times I shoot off the back.

 

It seems from your explanation that I should be considering a 250e- background value in my images (chip read noise = 5e-).  Does this translate to a background ADU of 2500?

What is the gain in e-/ADU of that camera? If it is 0.1e-/ADU, then yes, 250e- would mean 2500 ADU. It depends, though, on what the gain actually is...0.1e-/ADU is really high...



#21 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 13 February 2019 - 07:35 AM

Looks like 0.27e-/ADU

 

So following the 10xRN^2 rule gets me the 250e- 'swamp factor', but I'm not sure how you get to ADU.

Is it 250e- * gain * 100?

If so, I guess my ADU target would be 6750 with my camera

 

 



Edited by jsowens, 13 February 2019 - 08:32 AM.


#22 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22896
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 13 February 2019 - 05:07 PM

ADU = Signal / Gain + Bias

 

So, if you have 250e- signal, a bias offset of say 500, and gain of 0.27e-/ADU, then:

 

ADU = 250 / 0.27 + 500 = 926 + 500 = 1426 ADU
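The same conversion as a short Python sketch (the bias value of 500 is just the example figure above):

def signal_to_adu(signal_e, gain_e_per_adu, bias_adu=0.0):
    # Convert an electron count to camera ADUs for a given gain and offset.
    return signal_e / gain_e_per_adu + bias_adu

print(signal_to_adu(250, 0.27, 500))   # ~1426 ADU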



#23 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 16 February 2019 - 12:16 PM

Can you elaborate a little on the "bias offset"

Is this a camera characteristic?

A sky/image characteristic?

 

Thanks


Edited by jsowens, 16 February 2019 - 12:26 PM.


#24 Jon Rista

Jon Rista

    ISS

  • *****
  • Posts: 22896
  • Joined: 10 Jan 2014
  • Loc: Colorado

Posted 18 February 2019 - 11:56 AM

Can you elaborate a little on the "bias offset"

Is this a camera characteristic?

A sky/image characteristic?

 

Thanks

Every camera has an offset that shifts the base signal level above 0. This is necessary because noise will result in the signal oscillating around the mean, both positive and negative. If the mean is 0, then any negative values get clipped. So a non-zero bias offset is usually applied during readout to ensure that noisy pixels cannot clip to black.
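A small numpy sketch of why the offset matters, assuming a hypothetical 5e- read noise and an offset of 100:

import numpy as np

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 5.0, size=100_000)   # zero-mean read noise, e-

clipped = np.clip(noise, 0, None)            # no offset: negatives clip to 0
offset = np.clip(noise + 100.0, 0, None)     # offset applied before clipping

print(clipped.mean())          # ~2.0, not 0 -- the distribution is distorted
print(offset.mean() - 100.0)   # ~0.0 -- subtract the offset, data intact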

 

There are some exceptions here. Some Nikon DSLRs and Sony MILCs do not use a bias offset, and instead purposely clip. This tends to be problematic, because it throws away information necessary for successful calibration.

 

What camera are you using? 



#25 jsowens

jsowens

    Vostok 1

  • -----
  • topic starter
  • Posts: 132
  • Joined: 20 Feb 2014

Posted 18 February 2019 - 01:04 PM

Atik460EX



