Beginner AP Concepts and Principles


#1 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 17 January 2020 - 05:46 PM

Here is my 1st cut at a list of AP Concepts and Principles that I have learned in my 1st year of doing AP (sorry about the long list; it is much more readable in Word):

 

1) For imaging, I am concerned mainly with Field of View (FoV) for composition, image scale for detail of small targets, and Signal to Noise Ratio (SNR).
2) Deep Sky Objects (DSO) have a wide range of sizes, and no one setup can fit all of these targets.  These range from Very Small: planetary nebulas (M57, 1'x1'); Small: most galaxies (M51, 11'x6'); Medium: most nebulas (M42, 90'x60'); Large: M31 Andromeda (180'x40'), North American + Pelican Nebulas (120'x120'), Cygnus Loop (180'x180').  By contrast, the Sun and full Moon are ~30' in diameter.  Planets are very small, but brighter.
3) Not all targets are visible at any location, depending on the viewer’s latitude, time of the year (based on the DSO’s Right Ascension, RA, and viewer’s longitude), horizon obstructions (trees, buildings, mountains), and sky darkness (Bortle scale).
4) It is best to image DSOs when they have reached an altitude of >30deg, in order to minimize the amount of atmosphere the light has to penetrate.  Best viewing is when the DSO is around the meridian, as it will be at its highest point in the sky.
5) A particular DSO's altitude at the meridian is determined by its Declination (Dec) compared to the viewer's Latitude.  Meridian Altitude = IF(Dec>LAT, 90-Dec+LAT, Celestial Equator + Dec), all in degrees; equivalently, Meridian Altitude = 90 - |Latitude - Dec|.  The Meridian Altitude will be at the Zenith (directly overhead) if the DSO's Dec equals the viewer's Latitude.  Celestial Equator altitude is 90 minus the viewer's Latitude.  (See the geometry/sampling sketch after this list.)
6) FoV is determined by the combination of the scope's focal length (FL) and the sensor's physical size.  You can get a larger FoV with a shorter FL and/or a larger sensor.
7) Image Scale (arcsec/px) is determined by the combination of the scope's FL and the sensor's pixel size.  You can get finer image scale (smaller number, more resolution) by a longer FL and/or smaller pixel size.
8) There is no advantage to a finer image scale than determined by Nyquist Critical Sampling, as it results in no extra achievable resolution.  Nyquist Critical Sampling is the rate (frequency no higher than, or arcsec/px no smaller than) at which ALL of the embedded information (filtered signal) can be reconstructed.
9) Nyquist Critical Sampling is a fraction of the size of the finest detail (or a multiple of the highest frequency component) of the filtered signal.  I refer to this divisor of size or multiplier of frequency as the Nyquist Factor.  Estimates of this Nyquist Factor are between 3 and 4 for 2-D images, and 2 for 1-D waveforms.
10) An estimate of the filtered signal's finest detail is the smallest measured FWHM of any star.  The true signal is filtered (or blurred) by seeing conditions, the resolving power of the telescope, and the tracking ability of the mount.
11) Critical Sampling ~ SQRT(Seeing^2 + Dawes^2 + ...) / Nyquist Factor.  (I am not sure whether to add the mount's error in quadrature into the prior equation, or to treat the equation as the limit with a theoretically perfect mount and degrade the estimate, since seeing conditions would also show up in the measurement of the guiding accuracy.  This could also be the reason for the Nyquist Factor range.)
12) Seeing is a measure of blurring caused by turbulence in the Earth's atmosphere, measured in arcsec.  This is what causes stars to “twinkle”, like the image of a coin on the bottom of a swimming pool jiggles with the ripples on the water surface.
13) Dawes Limit is a measure of blurring due to diffraction at the edge of the telescope's aperture, measured in arcsec.  The larger the diameter of the scope, the sharper the image, and the lower the Dawes Limit.  Dawes Limit (arcsec) = 116 / Aperture Diameter (mm).
14) A reflecting telescope's central obstruction (CO) will blur the image more than estimated from its aperture alone, as the CO (and supporting vanes) moves some of the light intensity out of the central Airy disk peak and into the surrounding rings, making stars larger.  The supporting vanes of Newtonian telescopes also move some of the light into diffraction spikes.
15) I do not worry about undersampling (within reason) when imaging larger targets, as my limiting factor is fitting the target in the FoV of the sensor.
16) SNR = Signal / SQRT(Signal + Skyfog + DC + RN^2).  (See the SNR sketch after this list.)
17) Skyfog is sky brightness caused by Light Pollution or the Moon’s illumination.
18) Skyfog is a large percentage of the noise; it causes the peak in the histogram and is roughly the median value.  Much better SNR is achieved under darker skies (low Bortle scale), where good results can be attained with much shorter total integration time than in urban environments (high Bortle scale).
19) Dark Current (DC) increases as the camera sensor warms, and it accumulates for the length of the exposure.  This is why cooled cameras are essential for DSO photography.  Non-cooled cameras will require many more, but shorter, exposures to keep DC in check.  Cooling is not necessary for the short exposures typical of solar, lunar, and planetary photography.
20) Read Noise (RN) is the electronic noise added when the camera reads the image off the sensor into the saved photo file.  It is composed of random variability and fixed pattern noise.
21) Shot Noise (SN) is the statistical variability in the actual flow of photons from the target.  Photons arrive in quantized packets, but the flow is not constant.  The longer the total integration time, the closer the captured signal converges to the true signal.  SN shows up as speckles in the darker parts of the image, instead of the true smooth background.
22) Quantization Error is caused by differences between the number of the ADC’s intensity levels (e.g. 12-bit has 4,096, 14-bit has 16,384, 16-bit has 65,536) and the FWC of the sensor.  This can cause differing photon counts to convert into the same output number.
23) SNR of a single DSO sub is very low, with the signal just above the noise.  This is why many subs are needed: signal accumulates linearly with the number of subs, while the random noise accumulates as a SQRT.  Read noise is added in full with every sub, which is why stacks of very many short subs carry a penalty.
24) F/ratio is the ratio of the scope's FL divided by its aperture diameter.  Faster f/ratio scopes (lower f/) are beneficial as they give each pixel more light per unit of time.  Fast f/ratio scopes have wide objective lenses (or mirrors) compared to the scope's length.  Fast f/ratio scopes are expensive and heavy.
25) SNR is determined by the total integration time, f/ratio and the transmission efficiency of the telescope and image train, and the QE of the camera sensor. 
26) Transmission efficiency will be determined by the number of lens surfaces, quality of the lenses, quality of the mirrors, quality of the lens/mirror coatings, and degree of central obstruction.  Generally, telescopes will have higher transmission efficiency than camera lenses.
27) SNR increases with the SQRT(total integration time).  This is achieved with the combination of the length of the individual sub exposures, and the number of subs.  SNR improves in a geometric fashion, with the same improvement going from 1 sub to 2 subs, as from 2 to 4, 4 to 8, …, 100 to 200.
28) Aperture affects the SNR through its impact on the f/ratio.
29) F/ratio "speed" can be achieved by a larger aperture (with the same FL) or shorter FL (with the same aperture), or a combination of both.
30) A “slow” f/ratio scope (high f/) will need longer total integration time than a “faster” scope.
31) The minimum desired sub length is one in which all of the sensor's pixels have values > 0 (no black clipping).
32) The minimum desired sub length is driven by the camera’s read noise (RN).  Cameras with higher RN need longer subs in order to minimize the accumulation of RN in the entire stack, with the desire to “swamp the read noise”. 
33) The maximum desired sub length is driven by the scope's f/ratio, the camera's Full Well Capacity (FWC), the brightness of the target, the darkness of the sky, and the tracking/guiding ability of the mount.  The longest sub lengths are possible with: darker skies, dimmer targets, high f/ratio scopes, cameras with a large FWC, and high quality mounts that are properly adjusted and polar aligned.
34) The ideal exposure time takes advantage of the sensor's full dynamic range, with all pixels having values > 0 (no black clipping) and zero (or very few) pixels at full scale (white clipping).  Viewing the histogram, the curve starts after the left edge, peaks (skyfog) in the left portion (1/4 to 1/3 of the way across is ideal), and trails off rapidly toward the right edge.
35) CCD cameras generally have higher RN than CMOS cameras, and a typical CCD's FWC is higher than a CMOS camera's.  This leads to exposure-length rules of thumb that are significantly longer for CCD cameras than for CMOS cameras.
36) Dark calibration frames, with exactly matched camera gain (or ISO) and exposure length, characterize the camera DC.  Many dark frames are captured (with lens cap on) and averaged.  The average dark frame is subtracted from the averaged Light frame (actual photo of target) to remove the effect of DC.
37) Bias calibration frames have very short exposure times and characterize the camera RN.  Many bias frames are captured (with lens cap on) and averaged.  The average bias frame is subtracted from the averaged Light frame (actual photo of target) to remove the effect of RN.  Bias (or Dark Flats) is also subtracted from Flat frames.
38) Flat calibration frames characterize the optical distortions in the image train.  Many flat frames are captured (with an evenly illuminated target) and averaged.  The average flat frame is subtracted from the averaged Light frame (actual photo of target) to remove the effects of optical distortions.  These distortions include vignetting (darkening in the edges/corners) and smudges on the lens/filters, and dust on the sensor.
39) Larger apertures will increase the exposure of point sources but not extended objects, assuming the same f/ratio.
40) Guiding/tracking accuracy is reduced by: heavier loads, long moment arm loads, large surface area loads (wind resistance), imbalanced loads, fine image scales, and mechanical flexure.
41) There are no meaningful advantages to binning on the camera for CMOS, other than smaller files and faster downloads (not an issue with DSOs).  Any resolution reduction could be done in software by various means, with the advantage of being able to use various degrees of the full resolution.  If you bin at the camera level, the incremental resolution is lost forever.
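
For items 5-13, here is a minimal Python sketch of the geometry and sampling arithmetic described above.  The functions and the example numbers are purely illustrative (roughly an Esprit 100 at 550mm native FL with 2.4um pixels, imaging M51 from ~38 deg N latitude); swap in your own values.

import math

def meridian_altitude(dec_deg, lat_deg):
    """Item 5: altitude of a target as it crosses the meridian, 90 - |Lat - Dec|."""
    return 90.0 - abs(lat_deg - dec_deg)

def image_scale_arcsec_per_px(pixel_um, focal_length_mm):
    """Item 7: image scale in arcsec/px = 206.265 * pixel size (um) / FL (mm)."""
    return 206.265 * pixel_um / focal_length_mm

def fov_arcmin(sensor_mm, focal_length_mm):
    """Item 6: field of view of one sensor dimension, in arcminutes."""
    return math.degrees(sensor_mm / focal_length_mm) * 60.0

def critical_sampling(seeing_arcsec, aperture_mm, nyquist_factor=3.0):
    """Items 11-13: add seeing and the Dawes Limit in quadrature, then divide
    by the Nyquist Factor (3-4 for 2-D images) to get a target arcsec/px."""
    dawes = 116.0 / aperture_mm
    blur_fwhm = math.hypot(seeing_arcsec, dawes)
    return blur_fwhm / nyquist_factor

print(meridian_altitude(47.2, 38.2))        # M51 from ~38 deg N culminates at ~81 deg
print(image_scale_arcsec_per_px(2.4, 550))  # ~0.90 arcsec/px
print(fov_arcmin(13.2, 550))                # ~82.5 arcmin on the sensor's long side
print(critical_sampling(2.5, 100))          # ~0.92 arcsec/px target in 2.5" seeing

Comparing the image scale against the critical-sampling estimate tells you whether a given scope/camera pairing is over- or undersampled (items 8 and 15).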
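
And for items 16, 23, and 27, a companion sketch of how single-sub SNR and stack SNR behave.  The electron counts are made up purely for illustration; only the formulas matter.

import math

def sub_snr(signal_e, skyfog_e, dark_e, read_noise_e):
    """Item 16: per-pixel SNR of one sub, all terms in electrons; RN enters squared."""
    return signal_e / math.sqrt(signal_e + skyfog_e + dark_e + read_noise_e ** 2)

def stack_snr(single_sub_snr, n_subs):
    """Items 23 and 27: stacking N subs improves SNR by SQRT(N)."""
    return single_sub_snr * math.sqrt(n_subs)

one = sub_snr(signal_e=20, skyfog_e=200, dark_e=2, read_noise_e=1.6)
print(one)                    # ~1.3 for a single sub
print(stack_snr(one, 4))      # ~2.7 after 4 subs
print(stack_snr(one, 100))    # ~13.3 after 100 subs; 200 subs would give ~18.9

Doubling the sub count always buys the same factor of SQRT(2), which is the "geometric" improvement described in item 27.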

 

Comments?

 

Additions?


Edited by SilverLitz, 18 January 2020 - 12:59 PM.

  • schmeah, shortonjr and jerobe like this

#2 Madratter

Madratter

    Voyager 1

  • *****
  • Posts: 11,727
  • Joined: 14 Jan 2013

Posted 17 January 2020 - 06:27 PM

Wow, what a list. No wonder people find this hobby daunting at times. :)

 

I would actually quibble with some of it. For example #38 - flats are not just about optical distortions in the image train. They are also about differing pixel responses. A lot of people are unaware of that, but it is still true.

 

EDIT:

 

Here is an example of what a flat from my system looks like in Ha.

VoyagerSkyFlatsExample.JPG

 

Notice the uneven pixel response to the left side of the image. Flats help correct that.


Edited by Madratter, 17 January 2020 - 09:03 PM.

  • bobzeq25 and SilverLitz like this

#3 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 18,538
  • Joined: 27 Oct 2014

Posted 17 January 2020 - 06:39 PM

Leaving off differences in style/emphasis.

 

15.  Skyfog is sky brightness.  The cause is irrelevant.

 

34.  You'll almost always clip some stars high.  Adopting zero or few pixels at full well depth is an exercise in futility.

 

36, and in general.  Things like dark current have two components, an average value, and random variability around that average.  If you want to talk about "removing" such things, you need to differentiate.   See Woodhouse, who covers this extremely well.  Note that I have not used the n word, its use tends to produce much meaningless frenzy.

 

38.  Flats also correct for uneven pixel response.

 

40.  Guiding accuracy is reduced (in rough order of importance) by long focal lengths, poor seeing, weight, and poor configuration of the autoguiding program.   Flexure between the guide scope and main optical train is a problem (and a nasty one), and must be prevented by mounting the guide scope rigidly or using an off axis guider.

 

41.  Just wrong.  The main advantage of binning is to trade some resolution (which may be only theoretical) for improved signal to noise ratio.  It works for both CCDs and CMOS.  CCDs have an additional (smaller) advantage of reducing read noise by binning, CMOS does not have this added advantage.

 

Additions.

 

Unlike visual, the mount is more important than the scope.  Costs reflect that.

 

Things are almost always more complicated than you think.  The "best" this and that is usually the best personal compromise for an individual.

 

The best equipment and procedures for a beginner to learn AP with will most likely not be the same as the best equipment and procedures for an experienced imager.  Run any advice you get here through that filter.

 

More total imaging time is more important than working on getting the subexposure time "better".  Subexposure time just needs to be in the ballpark.

 

If your budget is low, starting out with a camera and a lens is an excellent idea.  A scope adds cost (both money and time), fast.  The bigger the scope, the bigger the cost added.  It's not linear, things get out of hand fast.

 

Even when starting out, do not omit the camera calibration frames; bias, flats, darks.  Doing so will likely lead to bad habits in processing.  Processing is tough enough without having to unlearn bad habits.

 

Skill in processing is more important than what program you use.


Edited by bobzeq25, 17 January 2020 - 06:50 PM.

  • TimN, bobharmony and SilverLitz like this

#4 descott12

descott12

    Vendor - Solar Live View

  • -----
  • Vendors
  • Posts: 1,572
  • Joined: 28 Sep 2018
  • Loc: Charlotte, NC

Posted 17 January 2020 - 07:22 PM

Very nice list. You have learned a lot in a year! This list is the reason I do EAA, not AP!

 

38. I believe flats are not subtracted, but rather they are divided?


  • ks__observer and SilverLitz like this

#5 TelescopeGreg

TelescopeGreg

    Apollo

  • -----
  • Posts: 1,341
  • Joined: 16 Jul 2018
  • Loc: Auburn, California, USA

Posted 17 January 2020 - 07:31 PM

Related to #4, and depending on your mount, it's also best to not image when either axis of the mount is vertical (crosses east/west or north/south).  When they are, the associated gear train may not remain meshed, and that can reduce guiding accuracy because of the inherent looseness (backlash) in the mechanism.  This is especially true of the Celestron AVX mount, where overhead imaging is fraught with peril.  Purposely mis-balancing the scope can help minimize the issue.


  • SilverLitz likes this

#6 shortonjr

shortonjr

    Explorer 1

  • -----
  • Posts: 57
  • Joined: 04 Apr 2015
  • Loc: south carolina

Posted 17 January 2020 - 09:03 PM

WOW!

I think I feel a sticky note on this one...

Very nice list, very well put together and thank you for sharing your knowledge!


  • SilverLitz likes this

#7 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 17 January 2020 - 10:54 PM

Wow, what a list. No wonder people find this hobby daunting at times. :)

 

I would actually quibble with some of it. For example #38 - flats are not just about optical distortions in the image train. They are also about differing pixel responses. A lot of people are unaware of that, but it is still true.

 

EDIT:

 

Here is an example of what a flat from my system looks like in Ha.

VoyagerSkyFlatsExample.JPG

 

Notice the uneven pixel response to the left side of the image. Flats help correct that.

At the resolution of the laptop that I am currently using (and the CN LoRes requirement), I cannot see the small structure variation.  I definitely see irregular vignetting, and I see that on my flats as well.

 

My Ha flat (w/ massive stretch) is even stranger, as the dark ovalish thick band is noticeably inside the frame's edge.  SII is also somewhat similar, but OIII looks more like my RGB flats.  I assume it is wave interference patterns, somewhat akin to sound "comb" filtering and standing-wave peaks/valleys throughout a room, which show up at low frequencies (long wavelengths).  It shows in Ha and SII, as their wavelengths are similar.  The RGB filters have much larger passbands, so their interference patterns get averaged out.  This is totally my own synthesized intuition, and may be off base.

 

I have also heard about the individual pixel sensitivity response, though in my very limited experience looking at flats, it seems that flats have been significantly spatially averaged, correcting larger structure rather than pixel-level variation.  Any irregular pixel sensitivity response that has larger-scale structure would still be adjusted.  Again, I have no specific knowledge here other than intuition.  In the PI (or other software) code, does the master flat perform a 2-D average of the average of the individual flats?  If PI does not perform this 2nd averaging (averaging pixel x,y with its surrounding pixels), then the individual pixel sensitivity response adjustment would indeed occur.


Edited by SilverLitz, 17 January 2020 - 11:00 PM.


#8 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 17 January 2020 - 11:07 PM

Very nice list. You have learned a lot in a year! This list is the reason I do EAA, not AP!

 

38. I believe flats are not subtracted, but rather they are divided?

Yes, it is sloppy language; it should be more like "reducing the effects" ....

 

I believe flats normalize the lights by multiplying the light's x,y pixel by (the flat's average response / the flat's x,y pixel response).  It is a scaling (mainly division), not a subtraction.
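
That scaling is easy to demonstrate.  Here is a minimal numpy sketch with toy numbers (apply_flat is a hypothetical helper, not PI's actual implementation; both frames are assumed to be dark/bias corrected already):

import numpy as np

def apply_flat(light, master_flat):
    """Normalize a light frame by the flat: light(x,y) * mean(flat) / flat(x,y)."""
    flat = master_flat.astype(np.float64)
    return light.astype(np.float64) * flat.mean() / flat

# Toy case: the second pixel sits in a corner that only sees 90% of the
# center illumination, so the flat division boosts it back in line.
light = np.array([[1000.0, 900.0]])
flat = np.array([[30000.0, 27000.0]])
print(apply_flat(light, flat))   # [[950. 950.]]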


Edited by SilverLitz, 17 January 2020 - 11:15 PM.


#9 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 17 January 2020 - 11:14 PM

Related to #4, and depending on your mount, it's also best to not image when either axis of the mount is vertical (crosses east/west or north/south).  When they are, the associated gear train may not remain meshed, and that can reduce guiding accuracy because of the inherent looseness (backlash) in the mechanism.  This is especially true of the Celestron AVX mount, where overhead imaging is fraught with peril.  Purposely mis-balancing the scope can help minimize the issue.

Yes, I should have mentioned balancing East heavy.  I do that with a small velcro'ed weight on the counterweight when the scope is on the West side of the pier, and remove the weight after the meridian flip.

 

My emphasis on #4 had to do with darker sky and less atmosphere around the meridian.

 

With my G11 with its RA extension, I image through the meridian before flipping.  I do all of my flips manually (using APT), as I do not trust the software to do the flip unattended.



#10 bigeastro

bigeastro

    Apollo

  • *****
  • Posts: 1,340
  • Joined: 20 Feb 2015
  • Loc: Southwest Florida

Posted 18 January 2020 - 12:18 AM

My head hurts.  What a year.  Congratulations, you learned in one year what has taken me years and is still an ongoing process.  It took me about six to ten years to learn the #1 Rule.   The most important thing for deep sky and wide field imaging (a lesser issue for the latter) is to purchase the best mount you can't afford.  This will shave off years of frustration.  That is why it is #1 on the list; the rest sometimes does not even seem to matter in relative terms.


Edited by bigeastro, 18 January 2020 - 12:20 AM.

  • SilverLitz likes this

#11 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 18 January 2020 - 07:37 AM

My head hurts.  What a year.  Congratulations, you learned in one year what has taken me years and is still an ongoing process.  It took me about six to ten years to learn the #1 Rule.   The most important thing for deep sky and wide field imaging (a lesser issue for the latter) is to purchase the best mount you can't afford.  This will shave off years of frustration.  That is why it is #1 on the list; the rest sometimes does not even seem to matter in relative terms.

Yes, the mount is VERY important for AP, and it needs to be added to the list.  That is a lesson I thankfully learned 2nd hand, much of it from this forum early on.  My 1st and current mount is a G11.

 

My initial scope choice was not as good; I got an ED102CF after watching some of Trevor Jones' excellent AstroBackYard.com videos (highly recommended).  The 714mm FL is a 'tweener, too long for most nebulas and too short for most galaxies, and its focuser is seriously lacking.  Last spring, I corrected this by getting an Esprit 100, which I have used mostly with a 0.75x FR/FF at 413mm, f/4.13.  I need to add some of this as well.  It is implied in point #2, but a beginner will have to think about it more deeply to understand what it means for scope FL and sensor size.

 

There is so much to add, and the list is already unwieldy.  I need to break the list down into subsections.


  • bobzeq25 likes this

#12 Madratter

Madratter

    Voyager 1

  • *****
  • Posts: 11,727
  • Joined: 14 Jan 2013

Posted 18 January 2020 - 10:40 AM

Here is something I learned early on that isn't on your list (that I could see).

 

There are a million reasons you can find to decide not to image on a night. Low transparency. Partly cloudy for an hour or two predicted. High Humidity. The neighbor next door is welding (happened to me). The ground is wet. The wind is high. The trees have leaves. The leaves are off the trees and blowing around. The Moon is out. The mosquitos are dive bombing anything that moves.

 

Instead of finding excuses not to image, find excuses to image. You won't get good at this unless you get out there and actually do it.

 

If you want to just collect imaging equipment, that is fine. It is your money. It is your hobby. I love fine equipment too. But if you actually want to be an imager, you have to get out there from time to time.

 

You can learn all those things on your list, but if you don't learn this, it won't amount to much.


Edited by Madratter, 18 January 2020 - 10:40 AM.

  • TimN, bobzeq25 and SilverLitz like this

#13 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 18 January 2020 - 11:39 AM

Leaving off differences in style/emphasis.

 

15.  Skyfog is sky brightness.  The cause is irrelevant.

 

34.  You'll almost always clip some stars high.  Adopting zero or few pixels at full well depth is an exercise in futility.

 

36, and in general.  Things like dark current have two components, an average value, and random variability around that average.  If you want to talk about "removing" such things, you need to differentiate.   See Woodhouse, who covers this extremely well.  Note that I have not used the n word, its use tends to produce much meaningless frenzy.

 

38.  Flats also correct for uneven pixel response.

 

40.  Guiding accuracy is reduced (in rough order of importance) by long focal lengths, poor seeing, weight, and poor configuration of the autoguiding program.   Flexure between the guide scope and main optical train is a problem (and a nasty one), and must be prevented by mounting the guide scope rigidly or using an off axis guider.

 

41.  Just wrong.  The main advantage of binning is to trade some resolution (which may be only theoretical) for improved signal to noise ratio.  It works for both CCDs and CMOS.  CCDs have an additional (smaller) advantage of reducing read noise by binning, CMOS does not have this added advantage.

 

Additions.

 

Unlike visual, the mount is more important than the scope.  Costs reflect that.

 

Things are almost always more complicated than you think.  The "best" this and that is usually the best personal compromise for an individual.

 

The best equipment and procedures for a beginner to learn AP with will most likely not be the same as the best equipment and procedures for an experienced imager.  Run any advice you get here through that filter.

 

More total imaging time is more important than working on getting the subexposure time "better".  Subexposure time just needs to be in the ballpark.

 

If your budget is low, starting out with a camera and a lens is an excellent idea.  A scope adds cost (both money and time), fast.  The bigger the scope, the bigger the cost added.  It's not linear, things get out of hand fast.

 

Even when starting out, do not omit the camera calibration frames; bias, flats, darks.  Doing so will likely lead to bad habits in processing.  Processing is tough enough without having to unlearn bad habits.

 

Skill in processing is more important than what program you use.

Thanks!

 

I generally agree, but disagree with you on 41) Binning on CMOS.

 

My point is that for CMOS, the camera is not actually doing any hardware binning; it is a software averaging exercise, such as averaging 2x2 pixels into one value.  I expect this can be done in post and be just as effective (or more so), and it gives you the option to do it or not.  Since CMOS cameras are reading every pixel, there is no reduction in RN, in contrast to CCD.  I do not see how binning on CMOS can improve SNR, as both the aggregate signal and noise are the same.  The averaging will smooth out (blur) both noise and signal to make it aesthetically pleasing, whether on the camera or in PI/Ps.

 

I have read that some cameras actually reduce bit-depth (resolution) when they bin.  (I think there is a mention of this in the ASI6200 thread in the CCD/CMOS forum, not that the ASI6200 does it.)  In this case, the quantization noise will be increased and SNR reduced.  Even if a 14-bit camera outputs 14-bit when binning (or 12->12, 16->16), this exercise is NOT increasing the bit-depth the way stacking multiple samples in PI/DSS does when outputting a 32-bit result (not that there is a full 32 bits of info).  The camera's binned output would still be a 14-bit representation of the average of four 14-bit numbers, with extra quantization error added if this average does not land on an exact 14-bit value.  Doing the 2x2 averaging in PI would not add any extra quantization error, as the 32-bit depth has the resolution to encode this averaged value (in 0.25 of original camera LSB units).  In a computer numerical analysis exercise (with limited bit-depth), you want to keep as much precision as you can early in the process, as precision is lost with various procedures, and the order in which the processes occur makes a difference in the accuracy of the final result.
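
To make the bit-depth point concrete, here is a minimal numpy sketch of the 2x2 software averaging described above (bin2x2 is a hypothetical helper name).  The rounding line only illustrates the extra quantization error a camera would add if it wrote the binned result back out at the original integer bit depth; whether any specific camera actually does that is not something this sketch claims.

import numpy as np

def bin2x2(img):
    """Average each 2x2 block into one value in floating point, so the
    fractional part of the average is preserved (no added quantization)."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2].astype(np.float64)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

tile = np.array([[100, 101],
                 [101, 101]], dtype=np.uint16)   # four 14-bit-style pixel values
print(bin2x2(tile))              # [[100.75]] kept exactly in float
print(np.round(bin2x2(tile)))    # [[101.]] what an integer-only output would store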


Edited by SilverLitz, 18 January 2020 - 11:42 AM.


#14 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 18,538
  • Joined: 27 Oct 2014

Posted 18 January 2020 - 12:11 PM

Thanks!

 

I generally agree, but disagree with you on 41) Binning on CMOS.

 

My point is that for CMOS, the camera is not actually doing any hardware binning; it is a software averaging exercise, such as averaging 2x2 pixels into one value.  I expect this can be done in post and be just as effective (or more so), and it gives you the option to do it or not.  Since CMOS cameras are reading every pixel, there is no reduction in RN, in contrast to CCD.  I do not see how binning on CMOS can improve SNR, as both the aggregate signal and noise are the same.  The averaging will smooth out (blur) both noise and signal to make it aesthetically pleasing, whether on the camera or in PI/Ps.

 

I have read that some cameras actually reduce bit-depth (resolution) when they bin.  (I think there is a mention of this in the ASI6200 thread in the CCD/CMOS forum, not that the ASI6200 does it.)  In this case, the quantization noise will be increased and SNR reduced.  Even if a 14-bit camera outputs 14-bit when binning (or 12->12, 16->16), this exercise is NOT increasing the bit-depth the way stacking multiple samples in PI/DSS does when outputting a 32-bit result (not that there is a full 32 bits of info).  The camera's binned output would still be a 14-bit representation of the average of four 14-bit numbers, with extra quantization error added if this average does not land on an exact 14-bit value.  Doing the 2x2 averaging in PI would not add any extra quantization error, as the 32-bit depth has the resolution to encode this averaged value (in 0.25 of original camera LSB units).  In a computer numerical analysis exercise (with limited bit-depth), you want to keep as much precision as you can early in the process, as precision is lost with various procedures, and the order in which the processes occur makes a difference in the accuracy of the final result.

Our disagreement is somewhat semantic, somewhat not.

 

Semantic - I agree binning in CMOS can be done well in software.  That was implied, it's good to have it explicit.

 

Not.  Binning always improves signal to noise ratio, just as larger pixels (which is basically what binning does) and larger (numerically) image scales do.  That, not the lesser advantage of lower read noise, is the main reason for binning.  Often people use the gain in snr to reduce the amount of total imaging time needed. 

 

You're trading resolution (theoretically, there may or may not be much of an actual loss, in various circumstances, for various reasons) for snr.  This trade is common when imaging. 

 

The above assumes all other things are equal.  If a specific camera reduces bit depth when binning, that's an issue with the camera, not with binning.  Even among CCDs, some cameras bin better than others.  As always, it's complicated.

 

Minor point.  CMOS  generally has lower read noise than CCD anyway.


Edited by bobzeq25, 18 January 2020 - 12:23 PM.

  • SilverLitz likes this

#15 Madratter

Madratter

    Voyager 1

  • *****
  • Posts: 11,727
  • Joined: 14 Jan 2013

Posted 18 January 2020 - 12:17 PM

I think the key words in SilverLitz's original post for #41 were on camera. The argument wasn't that binning won't improve S/N. The argument was that since there is no inherent advantage with CMOS (as opposed to CCD) to doing so on camera, it often makes sense to wait. For example, that way if you have an exceptional night of seeing, you haven't lost that potential resolution. Whereas if you bin on camera, that resolution is gone.


Edited by Madratter, 18 January 2020 - 01:02 PM.

  • bobzeq25 and SilverLitz like this

#16 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 18,538
  • Joined: 27 Oct 2014

Posted 18 January 2020 - 12:31 PM

I think the key words in SilverLitz's original post for #41 were on camera. The argument wasn't that binning won't improve S/N. The argument was that since there is no inherent advantage with CMOS (as opposed to CCD) to doing so on camera, it often makes sense to wait. For example, that way if you have an exceptional night of seeing, you haven't lost that potential resolution. Whereas if you bin on camera, that resolution is gone.

Completely agree, but the idea that "binning is useless with CMOS" is a very common beginner misunderstanding.  41 needs rewording to not  be on that bandwagon.  Better would be something like:

 

41.  Binning can be used, in certain circumstances, by both CMOS and CCD cameras.  With CCDs there's an additional (but smaller) advantage of reduced read noise if one bins in hardware.  With CMOS, there is no such advantage, binning in software works just as well as binning in hardware.


Edited by bobzeq25, 18 January 2020 - 12:32 PM.

  • SilverLitz likes this

#17 Peregrinatum

Peregrinatum

    Viking 1

  • *****
  • Posts: 842
  • Joined: 27 Dec 2018
  • Loc: South Central Valley, Ca

Posted 18 January 2020 - 12:36 PM

Great list... AP is such a wonderful pursuit, always so much to learn.


  • SilverLitz likes this

#18 WadeH237

WadeH237

    Fly Me to the Moon

  • *****
  • Posts: 5,312
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 18 January 2020 - 12:40 PM

Completely agree, but the idea that "binning is useless with CMOS" is a very common beginner misunderstanding.  41 needs rewording to not  be on that bandwagon.  Better would be something like:

 

41.  Binning can be used, in certain circumstances, by both CMOS and CCD cameras.  With CCDs there's an additional (but smaller) advantage of reduced read noise if one bins in hardware.  With CMOS, there is no such advantage, binning in software works just as well as binning in hardware.

I have a slightly different take on this.

 

In my opinion, there is exactly one reason for binning a CMOS camera, and that is to reduce the file size of the subexposures.  You can get the same S/N benefit by resampling the image during processing, and you have more control over how (and when, in terms of steps) that it works.

 

With CCD cameras, there is a definite advantage in binning at capture time.  The read noise of a single binned pixel is similar to the read noise of a single (unbinned) pixel, but the signal for a given exposure time is 4x.  Since we tend to set our minimum exposure times on beating read noise, this is not an insignificant advantage - especially when imaging behind filters.
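
Here is a tiny sketch of that arithmetic, with made-up numbers chosen to be read-noise limited (10 e- of signal per pixel per sub, 8 e- read noise, sky and dark current ignored to isolate the RN effect):

import math

signal_e, rn_e = 10.0, 8.0   # illustrative values only

# Sum four pixels in software: four reads, so four doses of RN^2.
software_sum = 4 * signal_e / math.sqrt(4 * signal_e + 4 * rn_e ** 2)

# Hardware-bin the same 2x2 block on a CCD: one read for the whole bin.
hardware_bin = 4 * signal_e / math.sqrt(4 * signal_e + rn_e ** 2)

print(software_sum, hardware_bin)   # ~2.3 vs ~3.9 when read noise dominates

With a low-read-noise CMOS sensor the two numbers converge, which is why resampling in processing can do just as well there.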


  • SilverLitz likes this

#19 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 18 January 2020 - 01:17 PM

Semantically, I think of binning as being on camera, not in post, though a resolution reduction in post would have the same effect.

 

Instead of a pure file conversion into lower resolution files, I would expect convolution would work well, with more control of how much, smoother transitions, and the ability to mask out the effect in the high signal areas; reducing noise in the areas of low signal, but not reducing resolution in areas of high signal.  I have never done this, but it logically appeals to me.  Does it work in practice?

 

There will be large files, eating HD space, but at least the cost of storage is much lower than it used to be.  In anticipation, in the last 2 months, I have bought 4x 4TB and 2x 3TB HGST UltraStar HDDs, and an 8TB WD Essentials external drive; buying when prices are attractive.  Also, I have bought extra hot-swappable sliding HD caddies for my HP 840.



#20 bobzeq25

bobzeq25

    Hubble

  • *****
  • Posts: 18,538
  • Joined: 27 Oct 2014

Posted 18 January 2020 - 02:28 PM

Semantically, I think of binning as being on camera, not in post, though a resolution reduction in post would have the same effect.

 

Instead of a pure file conversion into lower resolution files, I would expect convolution would work well, with more control of how much, smoother transitions, and the ability to mask out the effect in the high signal areas; reducing noise in the areas of low signal, but not reducing resolution in areas of high signal.  I have never done this, but it logically appeals to me.  Does it work in practice?

 

There will be large files, eating HD space, but at least the cost of storage is much lower than it used to be.  In anticipation, in the last 2 months, I have bought 4x 4TB and 2x 3TB HGST UltraStar HDDs, and an 8TB WD Essentials external drive; buying when prices are attractive.  Also, I have bought extra hot-swappable sliding HD caddies for my HP 840.

No.  You are overlooking a lot here, as people often do when constructing theoretical models in their heads.  Important factors are often left off.  That is why I questioned your original 41; it exhibited some lack of understanding.

 

Binning (in camera or not) effectively creates bigger pixels, and bigger pixels have fundamental advantages with regard to snr.  That's the main point, although there are (as always) details.
 


Edited by bobzeq25, 18 January 2020 - 02:30 PM.


#21 WadeH237

WadeH237

    Fly Me to the Moon

  • *****
  • Posts: 5,312
  • Joined: 24 Feb 2007
  • Loc: Snohomish, WA

Posted 18 January 2020 - 03:44 PM

And I have one more thought on binning, which is a bit of a peeve for me.

 

To me, binning is, by definition, done before analog to digital conversion is complete.  Combining pixels after that point is resampling.  Here is one reference on binning that gives some good detail.  The effect on read noise is very much a part of the expectation when one does binning.

 

When the combination is done in software, it only confuses the subject to call it "binning".  As much as I would like to blame the newer camera manufacturers' marketing departments for this confusion, it's been going on for a long time.  MaxIm/DL has had a software "binning" function for at least as long as I have been using it, back at version 2 or so.  They may have even started it.

 

There are more than enough confusing concepts in astrophotography.  We don't help ourselves by misusing terminology to add even more. 


  • SilverLitz likes this

#22 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 18 January 2020 - 03:57 PM

Here is my 1st NB image, North America + Pelican Nebulas, shot with a Samyang 135mm camera lens at f/2.8 and my ASI183MM-Pro. It has been significantly cropped, as I had issues with the ZWO EOS adapter (which I firmed up with electrical tape to remove the "slop"), resulting in one night's camera framing rotating radically.

 

NA_SHO (LoRes).JPG


Edited by SilverLitz, 18 January 2020 - 04:10 PM.

  • Madratter likes this

#23 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 18 January 2020 - 04:02 PM

Here is my M33 LHaRGB, shot with an Esprit 100 and TSAPORED075 FR/FF, at f/4.13, and my ASI183MM-Pro.  It was cropped closer to square because I collected data over more than a month and my camera orientation changed to fit other targets, as M33 was always my secondary target of the night.

 

M33_LHaRGB60 (LoRes).JPG



#24 ks__observer

ks__observer

    Apollo

  • *****
  • Posts: 1,125
  • Joined: 28 Sep 2016
  • Loc: Long Island, New York

Posted 18 January 2020 - 04:04 PM

Re #4 

Red is less susceptible to atmospheric extinction than shorter wavelengths.

Shoot red and Ha at lower altitudes, and blue, green, and OIII at higher altitudes.


  • SilverLitz likes this

#25 SilverLitz

SilverLitz

    Viking 1

  • -----
  • topic starter
  • Posts: 502
  • Joined: 17 Feb 2018
  • Loc: Louisville, KY

Posted 18 January 2020 - 04:10 PM

Here is my M31 in HaRGB, shot with a Canon 70-200mm f/2.8L IS ii camera lens, at f/2.8 and 200mm, and my ASI183MM-Pro.

 

M31_200mm_LHaRGB_90_10 (LoRes).JPG


  • ks__observer likes this

