Shorten total integration time with new equipment

astrophotography equipment

#26 Peregrinatum

Peregrinatum

    Surveyor 1

  • *****
  • Posts: 1,506
  • Joined: 27 Dec 2018
  • Loc: South Central Valley, Ca

Posted 15 August 2020 - 12:59 PM

Wow! So I guess it really works... original image right after integration, then a clone of it with IntegerResample applied to downsample 2x. You can see the SNRWeights from SubframeSelector and how the binned image has twice the SNR. My original image scale was 0.34 arcsec/pixel, so this will be helpful for my oversampled imaging.

 

Now my question is when is the best time to apply the downsampling?  So far I have seen that it can be performed after integration, after deconvolution, or right before stretching.... hmmmmmmmmmmm

 

[attached screenshot: SubframeSelector SNRWeight comparison of the original integration vs. the 2x-downsampled clone]



#27 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 15 August 2020 - 01:24 PM

I would think right after integration.  

 

To me, the idea is to start your post-processing with the best possible data.  More integration time in the field gives you better data, but binning can give you the same result with less time.  

 

Noise reduction and other algorithms used to improve your image invariably have side effects.  It's always a struggle to keep from going too far while still getting the desired improvements.  If you start with better data, you can use less aggressive processing to get the same result.

 

-Dan



#28 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 1,833
  • Joined: 09 Aug 2016
  • Loc: Lynnwood, WA, USA

Posted 15 August 2020 - 03:29 PM

SNR depends strongly on resolution. Improving focus and guiding will let you reach a fainter limiting magnitude more quickly, and limiting magnitude is the only accurate way I know of to gauge SNR without a known reference image.

#29 WadeH237

WadeH237

    Fly Me to the Moon

  • *****
  • Posts: 6,113
  • Joined: 24 Feb 2007
  • Loc: Ellensburg, WA

Posted 15 August 2020 - 06:30 PM

Now my question is when is the best time to apply the downsampling?

I process my data at native resolution to the best of my ability, and I downsample as the very last step, for presentation.


  • Peregrinatum likes this

#30 Peregrinatum

Peregrinatum

    Surveyor 1

  • *****
  • Posts: 1,506
  • Joined: 27 Dec 2018
  • Loc: South Central Valley, Ca

Posted 16 August 2020 - 07:59 AM

So interesting, such a great hobby!

 

I suppose now I am wondering how far I can take this downsampling to increase SNR. 3x, 4x?

 

The image scale with the C925HD is 0.34 arcsec/pixel, way oversampled for my Bortle 7 skies with typically average seeing... at what point does increased downsampling start to degrade the image?



#31 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 16 August 2020 - 09:16 AM

The Bortle scale measures light pollution, so it doesn't really affect sampling.  Seeing, on the other hand, does.

 

Here's a good calculator that can tell you whether your pixel scale is appropriate, given your scope, seeing conditions, pixel size, binning, etc.

 

https://astronomy.to...ccd_suitability

 

Not sure what camera you have, but my guess is you'd be ok at 2x or 3x binning.

 

-Dan



#32 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 1,833
  • Joined: 09 Aug 2016
  • Loc: Lynnwood, WA, USA

Posted 16 August 2020 - 03:04 PM

SNR of an image by itself is not well defined without a reference image. SNR of an individual star is well defined, because the source object (the reference) is known. If you want to know your SNR in a meaningful way, you need to measure a star of known brightness: you measure the ADU counts within a given aperture, you measure the ADU counts in an annulus around the star, and you get a direct SNR measurement. A plot of SNR versus magnitude, for a given exposure time, accurately characterizes the performance of your system.
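A minimal sketch of that aperture-plus-annulus measurement in Python/numpy (the star position, radii, and gain are hypothetical, and read noise and dark current are ignored for brevity):

```python
import numpy as np

def star_snr(img, x0, y0, r_ap=6.0, r_in=10.0, r_out=15.0, gain=1.0):
    """Estimate the SNR of a star from aperture counts and an annular background ring."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)

    aperture = r <= r_ap                      # pixels belonging to the star
    annulus = (r >= r_in) & (r <= r_out)      # background ring around the star

    sky_per_px = np.median(img[annulus])      # robust sky estimate (ADU/pixel)
    n_ap = aperture.sum()

    star_adu = img[aperture].sum() - sky_per_px * n_ap   # background-subtracted star counts
    star_e = star_adu * gain                             # ADU -> electrons
    sky_e = sky_per_px * gain * n_ap                     # total sky electrons in the aperture

    # Shot-noise-limited estimate: noise ~ sqrt(star + sky) electrons in the aperture
    return star_e / np.sqrt(star_e + sky_e)
```

Repeating this over stars of known magnitude gives the SNR-versus-magnitude plot described above.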

#33 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 16 August 2020 - 03:08 PM

SNR of an image by itself is not well defined without a reference image.

He's not trying to determine the SNR of an image by itself.  If you look at the graph and data above, he is comparing it to a reference image.  His goal is to compare the SNR before and after binning, so the reference image is the "before" image.

 

-Dan



#34 555aaa

555aaa

    Vendor (Xerxes Scientific)

  • *****
  • Vendors
  • Posts: 1,833
  • Joined: 09 Aug 2016
  • Loc: Lynnwood, WA, USA

Posted 16 August 2020 - 03:15 PM

You can't compare the SNR of the binned image to the original, because it contains one quarter of the information content. I probably should step out of this thread because it is too controversial. For the OP: my experience with fast scopes under light pollution has been bad, but good under dark skies. Light-polluted skies favor slower but optically better systems. Ciao.

#35 Rasfahan

Rasfahan

    Sputnik

  • -----
  • topic starter
  • Posts: 44
  • Joined: 12 May 2020
  • Loc: Hessen, Germany

Posted 16 August 2020 - 03:23 PM

You can't compare the SNR of the binned image to the original, because it contains one quarter of the information content. I probably should step out of this thread because it is too controversial. For the OP: my experience with fast scopes under light pollution has been bad, but good under dark skies. Light-polluted skies favor slower but optically better systems. Ciao.

Sorry that you want to step out, I think your input could be valuable.

 

I, too, had understood that with a "fast" camera-scope system (high etendue, not necessarily low f-ratio), I would be able to decrease the amount of time needed to overcome the light pollution in my area with sufficient object signal. It has been my experience that, for extended objects, I need far less integration time on my f/4.5 61EDPH than on my f/8 RC8 in broadband with the same camera. Of course, I am interested in your practical experience with fast and slow optics under light-polluted skies. What were the problems with the fast optics? Was it the aperture, the f-ratio, or the general build quality? I do not think any of the scopes I listed in the initial post have really bad optical quality (although I have seen EdgeHD pictures with slight CA).

 

I am not sure I understand what you mean by "information content"; please explain. So far I had assumed that, given the seeing at my site (between 2" and 3" RMS, judging from guiding graphs and star FWHM), there would be no sense in sampling at 0.48 arcsec/pixel, as I do unbinned with the ASI1600. Instead, 0.96 arcsec/pixel should nicely sample 3" RMS seeing. I am not quite sure how the binned picture contains less information.

 

Concerning SNR: I am aware that the PixInsight "SNRWeight" is only an estimate of the SNR, which cannot be directly measured without photometry. But I had understood that a comparison of two images by SNRWeight is valid even between different optics/sensors.

 

If I misunderstood the concepts, feel free to recommend literature. So far I have found it difficult to read up on the mathematical foundations of our hobby; maybe I am looking in the wrong places. Most literature seems to take a "hands-on" approach rather than teach the basic optics/physics.


Edited by Rasfahan, 16 August 2020 - 03:37 PM.


#36 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 16 August 2020 - 03:26 PM

You can't compare the SNR of the binned image to the original, because it contains one quarter of the information content. I probably should step out of this thread because it is too controversial. For the OP: my experience with fast scopes under light pollution has been bad, but good under dark skies. Light-polluted skies favor slower but optically better systems. Ciao.

Well, I can't speak for anyone else, but I'm here to learn.  I've done some testing with my gear, but I'm certainly no expert on this.

 

I'm looking into buying an ASI6200 to use with my EdgeHD 8, and doing a 2x bin. I'm assuming I'll get a 2x increase in SNR compared to not binning because of the things I've read and because of the tests I've done.  

 

But, if there are other factors to consider I'd love to know about it.  I don't really want to spend that kind of money only to be disappointed in the results.  So if you have other information that might help my decision, I'm all ears. :-)

 

-Dan



#37 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 16 August 2020 - 03:42 PM

I am not sure I understand what you mean by "information content"; please explain. So far I had assumed that, given the seeing at my site (between 2" and 3" RMS, judging from guiding graphs and star FWHM), there would be no sense in sampling at 0.48 arcsec/pixel, as I do unbinned with the ASI1600. Instead, 0.96 arcsec/pixel should nicely sample 3" RMS seeing. I am not quite sure how the binned picture contains less information.

Not sure if he'll step back in, but I'll throw my 2 cents in here.

 

Keeping in mind that both signal and noise are "information", when you bin 4 pixels together, you replace four pieces of information with one.  So you are indeed losing information.

 

But my take is, that's exactly the point of binning.  As you say, you're not really losing signal detail because you're already oversampling.   The desirable signal is spread out, "blurred" if you will, over many pixels anyway.  On the other hand, the noise does show detail at the individual pixel level.  If you zoom in, you can see distinct variations from one pixel to the next.

 

So when you bin, you average 4 pixels together and essentially smooth out, or blur, the noise to some degree.  But since the signal is already spread out too much, you aren't really losing detail there.
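A quick numerical check of that averaging argument, as a minimal numpy sketch with synthetic values (a flat "extended object" plus per-pixel Gaussian noise; 2x2 average binning stands in for IntegerResample in average mode):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                                        # flat extended-object level, ADU
img = signal + rng.normal(0, 10, size=(1024, 1024))   # sigma = 10 ADU of pixel-level noise

# 2x2 average binning: reshape into 2x2 blocks and take the mean of each block
binned = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(img.std())      # ~10  -> per-pixel SNR ~ 100/10 = 10
print(binned.std())   # ~5   -> per-pixel SNR ~ 100/5  = 20, roughly the 2x gain seen above
```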

 

-Dan



#38 Rasfahan

Rasfahan

    Sputnik

  • -----
  • topic starter
  • Posts: 44
  • Joined: 12 May 2020
  • Loc: Hessen, Germany

Posted 16 August 2020 - 03:57 PM

Not sure if he'll step back in, but I'll throw my 2 cents in here.

 

Keeping in mind that both signal and noise are "information", when you bin 4 pixels together, you replace four pieces of information with one.  So you are indeed losing information.

 

But my take is, that's exactly the point of binning.  As you say, you're not really losing signal detail because you're already oversampling.   The desirable signal is spread out, "blurred" if you will, over many pixels anyway.  On the other hand, the noise does show detail at the individual pixel level.  If you zoom in, you can see distinct variations from one pixel to the next.

 

So when you bin, you average 4 pixels together and essentially smooth out, or blur, the noise to some degree.  But since the signal is already spread out too much, you aren't really losing detail there.

 

-Dan

Hi, Dan,

 

yes, I understand that. I was referring to "information content" as a bit of a vague statement in the context. I studied some computer science long ago, but from what I (dimly) recall, in our case information transmittance (channel capacity, if you consider light from distant objects to be information) is actually linked to SNR (Shannon-Hartley-Theorem). This is why I asked about what he meant with "information content". I am sure there are different definitions in other disciplines (and also, I have since trained and worked for 20 years as a physician, not sure I remember anything correctly). I am really looking for a book that goes into the gritty details on all that.



#39 Peregrinatum

Peregrinatum

    Surveyor 1

  • *****
  • Posts: 1,506
  • Joined: 27 Dec 2018
  • Loc: South Central Valley, Ca

Posted 16 August 2020 - 05:51 PM

Interesting. So I loaded in the data for my ASI1600MM and C925HD, and this is what it determined:

 

https://astronomy.to...ccd_suitability

 

"The ideal pixel size for OK Seeing (2-4" FWHM) seeing is: 0.67 - 2" / pixel.

This combination leads to slight over-sampling. Will require a good mount and careful guiding."

 

So at 0.34 arcsec/pixel I could easily bin 3x and be OK... I'm working on some data now, and I'll play around with 2x and 3x and see which looks best.
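As a back-of-the-envelope check of those numbers (assuming the C9.25 at its native ~2350 mm focal length and the ASI1600MM's 3.8 µm pixels), the usual image-scale formula gives roughly the same figures:

```python
def pixel_scale(pixel_um, focal_mm, binning=1):
    """Image scale in arcsec/pixel: 206.265 * effective pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um * binning / focal_mm

for b in (1, 2, 3):
    print(b, round(pixel_scale(3.8, 2350, binning=b), 2))
# 1 -> ~0.33"/px (oversampled), 2 -> ~0.67"/px, 3 -> ~1.00"/px;
# both 2x and 3x fall inside the calculator's 0.67-2"/px range for 2-4" FWHM seeing.
```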



#40 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 16 August 2020 - 06:43 PM

So here's some more information to throw into the mix.  With my testing, I found that binning does indeed improve SNR by 2x - but only in certain circumstances.  It appears that if you use the VNG Debayer method in PI, and then bin with the IntegerResample process, it works.

 

However, I tried this on the same image preprocessed in DSS and it produced odd results.  First, the SNR of the full-resolution image was far better than either the full-res or binned image from PI.  And second, applying IntegerResample to bin it did NOT increase the SNR by 2x.  In fact the increase was very small.

 

Since DSS uses a bilinear Debayer method, I tried that method in PI and got results very close to the DSS results:

[attached screenshot: SNRWeight comparison of the full-resolution and 2x-binned images, bilinear vs. VNG debayering]

 

So what's going on?  I looked at the images and zoomed way in.  I could see that the bilinear-debayered images looked somewhat blurred compared to the VNG-debayered image.  Binning is essentially a blurring algorithm as well - you average 4 pixels together to get a result, which reduces the detail and the noise.  But the reason it works is that the noise is very fine-grained and varies at the pixel level.  Larger-scale noise will be far less affected by a 2x2 bin.  If you've already blurred the image by using a bilinear debayer, you've obscured most of that fine-grained noise already.  That produces a higher SNR to begin with.  But it also means that binning will have little effect on SNR, because there is no fine-grained noise left for it to reduce.
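A small numpy illustration of that effect, using synthetic noise (a 3x3 box blur stands in for the smoothing a bilinear debayer introduces): binning halves white, per-pixel noise, but gains far less once the noise has already been smoothed and correlated.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
white = rng.normal(0, 10, size=(1024, 1024))   # fine-grained, pixel-to-pixel noise
blurred = uniform_filter(white, size=3)        # stand-in for interpolation smoothing

def bin2x2(a):
    """2x2 average binning."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(white.std(), bin2x2(white).std())      # ~10 -> ~5: binning cuts white noise in half
print(blurred.std(), bin2x2(blurred).std())  # already smaller, and binning removes much less
```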

 

What does this mean in terms of a choice of scope and camera?  I'm not entirely sure, but it sure muddies the waters!  I was looking at getting a 10" f/4 imaging newt as a galaxy scope.  But I already have an EdgeHD 8 SCT with an APS-C camera.  I was hoping that, instead of a whole new scope and camera system, I could get an ASI6200 full-frame camera, which would give me around a 0.7" pixel scale at 2x binning, and a significantly larger field of view so I could use it without a FR (the FR causes unacceptable image problems). And I figured that binning would give me a 2-stop improvement in SNR, so I could keep my integration times lower.  However, I'm already using DSS for preprocessing, so it looks like a 2x bin will only get me a slight SNR improvement.

 

Hmmm ... that imaging newt is starting to look good again.

 

-Dan

 

 



#41 rgsalinger

rgsalinger

    Fly Me to the Moon

  • *****
  • Moderators
  • Posts: 7,143
  • Joined: 19 Feb 2007
  • Loc: Carlsbad Ca

Posted 16 August 2020 - 07:15 PM

Welcome to the party. This is exactly why I broke the bank and bought a CDK12.5" scope from PlaneWave. I knew that big chips were coming, and I knew I wanted round stars in the corners. So I plunked down the money (prices have been reduced since then) and bought the scope of my dreams. Here's an example of what it produces with a full-frame chip at 0.6 arcsec/pixel sampling.

 

[attached image: example full-frame result showing the corner stars from the CDK12.5]

 

 

Night after night this is what I get in the corners. Remember, though, galaxies do not cover the full frame. That's why an SCT can be a great galaxy scope despite the limited field coverage. If you look at spot-size diagrams, they tell the tale. Not having a moving mirror that can tilt with gravity is a big plus as well.

 

Rgrds-Ross



#42 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 16 August 2020 - 07:21 PM

That's a sight to see, Ross! :-) Of course, that Paramount MX+ is part of the equation too.  $$$$. Lol.  I thought I was getting ridiculous thinking about spending $4K on a camera.  But $8500 for the scope, $10K for a mount, and still $4K for the camera?  Wowser! 

 

I understand that you get what you pay for ...  but you have to be able to pay for it! ;-)

 

-Dan


  • rockstarbill likes this

#43 sn2006gy

sn2006gy

    Viking 1

  • ****-
  • Posts: 915
  • Joined: 04 May 2020
  • Loc: Austin, TX

Posted 17 August 2020 - 09:29 AM

Binning for correct sampling changes the SNR relationship for the desired sampling rate. We have what we call "correct sampling," as someone mentioned - so if you bin to achieve that correctly sampled image, you're not doubling your SNR per se by camera magic; you're just correctly sampled, with the SNR appropriate to that sampling.

 

In order to achieve the desired SNR when oversampled, you have to integrate more... in general... but...

 

The "Weird thing" with the 6200 is that in comparison to the mentioned KAF is that you can be over sampled and NOT punished in the same relationship/context that seems to be compared here because the 6200 is so darn clean.  Very low readout noise, deep well, no dark current to worry about, no amp glow - these cameras favor taking more and more shorter exposures and increasing your SNR in total integration time.  With a CCD you often binned to overcome SNR because of HIGH read noise and HIGH dark current and HIGH amp glow and in order to still have 9 minute subs. So you binned 2x2 in order to work with those constraints.  So while you may drizzle integrate to bin 2x to recover resolution and be appropriately sampled, there are people successfully being over sampled and drizzle-integrating exponentially more smaller frames and getting some astonishing level of detail for the same total integration time when compared across cameras. 

 

 

So what's going on?  I looked at the images and zoomed way in.  I could see that the bilinear-debayered images looked somewhat blurred compared to the VNG-debayered image.  Binning is essentially a blurring algorithm as well - you average 4 pixels together to get a result, which reduces the detail and the noise.  But the reason it works is that the noise is very fine-grained and varies at the pixel level.  Larger-scale noise will be far less affected by a 2x2 bin.  If you've already blurred the image by using a bilinear debayer, you've obscured most of that fine-grained noise already.  That produces a higher SNR to begin with.  But it also means that binning will have little effect on SNR, because there is no fine-grained noise left for it to reduce.

 

Isn't this why people drizzle-integrate?  Dither and drizzle to recover some of the lost resolution for those who intentionally undersample for increased SNR, for whatever reason?

 

I am interested in seeing what QHY does with their QHY600. The onboard FPGA gives a lot more flexibility to do cool things like the double readout they're testing... all of which again seems to go against a 1:1 comparison of SNR vs Sampling and their relationship to CCD vs CMOS especially as it relates to total integration time and telescope performance.

 

To loop it back around to the topic and "photon performance," though: binning a 6200 to "double SNR" isn't increasing the performance of anything... you're just correctly sampled. A camera with correctly sized pixels would have appropriate SNR if it were a fast CMOS like the 6200 chip.  So in essence, aren't we talking about the performance of these new chips and not the performance of binning? It just so happens that you can bin an oversampled camera to make it work, but comparisons of that camera's performance to others should rest on the camera's merits, not on binning.

 

The KAF is an awesome camera, with an amazing FOV. It's correctly sampled natively for the scopes discussed here... I find it curious that the 6200 keeps being compared to how the KAF works rather than judged on how it stands on its own.

 

With all this said, someone could wayyyy undersample the 6200 and drizzle-integrate and probably come out with beautiful pictures - the camera is that good - but you're not doubling or tripling SNR for free; you're reducing resolution and receiving the same amount of signal over fewer effective pixels. The universality of the camera is nice, so I get that. Then again, if you buy a PlaneWave, I doubt you're buying a camera to move it between scopes; it's probably a dedicated setup :-)

 

In the end, if you want to image more objects or reduce total integration time, to me, you move to darker skies. The gear referenced here is all absolutely amazing and will create stunning images. And if you dither and drizzle to correct for sampling, you're spending time not imaging...

 

I guess in the end, it's not 1:1 like the question seems to imply. I feel like we'd be telling half-truths if we said "yeah sure bin 2x2 double your snr its great" - well, yeah, but you're just correctly sampling now, not doubling anything. Your effective doubling comes from the scope/camera combination, but the total integration time in reality won't be doubled or halved, because the comparison (against what, exactly?) isn't 1:1 - and that doubling is also only "more true" in dark skies than anything else.

 



#44 Midnight Dan

Midnight Dan

    Voyager 1

  • *****
  • Posts: 13,682
  • Joined: 23 Jan 2008
  • Loc: Hilton, NY, Yellow Zone (Bortle 4.5)

Posted 17 August 2020 - 10:08 AM

Hi Byron:

 

I haven't done anything with drizzle so I don't really know how that relates to binning.

 

 "yeah sure bin 2x2 double your snr its great" - well, yeah.. but you're just correctly sampling now not doubling anything.

 

Of course.  The whole point here is to correctly sample.  You want to use pixels that are as large as possible to collect as much light per pixel as possible, but not so large that you're undersampling.

 

You can get those correctly sized pixels by buying a camera with the right size pixels.  But if one doesn't exist, you might be able to get to that desired pixel size by binning a camera with smaller pixels. The question being explored here is: when you bin those smaller pixels, do you get the same SNR as if you were using the larger-pixel camera?

 

The problem with smaller pixels is you need more exposure time to get the same signal level per pixel as with larger pixels.  But if binning is effective, you can reduce your exposure time to the same as you'd use with that larger pixel camera. 

 

In my particular case, I'd like to use my EdgeHD8 at f/7 with a focal reducer and my APS-C sized sensor ASI071 camera.  With that setup, I get 0.67"/pixel which is about the right sampling for my skies.  Unfortunately, I've found that the FR produces unacceptable aberrations.  And imaging at f/10, with that same camera, means painful exposure times, and oversampled results.  Binning that camera puts the pixel scale larger than I want.

 

So, one solution I was considering was to ditch the FR, and move to a full frame sensor.  The FOV at f/10 is about the same as at f/7 with the APS-C sensor.  And if I can get to the same pixel scale, I should have the same etendue per pixel, so the same exposure time and SNR - in theory.  Binning the ASI6200 pixels gets me to 0.73"/pixel, which is pretty close to what I want.
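A quick sanity check of the "same etendue per pixel" idea (assumed values: ~203 mm aperture, ~1422 mm focal length with the 0.7x reducer, ~2032 mm native; 4.78 µm pixels for the ASI071 and 2x3.76 µm effective pixels for a 2x-binned ASI6200): at fixed aperture, photons per pixel scale with the square of the pixel scale, regardless of f-ratio.

```python
def flux_per_pixel(aperture_mm, pixel_um, focal_mm):
    """Relative photons per pixel ~ aperture area x pixel solid angle (arbitrary units)."""
    scale = 206.265 * pixel_um / focal_mm        # arcsec/pixel
    return aperture_mm ** 2 * scale ** 2

print(flux_per_pixel(203, 4.78, 1422))           # ASI071 at f/7, ~0.69"/px
print(flux_per_pixel(203, 2 * 3.76, 2032))       # ASI6200 binned 2x at f/10, ~0.76"/px
# The two come out within roughly 20% of each other, so the per-pixel exposure
# requirements should be broadly similar - the "same etendue per pixel" argument.
```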

 

The question I was trying to answer for myself was: at f/10, will 2x-binned pixels with the ASI6200 FF sensor get me about the same performance as my ASI071 APS-C sensor at f/7?  And by performance I mean exposure time and resulting SNR.

 

-Dan



#45 sn2006gy

sn2006gy

    Viking 1

  • ****-
  • Posts: 915
  • Joined: 04 May 2020
  • Loc: Austin, TX

Posted 17 August 2020 - 11:02 AM

Hi Byron:
 
I haven't done anything with drizzle so I don't really know how that relates to binning.


Drizzle helps you recover effective resolution lost to binning, which lets you work around performance issues with CCD or CMOS sensors whose noise or sampling you don't want to overcome with horrendously long exposure times. Binning like that is typically done to overcome CCD issues, and it leaves you undersampled.

(Only half the story... it varies between NB/LRGB/RGB/OSC. People bin 2x2 on RGB with monos because the detail is in L, but some people don't shoot L at all and instead shoot unbinned RGB and extract L... so it's a mix, but whatever, heh.)
 

You can get those correctly sized pixels by buying a camera with the right size pixels.  But if one doesn't exist, you might be able to get to that desired pixel size by binning a camera with smaller pixels. The question being explored here is: when you bin those smaller pixels, do you get the same SNR as if you were using the larger-pixel camera?


This can only be compared if the same sensor technology is in both cameras. If they come out with a chonky, modern, bigger-pixel camera with the same specs as the 6200 in read noise, well depth, and ADC, we can compare this.

BUT...

To answer it for the comparison as described (the 6200 vs. the KAF): the sampling concern isn't 1:1.


The 6200 is "faster" in every regard to the mentioned KAF (or your 071) in this topic regardless of sampling. 6200 is faster under/over/correctly sampled.


Appropriately sampled the KAF will have a bigger FOV for sure... (per the OP's messages, not yours)

more below in response to your specific camera
 

In my particular case, I'd like to use my EdgeHD8 at f/7 with a focal reducer and my APS-C sized sensor ASI071 camera.  With that setup, I get 0.67"/pixel which is about the right sampling for my skies.  Unfortunately, I've found that the FR produces unacceptable aberrations.  And imaging at f/10, with that same camera, means painful exposure times, and oversampled results.  Binning that camera puts the pixel scale larger than I want.
 
So, one solution I was considering was to ditch the FR, and move to a full frame sensor.  The FOV at f/10 is about the same as at f/7 with the APS-C sensor.  And if I can get to the same pixel scale, I should have the same etendue per pixel, so the same exposure time and SNR - in theory.  Binning the ASI6200 pixels gets me to 0.73"/pixel, which is pretty close to what I want.
 
The question I was trying to answer for myself was: at f/10, will 2x-binned pixels with the ASI6200 FF sensor get me about the same performance as my ASI071 APS-C sensor at f/7?  And by performance I mean exposure time and resulting SNR.


The advantage of the 6200 is the 16-bit ADC, the 50k full well, the astonishingly low read noise, no amp glow, and the much higher QE. The cost of oversampling with the 6200, 2600, and 533 isn't the same as oversampling on anything else.


By switching to the 6200 in your case, you would choose to take more, shorter subs, and you would have reduced overall exposure time because of the sensor, regardless of binning. The context of binning should just be getting to an appropriate scale, but I believe the scale factor for correct sampling also changes with the 6200.


I feel this topic drifted a bit. The OP asked about "shortening integration time with new equipment," and I guess my answer is that the shortening isn't because of binning and its relationship to SNR on paper, but because of the actual benefits of the 6200 CMOS, especially in comparison to the KAF, which is correctly sampled for the scopes in question.

The question seemed asked in a way to do more in a single night.

My honest answer is still that the benefits of the 6200, and of being appropriately sampled, improve SNR over the KAF or over the 071 regardless of binning.


With the 6200 you're going to take more exposures.
You're going to take shorter exposures.
You may even need FEWER exposures - especially compared to a KAF or your 071 - for the SAME signal as before...

But why stop there? Why not use the 6200 as it stands, on its own?


I dunno where this is even going anymore... I guess I'm still talking about the OP's post and some of the responses to it on why the KAF is so great. It is a great camera - but the reason for the 6200's performance and value isn't its ability to bin for sampling and the associated doubling of SNR (which is just being correctly sampled), because there isn't a comparable camera at the correct sampling - it's the ability to shine on its own and not be held to the same reasons binning exists for SNR, which seem to be treated as a fact of life with CCD cameras.


Like I mentioned before, you would probably have increased SNR on the 6200 over the 071 even while oversampled, because the sensor on the 6200 is that much better, and you can overcome softness from over- or undersampling with integration time, more exposures, dithering, and drizzling.

You can push for odd extremes.

The camera is so amazing that "lucky imaging" and drizzling techniques change from a concern of specific sub SNR to total integration SNR and trade offs in integration computational time vs actual sub/stack time.


So yeah.. get correctly sampled.

The 6200 in your case would beat the 071 in an apples-to-apples comparison at native or binned resolutions.


I think the 9.25, imaging at f/10, and how to make that work deserve their own topic, as there is a whole slew of things that are different with that setup vs. the ones referenced in the OP. You would have improved SNR with the 6200, but it wouldn't be an SNR improvement that changes how many objects you can shoot in a single night; rather, you'd get better results from the same total integration.

Edited by sn2006gy, 17 August 2020 - 11:20 AM.


#46 rockstarbill

rockstarbill

    Cosmos

  • *****
  • Posts: 7,564
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 August 2020 - 11:19 AM

Sorry, perhaps this is a nitpick, but can you please be more specific when you say "the KAF"? There are a number of different OnSemi/Kodak sensors that start with KAF, which all have different performance characteristics, sizes, etc. I assume you are talking about the 16803, in which case it's probably better to call it the 16803 so people know exactly what you're comparing.

If you're talking about the 16200, then a number of things you said above are just flat out wrong.

Edited by rockstarbill, 17 August 2020 - 11:19 AM.


#47 sn2006gy

sn2006gy

    Viking 1

  • ****-
  • Posts: 915
  • Joined: 04 May 2020
  • Loc: Austin, TX

Posted 17 August 2020 - 11:36 AM

Sorry, perhaps this is a nitpick, but can you please be more specific when you say "the KAF"? There are a number of different OnSemi/Kodak sensors that start with KAF, which all have different performance characteristics, sizes, etc. I assume you are talking about the 16803, in which case it's probably better to call it the 16803 so people know exactly what you're comparing.

If you're talking about the 16200, then a number of things you said above are just flat out wrong.


In relation to the topic, I don't think it matters which KAF is mentioned. The 6200 would be faster.


"Shorten total integration time with new equipment"

That in no way diminishes the value of the KAF sensors.

I'm curious what is flat out wrong, because I just compared the sensors.

Performance-wise:

6200 has much lower read noise
6200 has a deeper well
6200 has less dark current
6200 has USB 3

In relation to total integration, on top of the above:

6200 has no amp glow

Lastly, just to have fun with this, the 6200 will be done dithering and on to the next sub before the KAF has even finished downloading a sub. :-)


BTW, this is exactly why I believe we've hit a point where CMOS and CCD stand on their own: while the techniques used with both may look similar, they achieve different results, goals, and targets these days.

Edited by sn2006gy, 17 August 2020 - 11:39 AM.


#48 rockstarbill

rockstarbill

    Cosmos

  • *****
  • Posts: 7,564
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 August 2020 - 11:39 AM

Lastly, just to have fun with this, the 6200 will be done dithering and on to the next sub before the KAF has even finished downloading a sub. :-)

 

The KAF16200 I have downloads frames in the same amount of time. The KAF8300 I have downloads frames faster. That is the real point I am trying to make. "The KAF" is not clear enough. 



#49 sn2006gy

sn2006gy

    Viking 1

  • ****-
  • Posts: 915
  • Joined: 04 May 2020
  • Loc: Austin, TX

Posted 17 August 2020 - 11:46 AM

The KAF16200 I have downloads frames in the same amount of time. The KAF8300 I have downloads frames faster. That is the real point I am trying to make. "The KAF" is not clear enough.


My experience with KAF cameras was that download times were always abysmal. Maybe that has improved? I don't know, but I can basically do 1-second exposures on a 6200/2600/533 and, with a decent computer, achieve the near-1 fps that these cameras say they can do. I've never seen that on a Kodak.

BUT, I'm less concerned with download times; I can work around that.

In a sub-for-sub comparison, I wouldn't take 9-minute subs with a 6200 as I would with a KAF, because I'm not taking long subs to overcome read noise.

In a sub-for-sub comparison, I'd take MORE, shorter subs and work towards total integration time.

I'd do things differently.

Sure, if you treat the 6200 like a KAF and default to 9-15 minute subs, you can (then a lot of its benefits are out the door), but why?


I have no skin in this battle other than pointing out how modern CMOS is different from CCD.

Edited by sn2006gy, 17 August 2020 - 11:51 AM.


#50 rockstarbill

rockstarbill

    Cosmos

  • *****
  • Posts: 7,564
  • Joined: 16 Jul 2013
  • Loc: Snohomish, WA

Posted 17 August 2020 - 11:58 AM

My experience with KAF cameras was that download times were always abysmal. Maybe that has improved? I don't know, but I can basically do 1-second exposures on a 6200/2600/533 and, with a decent computer, achieve the near-1 fps that these cameras say they can do. I've never seen that on a Kodak.

BUT, I'm less concerned with download times; I can work around that.

In a sub-for-sub comparison, I wouldn't take 9-minute subs with a 6200 as I would with a KAF, because I'm not taking long subs to overcome read noise.

In a sub-for-sub comparison, I'd take MORE, shorter subs and work towards total integration time.


I have no skin in this battle other than pointing out how modern CMOS is different from CCD.

Which 8300 were you using? The FLI and newer SBIG cameras download frames in less than 1 second. The FLI 16200 (which is the outlier in terms of performance) is ridiculously good. 

 

https://www.astrobin.../full/380925/J/

 

These are 5-minute narrowband subs with the FLI ML16200, and I would take similar sub exposures with the 6200 I have. You are making these cameras out to have like 30e of noise or something, which just is not the case. The FLI 16200s have 5-6e noise with a 6 µm pixel size and, when cooled, negligible dark current. They are still exceptional cameras, and the new Sony chips don't change that at all. The FLI ML16803 has a 100k full well and 8e noise with 9 µm pixels. The frames are monstrously large (36 x 36 mm, IIRC) and download within a reasonable amount of time for such large frames. That camera is still rocking around 14 stops of DR.
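For what it's worth, that dynamic-range figure checks out from the numbers quoted above (stops = log2 of full well over read noise):

```python
import math

# KAF-16803-class figures quoted above: ~100k e- full well, ~8 e- read noise
print(math.log2(100_000 / 8))   # ~13.6 stops, i.e. "around 14 stops of DR"
```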

 

Cost is what I would hang on the CCD cameras. The cost of them is much higher than similar-sized CMOS chips. Let's not act like everyone with a CCD is using KAI1100 chips, though. Those you can beat on for high noise, high dark current, etc.



