
# Trying to understand FL, camera pixel size and image details

25 replies to this topic

### #1 miwitte

miwitte

Apollo

• topic starter
• Posts: 1092
• Joined: 28 Dec 2016
• Loc: Charlotte NC

Posted 22 June 2018 - 03:05 PM

After seeing some wonderful pictures here the last few weeks from folks using the 1600mm on various scopes with longer FL, I was trying to figure out how pixel size, resolution, and focal length all go together (since I am looking to get a bigger scope and reuse my camera and mount to gain resolution).

Bracken had a great write-up in his book; hopefully I get this right:

Typical good seeing for most of us is 2 arc-sec, so Nyquist says we want to sample at 2x the finest detail we are trying to reconstruct, so we will use 1 arc-sec for sampling. Any combo of focal length and sensor pixel size that comes out larger is over-sampled and you lose resolution; anything smaller is under-sampled and is fine down to 1/2 of the sampling number based on seeing.

So we want our image scale to be .5-1 arc-sec. Smaller than .5 arc-sec and you just don't get the benefits of the larger focal length, but it doesn't hurt you.

Using the formula Image Scale = (206.265 * pixel size) / Focal Length will tell us if we are over-sampled, in the .5-1 arc-sec range, or need larger pixels to see the full benefit of the larger scope.

Using the ZWO 1600mm at 3.8 µm pixel size: 206.265 * 3.8 = 783, so for any focal length smaller than 783 mm I am over-sampled and losing details.

Calculating my current SV80ST (480 mm, F/6) with the .8 reducer (384 mm): 783/384 = 2.04. So this means I am way over-sampled and losing significant resolution (if my seeing gets to 2 arc-sec)! I wasn't aware until I just did this!

The 8" Edge HD is native 2125 mm FL, so that gives me a .368 under-sample, so I won't get the full benefit unless I get a camera with bigger pixels.

The 8" Edge HD with the .7 reducer is 1487 mm FL, so that gives me a .52 under-sample, and that's a sweet spot. The question is what I gain or lose imaging at native focal length vs. the reducer. (I know I'll need more time due to F/10 vs. F/7 and guiding will be harder, but do I lose any detail or magnification with my current ZWO?)

The 8" Edge HD with Hyperstar = 390 mm FL, so that gives me 2.04. So this means I am way over-sampled again.

So to sum it up: if I go native 8" Edge HD I'll need bigger pixels to get the full benefit, with the .7 reducer I'm in a sweet spot, and Hyperstar gets me back to what my Stellarvue is capable of due to the over-sampling. So I would need two more cameras to get the full benefit of the longer Edge HD, my Stellarvue, or Hyperstar.
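A quick way to check all of the arithmetic above at once (a sketch only; the pixel size and focal lengths are the ones quoted in this post):

```python
# Image scale (arcsec/px) = 206.265 * pixel_size_um / focal_length_mm
def image_scale(pixel_um, fl_mm):
    return 206.265 * pixel_um / fl_mm

PIXEL = 3.8  # ASI1600MM pixel size in microns

setups = {
    'SV80ST + 0.8x reducer': 384,
    '8" Edge HD native': 2125,
    '8" Edge HD + 0.7x reducer': 1487,
    '8" Edge HD + Hyperstar': 390,
}
for name, fl in setups.items():
    print(f"{name} ({fl} mm): {image_scale(PIXEL, fl):.2f} arcsec/px")
```

Swapping in other pixel sizes or focal lengths makes it easy to compare candidate setups.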

I think I understand this now, but please correct me if I've overlooked something. This at least gives me a range of scopes to look for, and the Edge HD seems to fit the bill. I am trying to stay in the 2-3k range with reducers and focuser, so any other suggestions on imaging platforms are welcome. I've read a lot of posts on the RCs, and it seems the cheaper ones have a lot of collimation issues that I'm not sure I want to deal with. I'm looking into imaging refractors as well, as I'm not sure the Hyperstar adds much to what I am getting with the Stellarvue already, other than less integration time due to F/2 (I'm at F/4.8 with the ST80 and reducer).

### #2 mistateo

mistateo

Apollo

• Posts: 1236
• Joined: 02 Feb 2017
• Loc: San Diego, CA

Posted 22 June 2018 - 03:14 PM

Your usage of the terms "under sampled" and "over sampled" appears to be reversed in your post.  The 183 sensor seems to be a good match for shorter focal lengths for imaging resolution, but I'm not sure about the other pros/cons of that camera.

• RedLionNJ likes this

### #3 dhaval

dhaval

Vendor

• Posts: 1649
• Joined: 21 Jul 2008
• Loc: Round Rock, TX

Posted 22 June 2018 - 03:15 PM

I don't think over-sampling leads to loss of resolution; on the contrary, it helps gain resolution (by resolution, I mean detail). Ideally, you should always over-sample and not under-sample. Over-sampling does mean you need a very good mount to ensure appropriate tracking, but other than that, there is more information stored in over-sampled images than under-sampled ones.

CS!

Edited by dhaval, 22 June 2018 - 03:16 PM.

### #4 bobzeq25

bobzeq25

Hubble

• Posts: 16396
• Joined: 27 Oct 2014

Posted 22 June 2018 - 03:28 PM

People argue sampling all the time.  I don't think you need to get too hung up about it.  It's just not that precise a subject, see the wide ranges below.  You can play around with scale with drizzle and binning.  There's always a tradeoff between resolution and signal to noise ratio, and both are important in producing appealing images.

Bracken recommends 1-2.  That's where I go, but it's because my skies are pretty bad.  Note that there's absolutely nothing wrong with widefield images at 2, or even more.

0.5 is really pushing it in my opinion.  In the 100 Best Astrophotography Targets, he's generally between 0.6 (on the smallest targets) and 2, larger with some camera lens widefield stuff.

Increased speed really helps a lot, particularly in light pollution.  If there's a suitable reducer for the Edge 8, I'd plan on using it most of the time, going unreduced for some very small targets.

It's _all_ target dependent.  The one thing you'll be missing with the Edge is 1000mm focal length, a very useful length for many targets.

I agree with staying away from the cheap RCs.

Last point.  What's your guiding RMS like?  Images with scales below your guiding RMS are pretty dubious.  Some people suggest you should be guiding 2x or more better than your image scale.  See this thread.  The mount which works great at 480mm may not be so great at 2000mm.

https://www.cloudyni...mpared-to-good/

Bottom line.  A lot goes into good images.  Image scale is a useful concept, but things are complicated, and you can have tunnel vision here.  Many do.

Edited by bobzeq25, 22 June 2018 - 03:38 PM.

• ks__observer likes this

### #5 AndrewXnn

AndrewXnn

Mariner 2

• Posts: 291
• Joined: 29 Aug 2015
• Loc: Mexico, NY

Posted 22 June 2018 - 03:31 PM

Generally, between 1-2 arc-sec/pixel is ideal.

<1 arc-sec is oversampling, which will result in bloated stars and tracking difficulty.

>2 arc-sec is undersampling which results in blocky stars... stars look like squares.

Edited by AndrewXnn, 22 June 2018 - 03:34 PM.

### #6 TOMDEY

TOMDEY

Skylab

• Posts: 4240
• Joined: 10 Feb 2014
• Loc: Springwater, NY

Posted 22 June 2018 - 03:53 PM

There's no one right answer:

Regarding over/critical/under sampling... the engineers/scientists and ops people argue over this constantly. Google the parameter called "Q" for good discussions on that. For imaging satellites it is especially important, because Max Information is the name of the game. And Q in the neighborhood of unity is almost always best for maximization of information throughput in the dynamic imaging stream. Maximizing DESIRED information may affect that decision, though. If it is plain resolution, contrast, sensitivity, beauty, or actionability that floats your boat... you may favor lower or higher Q. For example, someone imaging the moon would probably benefit from oversampling; a troop with a crosshair on his back would favor undersampling/actionability.  Tom

### #7 Jon Rista

Jon Rista

ISS

• Posts: 23349
• Joined: 10 Jan 2014

Posted 22 June 2018 - 05:28 PM

Maybe this will help:

Exactly sampling (1x sampling) a star would mean fitting the star within the pixel. Even if you put the star on the border between two pixels, or at the intersection between four pixels, you still end up with a square star. Deconvolution may not work well here, however drizzling to improve resolution is an option, and can improve deconvolution results.

UNDERSAMPLING (<1x sampling) is basically going to perform the same as exactly sampling, your stars are going to be square. With undersampling, you pack more light into each pixel though, so undersampled imaging can be much more sensitive, which can be a bonus for exposing faint but larger objects. Large objects have larger scale details, so the loss of resolution here is not an issue in these cases. Deconvolution will often not work well here, however drizzling to improve resolution is an option, and could potentially improve deconvolution results (although you may need to drizzle fairly significantly...3x, 4x...which can require immense amounts of memory and processing power.)

NYQUIST sampling (2-3x sampling) is going to sample the star at the minimum rate necessary to produce non-square stars...in many cases. It will not always produce non-square stars: there are cases where even sampling at Nyquist is going to give you square stars. Further, in the ideal case for getting the least-square stars, Nyquist is still going to deliver somewhat blocky results, more along the lines of a rounded-corner box than a perfect square. You can resolve some good detail here, but balance that out with still getting good SNR in a reasonable amount of time. Deconvolution of the resulting integration can work well here. Drizzling might help with resolution, but may not be as effective as with undersampled data. Better-sampled data will usually be superior to drizzled data anyway.

OVERSAMPLING (>3x sampling) is going to sample the star at the rate necessary to produce non-square stars...in all cases. It will always produce non-square stars, although in its worst possible configuration they may be more rounded-corner box than perfect circle. There are many variations possible between the ideal case and the worst case, so stars usually show up pretty round when oversampling. You can resolve excellent detail here, but the more you oversample, the lower your SNR can get if you are not able to sufficiently swamp camera noise. Deconvolution will usually work very well and can be pushed to its limits here. Drizzling is not usually effective with oversampling, and is not generally necessary.

Something else about sampling. Registration will translate each frame so the star positions match the reference frame. The more undersampled you are, the more this process can affect the stars in the final integration. Each sub will sample the same stars slightly differently, and if you dither, those differences will be larger. Over time, you'll eventually sample the stars in most of the possible ways, and the distribution of possible ways of sampling each star should be Gaussian. Undersampled and Nyquist-sampled stars will all be blocky, but as you shift and rotate the frames to align them, they will soften and bloat as the energy of each star is interpolated into new pixel configurations, which can often result in some of that energy spilling out into additional pixels. The final integration will often show rounder stars...which is good, but they will also be larger stars. This can lead to the "dominant stars" issue in more undersampled images, where the stars dominate the image, being much larger in relative terms than they are in real life, and often much brighter than the background details of interest.

With oversampled stars, the stars will be much rounder in most of the possible sampling options, and their energy will already be distributed among many pixels. Shifting and rotating may redistribute energy among those pixels, but it will not necessarily spill a lot of that energy out into a greater number of pixels. So your stars will not bloat like they will with undersampled or nyquist sampled stars. The more oversampled you are, the more likely this is to be the case, however beyond about 4-5x sampling the loss in SNR per pixel is likely to outweigh the potential benefits of maximizing resolution (in terms of FWHM).

Note that drizzling undersampled data can help improve how round the stars appear in the final integration, but it will usually not have much of an impact on how small the stars get, as registration and the blurring that comes with it happens before the data is drizzled. The rounding up of stars with drizzling can help them look tighter, even if they are not technically smaller in FWHM terms.
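Jon's sampling regimes can be made concrete with a toy model (assumptions mine, not Jon's method: a pure Gaussian star profile with a 2.2" FWHM, values taken at pixel centers). Sampling the same star at three image scales and counting pixels above half maximum shows how finer sampling spreads the star's energy across more pixels:

```python
import numpy as np

def sample_star(fwhm_arcsec, scale_arcsec_px, grid=15):
    """Sample a Gaussian star profile onto a pixel grid (toy model:
    pixel values taken at pixel centers, star centered on the grid)."""
    sigma = fwhm_arcsec / 2.355                          # FWHM -> Gaussian sigma
    c = (np.arange(grid) - grid // 2) * scale_arcsec_px  # pixel centers, arcsec
    xx, yy = np.meshgrid(c, c)
    img = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return img / img.sum()                               # normalize total flux

for scale in (2.0, 1.0, 0.5):   # under-sampled, ~Nyquist, over-sampled
    img = sample_star(2.2, scale)
    n = int((img > img.max() / 2).sum())                 # pixels above half max
    print(f"{scale:.1f} arcsec/px: {n} pixels above half maximum")
```

With the star's energy already distributed over many pixels in the over-sampled case, registration shifts redistribute rather than bloat it, which is the behavior described above.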

Edited by Jon Rista, 22 June 2018 - 06:00 PM.

• sink45ny, AndrewXnn, ks__observer and 1 other like this

### #8 miwitte

miwitte

Apollo

• topic starter
• Posts: 1092
• Joined: 28 Dec 2016
• Loc: Charlotte NC

Posted 23 June 2018 - 02:12 PM

Yeah I think I have those terms backwards....

Since seeing is typically 2-3 arc-sec, I need to be at 1-1.5 for perfect Nyquist sampling, correct? Bracken then states that the rule of thumb is that photosites (pixels) should not be substantially smaller than 1/2 the maximum resolution. Does that mean the same thing, or do we have to halve the Nyquist sampling number of 1-1.5? I'm stuck here, but I think I've perhaps misinterpreted these two, and really we want to shoot for an image scale of 1-1.5, not .5-.75.

Basically, do we want our image scale equal to 1/2 our seeing? Also, if I go with the Edge HD and a .7 reducer my scale is .52 (oversampled), which may give me harder tracking (using the OAG) and bloating. Ideally I would want a camera with pixels in the 7 µm range to get me back to 1, correct?

At the end of the day I need to be looking for scopes in the 1000-1400 mm range that will work weight-wise with the EQ6 and 1600 camera, and within my price point with accessories (2-3K); the 8" Edge is a pretty good contender. Again, just looking, but I wanted to understand this point, as I've seen it come up before and never got a good grasp (still not sure I do).

From a guiding standpoint, I routinely guide in the .6 arc-sec range; sometimes with good seeing my Dec goes to .4 and RA is .5, so I am pretty confident I can guide a bigger scope with the upgrades to the mount I've done.
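The 7 µm guess can be checked by inverting the image-scale formula (a sketch; 1487 mm is the reduced Edge HD focal length discussed above):

```python
def pixel_for_scale(scale_arcsec_px, fl_mm):
    """Invert image_scale = 206.265 * pixel_um / fl_mm to get pixel size."""
    return scale_arcsec_px * fl_mm / 206.265

# pixel size needed for 1 arcsec/px at the reduced Edge HD's 1487 mm
print(f"{pixel_for_scale(1.0, 1487):.1f} um")  # ~7.2 um
```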

### #9 Jon Rista

Jon Rista

ISS

• Posts: 23349
• Joined: 10 Jan 2014

Posted 23 June 2018 - 09:15 PM

IMO, if high-resolution detail is your goal, Nyquist is insufficient. I believe you want at least 3x sampling to sample details in all directions in two dimensions well enough to acquire all the information necessary to reproduce the image as accurately as possible, without undue loss of SNR.

However, if attaining maximum resolution is not your primary goal...then you really don't need to worry about it. You can image at any scale you want to. Anything less than 3x sampling and you will improve SNR more and more for a given exposure time. Radically undersampling isn't really a huge problem, although worse than about 2"/px or so and you will start to run into the dominating stars issue (this is where really tiny pixels on very short, fast scopes can be useful.)

Sampling isn't too terribly critical unless you have very specific goals, notably maximum SNR in the least amount of time or maximum resolution.

### #10 miwitte

miwitte

Apollo

• topic starter
• Posts: 1092
• Joined: 28 Dec 2016
• Loc: Charlotte NC

Posted 23 June 2018 - 10:54 PM

Thanks Jon, that makes sense. So in my case the image scale of my setup is 2 arc-sec, and the last couple times out I've had FWHM in the low 2 range, so where does that put me? I'm still not 100% on the over/under thing.

What I really want to accomplish is to be able to image like the nice M16 and M51 shots that folks have been posting here the last week with their Edges and other scopes, so to do that I need more focal length. I want to make sure I understand the image-scale thing so I choose the right OTA that will work with what I have currently, and what camera I may need down the road to maximize my resolution. I am pretty sure that I'll also need a different camera later on to maximize the potential, but I think a longer focal length and the 1600 will give me better results on smaller DSOs, or details like the Pillars of Creation, than I can do now.

### #11 freestar8n

freestar8n

Vendor - MetaGuide

• Posts: 8681
• Joined: 12 Oct 2007

Posted 23 June 2018 - 11:16 PM

"Nyquist" refers to a theorem - and that theorem involves the combination of a special input function - and special treatment of the output.  It doesn't really apply to imaging as it's done here - where we just sample an object - and then look at those samples as discrete pixels.  The finer you sample the object, the less the pixels will impact the view - and there is continued improvement in the image as you sample finer and finer.

There will be a limit to how much more detail you can actually see as a result of the finer sampling - but for stars especially - the more you sample them way below the diameter of the first Airy ring - the smoother and better they will look.

The Nyquist theorem says that if you sample the function at 2x the highest frequency, you can then recover the function *exactly* in a continuous way - if you take the discrete samples and put them through a low pass filter - and generate values at every point over a continuous interval.  But we aren't doing that with images.  We sample at a bunch of points - and then look at those samples.  So the density of samples has direct impact on the final image independent of what Nyquist "says."

The main motivation not to sample too finely is that you may end up with a more noisy image - due to the noise contributed at each pixel - and the fewer photon counts in each pixel.  But with less noisy sensors there is less downside to sampling very finely.  And you can always smooth or bin the final result if there is no benefit from the finer sampling and noise is a problem.

My view has always been that under or oversampling should be matched to what the imager cares about.  If you want a wide field and deep view then don't worry about undersampling - as long as the stars aren't noticeably blocky.  And if you are more concerned about high res - then go long focal length and small pixels, in terms of arc-seconds.  You may need to expose longer - but there is always benefit in smaller pixels for detail - though there are diminishing returns as you go smaller and smaller.

Frank

• ks__observer likes this

### #12 jhayes_tucson

jhayes_tucson

Fly Me to the Moon

• Posts: 7011
• Joined: 26 Aug 2012
• Loc: Bend, OR

Posted 24 June 2018 - 12:08 AM

You can often get the benefits of higher signal strength (which results in lower exposure) along with the benefits that come with a higher sampling rate by simply drizzling your data.  Of course this won't work very well if the image is sampled at too low a rate.  With my system, I've found that by sampling across the Airy disk with 1.6 pixels, I get sufficient sampling across the seeing blur spot to get good results even without drizzling.  I operate at a pretty long EFL (~3900 mm), so if we assume a seeing-limited blur diameter of 1", I get a minimum sampling rate of 2.1 pixels across the spot with my sensor.  Of course that number will simply scale with the seeing conditions.  Under most common seeing conditions where I operate (1.6" - 2.2") that rate (3-4 px/blur diameter) works pretty well to produce nice round stars.  Under almost any reasonably good conditions, drizzling often makes the star images appear noticeably better.  Since drizzling is effectively increasing the sampling rate, image detail increases as well, but the improvement is not as dramatic.  The limitation is that even though the sampling rate increases, the pixel size remains the same, which limits the amount of improvement that you get in sensor MTF with drizzling.  Still, this is a good approach that combines some of the benefits of smaller pixels with the benefits of larger pixels.
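The 2.1-pixels-across-the-blur figure can be reproduced directly (a sketch using the numbers quoted in this post: 3900 mm EFL, 9 µm pixels, a 1" blur diameter):

```python
def pixels_across_blur(seeing_arcsec, fl_mm, pixel_um):
    """Number of pixels spanned by the seeing disk at the focal plane."""
    blur_um = fl_mm * 1000 * seeing_arcsec / 206265  # blur size in microns
    return blur_um / pixel_um

print(f"{pixels_across_blur(1.0, 3900, 9):.1f} px")  # ~2.1 px
```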

John

Edited by jhayes_tucson, 24 June 2018 - 11:52 AM.

### #13 miwitte

miwitte

Apollo

• topic starter
• Posts: 1092
• Joined: 28 Dec 2016
• Loc: Charlotte NC

Posted 24 June 2018 - 07:06 AM

I do drizzle my images, as there is a noticeable difference. The answer I am looking for is: with my small pixels of 3.8 µm, will I get the details I am looking for going from 384 mm to 1000+ mm focal length? I am pretty sure it's yes, as I've seen some fine photos with that setup, but at what focal length would the pixel size limit detail?

### #14 ks__observer

ks__observer

Viking 1

• Posts: 981
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 24 June 2018 - 08:25 AM

but at what focal length would the pixel size limit detail?

It depends on the arcsec/pixel (asp) of your set-up and your seeing.

As noted above: "beyond about 4-5x sampling the loss in SNR per pixel is likely to outweigh the potential benefits of maximizing resolution."

Beyond that point most people consider down-sampling /binning.

As noted above, high-res + low SNR vs. low-res + high SNR or somewhere in the middle is the choice people make.

Check Astrobin pix scale for various systems and FOV's, and how much you like to zoom in on the details in those pics, to help you decide what you like.

### #15 dkeller_nc

dkeller_nc

Surveyor 1

• Posts: 1524
• Joined: 10 Jul 2016
• Loc: Central NC

Posted 24 June 2018 - 09:16 AM

I do drizzle my images, as there is a noticeable difference. The answer I am looking for is: with my small pixels of 3.8 µm, will I get the details I am looking for going from 384 mm to 1000+ mm focal length? I am pretty sure it's yes, as I've seen some fine photos with that setup, but at what focal length would the pixel size limit detail?

There is no image scale that will "limit details" as you go to longer focal lengths or smaller pixel sizes (i.e., your image scale is going down) - that's a misinterpretation.  You can, for example, vastly oversample the seeing at 0.1"/px by pairing one of the small pixel scale CMOS cameras to a long focal length scope.  There will be no "extra" detail in images from that setup over one with a more reasonable image scale of, for example, 0.75"/px, though there won't be any less, either.  What will happen is that you will need longer exposures with the 0.1"/px setup to swamp read noise, and longer integration with more subs to effectively overcome shot and quantization noise to pick out low contrast details in a target.

As Frank notes, the problem of swamping read noise in a highly oversampled system used to be a big one with CCD-based cameras that had read noise in the 10-15 e- range.  With a modern CMOS camera, that's a whole lot less of a concern.

I did basically the same thing you're considering, except a few months ago.  My existing setup was the same as yours - SV80mm, 0.8X FF/FR, ASI1600MM-C for an image scale of 2.04"/px.  I added a SV130MM with a 0.72X FF/FR and an ASI183MM-Pro for an image scale of 0.75"/px.  This has worked out very well for "galaxy season", with most targets I'm interested in occupying at least 20% of the field of view of the camera.

Swamping read noise by 5X with this setup has resulted in subs in the 2 minute range for L at a gain of about 2 e-/ADU (i.e., about 1/2 "unity gain") at a dark site.  For RGB, I use unity gain and 2 minute subs at the same "swamp multiplier".  I haven't done a whole lot of NB with this camera yet, but what I have done suggests that 4 minute subs at unity gain will be about right.

Edited by dkeller_nc, 24 June 2018 - 09:32 AM.

### #16 dkeller_nc

dkeller_nc

Surveyor 1

• Posts: 1524
• Joined: 10 Jul 2016
• Loc: Central NC

Posted 24 June 2018 - 09:51 AM

With respect to equipment choices, I went through much the same considerations as you;  I wanted something that my existing mount would handle (limit of about 30lbs with camera and all accessories), a focal length in the 700mm - 1200mm range, and a decent aperture so I could keep the scope/optical system reasonably "fast".  I considered an 8" ONTC F4 imaging newt from TS, an 8" Edge HD, a 5" apo triplet, and a few other less-common choices like a maksutov-newt.

The 8" SCT had an attractive initial price of about $1200, but once I figured in the price of a replacement focuser so that the mirror could stay locked, a required FR to get the focal length down to something reasonable, an OAG (and a guide camera that would fit), and a replacement thinner filter wheel to accommodate the narrow back-focus, the price wasn't so attractive.  The deal killer was that after spending about $2500 on the setup, I'd still need to futz with collimation, tolerate a longer time to thermal equilibrium, and still wind up with a relatively slow system at F7.

The 8" imaging newt was considerably more attractive since it would be uber-fast, have a much more reasonable focal length in the 800mm - 1000mm range, and achieve rapid thermal equilibrium because of its open tube design.  Teleskop Service's offerings got around my principal objection to most imaging newts - poor mechanics and questionable optics.  And their asking price was about right - high enough to pay for excellent mechanics and a stiff carbon fiber tube, but not stratospheric like ASA's offerings.  There were three main reasons why I rejected this option:  the weight was near the limit of what I thought my mount would handle, having to futz with collimation, and an open-tube design that made taking flats at the end of every imaging session mandatory (as opposed to a refractor, where flats last for months).  There was also the presence of diffraction spikes in images - they don't bother me for the most part, and I even consider them desirable in star cluster images, but others might think differently.

That brought me back around to a 5" refractor.  Thermal equilibrium is reasonably fast, the sealed optical system means taking flats every imaging session isn't necessary (presuming I don't take the optical train apart), it has an appropriate focal length for my purposes when paired with the ASI183MM-C, it's reasonably fast at F5 with the FF/FR, and I could get a tested and verified example from Stellarvue without having to sell a kidney.

So those were my considerations during the decision process - I would guess that others would make different decisions based on different priorities, but the characteristics of the optical systems remain the same.

### #17 jhayes_tucson

jhayes_tucson

Fly Me to the Moon

• Posts: 7011
• Joined: 26 Aug 2012
• Loc: Bend, OR

Posted 24 June 2018 - 11:51 AM

I do drizzle my images as there is a noticeable difference. The answer I am looking for is with my small pixels of 3.8 um will I get the details I am looking for going from 384 mm to 1000+ Focal length ? I am pretty sure it's yes as I've seen some fine photos with that setup but at what focal length would the pixel size limit detail?

Up until the seeing blur diameter is larger than the Airy disk, the sampling rate is driven by the focal ratio.  Once the seeing blur diameter gets significantly larger than the Airy disk, effective focal length rules.  Let's use typical seeing of 2" for a system with a focal length of 1000 mm at F/8.  In this case, the Airy disk will have a diameter of about 10.7 microns and the seeing blur a diameter of about 9.7 microns.  These numbers are pretty close, which means that 2" seeing is close to the crossover point where seeing isn't hurting the performance very much.  At this point, using a detector with a pixel size of around 5 microns (i.e., ~2x samples across the Airy disk) will give you about as much detail as you are going to get under real-world conditions.

In my system at F/11 (with an EFL of 3900 mm) under 1.5" seeing conditions, the Airy disk is about 14.7 microns and the seeing blur is 28.4 microns so seeing is clearly what drives the sampling rate.  In that case, my 9 micron pixels are sampling at ~3x across the blur diameter.  Going to a sensor with 3.8 micron pixels would just decrease signal strength and do little to help improve information content.  I could probably even use a 12 micron pixel and it would work pretty well.  This is why you see a lot of high-end scientific cameras with pretty large pixels aimed at professional astronomy applications where large telescopes are used.

If you really want to satisfy the Nyquist sampling requirements, you have to sample with very small pixels (actually with zero area) at twice the maximum spatial frequency transmitted by the optical system.  The maximum spatial frequency transmitted by any optical system is given by 1/(lambda*F), where F = focal ratio.  For our example system with F=8.0, the maximum spatial frequency will be 0.227 lp/um, so you would have to sample at a rate of 0.45 lp/um.  That requires a detector spacing of 2.2 um, or 4.86 samples across the Airy disk.  Remember that to strictly satisfy the Nyquist theorem you'll need point sensors (not area detectors), so in the real world you'll have zero signal.  The Nyquist theorem is indeed "real," and it would work in space (where there are no seeing effects) with perfect optics, but there are practical reasons that make it difficult to achieve perfect Nyquist resolution--even on a perfect system under perfect conditions--so we normally just try to design systems to get close.
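These numbers can be reproduced with the standard formulas (a sketch; 0.55 µm is an assumed mid-visible wavelength, which lands close to the figures quoted):

```python
LAMBDA_UM = 0.55  # assumed mid-visible wavelength in microns

def airy_diameter_um(f_ratio):
    """Airy disk diameter (to the first dark ring): 2.44 * lambda * F."""
    return 2.44 * LAMBDA_UM * f_ratio

def seeing_blur_um(seeing_arcsec, fl_mm):
    """Linear size of the seeing disk at the focal plane, in microns."""
    return fl_mm * 1000 * seeing_arcsec / 206265

# The F/8, 1000 mm example under 2" seeing
print(f"Airy disk:   {airy_diameter_um(8):.1f} um")       # ~10.7 um
print(f"Seeing blur: {seeing_blur_um(2.0, 1000):.1f} um") # ~9.7 um

# Nyquist against the optical cutoff frequency 1/(lambda*F)
cutoff_lp_um = 1 / (LAMBDA_UM * 8)   # ~0.227 lp/um
pitch_um = 1 / (2 * cutoff_lp_um)    # ~2.2 um detector spacing
print(f"Cutoff: {cutoff_lp_um:.3f} lp/um, Nyquist pitch: {pitch_um:.1f} um")
```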

John

Edited by jhayes_tucson, 24 June 2018 - 11:56 AM.

### #18 miwitte

miwitte

Apollo

• topic starter
• Posts: 1092
• Joined: 28 Dec 2016
• Loc: Charlotte NC

Posted 24 June 2018 - 06:36 PM

John, great explanation. Mr. Keller, as always you have great suggestions that have helped me this past year. I too have been looking at the cost to truly make an 8" Edge HD image-worthy, and the back focus could be an issue (I am just starting research). I agree you need a focuser to lock the mirrors, and need the reducer (I hadn't thought about the 120mm-S camera fitting on a thin OAG). I don't want to go down the cheap RC path, and reading this I think the Edge has lost its "edge," perhaps. So this will probably lead me towards another refractor. If you have a galaxy shot of the same object at 80mm vs. the 130, that would be very helpful.

As always, thanks to everyone; this stuff is pretty hard to digest and it's easy to make expensive mistakes. I just don't want to end up with a combo that gives me square stars!

### #19 Jon Rista

Jon Rista

ISS

• Posts: 23349
• Joined: 10 Jan 2014

Posted 24 June 2018 - 06:53 PM

Just curious...is a newt not an option? Big apertures, fast f-ratios...

### #20 dkeller_nc

dkeller_nc

Surveyor 1

• Posts: 1524
• Joined: 10 Jul 2016
• Loc: Central NC

Posted 24 June 2018 - 10:08 PM

Miwitte - Unfortunately, I don't.  I only took a couple of galaxy shots with the 80mm/1600MM-C combination, and that was just fooling around to convince myself that LRGB imaging from my house was pointless.  I didn't bother with any other targets with that setup - it was pretty obvious just from enlarging the individual subs while acquiring them that my setup just didn't have the focal length needed.  And when I set up the 130mm/183MM-C in early March, I didn't think to try again on M51 - there was more interesting stuff like the Sombrero and the Leo Triplet, and because of the LP problem, I knew I could get at most 3 or maybe 4 targets this spring.

With respect to the SCT, there's one more consideration for you to complicate the picture.  If you have an interest in planetary photography, you might find after doing the calculations that I did this afternoon that an 8" Edge is about the only game in town, at a minimum (a C11 or C14 would be much better).  It's not that you can't take shots of Jupiter, Saturn and Mars with a 5" refractor, it's just that to get an image where the planet spans more than 50 to 75 pixels you'll need a 3X or 4X Powermate, which puts you at F21 to F28.  Even though the planets are very, very bright compared to your average DSO, things get really dim at those sorts of focal ratios, and that means using a slower frame rate, which is the opposite direction that you want to go in to freeze the seeing and image at 0.1"/px.
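A sketch of that pixel-span arithmetic (assumptions mine: Jupiter at roughly 45" apparent diameter and a hypothetical 2.9 µm planetary-camera pixel; 910 mm stands in for a nominal 130 mm F/7 refractor):

```python
def planet_span_px(planet_arcsec, fl_mm, pixel_um):
    """Apparent planet diameter in pixels at a given focal length."""
    scale = 206.265 * pixel_um / fl_mm  # image scale in arcsec per pixel
    return planet_arcsec / scale

JUPITER_ARCSEC = 45.0  # approximate apparent diameter near opposition

print(f"{planet_span_px(JUPITER_ARCSEC, 910, 2.9):.0f} px")      # native ~F/7
print(f"{planet_span_px(JUPITER_ARCSEC, 3 * 910, 2.9):.0f} px")  # with a 3x Powermate
```

The native figure lands in the 50-75 pixel range mentioned above; the Powermate triples it, at the cost of a much slower focal ratio.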

I guess this is a "duh" moment for me, but there's just no getting around the fact that it takes special tools for specific applications - there's no one "do it all" camera or scope.

Edited by dkeller_nc, 24 June 2018 - 10:10 PM.

• Jon Rista likes this

### #21 Jon Rista

Jon Rista

ISS

• Posts: 23349
• Joined: 10 Jan 2014

Posted 24 June 2018 - 10:24 PM

I agree with Keller here. The Edge scopes are awesome for planetary. All of the best planetary work I've seen, the stuff filling up my favorites, was done with EdgeHD scopes. Usually 11" or 14"...but there are many with 8" as well.

### #22 ks__observer

ks__observer

Viking 1

• Posts: 981
• Joined: 28 Sep 2016
• Loc: Long Island, New York

Posted 25 June 2018 - 03:22 AM

Re Edge and planetary -- sorry in advance for straying off topic:

If your interest in getting an SCT is solely or even mainly for planetary, I think a non-Edge might be a better choice.

For the same priced Edge you can go up a size in a regular SCT.

The Edge is designed for a flat field across a full frame sensor.

For planetary you are only using a small dot in the center of the scope's output.

I wonder even with an ASI1600 4/3rd sensor how much field curvature you get with a regular SCT.

Re refractors and planetary:

The real issue I think is the improved Dawes limit with an SCT.

For lucky imaging you are basically Dawes limited and not seeing limited.

Edited by ks__observer, 25 June 2018 - 03:39 AM.

### #23 Swanny

Swanny

Mariner 2

• Posts: 279
• Joined: 12 Mar 2017
• Loc: AZ

Posted 25 June 2018 - 01:31 PM

Do you have the Bracken book on AP? It explains a lot of what you are asking, as well as opening your eyes to what is most important for making great images (i.e., not all pixels are equal).

### #24 miwitte

miwitte

Apollo

• topic starter
• Posts: 1092
• Joined: 28 Dec 2016
• Loc: Charlotte NC

Posted 25 June 2018 - 08:56 PM

I do have Bracken, and that's what started this 🙄

The reality is I'm trying to figure out what scope to get next, and if my camera's 3.8 µm pixels will work with longer focal lengths.

### #25 bobzeq25

bobzeq25

Hubble

• Posts: 16396
• Joined: 27 Oct 2014

Posted 28 June 2018 - 11:17 AM

I do have Bracken, and that's what started this

The reality is I'm trying to figure out what scope to get next, and if my camera's 3.8 µm pixels will work with longer focal lengths.

3.8 will work with longer focal lengths.  But I suggest not going _too_ much longer.

I'm going to be bold and just make a recommendation (or two): something in the 900-1000mm range.  You can look at my recent Astrobin for some images with a 130mm F7 refractor.  Another possibility would be something like an 8-inch F5 Newtonian.  Those scopes (5-inch refractor, 8-inch Newtonian) are widely used, with lots of choices for scopes and accessories.

That would be an image scale near 0.75 (assuming you get just a flattener for the refractor).  Pretty small, but not so small that it would be useless in ordinary seeing or with what I assume your tracking is.
