Ignoring for a moment the differences in everything else (better resolution, full bandwidth, and the use of filters/wider-band sensors), is there some kind of rule of thumb for how much light gathering is gained by putting Gen3 night vision behind a telescope with a mid-sized lens versus just using a larger telescope? I thought I had read once that the gain is about 2 magnitudes, though I forget where, so I'm using that figure below.
Making my thinking public so that people can point out if I'm wrong: if I follow that logic, then where the naked-eye limiting magnitude is 6 in a rural area, I should be able to see magnitude 8 through the NV tube itself. (The difference may be even bigger in an urban area: if I can only see magnitude 4 from an urban backyard, could I see better than 6 there with NV? I'd heard NV was the ticket for urban viewing and wondered if there was more to it than just light gathering.)
So if I take even basic 50mm binoculars or a 50mm telescope to the rural site, and if I believe some online charts (https://twcac.org/Tu...itude_table.htm), I might be able to see down to magnitude 11, and putting the NV behind it would mean seeing down to magnitude 13, almost comparable to a 5-inch telescope. Is that about right? From the same chart, an 8-inch scope reaching magnitude 14 plus NV (gaining 2) would be almost comparable to a 20-inch scope listed at magnitude 16...
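To sanity-check those chart numbers, here's a rough Python sketch. It uses one common rule-of-thumb limiting-magnitude formula, LM ≈ 2.7 + 5·log10(D in mm), plus my assumed +2 magnitude NV gain; both the 2.7 constant and the +2 are assumptions rather than measured values, and published charts differ by half a magnitude or more:

```python
import math

NV_GAIN_MAG = 2.0  # assumed Gen3 NV gain in magnitudes (rule of thumb, not measured)

def limiting_mag(aperture_mm):
    """Rough telescopic limiting magnitude: LM ~ 2.7 + 5*log10(D_mm).
    One common rule of thumb; published charts vary by half a mag or more."""
    return 2.7 + 5 * math.log10(aperture_mm)

def nv_equivalent_aperture(aperture_mm, gain_mag=NV_GAIN_MAG):
    """Aperture whose limiting magnitude matches aperture_mm plus the NV gain.
    Each magnitude of gain multiplies the required diameter by 10**(1/5)."""
    return aperture_mm * 10 ** (gain_mag / 5)

for d_mm in (50, 127, 203, 508):  # 50mm binos, 5-inch, 8-inch, 20-inch
    print(f"{d_mm:4d} mm: LM ~ {limiting_mag(d_mm):4.1f}, "
          f"with NV ~ {limiting_mag(d_mm) + NV_GAIN_MAG:4.1f}, "
          f"NV-equivalent aperture ~ {nv_equivalent_aperture(d_mm):3.0f} mm")
```

With those assumptions, 50mm + NV lands right around a 5-inch (~126mm), and an 8-inch + NV lands right around a 20-inch (~510mm), which matches the chart comparisons above.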
I'm just referring to raw visibility of stars and deep-sky objects. The reason for asking is to get some idea of how much larger a telescope would have to be to be remotely comparable for real-time viewing, i.e. someone's 8-inch scope with an NV tube on the back showing me what I might expect to see without the NV tube in a 20-inch scope. I'm also trying to figure out the sweet spot where paying $3000 for an NV tube costs less than roughly 2.5x more aperture.
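That 2.5x figure falls out of the magnitude scale: a gain of Δm magnitudes corresponds to multiplying the aperture diameter by 10^(Δm/5), so +2 magnitudes ≈ 10^0.4 ≈ 2.51x the diameter. A minimal break-even sketch follows; the 8-inch and 20-inch prices in it are placeholders for illustration, not real quotes, and only the $3000 tube price comes from my question:

```python
import math

def aperture_ratio(gain_mag):
    """Diameter multiplier equivalent to a limiting-magnitude gain of gain_mag.
    Limiting magnitude scales as 5*log10(D), so D2/D1 = 10**(gain/5)."""
    return 10 ** (gain_mag / 5)

print(f"+2 mag is worth a {aperture_ratio(2.0):.2f}x larger aperture")  # ~2.51x

# Hypothetical break-even check: if upgrading from an 8" to a 20" costs more
# than the NV tube, the tube is the cheaper path to the same ~2 mag gain
# (ignoring resolution, field of view, ergonomics, etc.).
# The scope prices below are placeholders; only the $3000 tube price is real.
price_8in, price_20in, price_nv_tube = 600, 6000, 3000  # illustrative USD
print("NV tube is cheaper:", price_20in - price_8in > price_nv_tube)
```

Plugging real scope prices into that comparison would show where the crossover actually sits.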