
Future of Image Intensifier for Hand Held Use

NV
21 replies to this topic

#1 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 04 May 2020 - 03:12 PM

First, some of this will be funny if you are super nerdy or geeky, I think. But maybe not. I mean, I ain't British, with their highly developed wit, though I do think I have the charm and refinement parts down. I drink my Gatorade shaken, not stirred, and with a loquat twist when the squirrel kicks some out of the tree while I am out on the patio in deep thought, which has to happen fast before the caffeine kicks in and I want to get up and floss dance.

 

Last night, I watched a YouTube video in which the poster speculated that advanced sensors like the one in the Sony A7 might rival image intensifiers in the next generation.

 

On that point I have serious doubts, and this post can serve as a way for me to express those doubts and have a dialog about them if my reasoning seems flawed, which is possible, but to my own way of thinking doubtful, because I just finished a "Gatorade." Wink wink, nudge nudge....

Right at the start of the video, the author conceded that current sensors are not nearly as efficient and sensitive as image intensifiers. I think that while the next generation of sensors might be better than today's, I seriously doubt the huge gap in sensitivity will close to any meaningful degree, enough to let a sensor come close to competing with an image intensifier in low-light, real-time performance.

 

Let's take the example of the SiOnyx, which has one of the most sensitive sensors currently made, due largely to its very broad spectral response. I have seen images taken with these sensors that did not have as good a resolution as my Mod 3 (much more on that) and that were taken with 1.5 seconds of exposure. Now let's measure just how far from an image intensifier that really is.

 

The human eye needs a frame rate higher than about 20 frames per second for motion to appear smooth, and 24 FPS is the common frame rate often used in something like aerial drone footage. A 24 FPS frame rate, which would be required to preserve smooth motion even at a very low slew rate, equals a shutter speed of slightly longer than 0.04 seconds per frame. So if the SiOnyx requires 1.5 seconds to produce a single frame with the smoothness of the real-time view in an image intensifier, its sensitivity would have to increase roughly 36-fold. For live image reproduction, then, it is highly unlikely that silicon-based sensors will achieve this level of sensitivity for many sensor generations, if ever.
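
As a quick sanity check, here is that arithmetic as a few lines of Python (my own back-of-the-envelope sketch, using only the numbers above):

# Sensitivity gain needed to go from a 1.5 s exposure to 24 FPS live video.
target_fps = 24.0
frame_time_s = 1.0 / target_fps          # ~0.0417 s per frame
current_exposure_s = 1.5                 # SiOnyx exposure quoted above

gain_needed = current_exposure_s / frame_time_s
print(f"frame time at {target_fps:.0f} FPS: {frame_time_s:.4f} s")
print(f"required sensitivity increase: {gain_needed:.0f}x")   # -> 36x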

 

Next is resolution. I often see this cited as a major advantage of digital sensors, but is it really?

Let's explore this more fully. A modern image intensifier with 72 line pairs per millimeter is readily available today. Now what does that really mean? Well, the photocathode of a modern image intensifier is about 17.7mm in diameter (I have heard 17.8 and 17.7, but let's be conservative, give the Sony sensor a break, and go with 17.7mm).

 

A line pair is two lines: one line is white and one line is black. That means that 72 line pairs per millimeter is not 72 lines, it is 144 lines, 72 white lines alternated with 72 black lines, for a total of 144 lines per millimeter. Over the full diameter of the 17.7mm circle, the image intensifier can display a staggering 2,548.8 lines.
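
Same thing in Python form (again just restating the arithmetic above):

# Resolvable lines across the photocathode.
lp_per_mm = 72                  # line pairs per millimeter (spec)
diameter_mm = 17.7              # photocathode diameter, conservative figure

lines_per_mm = 2 * lp_per_mm    # each pair is one white line plus one black line
total_lines = lines_per_mm * diameter_mm
print(f"{total_lines:.1f} lines across {diameter_mm} mm")     # -> 2548.8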

 

Now, let's move to the Sony sensor. The sensor has a video resolution (and real-time viewing is what we are talking about, not the max resolution one could use in a very long exposure, which is as far from real time as one can get) of what sounds like an impressive 3840x2160 pixels. Hey, Ed, that sounds a lot better than the 2,548.8 lines of the image intensifier! Well, no, it isn't even close. See, to make a fair comparison, we have to see how many lines we could put in the same size circle as we used for the image intensifier.

 

Now the 2160 pixel count is the number of horizontal lines (rows) on the sensor, and the 3840 is the number of vertical lines (columns). Let's start with the 23.9mm height of the sensor. With 2160 lines from top to bottom, if we were to cut a 17.7mm circle out of that sensor, we would find that the line count inside 17.7mm drops to 74% of the number in the full height of the sensor, leaving only about 1,600 lines. What this means is that if you were to make the sensor the exact same size, a 17.7mm circle, and put it into the housing of a PVS-14, the resolution would be considerably less than that of the image intensifier.
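
Here is that crop worked out in Python (my sketch of the same arithmetic):

# Video-mode lines the Sony sensor can fit inside a 17.7 mm circle.
video_rows = 2160               # 4K UHD video: 3840 x 2160
sensor_height_mm = 23.9         # full-frame sensor height
circle_mm = 17.7                # photocathode-sized crop

fraction = circle_mm / sensor_height_mm          # ~0.74
lines_in_circle = video_rows * fraction
print(f"{lines_in_circle:.0f} lines in a {circle_mm} mm circle")   # ~1600, vs 2548.8 for the tube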

 

The argument is going to be "well, the sensor is a lot bigger," and yes, it is, but to make the device small and portable, it is impractical to have a very large sensor. Let's explore that further.

 

In a recent post, we examined the vignetting characteristics of the modern Gen 3 night vision device. We know that the fully illuminated circle on the photocathode is only about 10mm at best when using the mil-spec objective. Now we could change the objective to fully illuminate the full diagonal of the A7 sensor, or even a smaller area of it, but to do so, the flange would have to be quite large, and the lenses would be hugely bigger, heavier, and more complex than the simple lens of the PVS-14. In other words, to get a well-illuminated field over the entire sensor, the device would have to be the size of, oh, let me think about that... ah, I am trying to think of something we can all relate to... OK, I know: it would have to be about the size of a Sony A7 camera, and to get the same speed, the objective would have to be the size of, oh, a big hunk of SLR glass.

 

But it gets worse. Now we have to be able to view the screen in real time when it is attached to our helmets. Let's say we put an Apple Watch display at the focus of our Mod 3 or PVS-14 eyepiece. Today, the Apple Watch has a pixel density of "only" 326 pixels per inch, or, put into PVS-14 screen size, about 227 lines, or 12.8 lines per millimeter!!! There is no output screen today even close to having the resolution that would allow for packaging in a hand-held, helmet-wearable display!
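
And the display math, once more in Python (my own sketch):

# Pixels a 326 ppi display puts across a 17.7 mm (PVS-14-sized) screen.
ppi = 326.0
mm_per_inch = 25.4
screen_mm = 17.7

px_per_mm = ppi / mm_per_inch          # ~12.8 px/mm
px_across = px_per_mm * screen_mm      # ~227 pixels
print(f"{px_per_mm:.1f} px/mm -> {px_across:.0f} px across {screen_mm} mm")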

 

OK, let's be creative. Let's use our PIMAX 4K gamer headset. Now we are getting 1920x2160 resolution, and while the resolution is not as good as the image intensifier's, the huge apparent field is a compelling benefit. But in the end, you have a two-pound camera/power supply on your helmet and a cell phone stuck four inches in front of your face. And putting it on your helmet is not the best idea because of the parallax error. Hint: hold your rifle up in front of the lens. Otherwise, just figuring out the parallax error, and whether you should aim high or low, will hurt your head. When you do this, make sure you put a pillow in front of your face, because even an unsupported M4 rifle will probably have enough recoil to plant the buttpad into your nose. See, I do think a lot about stuff like this...

But let's streamline it and make it a monocular. So, half of your PIMAX 4K (no 3D for you!) with a Sony A7 stuck to the front and a couple of good high-capacity batteries, and you still have to wait a second and a half to see what we can see today in a Mod 3 in real time, but you do get that big apparent field everyone lusts for. (You can have your stereo back if you add a second Sony A7. That'd be cool... I would look soooooo freaking awesome, if I did not get arrested walking around the neighborhood at night while wearing it!)

 

And this is the most pressing argument for not expecting the Gen 3 image intensifier to go away for a very long time: the US government would surely know about any defense contractor pursuing a breakthrough in imaging technology and would probably be funding part of the research if it looked promising. And if they saw something on the horizon, they would not be putting out Omni IX contracts. No, there is nothing out there that is going to replace Gen 3 technology in a hand-held or wearable device even in the mid-term future, unless some incredible breakthrough occurs.

 

Now for mounted use, that is a different story. If you abandon the need to keep it hand-held and instead say it just has to fit inside a tank, where the driver will watch where they are going on an iPad from his fighting bunk in the barracks, well, yeah, that may happen in three or four generations of Sony sensors, if they could live with a 15 frames per second video feed.

Aerial applications will also be able to use advanced sensors, where signal-processing computers can upscale the resolution and smooth the frames, but the graphics cards alone for this would consume more power in a minute than a PVS-14 consumes in a month. Imagine having to velcro an NVIDIA TITAN RTX graphics card onto the back of your helmet, though that would help counterbalance the big SLR lens, the Sony A7-sized sensor housing, and the PIMAX 4K monocular. (At this point, should we just go full binocular with an iPhone X and a cardboard 3D holder to save weight??? If so, just make sure you are in airplane mode before going on patrol. Getting a text from your GF during a firefight would be a good way to move her along to her next boyfriend. Sad for you though... LOL.)

 

So, I am not saying that sensors won't get better and won't find more uses. What I am saying is that, barring some sudden monumental breakthrough, an all-digital hand-held device that can show 24 frames per second of video at the illumination levels image intensifiers work with is probably decades away. The sensors will need orders-of-magnitude increases in sensitivity and density, and in graphics processing (because it is unlikely the frame rate can be achieved by sensitivity advances alone), and the output display technology will have to be orders of magnitude more dense, or it will be impossible to match the form factor and performance of even the current generation of image intensifiers.

 

Now this is me looking into my very cloudy crystal ball. And maybe some of my math or logic is faulty and I expect the forum members to correct me if that is the case.

 

I make no apology for those errors or faults. This is just a convergence of many things we discuss in the forum and that I see elsewhere, and I like to think about things like this to keep my mind from turning into a social media junkyard. I love image intensifiers and I think a lot about them, and from time to time people ask me if there will be a breakthrough, which has caused me to consider the obstacles of simply buying a Sony A7 chip, mounting it in a Mod 3, and making it work in a telescope for real-time observing.

Now, I could just stick my SiOnyx camera in my focuser, wait 1.5 seconds, then download the image to my computer and watch it in its full 720p glory, but if I were going to do that, I would just buy a Revolution Imager and wait 8 seconds. Walking anywhere with it on my bump helmet would be either slow or dangerous or both, but I could make a jerky, streaky time lapse of my plunge off a cliff to the superhero-style landing, which I have studied hard and think I could replicate.

 

 

So, I look forward to any dialog or discussion this might generate. Maybe there are ways to do it that I am too slow to have envisioned, but I have tried mightily to figure out where we would have to be to come close to the real-time performance of modern tubes in a compact device (not even considering battery life), and this is the best I could do. It is a daunting problem that I do not think can be solved in the course of a single generation of sensor evolution, and maybe not even in several. It is a very difficult problem.

 

While we think of Gen 3 as "old" technology, the fastest operational airplane ever flown was built when I was a kid, and nothing today is faster. The most powerful rocket engine ever made was built when I was a kid, and nothing today is more powerful. Who is to say that the Gen 3 image intensifier won't be around for a very long time when, from where we sit today, the gap is so large?

 

Your thoughts????

  


  • Starman27, Joko, ArsMachina and 2 others like this

#2 torex

torex

    Lift Off

  • -----
  • Posts: 17
  • Joined: 11 Apr 2020

Posted 04 May 2020 - 03:52 PM

Well, image intensifiers are old, but they are still improving, and sensors still have quite a bit of catching up to do. I don't think sensors will catch up if image intensifiers keep improving, but I would definitely like sensors to get that good, and maybe cheaper and more globally accessible for astronomy.



#3 Hilbily

Hilbily

    Mariner 2

  • *****
  • Posts: 295
  • Joined: 19 Aug 2014
  • Loc: VA

Posted 04 May 2020 - 04:36 PM

Gen 3 is only getting better and cheaper for the specs you can get now!

Military SWIR and multispectral sensors are unreal in what they can see, and see through.



#4 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 04 May 2020 - 05:06 PM

Gen 3 is only getting better and cheaper for the specs you can get now!

Military SWIR and multispectral sensors are unreal in what they can see, and see through.

Yeah, a lot of people think that silicon sensors are somehow close to being equal to Gen 3, but not in real terms of resolution or sensitivity.

 

While the standard NV tube has an 18mm window, even in Gen 2 the window was 25mm. Now suppose you had a 64 line pair tube 25mm in diameter. Even at this lower resolution, you would still have 3,200 lines across it, and that is more than the Sony A7 in video mode.

This is why the "sensors have better resolution" argument is kind of a difficult one to defend. If you make them the same height, the Gen 3, even at 64 line pairs, has better resolution.
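
The same arithmetic as in my first post, in Python, for the bigger tube (my sketch):

# Resolvable lines across a 25 mm tube at 64 lp/mm.
lp_per_mm = 64
diameter_mm = 25.0

total_lines = 2 * lp_per_mm * diameter_mm
print(f"{total_lines:.0f} lines across {diameter_mm} mm")   # -> 3200, more than the Sony's 2160 video rows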

 

Now, even the 25mm Gen 3 image intensifier is way too big for hand-held use. I mean, I guess you could do it, but it would be about the size of a Sony A7 by the time you put a big objective on it capable of well illuminating the field. At least you could have a way to see the image, though, which is not currently possible with any real resolution using built-in LCD displays.

 

The only thing preventing an image tube 35mm wide and 25mm high is that no one wants one. The eyepiece would have to be the size of a 31mm Nagler.

 

So yeah, barring some breakthrough, we are several generations of evolution away from sensors having the sensitivity, resolution, and output technology to replace the "old" PVS-14. Uncle Sam knows that waiting is not a good idea.



#5 DavidWasch

DavidWasch

    Messenger

  • *****
  • Posts: 419
  • Joined: 08 Apr 2008
  • Loc: CT

Posted 04 May 2020 - 06:13 PM

I like the way you break things down, Eddgie.

 

The issues you outline fall into a few camps:

- light response 

- resolution

- processing power

- ergonomics

 

Out of these, I think (or vaguely opine, since I'm no engineer) the issues around light response and resolution will be the most daunting, especially since they work in concert: higher resolution tends to diminish light response, since more of the sensor chip is taken up by the spaces between the photodiodes (fill factor), along with other factors related to readout noise.

 

Processing power and ergonomics will take time to resolve, but processing power is still increasing at a good clip. Although not at the rate of Moore's Law anymore, technological heft is continuing to come in smaller and smaller packages.

 

At the end of it, I think image intensifiers will not so much be eclipsed by CCD sensors as be subsumed into them. We are probably many years away from it, but I could see a future CCD chip with a version of a micro-channel plate between the photodiodes and potential wells.

It would be a CCD sensor on intensifier steroids.



#6 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 04 May 2020 - 08:24 PM

 

Out of these, I think (or vaguely opine, since I'm no engineer) the issues around light response and resolution will be the most daunting, especially since they work in concert: higher resolution tends to diminish light response, since more of the sensor chip is taken up by the spaces between the photodiodes (fill factor), along with other factors related to readout noise.

 

 

Yes, this is the throttle. As good as some of these sensors are, if a smooth, flicker-free image under starlight conditions is required for war fighting or for real-time direct-view astronomy, digital is unlikely to challenge analog for a very long time.

 

If one can tolerate a 1.5-second delay, though, a camera like an A7 and an iPad can do that now. If one can tolerate a bit less resolution than the Sony offers and can wait 8 seconds, then that is far less expensive. But neither of these is what we call "night vision." We call these "EAA."

 

This was just kind of a thought exercise for me, though, and maybe some of my assumptions are wrong, but I see remarks that digital will soon replace analog night vision, and barring some breakthrough, I don't think that will happen. Complement it in some applications? Yes. But it is not close to replacing it.



#7 Jethro7

Jethro7

    Vanguard

  • *****
  • Posts: 2,212
  • Joined: 17 Dec 2018
  • Loc: N.W. Florida

Posted 04 May 2020 - 08:59 PM

I have heard rumors that DARPA has been testing personal night vision devices that can see color in total darkness (I've heard no more intensifier tubes; it must be CCD or CMOS). I personally can't wrap my mind around how this is done. There would have to be some sort of light, ambient or from some kind of illuminator. Who knows? Now DARPA is going to come and take me away, HAHA. HAPPY SKIES, Jethro.



#8 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 04 May 2020 - 10:10 PM

Well, I think generating an image from a totally dark space would be possible. Perhaps using radar or sonar I could sweep an area and, using a processor, generate an image on a screen, but that would not be color.

 

There has been some study of using infrared light and sensors to measure slight differences in temperature, but that is an active-emitter approach.

 

Maybe I could generate false color to give texture or something, but if there are no photons of light, I am not sure how one would get true color perception. But I could absolutely see using a small phased-array radar to map the area in front of me. I just don't know how I would do color.

 

Now, I could do a thermal overlay, and that would give me color, but not "true" color.

 

You would have to actively emit to generate an image, and if you wanted to move around, you would have to be transmitting 20 bursts a second to get a smooth, jitter-free display.

 

Also, we still have the problem of size and output display. I mean, I could put the output on a tablet- or phone-sized headset display, but then we still have the issue of mounting it. Could I mount the whole thing on a rifle? Would it be hand-held, with the ocular in line with the emitter and collector?

 

Of course, this would be useless for astronomy, but I think it would be totally feasible as a way to "see" in the dark.

 

But there are some bright people out there, and this would be a "breakthrough" technology, though I would think it would have to be fundamentally different from traditional imaging sensors. If there are no photons, then it has to be some form of "mapping." We do live in the 21st century, though, and there are people a lot smarter than I am out there, but like you, I can't wrap my brain around how they would do true color in total darkness without the system being an active-emitter type of system.

 

Again, there are some smart people out there, but it would seem that it has to be either an active-emitter mapping system or some kind of non-photon-based sensor, because if it were completely dark, there would be no photons.

 

Hey, that is fair though. I did not say it had to be photons. If you can't put it on a rifle, though, it won't replace the PVS-14 or Mod 3, and it might not be useful for real-time astronomy.

 

Also, I heard that they had the space drive from the flying saucer that crashed at Area 51.


Edited by Eddgie, 04 May 2020 - 10:20 PM.


#9 Wildetelescope

Wildetelescope

    Mercury-Atlas

  • -----
  • Posts: 2,567
  • Joined: 12 Feb 2015
  • Loc: Maryland

Posted 04 May 2020 - 11:16 PM

I have heard rumors that DARPA has been testing personal night vision devices that can see color in total darkness (I've heard no more intensifier tubes; it must be CCD or CMOS). I personally can't wrap my mind around how this is done. There would have to be some sort of light, ambient or from some kind of illuminator. Who knows? Now DARPA is going to come and take me away, HAHA. HAPPY SKIES, Jethro.

DARPA does not come to take you away;-).  That is what the FBI is for.

 

jmd 


  • Jethro7 likes this

#10 hoof

hoof

    Surveyor 1

  • *****
  • Posts: 1,998
  • Joined: 07 Apr 2005
  • Loc: Monroe, WA

Posted 04 May 2020 - 11:50 PM

Great writeup, Eddgie. You've summed up why current CMOS sensors won't replace analog image intensifiers: the basic principle of accumulation buckets that those sensors use simply isn't competitive for real-time low-light viewing.

 

However, there is a tech I've been loosely monitoring that *does* have the capability to one day replace image intensifiers, at least for non-handheld uses. Don't get me wrong, it's at least a decade or two away, but the underlying principle is what has me interested.

 

https://www.npr.org/...s-what-you-cant

 

Like the analog intensifiers, this one has elements that fire upon a mere photon hit. Sensitivity to extremely low light in real time is essentially what this thing does. Its main drawbacks are the method of extracting info fast enough (especially as light levels rise) and the sheer processing power necessary to extract and present an image. But solve those two problems (possible in the next 20 years), and you have a digital sensor setup that can greatly exceed (from what I can tell) current analog image intensifiers, depending on the number of photons required to "fire" a cell (I think it's 100 or so for analog devices?).
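
To see why that firing threshold matters so much, here's a toy Monte Carlo in Python (entirely my own illustration; the photon arrival rate is assumed, and the ~100-photon analog threshold is just the guess above):

# Toy model: fraction of frames in which a cell "fires", for a
# single-photon (QIS-style) threshold vs an assumed ~100-photon threshold.
import numpy as np

rng = np.random.default_rng(0)
mean_photons_per_frame = 5.0          # assumed starlight-ish arrival rate per cell
frames = rng.poisson(mean_photons_per_frame, 100_000)

for threshold in (1, 100):
    frac = (frames >= threshold).mean()
    print(f"threshold {threshold:>3} photons: cell fires in {frac:.3f} of frames")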

 

The best part of something like this is that since it's inherently digital, and requires oodles of processing power to work anyway, you now have access to all the usual digital processing pipelines: AI-driven noise reduction and edge recognition (see NVIDIA's RTX ray-tracing application available today), overlaying anything you want, machine-learning-based recognition (the system might recognize your target before you do, or show you what you're looking at in the sky directly), digital stabilization (image shift, etc.), intelligent exposure and per-pixel exposure control, pretty much any digital processing you can think of. None of this is possible with an analog device unless you stick a camera on the amplified portion of your tube attached to a high-end computer :-). And if the sensitivity is at least 3-4 times better than analog devices, you can add a Bayer-type filter on it and get full-color images with analog-intensifier-level amplification and S/N.

 

We'll see if this ever comes to fruition (the computational needs are daunting). But as with most things, the seeds of what replaces the mainstream of today can be spotted decades before they disrupt: think the Apple Newton vs. the iPhone, Microsoft's original tablet PC (from 2003) vs. the iPad, or digital signal processing in the '60s leading to the digital audio revolution (once DSPs/CPUs got small and powerful enough).

 

(edit) Just remembered: the guy driving this new tech is the same guy who helped invent CMOS-based imaging decades earlier. He seems to know his stuff, and his previous work is central to virtually all image-collection technology used today.


Edited by hoof, 05 May 2020 - 12:01 AM.


#11 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 05 May 2020 - 07:28 AM

That is a good read, and yes, I think we both agree that some amount of signal processing will be essential. This appeared to be limited to still images, but it is pretty impressive!

 

And of course we already have actively cooled sensors to reduce noise, so that is no surprise at all.

 

Probably the limit of the photocathode is spontaneous emission. EBI is also improved by cooling, and even at room temperature EBI can be quite low. I suspect some further improvements will be made to analog technology in 20 years as well. I mean, you could build in active cooling even now, at the sacrifice of size, weight, and power consumption. But once again, this is why I think it will be hard to replace the analog tube: it is smaller than a D-cell battery, and it lets you see in the dark with nothing but a common battery and some glass lenses.

 

This was a good read, though, and thank you for posting. I am interested in advancements in this field, and while this particular one might not replace analog tubes for real-time views in the intermediate future, it does seem to have very strong promise as a night vision camera. And he hints at something different in the works as well!



#12 Jethro7

Jethro7

    Vanguard

  • *****
  • Posts: 2,212
  • Joined: 17 Dec 2018
  • Loc: N.W. Florida

Posted 05 May 2020 - 08:31 AM

Nice write-up, Eddgie.

I would have loved to have had the device I own now early in my service, back in the '70s and '80s.

As time went by, every generation of night vision devices improved exponentially. My hat is off to the pilots who flew missions with those early devices: they were hard to navigate with even for ground operations, and I can't imagine having to fly with them. Even today, pilots tell me the newest devices are still dangerous and tricky to use. On some other topic threads, I can't really explain how much of a game changer my PVS-14 is to others who have never used or looked through one. I did not really understand until I did. Maybe after I recover from the purchase of a 15" Obsession Dob, I may look into picking up a Mod 3 device. "HAPPY SKIES AND KEEP LOOKING UP" Jethro

 

P.S. If the FBI wants me I'm easy to find.



#13 Wildetelescope

Wildetelescope

    Mercury-Atlas

  • -----
  • Posts: 2,567
  • Joined: 12 Feb 2015
  • Loc: Maryland

Posted 05 May 2020 - 08:56 AM

Well, image intensifiers are old, but they are still improving, and sensors still have quite a bit of catching up to do. I don't think sensors will catch up if image intensifiers keep improving, but I would definitely like sensors to get that good, and maybe cheaper and more globally accessible for astronomy.

Image intensifiers are no more an "old" technology than CCD or CMOS (CCD goes back to before the Moon shot, CMOS to the late '90s at least). The tubes we can buy today are the result of an extremely well funded R&D program that has been sustained for decades, and will likely be sustained for decades more. It is true that the light sensitivity of commercial CMOS sensors has grown by leaps and bounds. For about 100 bucks I can get a handheld monocle that is basically a CMOS sensor and an IR blaster, and that is MORE than sufficient for me to see what critters are running around in the woods behind my back yard when I take the dog out tonight. Truly amazing. However, it will not let me see M42 in real time. I would need a bigger IR blaster for that :-). For my money, I would bet that what you will see in the future is the marriage of the two technologies, rather than an either/or competition. That is already playing out here in this group, where everyone is strapping cell phones to their NV gear :-). I feel confident we are not the first folks to think about doing something like this; ultra-high-speed cameras used to monitor ballistic events have been doing it for 30 years or more. It is common to get stuck in an either/or mindset, when in reality it should be about the right tool for the job. Each technology has its strengths and weaknesses.

 

I have often said that most folks do not fully appreciate how amazing the technology is that we have access to as amateur astronomers, with respect to optics, cameras, image-analysis software, robotic mounts, etc. Even the expensive "Rolls-Royce" gear is cheap compared to what you can do with it. Multiply that sentiment times 10 for the NV monocle you have in your hand. This is the ONE area in the sciences where cost is not a barrier to the amateur making real contributions. What would Kepler or Brahe have been able to accomplish with a C8 and a CG5 mount? What an amazing hobby and an amazing time to be an astronomer!

 

Cheers!

 

JMD 



#14 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 05 May 2020 - 12:07 PM

I like your point!

 

Well, my "old technology" remark was based on the Youtube video I mentioned.  He was the one that said it was really old and while he did indeed recognize that improvements had been made, he clearly conveyed the impression that something that would probably not stand up to the next generation Sony sensor but my post was of course just my way of saying that the gap is much larger than he might perceive in terms of being equal to night vision because night vision is real time and he would still be taking a picture, and even if they could double the sensitivity he would only be at two frame per second at best. 

I like your post though and yes both have been around a long time now.



#15 DavidWasch

DavidWasch

    Messenger

  • *****
  • Posts: 419
  • Joined: 08 Apr 2008
  • Loc: CT

Posted 05 May 2020 - 02:54 PM

I think the matter of intensifier tube simplicity will be a major issue, particularly for the military investment needed to scale up CMOS technology.

 

CMOS seeing is a combination of several complex subsystems (low photon recording, encoding, decoding and visual presentation), and to push the envelope, even more subsystems, like artificial intelligence, will be needed.

 

The analogue tube is so self-contained, it's pretty much the whole system in one piece, only needing some physical support (regulated voltage, optics, headgear -- all of which would also be needed by a CMOS system). Tubes are driven by materials science, which is more direct.

 

The more subsystems, the more opportunities for failure. When the environment is demanding and system failure can be catastrophic, simplicity is a huge benefit. 

 

Recent CMOS advances will probably be applied to things like distance surveillance and target identification, and may never be suitable for infantry use.


Edited by DavidWasch, 05 May 2020 - 02:57 PM.

  • Wildetelescope likes this

#16 Jeff Morgan

Jeff Morgan

    Voyager 1

  • *****
  • Posts: 13,472
  • Joined: 28 Sep 2003
  • Loc: Prescott, AZ

Posted 05 May 2020 - 04:19 PM

Nice analysis on the resolution, Eddgie; I never considered it in that context. I can find no fault with it.

 

However, there is a tech I've been loosely monitoring that *does* have the capability to one day replace image intensifiers, at least for non-handheld uses. Don't get me wrong, it's at least a decade or two away, but the underlying principle is what has me interested.

 

https://www.npr.org/...s-what-you-cant

 

 

 

Betting against digital electronics has been a losing bet for the last 50 years.

 

I do believe that one day silicon technology will become as sensitive and as real-time as NV. When that happens my credit card will come flying out of the wallet.

 

However, one cannot predict when such a thing will be workable, let alone when it will filter down to the consumer level. I could be dust by then. For the next waning-moon window, starting in a handful of days, it does me no good.

 

OTOH, my NV unit is ready and waiting in the gun safe.



#17 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 05 May 2020 - 05:38 PM

Nice analysis on the resolution, Eddgie; I never considered it in that context. I can find no fault with it.

 

 

Betting against digital electronics has been a losing bet for the last 50 years.

 

I do believe that one day silicon technology will become as sensitive and as real-time as NV. When that happens my credit card will come flying out of the wallet.

 

However, one cannot predict when such a thing will be workable, let alone when it will filter down to the consumer level. I could be dust by then. For the next waning-moon window, starting in a handful of days, it does me no good.

 

OTOH, my NV unit is ready and waiting in the gun safe.

Thank you.

 

I do believe that one day there will be something non-analog that will supersede analog, but for "normal" hand-held use, that may be a very long time in the future.

 

I would welcome a breakthrough that proved me wrong though.... 



#18 GeezerGazer

GeezerGazer

    Apollo

  • *****
  • Posts: 1,392
  • Joined: 06 Jan 2005
  • Loc: Modesto, CA

Posted 05 May 2020 - 06:15 PM

Intriguing thoughts; thanks, Eddgie... and thanks, Hoof, for the link to the QIS technology.

 

I would guess that there are closely guarded R&D secrets of which we are unaware, concerning both intensifiers and camera sensors. What is known is only what we are permitted to know. Sony's year-old 500,000 ISO chips are but a link in the chain, and the chain keeps growing. We have only to wait for what comes next! I continue to look forward to those announcements. In the meantime, I happily use my NVD and my phone.



#19 Wildetelescope

Wildetelescope

    Mercury-Atlas

  • -----
  • Posts: 2,567
  • Joined: 12 Feb 2015
  • Loc: Maryland

Posted 05 May 2020 - 09:30 PM

I think the matter of intensifier tube simplicity will be a major issue, particularly for the military investment needed to scale up CMOS technology.

 

CMOS seeing is a combination of several complex subsystems (low photon recording, encoding, decoding and visual presentation), and to push the envelope, even more subsystems, like artificial intelligence, will be needed.

 

The analogue tube is so self-contained, it's pretty much the whole system in one piece, only needing some physical support (regulated voltage, optics, headgear -- all of which would also be needed by a CMOS system). Tubes are driven by materials science, which is more direct.

 

The more subsystems, the more opportunities for failure. When the environment is demanding and system failure can be catastrophic, simplicity is a huge benefit. 

 

Recent CMOS advances will probably be applied to things like distance surveillance and target identification, and may never be suitable for infantry use.

This point cannot be stressed enough! Its simplicity is what makes NV attractive for what most people use it for ;-). It is only us crazy astronomers who think about duct-taping one of these things to a telescope eyepiece :-)

 

cheers!  

 

Jmd 



#20 Eddgie

Eddgie

    ISS

  • *****
  • topic starter
  • Posts: 27,528
  • Joined: 01 Feb 2006

Posted 06 May 2020 - 07:23 AM

This point cannot be stressed enough! Its simplicity is what makes NV attractive for what most people use it for ;-). It is only us crazy astronomers who think about duct-taping one of these things to a telescope eyepiece :-)

 

cheers!  

 

Jmd 

Yeah, I like that it is the size of an eyepiece. Heck, it is smaller than many.

 

I know a lot of traditionalist astronomers think of it as an artificial view and look down on it, but I find the experience of doing NV astronomy to be pretty much the same as viewing with a traditional eyepiece. When I see things, I don't feel like I am using a camera, and, well, I am not using a camera. I feel like I am using an eyepiece.

 

To me, NV assist is a lot about the feel of it, and the feel of it agrees with me. It feels like traditional amateur astronomy, with no shutter adjustments, power cords, or external displays. I carry out my eyepiece, put it in my scope, turn it on, and enjoy myself.

 

To me, the perfect replacement would have to match the feel I get now as well as the performance.


Edited by Eddgie, 06 May 2020 - 07:24 AM.

  • ArsMachina likes this

#21 PEterW

PEterW

    Mercury-Atlas

  • *****
  • Posts: 2,670
  • Joined: 02 Jan 2006
  • Loc: SW London, UK

Posted 06 May 2020 - 03:37 PM

I am surprised, as the quantum efficiency of back-illuminated sensors is practically 100% at peak, whereas I think intensifiers are around 30% (someone will correct me, I am sure). Both have quite wide spectral sensitivity in roughly the same range. https://hamamatsu.ma...les/emccds.html
The noise on modern sensors is also virtually zero. I guess the trade-off of pixel size vs. detected photons, then the amplification and near-real-time processing capability, affects what we can see, and maybe the dynamic range of the display matters too (the eVscope was selling itself on a high-dynamic-range display, the eye having a very large dynamic range). The clever things that phone software/AI can do to stack, HDR, choose the best bits of images, and avoid image blur deliver staggering results, and they might work for "stacking" moving short exposures at night, as long as the motion is not too fast. The current night modes assume stationary views; I wonder what they'd do if they allowed the view to change? (The Intel Movidius Myriad processor can pack a big punch for very little power.)
People have added intensifier tubes in front of sensors, and have also used high voltages to do avalanche amplification in silicon, but these add complexity and aren't really needed if you can essentially detect all the incoming photons. Faster optics might be useful, but they mean more glass.

Intensifiers are simple, compact, work very well, are robust, are inherently real-time, and don't chomp batteries; what's not to love? Of course, there are more photons flying about to which they are blind (UV, NIR, SWIR, LWIR...), for which other sensors are needed and the results combined for more potent use. It's easy to hide from NV, not so easy when it can see your heat.

Not sure there's a huge deal more to come; pixel size vs. noise, or diffraction, will always come into play and spoil things. Intensifier gains have been fairly modest given that Gen 3 is almost 30 years old. Remember what mobile phones and digital cameras looked like then? Economics will also play a part: with more money to play with, more gains are possible. Witness mobile phones; consumer pull can be greater than military push.

Also, some limits are not going to be got over by brains; they need to be got round. Why do we have multi-core processors rather than 50 GHz ones?
Good find on the Gigajot QIS photon-counting detector. Another tech I've come across is using quantum dots to allow far greater tunability of a camera's wavelength sensitivity than is possible with old-fashioned semiconductor bandgaps.
As mentioned, the comments from people when they first look through reasonable NV are usually priceless (and often unprintable).

Peter

#22 PEterW

PEterW

    Mercury-Atlas

  • *****
  • Posts: 2,670
  • Joined: 02 Jan 2006
  • Loc: SW London, UK

Posted 06 May 2020 - 03:45 PM

https://patents.just...r/eric-r-fossum He seems to have a load published recently; watch out for the applications. He doesn't seem to be short of ideas!!

Peter

