First, some of this will be funny if you are super nerdy or geeky, I think. But maybe not. I mean, I ain't British, with their lofty wit, though I do think I have the charm and refinement parts down. I drink my Gatorade shaken, not stirred, with a loquat twist when the squirrel kicks one out of the tree while I am out on the patio in deep thought, which has to happen fast before the caffeine kicks in and I want to get up and floss dance.
Last night, I watched a YouTube video in which the poster speculated that advanced sensors like the one in the Sony A7 might rival image intensifiers in the next generation.
On that point I have serious doubts, and this post can serve as a way for me to express those doubts and have a dialog about them if my reasoning seems flawed, which is possible, but to my own way of thinking doubtful, because I just finished a "Gatorade." Wink wink, nudge nudge....
Right at the start of the video, the author conceded that current sensors are not nearly as efficient and sensitive as image intensifiers. I think that while the next generation of sensors might be better than today's, I seriously doubt that the huge gap in sensitivity will close to any meaningful degree, to the point where a sensor could come close to competing with an image intensifier for real-time, low-light performance.
Let's take the example of the SiOnyx sensor, one of the most sensitive currently made, due largely to its very broad spectral response. I have seen images taken with these sensors that did not have as good a resolution as my Mod 3 (much more on that below) and were taken using 1.5 seconds of exposure. Now let's really measure how far from an image intensifier this is.
The human eye needs a frame rate higher than about 20 frames per second for motion to appear smooth, and 24 FPS is the frame rate commonly used in something like aerial drone footage. A 24 FPS frame rate, which would be required to preserve smooth motion at even a very low slew rate, equals a shutter speed of slightly longer than 0.04 seconds per frame. Now, if the SiOnyx requires 1.5 seconds to capture a single frame with the smoothness of the real-time view in an image intensifier, sensitivity would have to increase roughly 36-fold (1.5 seconds divided by 1/24 of a second). For live image reproduction, then, it is highly unlikely that silicon-based sensors will achieve this level of sensitivity for many sensor generations, if ever.
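As a quick sanity check on the arithmetic, here is the sensitivity gap as a sketch, using the 24 FPS target and the 1.5-second SiOnyx exposure quoted above:

```python
# Rough sensitivity-gap estimate: how much more sensitive the sensor
# would need to be to deliver its 1.5-second image at a 24 FPS rate.
target_fps = 24
frame_time = 1 / target_fps   # seconds available per frame (~0.042 s)
sionyx_exposure = 1.5         # seconds quoted for a comparable image

sensitivity_factor = sionyx_exposure / frame_time
print(f"Time per frame at {target_fps} FPS: {frame_time:.3f} s")
print(f"Required sensitivity increase: {sensitivity_factor:.0f}x")
```

So the sensor would need to gather the same light about 36 times faster just to hit the minimum frame rate for smooth motion, before any headroom for darker conditions.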
Next is resolution. I often see this cited as a major advantage of digital sensors, but is it really?
Let's explore this more fully. A modern image intensifier with 72 line pairs per millimeter is readily available today. Now, what does that really mean? Well, the photocathode of a modern image intensifier is about 17.7mm in diameter (I have heard both 17.8 and 17.7, but let's be conservative, give the Sony sensor a break, and go with 17.7mm).
A line pair is two lines: one line is white and one line is black. That means 72 line pairs per millimeter is not 72 lines, it is 144 lines, 72 white lines alternated with 72 black lines, for a total of 144 lines per millimeter. Over the full diameter of the 17.7mm circle, the image intensifier can display a staggering 2548.8 lines.
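Spelled out as arithmetic (a sketch using the 17.7 mm photocathode diameter assumed above):

```python
# Line count of a 72 lp/mm image intensifier across a 17.7 mm photocathode.
lp_per_mm = 72                  # line pairs per millimeter
lines_per_mm = lp_per_mm * 2    # each pair is one white line plus one black line
diameter_mm = 17.7              # assumed photocathode diameter

total_lines = lines_per_mm * diameter_mm
print(f"{lines_per_mm} lines/mm x {diameter_mm} mm = {total_lines:.1f} lines")
```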
Now, let's move to the Sony sensor. The sensor has a video resolution of what sounds like an impressive 3840x2160 pixels (and real-time viewing is what we are talking about, not the maximum resolution one could use in a very long exposure, which is as far from real time as one can get). Hey, Ed, that sounds a lot better than the 2548.8 lines of the image intensifier! Well, no, it isn't even close. To make a fair comparison, we have to see how many lines we could fit into the same size circle as the image intensifier's.
Now, the 2160 pixel count is the number of horizontal lines on the sensor, and the 3840 is the number of vertical lines. Let's start with the 23.9mm height of the sensor. With 2160 lines from top to bottom, if we were to cut a 17.7mm circle out of that sensor, the line count would be reduced to 74% of the number in the full height of the sensor, leaving only about 1600 lines. What this means is that if you made the sensor the exact same size, a 17.7mm circle, and put it into a PVS-14 housing, the resolution would be considerably less than that of the image intensifier.
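The same crop arithmetic, as a sketch, assuming the full-frame sensor's 23.9 mm height and the 2160-row 4K video readout:

```python
# Rows of a 4K video readout that fit inside a 17.7 mm circle cut
# from a full-frame (23.9 mm tall) sensor.
sensor_height_mm = 23.9
video_rows = 2160
circle_mm = 17.7

fraction = circle_mm / sensor_height_mm
rows_in_circle = video_rows * fraction
print(f"Fraction of sensor height used: {fraction:.0%}")
print(f"Rows inside the 17.7 mm circle: {rows_in_circle:.0f}")
```

About 1600 rows in the circle versus the intensifier's 2548.8 lines over the same diameter.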
The argument is going to be, "well, the sensor is a lot bigger," and yes, it is, but to make the device small and portable, it is impractical to have a very large sensor. Let's explore that further.
In a recent post, we examined the vignetting characteristics of the modern Gen 3 night vision device. We know that the fully illuminated circle on the photocathode is only about 10mm at best when using the Mil-spec objective. Now, we could change the objective to fully illuminate the full diagonal of the A7 sensor, or even a smaller area of it, but to do so, the flange would have to be quite large and the lenses would be hugely bigger, heavier, and more complex than the simple lens of the PVS-14. In other words, to get a well-illuminated field over the entire sensor, the device would have to be the size of... oh, let me think about that... I am trying to think of something we can all relate to... OK, I know, it would have to be about the size of a Sony A7 camera, and to get the same speed, the objective would have to be the size of, oh, a big hunk of SLR glass.
But it gets worse. Now we have to be able to view the screen in real time when it is attached to our helmets. Let's say we put an Apple Watch display at the focus of our Mod 3 or PVS-14 eyepiece. Today, the Apple Watch has a pixel density of "only" 326 pixels per inch, or, scaled to the PVS-14 screen size, about 227 lines, or roughly 12.8 lines per millimeter! There is no output screen today even close to having the resolution that would allow packaging in a handheld, helmet-wearable display!
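To show where those display numbers come from, here is the 326 ppi figure converted into the intensifier's terms (a sketch; the 17.7 mm screen size mirrors the photocathode diameter used above):

```python
# An Apple Watch class display (326 ppi) expressed in lines per mm
# and total lines across a PVS-14 sized (17.7 mm) screen.
ppi = 326
mm_per_inch = 25.4
screen_mm = 17.7

pixels_per_mm = ppi / mm_per_inch
lines_across = pixels_per_mm * screen_mm
print(f"{pixels_per_mm:.1f} pixels/mm, {lines_across:.0f} lines across {screen_mm} mm")
```

Compare 12.8 lines per millimeter against the intensifier's 144 lines per millimeter: the display, not the sensor, may be the worst bottleneck of all.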
OK, let's be creative. Let's use our PIMAX 4K gamer headset. Now we are getting 1920x2160 resolution, and while that is not as good as the image intensifier, the huge apparent field is a compelling benefit. But in the end, you have a two-pound camera/power supply on your helmet and a cell phone stuck four inches in front of your face. Putting it on your helmet is not the best idea because of the parallax error. Hint: hold your rifle up in front of the lens. Otherwise, just figuring out the parallax error and whether you should aim high or low will hurt your head. When you do this, make sure you put a pillow in front of your face, because even an unsupported M4 rifle will probably have enough recoil to plant the buttpad into your nose. See, I do think a lot about stuff like this.
But let's streamline it and make it a monocular. So, half of your PIMAX 4K (no 3D for you!) with a Sony A7 stuck to the front and a couple of good high-capacity batteries, and you still have to wait a second and a half to see what we can see today in a Mod 3 in real time. But you do get that big apparent field everyone lusts for. (You can have your stereo back if you add a second Sony A7. That'd be cool. I would look soooooo freaking awesome, if I did not get arrested walking around the neighborhood at night while wearing it!)
And this is the most pressing argument for not expecting the Gen 3 image intensifier to go away for a very long time. The US Government would surely know about any defense contractor pursuing a breakthrough in imaging technology and would probably be funding part of the research if it looked promising. And if they saw something on the horizon, they would not be putting out Omni IX contracts. No, there is nothing out there that is going to replace Gen 3 technology in a handheld or wearable device in even the mid-term future, unless some incredible breakthrough occurs.
Now, for mounted use, that is a different story. If you abandon the need to keep it handheld and instead say it just has to fit inside a tank, where the driver will watch where they are going on an iPad from his fighting bunk in the barracks, well, yeah, that may happen in three or four generations of Sony sensors, if they could live with a 15 frames per second video feed.
Aerial applications will also be able to use advanced sensors, where signal processing computers can upscale the resolution and smooth the frames, but the graphics cards alone for this would consume more power in a minute than a PVS-14 consumes in a month. Imagine having to Velcro an NVIDIA TITAN RTX graphics card onto the back of your helmet, though that would help counterbalance the big SLR lens, Sony A7 sized sensor housing, and PIMAX 4K monocular. (At this point, should we just go full binocular with an iPhone X and a cardboard 3D holder to save weight? If so, just make sure you are in Airplane Mode before going on patrol. Getting a text from your GF during a firefight would be a good way to move her along to her next boyfriend. Sad for you, though. LOL.)
So, I am not saying that sensors won't get better and won't find more uses. What I am saying is that an all-digital handheld device that can show 24 frames per second of video at the illumination levels image intensifiers work with is, barring some sudden monumental breakthrough, probably decades away. The sensors will need orders-of-magnitude increases in sensitivity and density, and so will the graphics processors (because it is unlikely that the frame rate can be achieved by sensitivity advancements alone), and the output display technology will have to be orders of magnitude more dense, or it will be impossible to match the form factor and performance of even the current generation of image intensifiers.
Now this is me looking into my very cloudy crystal ball. And maybe some of my math or logic is faulty and I expect the forum members to correct me if that is the case.
I make no apology for those errors or faults. This is just a convergence of many things we discuss in the forum and that I see elsewhere, and I like to think about things like this to keep my mind from turning into a social media junkyard. I love image intensifiers and I think about them a lot, and from time to time people ask me if there will be a breakthrough, and that has caused me to consider the obstacles of simply buying a Sony A7 chip, trying to mount it in a Mod 3, and making it work in a telescope for real-time observing.
Now, I could just stick my SiOnyx camera in my focuser, wait 1.5 seconds, then download the image to my computer and watch it in its full 720p glory, but if I were going to do that, I would just buy a Revolution Imager and wait 8 seconds. Walking anywhere with it on my bump helmet would be slow or dangerous or both, but I could make a jerky, streaky time lapse of my plunge off a cliff into the superhero-style landing, which I have studied hard and think I could replicate.
So, I look forward to any dialog or discussion this might generate. Maybe there are ways to do it that I am too slow to have envisioned, but I have tried mightily to figure out where we would have to be to come close to the real-time performance of modern tubes in a compact device (not even considering battery life), and this is the best I could do. Which is to say, it is a daunting problem that I do not think can be achieved in the course of a single generation of sensor evolution, and maybe not even in several. It is a very difficult problem.
While we think of Gen 3 as "old" technology, consider this: the fastest operational airplane ever flown was built when I was a kid, and nothing today is faster. The most powerful rocket engine ever made was built when I was a kid, and nothing today is more powerful. Who is to say the Gen 3 image intensifier won't be around for a very long time when, as we sit today, the gap is so large?