I am going to follow up my last post with something that rarely gets mentioned when people post pictures.
The chemical/mechanical sensitivity of an NVD Intensifier deserves high praise, because intensifiers do not have exposure times. They have a refresh rate for the phosphor output window, and that refresh is fast enough to allow real-time viewing of objects. There is no exposure adjustment, only a gain adjustment, even on fixed-gain tubes. On fixed-gain tubes the adjustment screw is kept out of sight of the average user for good reason.
The eyeball behaves in much the same way: it refreshes organically and has no exposure "time" adjustment, but natural aperture adjustment and certain chemical/organic reactions can be optimized for night viewing.
The part that never gets mentioned with NV photos, and that might be misunderstood by someone unfamiliar with how intensifiers work, is that any camera attached to photograph the output window is using an exposure time to capture the details displayed on the phosphor screen as it refreshes.
Meaning, if there were a way to get a true one-to-one translation between camera exposure and the human eye's sensitivity at its organic refresh rate, then we would have a truly accurate representation of what is seen at the eyepiece with NV.
For the sake of this post, we will call human-eye visual use of NV "NV-Visual," and a camera plus NV plus an exposure time "NV-Camera."
When someone using NV-Camera posts a picture and says the exposure time was 15 or 30 seconds, that is 15 or 30 seconds of exposure to what is on the phosphor screen. The information is already there, set by the Intensifier's sensitivity and input system, and the Intensifier itself never gains more information by exposing longer or tracking an object longer. Only the camera benefits from a longer exposure, by picking up the details already present on the Intensifier's output window.
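This asymmetry can be sketched with a toy photon-counting model. Everything here is an assumption for illustration only: a made-up photon rate from the phosphor screen, a photon-limited (shot-noise) SNR, and a roughly fixed effective integration window for the eye (a figure around 0.1 s is often quoted for the dark-adapted eye, but treat that number as hypothetical).

```python
import math

def shot_noise_snr(photon_rate, integration_time):
    """Photon-limited SNR: signal = rate * t, noise = sqrt(signal),
    so SNR = sqrt(rate * t). A toy model, not a real detector."""
    return math.sqrt(photon_rate * integration_time)

PHOSPHOR_RATE = 500.0  # photons/s from the phosphor screen (made-up number)
EYE_WINDOW = 0.1       # s, assumed effective integration time of the eye

# The Intensifier sets PHOSPHOR_RATE; neither observer changes it.
# The camera simply integrates that same rate for much longer.
snr_eye = shot_noise_snr(PHOSPHOR_RATE, EYE_WINDOW)
snr_cam_15s = shot_noise_snr(PHOSPHOR_RATE, 15.0)
snr_cam_30s = shot_noise_snr(PHOSPHOR_RATE, 30.0)

print(f"eye (~{EYE_WINDOW}s window): SNR ~ {snr_eye:.1f}")
print(f"camera, 15 s exposure:   SNR ~ {snr_cam_15s:.1f}")
print(f"camera, 30 s exposure:   SNR ~ {snr_cam_30s:.1f}")
```

In this model, quadrupling the exposure only doubles the SNR, but the camera still pulls far ahead of the eye's fixed window, which is one way to see why long NV-Camera exposures show smoothness and detail that NV-Visual never will on the same input side.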
The NV-Visual user has no such exposure-time adjustment, just like the Intensifier tube itself. It is a fairly fixed refresh rate, with some natural aperture adjustment by way of pupil size, and depending on whether the image has a high enough signal to trigger rods, cones, or a mix of both. The only way to adjust this NV-Visual system is to adjust the input side and the optics feeding the Intensifier.
Someone using an NV-Camera system can try to approximate the NV-Visual user's experience, but it will be guesswork, or processing plus guesswork, since there is no formula that universally translates an exposure time on an NV output screen into the restrictions and sensitivity range of an average human eye.
When I see 10-, 15-, or 30-second exposures with a great amount of detail and little noise, I have to wonder if that is really that particular camera system's accurate one-to-one translation of what an average human eyeball captures. Some people use NV-Camera for EAA and never claim to be duplicating what NV-Visual sees. This is a great approach because it avoids the hassle of inaccurate translations to the NV-Visual experience. I believe jdbastro has always done his NV-Camera pictures this way. He never says "this is what the visual looks like," as far back as I can remember. He exposes for an amount of time that keeps him from getting frustrated and keeps it from feeling like AP, not to duplicate or represent the NV-Visual experience when posting pictures on the forum. There is an obvious similarity to visual use, though: just not as much detail is seen visually, and he gets smoother images from low ISO combined with the extra detail from the longer exposure. Great stuff, and all his photos are truly a testament to the great sensitivity of the Intensifier and its ability to amplify the signal.
So here I will point out: if exposures are running longer than 1 second, or the brightest stars are blowing out an area of the photo, is that really what the visual looks like? Maybe that camera system's one-to-one translation is 2 seconds, maybe it is 1/5 of a second. The point is we have no data on that, and cameras and eyeballs both vary greatly in sensitivity and in how they respond to the entire input configuration.
For someone unfamiliar with intensifiers, a 30-second NV-Camera photo might suggest that the Intensifier itself can expose for lengths of time, when in reality that photo could never be duplicated by NV-Visual using the exact same input-side configuration, just with an eyeball instead of a camera on the output side.
There are other ways NV-Visual can benefit, by bringing up the object's signal strength, brightness, or detail. An NV-Camera will obviously benefit as well, needing less exposure time to match NV-Visual. But comparing an NV-Camera system at f/3 with a 10-second exposure and a 12nm Ha filter to an NV-Visual system at f/2 or f/4 (or any other f-ratio) using a different-width narrowband filter is in no way an accurate comparison. We do not even have a data set that truly translates NV-Camera settings to NV-Visual, so no perfect translation exists.
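To make the apples-to-oranges point concrete, here is a rough back-of-the-envelope comparison. The numbers are assumptions for illustration: the f/3, 10-second, 12nm setup comes from the example above, while the f/2 system's 6nm filter and the eye's ~0.1 s effective integration window are hypothetical. For an extended object, relative signal scales roughly with exposure time times filter bandwidth divided by f-ratio squared.

```python
def relative_signal(exposure_s, bandwidth_nm, f_ratio):
    """Rough relative signal for an extended object:
    proportional to t * bandwidth / f^2. Illustration only."""
    return exposure_s * bandwidth_nm / f_ratio ** 2

# NV-Camera: f/3, 10 s exposure, 12 nm Ha filter
camera = relative_signal(10.0, 12.0, 3.0)

# NV-Visual: f/2, ~0.1 s assumed eye integration, hypothetical 6 nm filter
visual = relative_signal(0.1, 6.0, 2.0)

print(f"camera relative signal: {camera:.2f}")  # 13.33
print(f"visual relative signal: {visual:.2f}")  # 0.15
print(f"ratio: {camera / visual:.0f}x")         # 89x
```

Even a crude model like this shows the two setups differ by nearly two orders of magnitude in collected signal, so a side-by-side comparison of the photo and the eyepiece view tells you almost nothing without accounting for every one of these variables.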
All NV-Camera photos claiming "this is what the visual looks like" are just approximations of what something might look like in NV-Visual, as interpreted by the author posting the photo.
This 100% applies to me when I post photos and try to match them to my visual experience. At best I can get an approximation, and it has to be trusted exactly as much as my written descriptions, because of the drastic differences between the systems collecting information at the output window.
I thought it was important to point out these different systems, and what each is actually collecting from, for the casual reader of these forums, as I have seen Intensifier technology misunderstood by many here, including plenty of misunderstanding on my own part when I first used them. This is one reason I urge people to read as much as they can about their devices, to get the most out of them and understand what they are doing. It's an ongoing learning process, as all knowledge of optics is for me.