It sounds unbelievable, but we may yet see cameras with quantum efficiencies greater than 100%.
That won't help all that much for planetary observing - the main limit we've got now is shot noise, which is set by the light itself and is independent of which camera is used. This is also why the notion of 1000 fps cameras is a little misleading: except for the very brightest of subjects (the Sun, perhaps Venus), there's just not enough light to get images with low enough noise to be useful at those frame rates - there'd be so much noise it would be impossible to identify the sharp frames. Practical limits seem to hover around 200-400 fps for Mars and Mercury, 200 fps for Jupiter and perhaps Saturn, and significantly lower than that for Uranus/Neptune.
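To see why frame rate runs into a wall, here's a minimal sketch of the per-frame shot-noise arithmetic. The photon rate is an assumed, illustrative figure (not a measurement of any real target or camera); the point is just that per-frame SNR scales with the square root of the photons collected, so halving the exposure costs you SNR no matter what sensor you use.

```python
import math

# Hypothetical photon flux (photons/s) on one pixel of a planetary ROI.
# An illustrative number only, not a measured value.
photon_rate = 5000.0

for fps in (100, 200, 400, 1000):
    exposure = 1.0 / fps             # seconds per frame
    signal = photon_rate * exposure  # mean photons collected per frame
    shot_noise = math.sqrt(signal)   # Poisson noise, camera-independent
    snr = signal / shot_noise        # equals sqrt(signal)
    print(f"{fps:5d} fps: {signal:6.1f} photons/frame, per-frame SNR {snr:5.1f}")
```

At 1000 fps the per-frame SNR is only about a third of what it is at 100 fps, which is why picking out the sharp frames becomes impossible on dim targets.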
But when you get into high-resolution deep-sky work using lucky imaging, there's still a lot of scope for improvement, because there read noise starts to dominate the stack. Current best-case read noise is around 0.7-0.8 e- (which is a *huge* jump from 10 years ago); once we get that down to 0.1 e- or lower, we're essentially counting the photons. Having QE above 100% would let you get away with a little more read noise.
Once you're accurately counting photons, there's effectively no difference in signal-to-noise ratio between 1000 x 1s and 1 x 1000s exposures. So as long as you've got something in the FOV bright enough to align your frames on and estimate their sharpness, you'll get a sharper result with similar noise levels using many short exposures.

Of course, since deep-sky cameras tend to be multi-megapixel affairs rather than the tiny ROIs we record for planets, the data volume to process quickly becomes overwhelming - but live-stacking with instant sharpness estimation could provide assistance there: say, live-stacking 1000 x 1s frames but only stacking the frames above a certain sharpness threshold. You'd probably still want to acquire subframes for later stacking in software, but each subframe would consist of many actual frames live-aligned and live-stacked, and autoguiding would be done on the actual image rather than using a separate guider. Acquisition would take longer than the final stacked exposure time - you might take 2000 seconds to reach the 1000-stacked-seconds threshold - but if sharpness is significantly improved by doing so, you'd get cleaner, deeper images with less exposure and less need for post-processing.
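The equivalence claim above can be checked directly with the same noise model: once read noise is negligible, splitting the exposure into many frames costs nothing in SNR, while with today's read noise the short frames pay a (small but real) penalty. The photon rate here is again an assumed illustrative figure.

```python
import math

def snr(total_seconds, n_frames, rate, read_noise):
    # Signal accumulates linearly with time; shot noise is sqrt(signal);
    # read noise is added once per frame, in quadrature.
    signal = rate * total_seconds
    noise = math.hypot(math.sqrt(signal), math.sqrt(n_frames) * read_noise)
    return signal / noise

rate = 2.0  # photons/s per pixel, an assumed faint-detail flux

# Photon-counting regime (read noise ~0): the split doesn't matter.
print(snr(1000, 1000, rate, 0.0))  # 1000 x 1s
print(snr(1000, 1,    rate, 0.0))  # 1 x 1000s

# Today's ~0.8 e- read noise: the 1000 short frames lose some SNR.
print(snr(1000, 1000, rate, 0.8))
```

This is exactly why sub-0.1 e- read noise is the enabling step for short-exposure deep-sky lucky imaging rather than a nice-to-have.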
These techniques are already being used, but read noise greatly limits their effectiveness.