Virtual reality is the stuff of programmer legend. Every software engineer who’s ever read Snow Crash (or more recently, the excellent Ready Player One) has dreamed of jacking into the metaverse. But why now? Well, if you think of it in very coarse terms as strapping two smartphones to your face and writing clever glue software, modern consumer VR is a natural outcome of what Chris Anderson calls the “peace dividend of the smartphone wars”.
It really is another of those things: if you think “eh, that’ll never happen,” just wait twenty years.
(As a pedant, I should point out that the Oculus, like the 2DS, uses a single smartphone panel with a line drawn down the middle. Two screens were the economical choice in the DS vs PSP days—but that was before the smartphone wars!)
Carmack sums up his thinking on latency, particularly in a VR context:
If large amounts of latency are present in the VR system, users may still be able to perform tasks, but it will be by the much less rewarding means of using their head as a controller, rather than accepting that their head is naturally moving around in a stable virtual world.
All the parts conspire: LCD pixels are slow to switch (and TVs are worse); pipelined rendering keeps several frames in flight at once, which raises throughput but means each individual frame takes longer to reach the screen; input devices, trying to be helpful, hold on to input for a few milliseconds to smooth it out.
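To see how those stages stack up, here’s a back-of-the-envelope latency budget. All the numbers below are hypothetical round figures for a 60 Hz system, not measurements of any particular headset:

```python
# A sketch of how the latency stages compound at 60 Hz.
# Every number here is an illustrative assumption, not a measurement.
FRAME_MS = 1000 / 60            # ~16.7 ms per frame

input_smoothing_ms = 4          # input device buffers samples to smooth them
frames_in_flight = 2            # pipelining: CPU sim and GPU render overlap,
                                # so a frame spends ~2 frame-times in the pipe
render_pipeline_ms = frames_in_flight * FRAME_MS
lcd_response_ms = 15            # slow LCD pixel-switching time

total_ms = input_smoothing_ms + render_pipeline_ms + lcd_response_ms
print(round(total_ms, 1))       # → 52.3
```

Even with generous assumptions, the stages sum to around 50 ms of motion-to-photon latency before you’ve done anything wrong—which is why every part of the pipeline has to be attacked separately.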
His big insight is that the most important latency is between when you move your head and when the in-game camera updates and renders, so you can split the head-tracking input off from the moving and shooting parts and keep pushing it further forward in the frame. At QuakeCon he talked about sampling it again after you’ve simulated the frame but before you start drawing; now he’s talking about how, even after you’ve drawn the frame, there’s depth info in the pixels, so you can skew them to match the head position just before—or even while—you’re displaying the final frame to the user. Is that even possible? My brain hurts.
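The last trick he describes is what’s now usually called timewarp: re-sample the head pose after rendering is done and shift the finished image to match. Here’s a deliberately crude sketch of the pure-rotation case—the function names and numbers are mine, and a real implementation does a full per-pixel reprojection (using that depth info) rather than a flat horizontal shift:

```python
def render_frame(yaw_deg):
    # Stand-in for the expensive render: just record the head yaw
    # (in degrees) that the frame was drawn for.
    return {"rendered_yaw": yaw_deg}

def timewarp_shift(frame, latest_yaw_deg, fov_deg=90.0, width_px=1080):
    # Reproject the finished frame toward the newest head pose.
    # For small pure rotations, a yaw delta maps to a horizontal shift
    # of roughly (delta / fov) * screen_width pixels.
    delta = latest_yaw_deg - frame["rendered_yaw"]
    return delta * width_px / fov_deg

# Head was at 10 degrees when we started drawing...
frame = render_frame(10.0)
# ...but has rotated to 12 degrees by the time we scan out.
shift = timewarp_shift(frame, 12.0)
print(shift)  # → 24.0 (pixels of shift hiding ~2 degrees of latency)
```

The point of the trick is that this shift is cheap enough to run just before (or during) scan-out, so the rotational part of head-tracking latency collapses to almost nothing even when the full render took a whole frame.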