Latency Mitigation Strategies

Carmack sums up his thinking on latency, particularly in a VR context:

If large amounts of latency are present in the VR system, users may still be able to perform tasks, but it will be by the much less rewarding means of using their head as a controller, rather than accepting that their head is naturally moving around in a stable virtual world.

All the parts conspire: LCD displays are slow to switch pixels (and TVs are worse); parallel processing lets you have more frames in flight at once, but each individual frame takes longer to reach the screen; input devices trying to be helpful hold on to input for a few milliseconds to smooth it out.
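
To make the arithmetic concrete, here's a toy motion-to-photon budget. All the stage names and numbers here are illustrative assumptions of mine, not figures from the article; the point is just that every buffered stage adds its full frame time to the total:

    #include <cstdio>

    int main() {
        // Illustrative numbers only (assumptions, not measurements):
        const double input_smoothing_ms = 4.0;  // driver buffers input to smooth it
        const int    frames_in_flight   = 3;    // sim, render, scanout pipelined
        const double frame_time_ms      = 16.7; // one frame at 60 Hz
        const double pixel_switch_ms    = 15.0; // LCD pixel response

        // Pipelining raises throughput, but each stage holds the image for a
        // full frame, so pipeline depth multiplies frame time into latency.
        double motion_to_photon = input_smoothing_ms
                                + frames_in_flight * frame_time_ms
                                + pixel_switch_ms;
        printf("motion-to-photon: ~%.0f ms\n", motion_to_photon);
        return 0;
    }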

His big insight is that the latency that matters most is between when you move your head and when the in-game camera updates and renders, so you can split that part of the input off from the moving and shooting parts and keep sampling it later and later in the frame. At QuakeCon he talked about sampling it again after you've simulated the frame but before you start drawing; now he's talking about how, even after you've drawn the frame, there's depth info in the pixels, so you can skew them to match the latest head position just before, or even while, you're displaying the final frame to the user. Is that even possible? My brain hurts.
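
The first part, late-sampling the view orientation, is easy enough to sketch. Everything below is hypothetical, with names I've invented for illustration: game input gets consumed at the start of the frame as usual, but the head tracker is read again right before rendering, so only the camera gets the freshest pose:

    // Hypothetical stand-ins, invented for illustration.
    struct Quat { float w = 1, x = 0, y = 0, z = 0; };

    Quat readHeadTracker() { return {}; }   // assumed: latest fused sensor pose
    void simulateGame(float) {}             // movement, shooting, AI
    void renderScene(const Quat&) {}        // draws with the given camera

    // One frame: game logic runs on the input snapshot taken at frame start,
    // but head orientation is re-sampled as late as possible.
    void runFrame(float dt) {
        simulateGame(dt);               // sim with the earlier input snapshot
        Quat view = readHeadTracker();  // late-sample head orientation
        renderScene(view);              // camera alone gets the fresher pose
    }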
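As far as I can tell, the depth-based skew is what's now usually called timewarp or reprojection: use each pixel's stored depth to put it back in the world, then re-project it with the pose sampled just before display. A minimal CPU sketch, under my own assumptions (row-major 4x4 matrices, a standard 0-to-1 depth buffer); real implementations run this per-pixel on the GPU and have to fill the holes this forward scatter leaves behind:

    #include <vector>

    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[16]; };

    // Row-major 4x4 matrix applied to a homogeneous point.
    static Vec4 mul(const Mat4& M, const Vec4& v) {
        return { M.m[0]*v.x  + M.m[1]*v.y  + M.m[2]*v.z  + M.m[3]*v.w,
                 M.m[4]*v.x  + M.m[5]*v.y  + M.m[6]*v.z  + M.m[7]*v.w,
                 M.m[8]*v.x  + M.m[9]*v.y  + M.m[10]*v.z + M.m[11]*v.w,
                 M.m[12]*v.x + M.m[13]*v.y + M.m[14]*v.z + M.m[15]*v.w };
    }

    // Re-project an already-rendered frame to a newer head pose.
    // oldClipToWorld inverts the view-projection the frame was rendered with;
    // newWorldToClip is built from the pose sampled just before display.
    void timewarp(const std::vector<unsigned>& color,
                  const std::vector<float>& depth, int w, int h,
                  const Mat4& oldClipToWorld, const Mat4& newWorldToClip,
                  std::vector<unsigned>& out)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                // Reconstruct the pixel's clip-space position from its depth.
                Vec4 ndc = { 2.0f * (x + 0.5f) / w - 1.0f,
                             2.0f * (y + 0.5f) / h - 1.0f,
                             2.0f * depth[y * w + x] - 1.0f, 1.0f };
                Vec4 world = mul(oldClipToWorld, ndc);
                world = { world.x / world.w, world.y / world.w,
                          world.z / world.w, 1.0f };
                // Project the same world point with the newer head pose.
                Vec4 clip = mul(newWorldToClip, world);
                if (clip.w <= 0.0f) continue;   // behind the new camera
                int nx = (int)((clip.x / clip.w * 0.5f + 0.5f) * w);
                int ny = (int)((clip.y / clip.w * 0.5f + 0.5f) * h);
                if (nx >= 0 && nx < w && ny >= 0 && ny < h)
                    out[ny * w + nx] = color[y * w + x]; // scatter; holes ignored
            }
    }

So yes, it seems possible, at least in this crude form: the warp only needs the color and depth buffers plus the newest pose, so it can run after rendering is finished, right up against scanout.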