NVIDIA To Demonstrate Foveated Rendering Tech To Reduce VR Workloads, Mimic Human Visual Systems

Doing virtual reality (VR) well is very hard. It requires significant compute resources to render immersive VR worlds with the kind of fidelity, latency, and framerates necessary for users to feel truly connected. Whereas a typical PC game may feel smooth and immersive at a paltry 1080p resolution at 30 or 60 frames per second, VR headsets like the HTC Vive and Oculus Rift have displays with a 2160 x 1200 combined resolution and a 90 Hz refresh rate, and ideally, the systems they're connected to need to sustain framerates of 90 frames per second. As VR continues to evolve, and resolutions and fidelity improve, the amount of compute resources required to render those frames quickly enough grows dramatically.

SMI Modified VR Head Mounted Display - Notice The Sensors Around The Optics

Because VR is so demanding, many companies are looking into ways to improve efficiency and minimize workloads without negatively impacting perceived quality. To that end, NVIDIA is partnering with SMI (SensoMotoric Instruments), a company developing sophisticated eye-tracking technology, to demo forward-looking VR rendering techniques that could potentially reduce pixel shading workloads by 3x – 4x. NVIDIA is calling the technology perceptually based foveated rendering.

Human vision has two different components. There’s peripheral vision, which is a term most of you have probably heard before. Our peripheral vision has a wide field of view, but typically lacks acuity. When you see something “out of the side of your eye”, that’s your peripheral vision. And then there’s foveal vision, which is concentrated at the center of our gaze and is sharp and detailed. Peripheral vision is good for picking up things like movement and flickering throughout a wide field of view, but it's not good for deciphering fine detail. Foveal vision has a much narrower field of view, but allows us to clearly focus and pick up on those fine details.


NVIDIA is using these traits of human vision, along with SMI’s low-latency eye-tracking technology, to develop a foveated rendering technique that mirrors how humans see the world. The aim with perceptually based foveated rendering is to use eye-tracking technology to discern where a user is looking in a particular scene, and to render that area at full resolution, while simultaneously reducing the resolution of the areas of the scene in the user’s periphery. Doing so reduces the overall rendering workload of the scene without detracting from the experience. The compute resources that are freed up can then be used by developers to either squeeze more performance from the available hardware or to increase the quality of the visuals concentrated in the user’s foveal region.
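To make the idea more concrete, here is a rough sketch (in C++) of how a renderer might map a tracked gaze point to a per-region shading resolution. This is purely illustrative and is not NVIDIA's or SMI's actual code; the field of view, foveal radius, and peripheral scale factor are assumed values chosen for the example.

// Illustrative sketch only (not NVIDIA's implementation). Maps a pixel's
// angular distance from the tracked gaze point to a shading-resolution scale:
// full resolution near the gaze, coarser shading in the periphery.
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Approximate angular distance (degrees) between a pixel and the gaze point,
// assuming a simple pinhole-style mapping with the given horizontal FOV.
float EccentricityDeg(Vec2 pixel, Vec2 gaze, Vec2 screen, float fovDeg)
{
    float dx = (pixel.x - gaze.x) / screen.x;
    float dy = (pixel.y - gaze.y) / screen.y;
    return std::sqrt(dx * dx + dy * dy) * fovDeg;   // rough small-angle mapping
}

// Shading-resolution scale: 1.0 inside an assumed 5-degree foveal region,
// ramping down to an assumed quarter-resolution periphery beyond 30 degrees.
float ResolutionScale(float eccDeg)
{
    const float fovealDeg      = 5.0f;    // assumption, not a published figure
    const float peripheryDeg   = 30.0f;   // assumption
    const float peripheryScale = 0.25f;   // assumption

    if (eccDeg <= fovealDeg)    return 1.0f;
    if (eccDeg >= peripheryDeg) return peripheryScale;
    float t = (eccDeg - fovealDeg) / (peripheryDeg - fovealDeg);
    return 1.0f + t * (peripheryScale - 1.0f);      // linear ramp between zones
}

int main()
{
    Vec2 screen = {2160.0f, 1200.0f};               // Vive/Rift combined panel size
    Vec2 gaze   = {1650.0f, 300.0f};                // e.g., looking toward the upper right

    for (float x = 0.0f; x <= screen.x; x += 540.0f) {
        float ecc = EccentricityDeg({x, 600.0f}, gaze, screen, 110.0f);
        std::printf("x=%4.0f  eccentricity=%5.1f deg  scale=%.2f\n",
                    x, ecc, ResolutionScale(ecc));
    }
    return 0;
}

In a real engine, a scale like this would more likely drive a multi-resolution or variable-rate shading pass per screen region than a per-pixel lookup.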

Foveated rendering is not a new idea; research into it has been going on for years. But the techniques and technology NVIDIA is using are new and aren’t incorporated into current-gen GPUs or game engines. NVIDIA’s perceptually based foveated rendering technology is still in the hands of the company’s researchers at the moment, but it is very likely to make its way into professional and consumer applications at some point in the future; we’re told interest is very high.

Unaltered Rendering

Blur-Only Foveation

Contrast-Preserving Foveation

The images above illustrate what NVIDIA is doing, but we also suggest watching the video. Imagine looking through a VR headset and focusing your gaze on the clock at the upper right (in the red brackets). SMI’s eye-tracking technology monitors your gaze (at 250 Hz) and tells the engine that the clock is in your foveal vision. NVIDIA’s algorithms then render the area in your periphery at a lower resolution and gradually blend the two regions together.
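For a sense of how that gradual blend might look, here is a minimal, hypothetical compositing pass (again C++, illustrative only) that mixes a full-resolution foveal render with an upsampled low-resolution peripheral render using a radial falloff around the gaze point. The image layout, the blend radii, and the assumption that both renders have already been matched to the same size are simplifications for the example.

// Illustrative sketch only: radial blend between a foveal render and a
// peripheral render around the tracked gaze point.
#include <algorithm>
#include <cmath>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<float> rgb;                          // width * height * 3 floats
    float* Pixel(int x, int y) { return &rgb[(y * width + x) * 3]; }
};

// Blend weight: 1.0 = use foveal render, 0.0 = use peripheral render.
// Assumes outerRadius > innerRadius.
float FovealWeight(float dx, float dy, float innerRadius, float outerRadius)
{
    float dist = std::sqrt(dx * dx + dy * dy);
    float t = (dist - innerRadius) / (outerRadius - innerRadius);
    return 1.0f - std::clamp(t, 0.0f, 1.0f);
}

void CompositeFoveated(Image& out, Image& foveal, Image& periphery,
                       float gazeX, float gazeY,
                       float innerRadius, float outerRadius)
{
    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            float w = FovealWeight(x - gazeX, y - gazeY, innerRadius, outerRadius);
            float* dst = out.Pixel(x, y);
            float* hi  = foveal.Pixel(x, y);
            float* lo  = periphery.Pixel(x, y);
            for (int c = 0; c < 3; ++c)
                dst[c] = w * hi[c] + (1.0f - w) * lo[c];   // linear blend per channel
        }
    }
}

A hard cutoff between the two renders would leave a visible seam; the radial weight keeps the transition gradual, in the spirit of the blending described above.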

NVIDIA isn’t simply rendering the periphery at a lower resolution, though. Lowering the resolution alone would result in stair-stepping and artifacts that would be perceptible in the periphery, especially when in motion. Those artifacts can be minimized with blurring, but NVIDIA found that a basic blur caused a loss of contrast and a tunnel-vision-like effect. So, in addition to rendering at a lower resolution and blurring, NVIDIA developed a contrast-preserving post-processing effect that makes the lower fidelity in the periphery imperceptible to the user.
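As a rough illustration of what a contrast-preserving step could involve (this is a stand-in, not NVIDIA's published filter), the sketch below blurs a signal and then scales each sample's deviation from its local mean so that local contrast roughly matches the original image. The single-channel, one-dimensional data and the fixed gain cap are simplifications made for the example.

// Illustrative sketch only: restore local contrast lost to peripheral blurring.
#include <algorithm>
#include <cmath>
#include <vector>

// Box blur as a stand-in low-pass / local-mean filter.
std::vector<float> BoxBlur(const std::vector<float>& img, int radius)
{
    int n = static_cast<int>(img.size());
    std::vector<float> out(img.size(), 0.0f);
    for (int i = 0; i < n; ++i) {
        float sum = 0.0f;
        int count = 0;
        for (int k = -radius; k <= radius; ++k) {
            int j = std::clamp(i + k, 0, n - 1);
            sum += img[j];
            ++count;
        }
        out[i] = sum / count;
    }
    return out;
}

// Local standard deviation, used here as a simple measure of local contrast.
float LocalStdDev(const std::vector<float>& img, int i, int radius)
{
    int n = static_cast<int>(img.size());
    float mean = 0.0f, var = 0.0f;
    int count = 0;
    for (int k = -radius; k <= radius; ++k) {
        int j = std::clamp(i + k, 0, n - 1);
        mean += img[j];
        ++count;
    }
    mean /= count;
    for (int k = -radius; k <= radius; ++k) {
        int j = std::clamp(i + k, 0, n - 1);
        var += (img[j] - mean) * (img[j] - mean);
    }
    return std::sqrt(var / count);
}

// Contrast-preserving step: scale each blurred sample's deviation from the
// local mean by the ratio of original to blurred local contrast (capped).
std::vector<float> RestoreContrast(const std::vector<float>& original,
                                   const std::vector<float>& blurred,
                                   int radius, float maxGain = 4.0f)
{
    std::vector<float> mean = BoxBlur(blurred, radius);
    std::vector<float> out(blurred.size());
    for (int i = 0; i < static_cast<int>(blurred.size()); ++i) {
        float sOrig = LocalStdDev(original, i, radius);
        float sBlur = LocalStdDev(blurred, i, radius);
        float gain  = (sBlur > 1e-4f) ? std::min(sOrig / sBlur, maxGain) : 1.0f;
        out[i] = std::clamp(mean[i] + gain * (blurred[i] - mean[i]), 0.0f, 1.0f);
    }
    return out;
}

The idea mirrors what is described above: keep the cheap, low-frequency peripheral image, but boost its local contrast back toward the original so the periphery doesn't read as a washed-out tunnel.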

NVIDIA will be demoing perceptually based foveated rendering, using SMI’s technology incorporated into an HTC Vive head-mounted display, at SIGGRAPH next week.