Bringing Virtual Reality into Focus
Warped focus turns fuzzy images into sharp details
Northwestern Engineering PhD student Nathan Matsuda has developed a practical solution to a problem that has plagued virtual reality systems since the beginning. By using custom-designed algorithms and warped-focus screens, Matsuda and his collaborators have discovered a way to bring objects at different depths seamlessly into focus.
Matsuda, who conducted the research while working as an intern at Oculus Research, will present this work at the annual SIGGRAPH Conference in Los Angeles from July 30 to August 3. Oculus Research is the research arm of Oculus, which produces the virtual reality Rift headset and Touch controllers and powers the Samsung Gear VR mobile headset.
In the real world, as your eyes gaze around a scene, they seamlessly switch focus from one object to the next, regardless of whether the objects are near or far away. Any object that the eyes are not currently focused on is naturally blurred into the background. This ability to switch between foreground and background objects is so natural that people often don’t realize their eyes are doing it.
While virtual reality systems are improving rapidly to deliver fully immersive 3D experiences, they still cannot shift focus between near and distant objects as effortlessly as the eye does. Nor can they blur objects outside the main point of focus, making it difficult for users to concentrate on one object at a time.
“VR headsets can give you an idea of 3D space, but the focus is always fixed at one plane,” said Matsuda, a PhD candidate in Professor Oliver Cossairt’s laboratory. “Because current products are at a fixed focus — regardless of the apparent stereo depth of a scene — the actual image presented to each eye comes into focus at the same distance at all times.”
A VR headset combines a flat screen with a magnifying lens, which makes the screen appear larger and farther away. As users tilt their heads, the VR software adjusts the images accordingly.
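To see why the focus ends up fixed, consider a simple thin-lens model of that eyepiece. The sketch below uses purely illustrative numbers (a hypothetical 50 mm lens with the screen 45 mm away, not the specs of any actual headset) to show how the lens places the entire screen at a single virtual image distance:

```python
# Minimal sketch: thin-lens model of a fixed-focus VR eyepiece.
# All numbers are illustrative assumptions, not real headset specs.

def virtual_image_distance(focal_length_mm: float, screen_distance_mm: float) -> float:
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i.

    With the screen inside the focal length (d_o < f), the result is
    negative: a magnified virtual image on the screen's side of the
    lens -- the one plane at which every pixel comes into focus.
    """
    return 1.0 / (1.0 / focal_length_mm - 1.0 / screen_distance_mm)

# Example: a 50 mm lens with the screen 45 mm away places the virtual
# image about 450 mm from the lens, regardless of the scene content.
d_i = virtual_image_distance(50.0, 45.0)
print(f"virtual image at {abs(d_i):.0f} mm")  # -> 450 mm
```

Because that distance depends only on the optics, every pixel lands at the same focal depth no matter how near or far its scene point is supposed to be.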
Matsuda and Oculus Research scientists Alexander Fix and Douglas Lanman discovered that by warping the focus of this screen, they could turn near-field images from soft to sharp. By warping the focus of the screen to conform to the 3D content, the team’s solution allows users’ eyes to follow the contour of the scene in a natural way, pulling sharp details into focus.
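One way to read that idea: each pixel’s target focus follows directly from the scene’s depth. The sketch below is an assumed illustration, not the team’s actual pipeline; it converts per-pixel depth in meters into focus in diopters and clips to a hypothetical range the optics could cover:

```python
import numpy as np

# Hypothetical illustration: derive a per-pixel target focal surface
# (in diopters, i.e. 1/meters) from a scene depth map. The clipping
# range is an assumed hardware limit, not a published spec.

def focal_surface_from_depth(depth_m: np.ndarray,
                             min_diopters: float = 0.25,
                             max_diopters: float = 4.0) -> np.ndarray:
    """Focus each pixel at its scene point's distance, within limits."""
    with np.errstate(divide="ignore"):
        target = 1.0 / depth_m  # meters -> diopters
    return np.clip(target, min_diopters, max_diopters)

# Example: an object 0.5 m away calls for 2.0 diopters of focus,
# while a wall 4 m away calls for 0.25 diopters.
depth = np.array([[0.5, 4.0]])
print(focal_surface_from_depth(depth))  # [[2.   0.25]]
```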
To design a warped-focus screen that generalizes to the widest range of scenes, Matsuda and his collaborators developed a general-purpose optimization algorithm. Their solution employs two screens: the headset’s original screen, which displays the scene, and the warped-focus screen, designed with the optimization algorithm. When aligned inside the VR headset, the two screens work together to ensure that the eyes see a flawless image.
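The article doesn’t spell out the objective, but one common way to pose such a design problem is as a regularized least-squares fit: match the scene’s per-pixel target focus while penalizing curvature that a physical focusing element could not render. The following is a minimal sketch under exactly those assumptions, not the published algorithm:

```python
import numpy as np

# Assumed formulation: fit a smooth focal surface phi (in diopters) to
# the scene's target focus, penalizing curvature via a Laplacian term.
# Both the objective and the solver are illustrative choices.

def laplacian(phi: np.ndarray) -> np.ndarray:
    """Discrete 5-point Laplacian with replicated edges."""
    p = np.pad(phi, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * phi)

def fit_focal_surface(target: np.ndarray, smoothness: float = 5.0,
                      steps: int = 800, lr: float = 0.01) -> np.ndarray:
    """Gradient descent on ||phi - target||^2 + smoothness * ||grad phi||^2."""
    phi = target.copy()
    for _ in range(steps):
        grad = 2.0 * (phi - target) - 2.0 * smoothness * laplacian(phi)
        phi -= lr * grad
    return phi

# Example: a near object (2 diopters) against a far background
# (0.5 diopters); the fitted surface bends smoothly between the two.
target = np.full((64, 64), 0.5)
target[20:44, 20:44] = 2.0
phi = fit_focal_surface(target)
```

The smoothness weight stands in for whatever physical limits the real warped-focus screen imposes; a higher value yields a flatter surface that trades per-pixel accuracy for renderability.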
Not only does Matsuda’s solution provide a better experience for the user, it also loosens constraints on what developers can do.
“Of all the things developers could possibly do in virtual reality, they are limited to a subset that works with a fixed focus,” Matsuda said. “Ultimately, we want to march closer and closer to finding the optical fidelity that you expect from the real world, so that people who create experiences in virtual reality can do whatever they want and not be limited by hardware.”