Microsoft Research Uses Low-Cost Hand Tracking To Interface With Emerging VR Worlds
We're finally starting to see the world of virtual reality unfold to an eager and (so far) receptive audience. While most of the attention to this point has been on gaming, the technology itself is the most interesting part, especially since we're still at a relatively early stage of VR. One of the things to keep an eye on is the evolution of hand tracking, which Microsoft believes is nearly slick enough for mainstream use.
"How do we interact with things in the real world? Well, we pick them up, we touch them with our fingers, we manipulate them," said Jamie Shotton, a principal researcher in computer vision at Microsoft's Cambridge, UK, research lab. "We should be able to do exactly the same thing with virtual objects. We should be able to reach out and touch them."
Microsoft's ultimate (and broad) goal is to develop solutions that will allow us to interact with technology in more natural ways than ever before, and in the world of VR, that means using your hands even for small, sophisticated movements like picking up a tool or pushing a button. It's just as important in the grand scheme of things as visual cues and speech recognition.
"If we can make vision work reliably, speech work reliably and gesture work reliably, then people designing things like TVs, coffee machines or any of the Internet of Things gadgets will have a range of interaction possibilities," said Andrew Fitzgibbon, a principal researcher with the computer vision group at the UK lab.
Accurate hand tracking without using up enormous computing resources is easier said than done. Hands are complex and highly flexible: they can rotate fully around and fold into a fist, which hides the fingers from the tracking camera entirely. To help with that, Microsoft's computer vision team is developing a technology based on an algorithm that dates back to the 1940s, a time when vast computing resources weren't readily accessible.
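The article doesn't name the algorithm, but Microsoft's researchers have pointed to the Levenberg-Marquardt method, published in 1944, which fits a parameterized model (here, a hand model) to observed data by minimizing squared error. A minimal sketch of the idea, using a toy exponential curve in place of a hand model (the function names and the example model are illustrative assumptions, not Microsoft's actual implementation):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=50, lam=1e-3):
    """Minimize sum(residual(x)**2) by damped Gauss-Newton steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Damped normal equations: (J^T J + lam*I) step = -J^T r
        A = J.T @ J + lam * np.eye(x.size)
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x = x + step
            lam *= 0.5  # good step: act more like Gauss-Newton
        else:
            lam *= 2.0  # bad step: act more like gradient descent
    return x

# Toy stand-in for a hand model: fit y = a * exp(b * t) to synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)

def residual(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    e = np.exp(b * t)
    return np.stack([e, a * t * e], axis=1)

params = levenberg_marquardt(residual, jacobian, x0=[1.0, 0.0])
```

The damping term `lam` is what made the method practical on limited hardware: when a step fails, the solver cheaply falls back toward gradient descent instead of diverging, which suits real-time fitting of a hand model to each camera frame.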
"The system, still a research project for now, can track detailed hand motion with a virtual reality headset or without it, allowing the user to poke a soft, stuffed bunny, turn a knob or move a dial. What’s more, the system lets you see what your hands are doing, fixing a common and befuddling disconnect that happens when people are interacting with virtual reality but can’t see their own hands," Microsoft explains.
The project itself is called Handpose, and it relies on a wealth of basic computer vision research. It also uses a camera to track a person's hand movements. That has been done before, but in this case the system is designed to accommodate much more flexible setups: a user wouldn't have to sit at a desk, but could get up and move around a room while the camera tracks everything from zig-zag motions to thumbs-up signs, all in real time.
Microsoft says this type of technology could eventually be used by everyone from law enforcement officials directing robots into dangerous situations to office workers sorting through email or reading documents with a few flips of the wrist (instead of mouse and keyboard input). It could also be used for more creative tasks, like creating art or making music.