SANCTUARY: NAVIGATING AND INTERACTING IN VIRTUAL REALITY
--

One of the big challenges VR still has to overcome is how users physically interact with the worlds they inhabit. At the moment there are many people working on an array of input ideas: infrared sensors, body tracking, eye and mouth tracking, keyboards and mice, controllers, and even big treadmill-like contraptions you strap yourself into. With all of these you have to consider how users will navigate the various worlds and how they will interact with the user interface to complete tasks we would otherwise find easy in real life, like ‘clicking’ buttons, communicating and so on.


I have an idea for hardware that could allow for easy navigation of a virtual world like Sanctuary (it probably wouldn’t work well for fast-moving games). It would combine a few different technologies:


- Head tracking

- Eye tracking

- Brain computer interfacing

- Speech recording/processing


The goal would be user interaction that feels as natural as normal reality. You still wouldn’t get haptic feedback, but the experience would be more akin to having a dream, or to how we generally traverse our inner thoughts.



REMOVING THE MOUSE/KEYBOARD & CONTROLLERS


What are the main aims of a mouse, keyboard and controller? Well, the mouse gives you the ability to move and 'look’ about, plus buttons to press that state your intentions; a button press usually means “yes, please do this”. The keyboard allows for both navigation and input (usually communication) and requires visual cues for most people. Then we have the controller, which is essentially a simplified combination of both of these pieces of hardware. What if we could get rid of all that and just use our heads instead?

- Spatial navigation could be achieved with a combination of head tracking and brain computer interfacing. You could 'teach’ the brain sensors to follow a simple set of commands, like go forward, go back, stop, 'press’ button etc. For example, if the user wants to move around, all they’d need to do is physically look in the direction they wish to move and think about going forward (there’s a rough sketch of this just after the list).

- Interaction with menus would require both rethinking the design of user interfaces and incorporating eye tracking and/or brain computer interfacing. I can envisage menu systems as mostly heads-up displays, that is, icons or other menu items overlaid on top of the virtual world so they are always available. All the user would have to do is look at a specific menu item and think about 'clicking’, or perhaps blink twice quickly. This would replace the left click “yes, please do this” you get from a mouse (a sketch of the blink-to-click idea follows below). You could also have a set of different eye/blink combinations that do various things, such as initiating voice chat, closing all menus etc.
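
To make the “look where you want to go, think forward” idea from the first point a bit more concrete, here is a minimal sketch (in Python) of how the two inputs might be combined every frame: the head tracker supplies a gaze direction, the brain-sensor classifier supplies a coarse intent label, and together they produce a small position update. All of the names, labels and numbers here are assumptions of mine, not any real API.

    import math

    def movement_step(head_yaw, head_pitch, bci_intent, speed=1.5, dt=0.016):
        # head_yaw / head_pitch: gaze angles in radians from the head tracker.
        # bci_intent: a label such as 'forward', 'back' or 'stop' from the
        # brain-sensor classifier. Returns an (x, y, z) position delta.
        if bci_intent == 'stop':
            return (0.0, 0.0, 0.0)
        sign = 1.0 if bci_intent == 'forward' else -1.0
        # Turn the gaze angles into a unit direction vector.
        dx = math.cos(head_pitch) * math.sin(head_yaw)
        dy = math.sin(head_pitch)
        dz = math.cos(head_pitch) * math.cos(head_yaw)
        step = sign * speed * dt
        return (step * dx, step * dy, step * dz)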

With a combination of head/eye tracking and brain computer interfacing we could possibly replace the functions that a keyboard and mouse can provide.
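
And a similarly rough sketch of the double-blink ‘click’ from the second point: the eye tracker reports blink events, and two blinks inside a short window while the gaze is resting on a menu item count as an activation. The gaze_target object and the timing window are made-up placeholders.

    import time

    DOUBLE_BLINK_WINDOW = 0.4   # seconds; two blinks this close together = a 'click'

    class BlinkClicker:
        def __init__(self):
            self.last_blink = None

        def on_blink(self, gaze_target, now=None):
            # gaze_target: whichever menu item the eye tracker says we are
            # currently looking at (None if we are looking at the world).
            now = time.monotonic() if now is None else now
            if self.last_blink is not None and now - self.last_blink <= DOUBLE_BLINK_WINDOW:
                self.last_blink = None
                if gaze_target is not None:
                    gaze_target.activate()   # the "yes, please do this" of a left click
                return True
            self.last_blink = now
            return False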



COMMUNICATION

So we have some ideas about how you could navigate and interact, but how would you communicate with other users in a natural way without a keyboard?

Brain computer interfaces aren’t good enough to translate pure thought into text just yet, but we can use the eye tracking ideas mentioned previously along with something we all do very naturally: speak.

Imagine having a simple microphone icon available at all times on your HUD, and whenever you want to speak, you just blink quickly in whatever combination you’ve set up and talk. The speech could either be sent as audio, or automatically converted to text and sent that way.

If looking at an icon takes too long, you could instead have a special blink combination or some other quick gesture (maybe looking all the way up or down) to start or stop voice recording. The same could be done through the brain computer interface. A small sketch of this push-to-talk toggle follows.
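
Here is roughly what that toggle could look like in code. Again, it is only a sketch: recorder, send and speech_to_text are stand-ins for whatever audio capture, chat transport and recognition service a real client would actually use.

    class PushToTalk:
        def __init__(self, recorder, send, speech_to_text=None):
            # recorder / send / speech_to_text are assumed interfaces, not real APIs.
            self.recorder = recorder
            self.send = send
            self.speech_to_text = speech_to_text
            self.recording = False

        def on_gesture(self):
            # Called whenever the eye tracker (or BCI) sees the agreed
            # blink combination / look-straight-up gesture.
            if not self.recording:
                self.recorder.start()
                self.recording = True
                return
            audio = self.recorder.stop()
            self.recording = False
            if self.speech_to_text is not None:
                self.send(text=self.speech_to_text(audio))   # send as text
            else:
                self.send(audio=audio)                       # or send the raw audio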

As Skype’s recently demonstrated translator has shown, translating voice data into different languages on the fly is technology that is already around.
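
Slotting that into the flow above would only add one step: recognise the speech, translate the text, and send both versions along. The recognise, translate and send callables below are placeholders for whichever services a real client would plug in, not references to any particular product.

    def relay_translated(audio, recognise, translate, send, target_lang='es'):
        # recognise: speech -> text; translate: text -> text in target_lang;
        # send: deliver the message to the other users. All three are
        # hypothetical hooks.
        text = recognise(audio)
        translated = translate(text, target_lang)
        send(original=text, translated=translated)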



A SMALL PACKAGE

Most, if not all, of what I mentioned above is available to use today, and I think the key will be to include it all in one small, contained package. For example, the head and eye tracking could live in the goggle area of the headset and the brain sensors could sit in the headband. I can imagine that in the not-too-distant future smartphones will play more of a role in VR, removing the need for computers and wires as they get smarter and more powerful. All you’d need to do is strap it on your head and that’s it! No accessories or other nonsense.



––
BY CHRIS ROBINSON