Virtual Reality has been a geek dream for decades. If you haven’t been following closely, you might have missed the fact that all the technology needed to make it happen is here, right now. It’s not years off; it’s in use today, and it will be available to consumers in the coming months. To be honest, I haven’t been this excited about a gaming phenomenon since I rode my bike to Babbage’s to pay $6 for a few floppies with the Doom shareware on them. How did this happen? After years of talk and marketing, we’ve made only modest inroads with 3D movies, and yet, all of a sudden, we have virtual reality. People are building it, and it is affordable.
Welcome to the desert of the real.
At QuakeCon a couple of weeks ago, John Carmack talked at length about the state of his work on various fronts, spending a good portion of the talk on virtual reality. The talk itself was over three and a half hours long and, in typical Carmack fashion, had no visual aids or demos that the media could use to capture his thoughts in a sound bite. The rewarding thing about watching an entire Carmack talk is that he goes into detail about the technical aspects of what he’s working on, so you can actually learn a lot about the state of the art by listening closely. Unfortunately, given the format and length of the talk, few have the patience to sit through the entire thing, much less digest it. I broke it up into three sessions and just finished it a couple of days ago. I’m working on an article that distills some of the major technical points Carmack made during his keynote, because much of what he said was about landmark shifts in the gaming industry. As a sort of primer, however, I wanted to touch on one point about virtual reality and the nature of the gaming art form that lends itself so well to this new technology. If you’re a long-time gamer, you will already know everything I’m about to say, but if you are only aware of gaming from the periphery, this might lend some credibility to the notion that we have VR now. The short version: there are a lot of reasons to believe we’re on the verge of a major gaming revolution, and that revolution will occur in the next 12 months.
One of the hallmarks of VR is that it presents an immersive, three-dimensional world to the user. I will discuss the “immersive” part in my next post, so I will focus on the 3D part here. As you may have noticed, other media that have tried 3D have had only modest success, specifically 3D televisions and movies. (Photos have had 3D for a long time as well, and that has yet to go mainstream, but I’ll be focusing on movies for this post.) Sure, 3D movies are out there, and people will pay a 30% premium for them, but the quality varies a great deal. It can vary because of poor source material (e.g., filming with only one lens and then adding depth maps to each frame to approximate the data needed for 3D) and because of poor viewing equipment (current 3D TVs are expensive and still suffer from ghosting, for example). Even with great source material and really good display technology, 3D presents a data challenge: you have to push twice as many pixels for a 3D image as you do for 2D, and in the realm of movies that means roughly twice as much source data stored on a disc or streamed over a network. In short, the movie industry has to re-tool its whole methodology for making and distributing movies to support a switch to 3D: film with a 3D rig, store twice as much data, create both 2D and 3D versions for distribution, re-tool all the theaters, sell 3D TVs to support home viewing, develop new media formats and data streams that carry the 3D view, sell players that support those formats, and sell the entire idea to customers as something worth paying for. A lot of this work has been done, but it’s all still very much a work in progress, and making a 3D movie well is expensive, so the costs have to be built into the budget from the beginning.
So, given the modest uptake of 3D in the realm of films, why would VR take off in the realm of video games? After all, it too requires new hardware and support from the games themselves!
In my opinion, it goes back to a fundamental decision John Carmack made when he designed the first first-person shooters in the early 1990s. Unlike other games of the time, Wolfenstein started down the road of modeling the game world in three dimensions. Never mind that no user had anything other than a 2D screen to view it on; the basic geometry of the game world was modeled in a simple version of 3D.
Wolfenstein’s “engine” (a word for the program that takes all the art assets and draws the screen for the player) was a raycaster: walls were blocks laid out on a grid of X and Y coordinates, and the Z coordinate had a single fixed height. All the floors were at the same level, so there were no stairs or platforms in Wolfenstein.
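To make the idea concrete, here is a minimal sketch of a Wolfenstein-style raycaster. This is not the actual Wolfenstein code: the names (`MAP`, `cast_ray`) are mine, and a real raycaster steps exactly one grid boundary at a time (the DDA algorithm) rather than marching in small fixed steps, but the principle is the same: one ray per screen column, and the wall’s on-screen height is inversely proportional to the distance the ray travels.

```python
import math

# Wolfenstein-style map: walls are blocks on a 2D (X, Y) grid.
# 1 = wall, 0 = empty. Height (Z) is implicit and constant.
MAP = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_ray(px, py, angle, step=0.01, max_dist=20.0):
    """March a ray from (px, py) at `angle` until it hits a wall cell.

    Returns the distance travelled. (A real engine uses DDA instead of
    this fixed-step march, but the idea is identical.)
    """
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == 1:
            return dist
        dist += step
    return max_dist

# One ray per screen column; apparent wall height falls off with
# distance -- that is the entire 3D illusion.
d = cast_ray(2.5, 2.5, 0.0)   # looking along +X from the room's center
wall_height = 1.0 / d
```

Cast a few hundred of these rays (one per column of a 320-pixel-wide screen) and you have the whole renderer, which is why it ran so fast on early-1990s hardware.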
Doom’s engine took this a step further, and allowed floors to be at variable heights in the world, as well as allowing areas of the level to change height during gameplay, allowing for “bridges” to rise out of slime and walls to lower. But it still didn’t support arbitrary three dimensional world models (like bridges going over moats, for example).
By the time the Quake engine was developed, every element in the game world had coordinates in 3D, including floors, walls, in-game decorations like torches, enemies, and even other players, and “real” 3D was supported.
The way the engine was designed meant that when a player jumped, the computer tracked the position in all three dimensions, so the position of the player’s eyes was always precisely known (along with the direction the player was facing). This meant that even back in the mid-1990s, players were essentially driving a camera through a modeled 3D world: the computer constantly evaluated where the camera was, drew the view from that camera accordingly, and projected the result onto the 2D screen for the player. This was a major decision that affected the underpinnings of how content was created and modeled within the game engine itself, and it was made to aid in the consistent development of credible game worlds. The revelation was that the simplest way to provide an arbitrary view of an object that can be examined from multiple angles is to model it in 3D; as the player examines it during gameplay, the engine simply generates the appropriate view from the 3D model on demand. This technique was so effective that just about every game since has been influenced by that decision, which in turn fueled the market for 3D cards that provide hardware support for the common mathematical operations needed to render such worlds.
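The “driving a camera through a 3D world” step boils down to one transform. The sketch below is an illustrative simplification (my own function names and conventions, not Quake’s): translate a world point into camera-relative coordinates, rotate so the camera looks down one axis, then do the perspective divide so distant points crowd toward the center of the screen.

```python
import math

def project(point, eye, yaw, width=320, height=200, fov=math.pi / 2):
    """Project a 3D world point onto a 2D screen for a camera at `eye`
    facing `yaw` radians (rotation about the vertical Z axis).

    Returns (screen_x, screen_y), or None if the point is behind the
    camera. The engine knows exactly where the eye is and which way it
    faces; everything drawn on screen goes through a transform like this.
    """
    # Translate into camera-relative coordinates.
    rx = point[0] - eye[0]
    ry = point[1] - eye[1]
    rz = point[2] - eye[2]
    # Rotate so the camera looks down the +X axis.
    cos_y, sin_y = math.cos(-yaw), math.sin(-yaw)
    fwd  = rx * cos_y - ry * sin_y   # depth in front of the camera
    side = rx * sin_y + ry * cos_y   # left/right offset
    if fwd <= 0:
        return None                  # behind the camera
    # Perspective divide: farther points move toward the screen center.
    f = (width / 2) / math.tan(fov / 2)
    screen_x = width / 2 + f * side / fwd
    screen_y = height / 2 - f * rz / fwd
    return screen_x, screen_y
```

A point straight ahead of the camera lands at the center of the screen, and doubling its distance halves its offset from center, which is all “perspective” means here.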
This approach pays off when building VR. We’ve built games around the idea that we feed the game engine the location of the player’s eyes in the game world and the direction the player is looking, and the engine draws the world from that perspective. Sure, historically we assumed the player has effectively one eye, with the world view displayed on a single screen a few feet from the player’s face. But as soon as you consider VR, it is easy to realize that all the source material needed for true virtual reality is already there. At the highest level, you simply ask the engine to render a second view from the perspective of the other eye, and then pipe both images into a head-mounted display that presents one image to each of the player’s eyes. That is enough to generate a true 3D image for the player. In motion, however, there are a number of other requirements for a truly immersive experience: the displays need sufficient resolution, you have to support head tracking, both position and “pose” (the direction the player is looking), and the view has to respond to the player’s head movement extremely quickly (well under 50 milliseconds). So there’s still a bit to discuss about how those issues are being addressed. But that’s for my next post, in which I will attempt to summarize some of the points John Carmack made in his QuakeCon keynote about the technical implementation of virtual reality.
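The “render a second view for the other eye” idea above can be sketched in a few lines. The names and the 64 mm interpupillary distance are my assumptions for illustration (a real HMD SDK supplies the measured value, plus per-eye lens distortion that I’m ignoring here); the point is that the engine’s existing single-view renderer is reused unchanged, just called twice with offset eye positions.

```python
import math

IPD = 0.064  # assumed interpupillary distance in meters

def eye_positions(head, yaw):
    """Return (left_eye, right_eye) world positions for a head at `head`
    facing `yaw` radians about the vertical Z axis."""
    # The camera's "right" vector, perpendicular to its facing direction
    # (convention: forward is (cos yaw, sin yaw, 0) with Z up).
    right = (math.sin(yaw), -math.cos(yaw), 0.0)
    half = IPD / 2
    left_eye  = tuple(h - half * r for h, r in zip(head, right))
    right_eye = tuple(h + half * r for h, r in zip(head, right))
    return left_eye, right_eye

def render_stereo(render_view, head, yaw):
    """Render the scene twice, once per eye, and return both images.
    `render_view(eye_pos, yaw)` stands in for whatever single-view
    renderer the engine already has -- unchanged from the mono case."""
    left_eye, right_eye = eye_positions(head, yaw)
    return render_view(left_eye, yaw), render_view(right_eye, yaw)
```

That structural reuse is exactly why games are so much closer to VR than movies: the 3D model was there all along, and the second viewpoint is essentially free to request (though not free to compute, since the scene is drawn twice per frame).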