Matthew Conway, Randy Pausch, Rich Gossweiler, Tommy Burnette.
Conference Companion on Human Factors in Computing Systems (CHI '94),
pp. 295-296. ACM, 1994.
Back in the 1990s, the University of Virginia, under Randy Pausch,
acquired two $100,000 SGIs to create virtual environments. We used
magnetic Polhemus trackers to know where your head and hands were, and a
very heavy head-mounted display. The lenses had to sit so far out front
that a sandbag was mounted on the back to counterbalance the gear.
I wrote DIVER (DIstributed Virtual Environment Rendering system) in C,
based on early versions of OpenGL. Since we used two SGIs, one for each
eye, DIVER used UDP sockets to communicate updates to the scene graph and
to synchronize rendering.
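The sketch below shows the idea, not DIVER's actual wire format (DIVER
itself was C; this is Python for brevity): broadcast each scene-graph
update to both eye machines, then send a sync message so both render the
same frame state. The addresses, opcodes, and message layout are all
hypothetical.

    import socket
    import struct

    # Hypothetical stand-ins for the two per-eye rendering machines.
    EYE_HOSTS = [("127.0.0.1", 9001), ("127.0.0.1", 9002)]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_update(object_id, x, y, z):
        # Opcode 0x01 = "move object"; both eyes get every update.
        msg = struct.pack("!BIfff", 0x01, object_id, x, y, z)
        for host in EYE_HOSTS:
            sock.sendto(msg, host)

    def send_render_sync(frame):
        # Opcode 0x02 = "frame complete"; releasing both renderers at
        # once keeps the left and right eyes on the same frame.
        msg = struct.pack("!BI", 0x02, frame)
        for host in EYE_HOSTS:
            sock.sendto(msg, host)

    # Per frame: push all updates, then release both renderers together.
    send_update(42, 0.0, 1.5, -2.0)
    send_render_sync(1)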
DIVER departed from the Inventor scene graph with its own model, allowing
any object to dynamically reparent under any other object without suddenly
jumping in space due to newly inherited parent matrices. DIVER also
supported multiple cameras, a rendering loop that ran independently of the
scene update loop, and a simpler API with calls like object.moveTo()
rather than raw matrix updates.
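The no-jump reparenting follows from the matrix composition alone; here is
a minimal sketch, assuming the usual world = parent_world @ local
convention (the helper names are mine, not DIVER's):

    import numpy as np

    def reparent(world_object, world_new_parent):
        # Choose a new local matrix that preserves the object's current
        # world matrix: since world = parent_world @ local, setting
        # local = inv(parent_world) @ world keeps it fixed in space.
        return np.linalg.inv(world_new_parent) @ world_object

    def translation(x, y, z):
        m = np.eye(4)
        m[:3, 3] = [x, y, z]
        return m

    # An object at world position (1, 2, 3), reparented under a parent
    # translated by (5, 0, 0), keeps its world pose exactly.
    obj_world = translation(1, 2, 3)
    parent_world = translation(5, 0, 0)
    local = reparent(obj_world, parent_world)
    assert np.allclose(parent_world @ local, obj_world)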
We then put different interpreted programming languages on top of DIVER
to allow rapid prototyping without needing to know C or to recompile and
restart the system. Python was the preferred language at the time, and
developers could work at remote stations and send their rendering commands
over to DIVER.
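A hedged sketch of what that remote workflow could look like (the command
format and class here are invented for illustration): a tiny proxy object
turns method calls into datagrams that the rendering process applies on
its next scene-update pass.

    import socket

    class RemoteObject:
        """Proxy for a scene-graph object in the rendering process."""

        def __init__(self, name, addr=("127.0.0.1", 9001)):
            self.name = name
            self.addr = addr
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def moveTo(self, x, y, z):
            # One datagram per command; no recompile, no restart.
            cmd = "moveTo %s %f %f %f" % (self.name, x, y, z)
            self.sock.sendto(cmd.encode(), self.addr)

    bunny = RemoteObject("bunny")
    bunny.moveTo(0.0, 1.0, -2.0)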
In addition to developing new models for virtual environments and creating
a virtual-environment-based perception lab, we also developed Alice to help
people learn how to program in 3D.
Alice has come a long way since then and has undergone many rewrites and
programming language changes, but even twenty years later, with the Oculus
Rift, we are seeing folks reinvent and relearn some of the benefits and
problems that an immersive headset creates.