Vrui's overriding goal is to support development of correct, portable, and usable applications. In this context, portable means that an application is developed in one environment – typically a desktop environment – but runs correctly in any environment. Usable means that a Vrui application run in a desktop environment is exactly as effective as a native desktop application, and that the same Vrui application run in a CAVE or other immersive environment type is exactly as effective as an application developed natively for that specific environment.
Unlike glut, Vrui is portable to non-desktop environments. Here, portable means that a Vrui application is written only once and then runs in any environment, without even having to be recompiled. This portability is achieved by shielding application writers from the underlying display system much more than glut does: Vrui applications do not have to open their windows or set up their OpenGL rendering contexts, and they do not directly receive input from mouse or keyboard. In fact, the handling of input is probably the biggest difference between Vrui and glut.
An additional difference between Vrui and glut is that Vrui consists of an entire hierarchy of layered libraries that work together to support developers in writing correct, portable, and usable applications. For example, Vrui contains a comprehensive cluster-transparent file I/O handling library, explicit high-performance intra-cluster communication, a comprehensive library for affine and projective 3D geometry, OpenGL support classes supporting generic programming, an OpenGL-based GUI widget set, and a scene graph library. While all these are completely optional, and Vrui is intentionally designed to be as compatible as possible with third-party libraries, developers are encouraged to use the highest-level available abstractions provided by the entire Vrui package.
Ignoring everything else, though, the widget set offered by Vrui is comparable to those offered by Qt or Gtk+ or other 2D GUI toolkits, albeit not (yet) as complete. Vrui's GUI widgets are three-dimensional, since they are intended to work in a virtual three-dimensional display space, but their functionality and layout are very similar to those of 2D GUI widgets. There are dialogs, menus, buttons, sliders, etc., just as usual. From a programming point of view, Vrui's GUI widgets more or less follow the approach of OSF/Motif, in that they primarily rely on automatic layout based on a hierarchical description, and on (C++-style) callbacks to connect widgets to application behavior. One could say that Vrui's widget set is a complete rip-off of OSF/Motif, translated to C++.
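To illustrate that Motif-like style, here is a minimal sketch of setting up a main menu with a single button, modeled on the pattern used by Vrui's newer example programs. Class names such as GLMotif::PopupMenu and the exact constructor and callback signatures vary between Vrui versions (older versions build menus from GLMotif::Popup and GLMotif::Menu instead), so treat this as indicative rather than authoritative:

    #include <GLMotif/PopupMenu.h>
    #include <GLMotif/Button.h>
    #include <Misc/CallbackData.h>
    #include <Vrui/Vrui.h>
    #include <Vrui/Application.h>

    class MyApplication: public Vrui::Application {
    private:
        GLMotif::PopupMenu* mainMenu; // the application's main menu shell

        /* Callback invoked when the "Reset Navigation" button is selected: */
        void resetNavigationCallback(Misc::CallbackData* cbData) {
            /* Show a 10-unit radius around the application-space origin: */
            Vrui::setNavigationTransformation(Vrui::Point::origin, Vrui::Scalar(10));
        }

    public:
        MyApplication(int& argc, char**& argv)
            : Vrui::Application(argc, argv), mainMenu(0) {
            /* Build the menu hierarchically; layout is automatic: */
            mainMenu = new GLMotif::PopupMenu("MainMenu", Vrui::getWidgetManager());
            mainMenu->setTitle("My Application");

            /* Add a button and connect it to application behavior via a callback: */
            GLMotif::Button* resetButton = new GLMotif::Button("ResetButton", mainMenu, "Reset Navigation");
            resetButton->getSelectCallbacks().add(this, &MyApplication::resetNavigationCallback);

            mainMenu->manageMenu();
            Vrui::setMainMenu(mainMenu); // hand the finished menu to Vrui
        }
        virtual ~MyApplication(void) {
            delete mainMenu;
        }
    };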
In other words, Vrui is primarily a display and interaction engine, comparable to the lower levels contained in any high-level game engine. As a result, given a game engine with a clear separation of responsibility between its layers, it can be possible to layer that engine on top of Vrui, with the added benefit of being able to run it in non-desktop environments. However, because most commercial or free game engines do not make this clear distinction, doing so might be difficult in practice.
GUI toolkits like Qt or Gtk+ remedied the problem of every application programming directly against the window system by introducing a higher level. Instead of working with windows and mouse events, applications could now use "widgets" with specific purposes and widget events, leading to much better user interfaces and, more importantly, consistent interfaces between applications.
Vrui attempts to do to immersive 3D graphics what Qt et al. did to 2D GUIs. It provides higher-level interfaces, to prevent individual applications from "reinventing the wheel," and instead foster consistent user interfaces. Instead of dealing directly with OpenGL windows and 4x4 matrices representing input devices, Vrui applications render into a toolkit-provided 3D application space, and receive higher-level events from input devices. As a concrete example, in other toolkits navigation, i.e., the mapping from 3D application space to display space, is handled by each application individually, whereas in Vrui it is handled by the toolkit. Overall, the result is that Vrui applications with widely differing purposes "look & feel" the same.
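As a concrete sketch of what this means for an application developer, here is a minimal Vrui application, modeled on the example programs shipped with Vrui. The VRUI_APPLICATION_RUN macro and the exact constructor signature are those of recent Vrui versions and are assumptions here; older versions use an explicit main function that calls Application::run. It draws a simple coordinate cross in application space and leaves windows, navigation, and input entirely to the toolkit:

    #include <GL/gl.h>
    #include <GL/GLContextData.h>
    #include <Vrui/Vrui.h>
    #include <Vrui/Application.h>

    class HelloVrui: public Vrui::Application {
    public:
        HelloVrui(int& argc, char**& argv)
            : Vrui::Application(argc, argv) {
            /* Ask Vrui to map a 10-unit radius around the application-space origin into the display: */
            Vrui::setNavigationTransformation(Vrui::Point::origin, Vrui::Scalar(10));
        }

        /* Called by Vrui for every window, eye, and cluster node; renders in application space: */
        virtual void display(GLContextData& contextData) const {
            glPushAttrib(GL_ENABLE_BIT);
            glDisable(GL_LIGHTING);

            /* Draw a simple coordinate cross around the origin: */
            glBegin(GL_LINES);
            glVertex3f(-5.0f, 0.0f, 0.0f); glVertex3f(5.0f, 0.0f, 0.0f);
            glVertex3f(0.0f, -5.0f, 0.0f); glVertex3f(0.0f, 5.0f, 0.0f);
            glVertex3f(0.0f, 0.0f, -5.0f); glVertex3f(0.0f, 0.0f, 5.0f);
            glEnd();

            glPopAttrib();
        }
    };

    /* Provided by recent Vrui versions; creates main() and runs the application: */
    VRUI_APPLICATION_RUN(HelloVrui)

Note that the application never opens a window, never reads the mouse, and never touches a projection matrix; whether it ends up on a desktop monitor or on the walls of a CAVE is decided entirely by the run-time configuration.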
An intended side effect of Vrui's higher-level abstractions is that Vrui applications are portable between vastly different types of display environments. A single Vrui application will run in a CAVE like a native CAVE application, and will run on the desktop very similarly to a native desktop application. In comparison, other toolkits require applications to be developed for a narrower range of target environments, sometimes requiring different code for single-screen and multi-screen environments, and none that we are aware of support running applications on desktop environments in any way that is useful beyond basic debugging.
For example, Vrui's underlying geometry library is used throughout Vrui's API instead of passing positions or orientations as flat 3- or 4-element arrays or 4x4 matrices. Vrui uses abstract geometry classes such as Point and Vector for affine points and vectors, respectively, and a hierarchy of abstract transformation classes, ranging from translation-only and rotation-only transformations through rigid-body transformations to fully general affine and projective transformations. All these classes have full sets of algebraic operations, which means application code can in almost all cases use them as black boxes. These higher-level classes significantly reduce the burden on application developers to either write their own algebraic operations, such as matrix inversion, or continuously convert back and forth between the toolkit's flat representation and the representation of a third-party geometry library they want to use. An intended side effect of using higher-level abstractions in the API is that implicit constraints can be made explicit. For example, tracked 6-DOF input devices can only change position and orientation, and instead of representing those as generic projective transformations, i.e., 4x4 matrices, Vrui represents them as rigid-body transformations, i.e., a translation vector plus a unit quaternion. This makes arithmetic involving input devices easier, more efficient, and more robust. That said, all classes have methods to convert from/to flat array representations to remain compatible with third-party libraries an application developer might want to use.
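A small, hedged example of what this looks like in code; the class and function names follow Vrui's Geometry library as used by its example programs, but exact headers and signatures may differ between versions:

    #include <iostream>
    #include <Geometry/Point.h>
    #include <Geometry/Vector.h>
    #include <Geometry/Rotation.h>
    #include <Geometry/OrthonormalTransformation.h>

    typedef double Scalar;
    typedef Geometry::Point<Scalar, 3> Point;
    typedef Geometry::Vector<Scalar, 3> Vector;
    typedef Geometry::Rotation<Scalar, 3> Rotation;
    typedef Geometry::OrthonormalTransformation<Scalar, 3> ONTransform; // rigid body: translation + rotation

    int main(void) {
        /* Compose a rigid-body transformation from a translation and a rotation about the z axis: */
        ONTransform t = ONTransform::translate(Vector(1, 2, 3))
                        * ONTransform::rotate(Rotation::rotateZ(Scalar(1.5707963))); // ~90 degrees

        /* Transform an affine point; no 4x4 matrices or homogeneous weights involved: */
        Point p(1, 0, 0);
        Point q = t.transform(p);

        /* Treat inversion as a black-box algebraic operation: */
        ONTransform tInv = Geometry::invert(t);
        Point pBack = tInv.transform(q); // recovers p up to round-off

        /* The parts remain accessible for interfacing with other libraries: */
        const Vector& translation = t.getTranslation();
        Vector rotatedX = t.getRotation().transform(Vector(1, 0, 0));

        std::cout << "q = (" << q[0] << ", " << q[1] << ", " << q[2] << ")" << std::endl;
        std::cout << "pBack = (" << pBack[0] << ", " << pBack[1] << ", " << pBack[2] << ")" << std::endl;
        std::cout << "translation = (" << translation[0] << ", " << translation[1] << ", " << translation[2] << ")" << std::endl;
        std::cout << "R*x = (" << rotatedX[0] << ", " << rotatedX[1] << ", " << rotatedX[2] << ")" << std::endl;
        return 0;
    }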
Vrui also builds and runs on Mac OS X, with some caveats: some features, particularly the handling of sound and video, are currently Linux-only; some required non-standard libraries – libpng, libjpeg, libtiff, libusb – need to be installed from source or via a software management system such as Homebrew; and Vrui is generally more thoroughly tested on Linux.
Vrui does not build on Windows. Microsoft's Visual C++, at least the most recent version tested, cannot deal with some of the C++ template constructs used in Vrui. Other C++ compilers, such as Intel's, can handle them, but they are typically quite expensive. Even with a proper C++ compiler, several of Vrui's underlying architectural decisions are deeply rooted in the UNIX paradigm. Porting Vrui would take significant initial effort, and continued effort to maintain a split code base.
Vrui does build and run under UNIX emulation systems such as cygwin, but it will not be particularly useful because no version of cygwin we have tried offers hardware-accelerated 3D graphics. As Vrui applications rely heavily on high-performance graphics, they will barely be usable under cygwin.
The same problem applies if Linux is run inside a virtual machine. No virtual machine hypervisor we have tried offers hardware-accelerated 3D graphics inside the virtual machine, and Vrui applications will barely be usable as a result.
The Vrui run-time environment itself is configured via the Vrui.cfg configuration file. This file defines the complete display environment, including the positions, orientations, and sizes of all screens, all viewers present in an environment, the OpenGL windows used to render 3D views to those screens, the number and types of 3D input devices presented by the low-level device driver, which tools are available, etc.
Concretely, VRDevices.cfg, the configuration file of Vrui's low-level input device driver, is read when that driver is started, whereas Vrui.cfg is read whenever a Vrui application is started. Together, the two configuration files define the complete set-up of the display environment in which Vrui runs, and need to be created and/or adapted carefully based on the details of a concrete environment. In a shared display environment like a CAVE, this task would typically fall to a dedicated system integrator or administrator, whereas in single-user desktop environments configuration is more or less unnecessary, because the default setup covers the common cases.
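For illustration only, a heavily abridged Vrui.cfg for a single-screen desktop has roughly the following structure. The section and setting names shown here are indicative of typical installations and may differ between Vrui versions; the Vrui.cfg file shipped with the package is the authoritative reference:

    section Vrui
        section Desktop
            # Names of the screen, viewer, and window sections making up this environment:
            screenNames (Screen)
            viewerNames (Viewer)
            windowNames (Window)

            section Screen
                # Physical position, orientation, and size of the display surface:
                origin (-8.0, 0.0, -6.0)
                horizontalAxis (1.0, 0.0, 0.0)
                width 16.0
                verticalAxis (0.0, 0.0, 1.0)
                height 12.0
            endsection

            section Viewer
                # The (untracked) desktop viewer and its assumed eye position:
                headTracked false
                monoEyePosition (0.0, -24.0, 0.0)
            endsection

            section Window
                # Ties an OpenGL window to a screen and a viewer:
                windowType Mono
                screenName Screen
                viewerName Viewer
                autoScreenSize true
            endsection
        endsection
    endsection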
After these initial steps, developers are encouraged to run several existing Vrui applications to see how they look and feel, especially in different types of display environments, and then peruse the existing HTML documentation. A good starting point is the "Library Overview" section, which lists the component libraries that make up the Vrui package and, for each library, the header files and classes it contains. The interfaces of all those classes are defined in their respective header files, including detailed comments on all interface methods and functions.
Another important starting point is the Vrui kernel API in include/Vrui/Vrui.h, which declares all core functions applications use to communicate with Vrui. Due to Vrui's microkernel architecture, most of its functionality is not provided by the kernel itself, but by delegate classes such as Vrui::VRWindow, or by external so-called managers such as the input device manager. References to these managers are retrieved through the kernel API, but afterwards applications communicate directly with those managers. The bottom line is that all of Vrui's functionality is accessed through the kernel API, either directly or indirectly via managers or delegate classes.
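The following sketch shows that pattern; the functions named here are declared in Vrui/Vrui.h and the manager headers of recent Vrui versions, and the code must run inside an initialized Vrui application, for example from its constructor or frame() method:

    #include <iostream>
    #include <Vrui/Vrui.h>
    #include <Vrui/InputDeviceManager.h>

    void queryVruiState(void) {
        /* Environment-level queries answered directly by the Vrui kernel: */
        Vrui::Point center = Vrui::getDisplayCenter(); // center of the display environment
        Vrui::Scalar size = Vrui::getDisplaySize();    // approximate radius of the display environment
        Vrui::Vector up = Vrui::getUpDirection();      // physical "up" direction

        /* A reference to a manager, retrieved once through the kernel API; other managers
           (tool manager, widget manager, ...) are retrieved the same way: */
        Vrui::InputDeviceManager* idm = Vrui::getInputDeviceManager();

        /* From here on, the application talks to the manager directly: */
        std::cout << "Display center: (" << center[0] << ", " << center[1] << ", " << center[2] << ")" << std::endl;
        std::cout << "Display size: " << size << ", up: (" << up[0] << ", " << up[1] << ", " << up[2] << ")" << std::endl;
        std::cout << "Number of input devices: " << idm->getNumInputDevices() << std::endl;
    }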
While the low-level documentation of classes and interfaces provided by the source code comments is reportedly rather good, it does not say in which scenarios a developer might use a certain class. This will hopefully be addressed by higher-level development guides in the near future. In the meantime, another important resource is the "Development Rules" document, which lists common pitfalls stemming from Vrui's difference in philosophy from other VR or 3D graphics toolkits, and from its focus on portability and usability. For example, the way Vrui applications are supposed to handle user input is the feature that most distinguishes Vrui from other toolkits.
To use a concrete example, assume an application that wants to do something when the user presses a specific key. In glut et al., this would be handled by registering a keyboard event callback which, when called, checks the identity of the just-pressed key and invokes the proper application behavior if the desired key was pressed. In Vrui, application behaviors are implemented as tools. If an application has some behavior X that is to be invoked when some key is pressed, the developer creates a corresponding tool class X and hands it to Vrui's tool manager during start-up. A user can then dynamically bind a tool object of class X to any button she desires; when she presses that button, the bound tool object is invoked and can in turn trigger behavior X.
While this approach sounds more complicated, it has important benefits. For one, it works in non-desktop environments. Even if there is no keyboard, there will still be some input device with some way to signal that the user wants to initiate an event (an actual button, a gesture, a voice command, etc.), and Vrui's dynamic tool binding mechanism allows the user to bind a tool of class X to any such event source, without the application developer having to write any support for it. Even on the desktop, this allows users to map application behaviors to any keys they desire – in other words, the keyboard remapping functionality common in PC games comes for free in Vrui.
Additionally, in actual code, setting up a tool is no more complicated than writing a callback. For simple cases like the one above, where a key press invokes some behavior, Vrui offers a convenience shortcut that creates a tool class, passes it to Vrui's tool manager, handles dynamic binding, and invokes an arbitrary application callback when an event happens, in a single line of code (see VruiEventToolDemo.cpp in the ExamplePrograms subdirectory for several concrete examples). Only in more complex cases, such as when behaviors require their own internal states, will a developer have to implement an actual tool class, which would be essentially the same amount of work as in other toolkits.
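A hedged sketch of that shortcut, loosely following VruiEventToolDemo.cpp; the names addEventTool and eventCallback and their exact signatures are assumptions based on recent Vrui versions, so consult the demo source for the authoritative form:

    #include <iostream>
    #include <Vrui/InputDevice.h>
    #include <Vrui/Vrui.h>
    #include <Vrui/Application.h>

    class EventDemo: public Vrui::Application {
    public:
        EventDemo(int& argc, char**& argv)
            : Vrui::Application(argc, argv) {
            /* Register two abstract behaviors; users bind them to buttons, gestures, etc. at run time: */
            addEventTool("Say Hello", 0, 0);   // behavior with event ID 0
            addEventTool("Say Goodbye", 0, 1); // behavior with event ID 1
        }

        /* Called whenever a bound event source (button, gesture, ...) changes state: */
        virtual void eventCallback(EventID eventId, Vrui::InputDevice::ButtonCallbackData* cbData) {
            if (cbData->newButtonState) { // react to "press" events only
                if (eventId == 0)
                    std::cout << "Hello!" << std::endl;
                else
                    std::cout << "Goodbye!" << std::endl;
            }
        }
    };

    VRUI_APPLICATION_RUN(EventDemo)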
Another common example is an application needing to know how big to make a displayed element, such as a text label, or how big to make the influence zone around an interaction event, based on display resolution. In Vrui, these parameters are configured by the system integrator/administrator appropriately for a concrete environment, and can be queried directly via the Vrui kernel API.
In other words, Vrui applications should always render in 3D application space in any unit of measurement they want, advertise that unit to Vrui's coordinate manager, and let Vrui take care of the rest. Any questions about appropriate display sizes, interaction fuzz values, optimal font sizes, etc. should be answered by querying the Vrui kernel API whenever needed.
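For example, an application that works in meters might, in its constructor, do something along these lines; the kernel functions shown exist in recent Vrui versions, while the LinearUnit constructor arguments are an assumption based on typical usage:

    #include <Geometry/LinearUnit.h>
    #include <Vrui/Vrui.h>
    #include <Vrui/CoordinateManager.h>

    void setupUnitsAndQuerySizes(void) {
        /* Advertise that application space is measured in meters: */
        Vrui::getCoordinateManager()->setUnit(Geometry::LinearUnit(Geometry::LinearUnit::METER, 1));

        /* Ask Vrui for environment-appropriate sizes instead of hard-coding them: */
        Vrui::Scalar uiSize = Vrui::getUiSize();              // suggested size for UI elements such as labels
        Vrui::Scalar pickFuzz = Vrui::getPointPickDistance(); // suggested tolerance for point picking

        /* Use uiSize and pickFuzz when creating labels, handles, selection zones, etc. */
    }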
In the first scenario, where all displays have comparable resolutions but X reports an incorrect display size, you will have to disable automatic screen size detection (by setting autoScreenSize to false in your window section) and enter the correct screen size in your screen section. Note that in single-desktop setups, the screen size is the combined size of all screens, not that of a single screen.
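In Vrui.cfg terms, the relevant settings would look roughly like the following; the numbers are placeholders for your measured screen size, and all other settings are omitted:

    section Window
        # Do not trust the display size reported by X:
        autoScreenSize false
        # ... other window settings unchanged ...
    endsection

    section Screen
        # Measured physical size of the combined desktop area, in environment units (e.g. inches):
        width 37.6
        height 11.8
        # ... other screen settings unchanged ...
    endsection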
This will not work if the screens' resolutions differ. In that case, getting Vrui to display correctly takes some effort – after all, it has to deal with a single (virtual) display whose resolution changes partway across. The only way to handle this is to use multiple separate windows, one on each physical display, with a separate screen section, specifying that screen's physical size, for each window. This results in correct display, but it also means that windows cannot be moved between screens. In other words, when multiple display resolutions are involved, it is best not to use a single-desktop setup, but to use multiple X screens instead.
As far as application programmers are concerned, the details of managing multiple windows, sharing or replicating OpenGL contexts, and parallel rendering are completely hidden by the Vrui API. See the GLContextData document for a detailed explanation of the underlying abstraction mechanism.
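In outline, the pattern looks like this (a sketch of the GLObject/GLContextData idiom used throughout Vrui's example programs; see the GLContextData document for the definitive description): per-context OpenGL state lives in a DataItem structure, is created in initContext(), and is retrieved again in display().

    #include <GL/gl.h>
    #include <GL/GLObject.h>
    #include <GL/GLContextData.h>
    #include <Vrui/Application.h>

    class MyApp: public Vrui::Application, public GLObject {
    private:
        /* Per-context state; one copy exists per OpenGL context (window, cluster node, ...): */
        struct DataItem: public GLObject::DataItem {
            GLuint displayListId; // example: a display list holding the application's geometry
            DataItem(void) {
                displayListId = glGenLists(1);
            }
            virtual ~DataItem(void) {
                glDeleteLists(displayListId, 1);
            }
        };

    public:
        MyApp(int& argc, char**& argv)
            : Vrui::Application(argc, argv) {
        }

        /* Called once for each OpenGL context; the only place to create per-context state: */
        virtual void initContext(GLContextData& contextData) const {
            DataItem* dataItem = new DataItem;
            contextData.addDataItem(this, dataItem);

            /* Upload geometry into this context: */
            glNewList(dataItem->displayListId, GL_COMPILE);
            /* ... emit OpenGL primitives here ... */
            glEndList();
        }

        /* Called once per frame for each context; retrieves this context's copy of the state: */
        virtual void display(GLContextData& contextData) const {
            DataItem* dataItem = contextData.retrieveDataItem<DataItem>(this);
            glCallList(dataItem->displayListId);
        }
    };

    VRUI_APPLICATION_RUN(MyApp)

The application never needs to know how many OpenGL contexts exist; Vrui creates and retrieves one DataItem per context behind the scenes, which is what lets the same code run unchanged in single-window, multi-window, and clustered setups.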