Welcome to the Virtual Reality & Immersive Visualization Group
at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.


Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 m² visualization chamber, makes it possible to interactively explore virtual worlds and is open for use by any RWTH Aachen research group.


Andrea Bönsch gave an invited talk via Zoom at the 3rd Workshop on "Person-to-Person Interaction: From Analysis to Applications" in Rennes, France.

June 25, 2020

New Projects Online

If you are interested in our work, please take a look at our updated research and service projects. For example, learn more about the new EU project BugWright or the immersive visualization of cytoskeletons.

June 20, 2020

Sascha Gebhardt receives doctoral degree from RWTH Aachen University

Today, our colleague Sascha Gebhardt successfully passed his Ph.D. defense and received a doctoral degree from RWTH Aachen University for his thesis on "Visual Analysis of Multi-dimensional Metamodels for Manufacturing Processes". Congratulations!

June 9, 2020

Daniel Zielasko receives doctoral degree from RWTH Aachen University

Today, our colleague Daniel Zielasko successfully passed his Ph.D. defense and received a doctoral degree from RWTH Aachen University for his thesis on "DeskVR: Seamless Integration of Virtual Reality into Desk-based Data Analysis Workflows". Congratulations!

Feb. 21, 2020

If you are interested in a student worker position dealing with context menus in immersive virtual environments, click here.

Oct. 11, 2019

M.Sc. Networked Production Engineering

Networking is a tool. Thus, the highly interdisciplinary Master's program "Networked Production Engineering" (NPE) enables students to obtain technology-related qualifications for our increasingly networked world of work. Three specializations are offered: Additive Manufacturing, Smart Factory, and E-Mobility. Virtual Reality, as a smart technology that takes production to the networked level, is naturally one of many important aspects here.

Interested in finding out more? Watch NPE's image film here and see our aixCAVE at 1:09.

Oct. 11, 2019

Recent Publications

High-Fidelity Point-Based Rendering of Large-Scale 3D Scan Datasets

IEEE Computer Graphics and Applications

Digitalization of 3D objects and scenes using modern depth sensors and high-resolution RGB cameras enables the preservation of human cultural artifacts at an unprecedented level of detail. Interactive visualization of these large datasets, however, is challenging without degradation in visual fidelity. A common solution is to fit the dataset into available video memory by downsampling and compression. The achievable reproduction accuracy is thereby limited for interactive scenarios, such as immersive exploration in Virtual Reality (VR). This degradation in visual realism ultimately hinders the effective communication of human cultural knowledge. This article presents a method to render 3D scan datasets with minimal loss of visual fidelity. A point-based rendering approach visualizes scan data as a dense splat cloud. For improved surface approximation of thin and sparsely sampled objects, we propose oriented 3D ellipsoids as rendering primitives. To render massive texture datasets, we present a virtual texturing system that dynamically loads required image data. It is paired with a single-pass page prediction method that minimizes visible texturing artifacts. Our system renders a challenging dataset on the order of 70 million points and a texture size of 1.2 terabytes consistently at 90 frames per second in stereoscopic VR.
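A back-of-the-envelope calculation (our own illustration, not from the article; the 32-byte per-point layout is an assumption) shows why the texture data, unlike the point geometry, cannot simply reside in video memory and must be streamed by a virtual texturing system:

```python
def splat_cloud_bytes(num_points: int, bytes_per_point: int = 32) -> int:
    """Rough GPU memory footprint of a splat cloud, assuming a
    hypothetical 32-byte layout: position (12 B), normal (12 B),
    color (4 B), splat radius (4 B)."""
    return num_points * bytes_per_point

geometry = splat_cloud_bytes(70_000_000)   # 2.24e9 bytes, ~2.24 GB
textures = int(1.2e12)                     # 1.2 TB of texture data

# Geometry fits in a modern GPU's memory; the textures exceed it by
# roughly three orders of magnitude, hence on-demand page streaming.
print(geometry, textures // geometry)
```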

A Three-Level Approach to Texture Mapping and Synthesis on 3D Surfaces

Proceedings of the ACM on Computer Graphics and Interactive Techniques, Vol. 3, No. 1, 2020

We present a method for example-based texturing of triangular 3D meshes. Our algorithm maps a small 2D texture sample onto objects of arbitrary size in a seamless fashion, with no visible repetitions and low overall distortion. It requires minimal user interaction and can be applied to complex, multi-layered input materials that are not required to be tileable. Our framework integrates a patch-based approach with per-pixel compositing. To minimize visual artifacts, we run a three-level optimization that starts with a rigid alignment of texture patches (macro scale), then continues with non-rigid adjustments (meso scale) and finally performs pixel-level texture blending (micro scale). We demonstrate that the relevance of the three levels depends on the texture content and type (stochastic, structured, or anisotropic textures).
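To give a flavor of the micro-scale stage, the sketch below shows a generic per-pixel linear blend across a patch seam. This is purely illustrative and not the paper's actual compositing scheme, which operates within the full three-level optimization:

```python
def blend(px_a, px_b, w):
    """Per-pixel linear blend of two overlapping texture patches,
    with weight w given to patch A (illustrative only)."""
    return tuple(round(w * a + (1 - w) * b) for a, b in zip(px_a, px_b))

# Equal-weight blend of two RGB pixels at a patch seam:
print(blend((200, 100, 50), (100, 200, 150), 0.5))  # → (150, 150, 100)
```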

Calibratio - A Small, Low-Cost, Fully Automated Motion-to-Photon Measurement Device

10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), 2020

Since the beginning of the design and implementation of virtual environments, these systems have been built to give users the best possible experience. One factor shown to be detrimental to the user experience is high end-to-end latency, measured here as the system's motion-to-photon latency. Thus, a lot of past research has focused on measuring and minimizing this latency in virtual environments. Most existing measurement techniques require either expensive measurement hardware like an oscilloscope, mechanical components like a pendulum, or manual evaluation of samples. This paper proposes a concept for an easy-to-build, low-cost device consisting of a microcontroller, a servo motor, and a photodiode to measure the motion-to-photon latency in virtual reality environments fully automatically. It is placed on or attached to the system, calibrates itself, and is controlled/monitored via a web interface. While the general concept is applicable to a variety of VR technologies, this paper focuses on the context of CAVE-like systems.
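The core measurement principle can be sketched in a few lines: trigger a tracked motion, wait until the photodiode registers the corresponding brightness change on screen, and report the elapsed time. The hardware callbacks (`trigger_servo`, `read_photodiode`) are hypothetical placeholders, not the device's actual firmware API:

```python
import time

def motion_to_photon_ms(motion_ts: float, photon_ts: float) -> float:
    """Latency between the physical motion and the detected display
    update, in milliseconds."""
    return (photon_ts - motion_ts) * 1000.0

def measure(trigger_servo, read_photodiode, threshold=0.5, timeout=1.0):
    """One measurement cycle (sketch): start the motion, then poll the
    photodiode until the screen brightness crosses the threshold."""
    t0 = time.monotonic()
    trigger_servo()                        # servo moves the tracked target
    while time.monotonic() - t0 < timeout:
        if read_photodiode() > threshold:  # rendered image has updated
            return motion_to_photon_ms(t0, time.monotonic())
    return None                            # no update detected in time
```

In the actual device, this loop runs on the microcontroller and the results are exposed through its web interface.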
