Welcome to the Virtual Reality & Immersive Visualization Group
at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.

To this end, we are members of or associated with the following institutes and facilities:

Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 sqm visualization chamber, makes it possible to interactively explore virtual worlds and is open for use by any RWTH Aachen research group.


If you are interested in a student worker position dealing with context menus in immersive virtual environments, click here.

Oct. 11, 2019

M.Sc. Networked Production Engineering

Networking is a tool. Thus, the highly interdisciplinary Master's program "Networked Production Engineering" (NPE) enables students to obtain technology-related qualifications for our increasingly networked world of work. Three specializations are offered: Additive Manufacturing, Smart Factory, and E-Mobility. Virtual Reality, as a smart technology that takes production to the networked level, is one of the many important aspects here.

Interested in finding out more? Watch NPE's image film here and see our aixCAVE at 1:09.

Oct. 11, 2019

Sebastian Pick receives doctoral degree from RWTH Aachen University

Today, our colleague Sebastian Pick successfully passed his Ph.D. defense and received a doctoral degree from RWTH Aachen University for his thesis on "Interactive Data Annotation for Virtual Reality Applications". Congratulations!

July 15, 2019

Successful Presentations at ISC'19

At this year's International Supercomputing Conference (ISC) exhibition in Frankfurt, scientists of JARA-HPC presented an application based on Unreal Engine for visualizing a direct numerical simulation of early flame kernel development. In the interactive exhibit, visitors of the booth were provided with stereo glasses and flysticks to interactively explore the combustion simulation projected onto the display wall. The application attracted interest from a wide range of people, from youths interested in the use of game engines for scientific visualization to domain scientists asking about the specifics of the direct numerical simulation on display. The responses were overall positive and included valuable feedback for further development. A recurring theme was the extension of the immersive visualization environment to arbitrary datasets.

June 20, 2019

Astronauts Cassidy and Hansen Visiting Virtual AMS-02

The astronauts Christopher J. Cassidy (US) and Jeremy R. Hansen (Canada) are preparing for one of the most demanding missions in space: replacing the cooling system of the Alpha Magnetic Spectrometer (AMS-02) on the International Space Station (ISS) in at least five intensive space operations.
As the new cooling system was developed by RWTH in close collaboration with the Massachusetts Institute of Technology (MIT), NASA, and other partners, the astronauts visited Aachen to become familiar with the cooling system's technology. During their stay, they also visited our aixCAVE to experience a full-scale, virtual AMS-02, a demo prepared by Prof. Roßmann's Institute for Man-Machine Interaction.
News coverage of this spectacular visit can be found, e.g., here (University news: visit announcement), here (University news: visit retrospective), and here (video by Aachener Nachrichten, in German).

May 21, 2019

The Innovation Factory on RWTH Aachen Campus ...

... is ready for take-off and introduces itself to the public with a short video clip on YouTube. Of course, VR is involved here as well, as one digital tool used in the development stage to gain initial insights.

Jan. 18, 2019

Recent Publications

High-Fidelity Point-Based Rendering of Large-Scale 3D Scan Datasets

IEEE Computer Graphics and Applications

Digitalization of 3D objects and scenes using modern depth sensors and high-resolution RGB cameras enables the preservation of human cultural artifacts at an unprecedented level of detail. Interactive visualization of these large datasets, however, is challenging without degradation in visual fidelity. A common solution is to fit the dataset into available video memory by downsampling and compression. The achievable reproduction accuracy is thereby limited for interactive scenarios, such as immersive exploration in Virtual Reality (VR). This degradation in visual realism ultimately hinders the effective communication of human cultural knowledge. This article presents a method to render 3D scan datasets with minimal loss of visual fidelity. A point-based rendering approach visualizes scan data as a dense splat cloud. For improved surface approximation of thin and sparsely sampled objects, we propose oriented 3D ellipsoids as rendering primitives. To render massive texture datasets, we present a virtual texturing system that dynamically loads required image data. It is paired with a single-pass page prediction method that minimizes visible texturing artifacts. Our system renders a challenging dataset on the order of 70 million points and a texture size of 1.2 terabytes consistently at 90 frames per second in stereoscopic VR.
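The virtual texturing system described in the abstract keeps only the currently needed texture pages resident in video memory and streams the rest on demand. The paper's single-pass page prediction is more involved, but the core residency-management idea can be sketched as a small least-recently-used page cache. This is an illustrative sketch only; all names and the eviction policy are assumptions, not taken from the paper:

```python
from collections import OrderedDict

class VirtualTexturePageCache:
    """Minimal LRU residency cache for texture pages.

    Pages are keyed by (mip_level, page_x, page_y). Pages not resident
    in the cache are fetched via a user-supplied load callback, evicting
    the least recently used page once capacity is reached.
    """

    def __init__(self, capacity, load_page):
        self.capacity = capacity
        self.load_page = load_page          # callback: fetch page data
        self.resident = OrderedDict()       # key -> page data, LRU order

    def request(self, key):
        if key in self.resident:
            # Cache hit: mark the page as most recently used.
            self.resident.move_to_end(key)
            return self.resident[key]
        if len(self.resident) >= self.capacity:
            # Cache full: evict the least recently used page.
            self.resident.popitem(last=False)
        data = self.load_page(key)
        self.resident[key] = data
        return data
```

In a real system the load callback would decode image tiles from disk and upload them to a GPU texture atlas; here it is just a placeholder, e.g. `VirtualTexturePageCache(2, lambda k: f"page{k}")`.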

Towards a Graphical User Interface for Exploring and Fine-Tuning Crowd Simulations

To be presented at: IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2020

Simulating a realistic navigation of virtual pedestrians through virtual environments is a recurring subject of investigations. The various mathematical approaches used to compute the pedestrians' paths result, among other things, in different computation times and varying path characteristics. Customizable parameters, e.g., maximal walking speed or minimal interpersonal distance, add another level of complexity. Thus, choosing the best-fitting approach for a given environment and use-case is non-trivial, especially for novice users. To facilitate the informed choice of a specific algorithm with a certain parameter set, crowd simulation frameworks such as Menge provide an extendable collection of approaches with a unified interface for usage. However, they often lack an elaborate visualization with high informative value, accompanied by visual analysis methods to explore the complete simulation data in more detail, which is, however, required for an informed choice. Benchmarking suites such as SteerBench are a helpful approach as they objectively analyze crowd simulations; however, they are tailored too closely to specific behavioral details. To this end, we propose a preliminary design of an advanced graphical user interface providing a 2D and 3D visualization of the crowd simulation data as well as features for time navigation and an overall data exploration.

Joint Dual-Tasking in VR: Outlining the Behavioral Design of Interactive Human Companions Who Walk and Talk with a User

To be presented at: IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2020

To resemble realistic and lively places, virtual environments are increasingly often enriched by virtual populations consisting of computer-controlled, human-like virtual agents. While the applications often provide limited user-agent interaction based on, e.g., collision avoidance or mutual gaze, complex user-agent dynamics such as joint locomotion combined with a secondary task, e.g., conversing, have rarely been considered so far. These dual-tasking situations, however, are beneficial for various use-cases: guided tours and social simulations will become more realistic and engaging if a user is able to traverse a scene as a member of a social group, while platforms to study crowd and walking behavior will become more powerful and informative. To this end, this presentation deals with the different areas of interaction dynamics that need to be combined for modeling dual-tasking with virtual agents. The areas covered are kinematic parameters for navigation behavior, group shapes in static and mobile situations, as well as verbal and non-verbal behavior for conversations.
