
Research Areas


While selected projects of our past research can be found here, this page presents some examples of our current research projects. If you are interested in one of the topics, please contact the indicated person for detailed information, and check out our publications page, where you can find papers and additional download material.


Our work is characterized by basic as well as application-oriented research, conducted in collaboration with other RWTH institutes from multiple faculties, Forschungszentrum Jülich, industrial companies, and other research groups from around the world, largely in third-party-funded interdisciplinary joint projects.

Virtual Reality (VR) has proven its potential to provide an innovative human-computer interface, from which multiple application areas can profit. The VR application fields we are working on comprise architecture, mechanical engineering, medicine, life science, psychology, and more.

Specifically, we perform research in multimodal 3D interaction technology, immersive 3D visualization of complex simulation data, parallel visualization algorithms, and virtual product development. In combination with our service portfolio, we are thus able to inject VR technology and methodology as a powerful tool into scientific and industrial workflows and to create synergy effects by promoting collaborations between institutes with similar areas of VR applications.

If you are interested in a cooperation or in using our VR infrastructure, please contact us via e-mail: info@vr.rwth-aachen.de


Examples of Past Research Projects

Selected projects of our past research can be found here.

Examples of Current Research Projects



The Cluster of Excellence: Internet of Production - Immersive Visualization of Artificial Neural Networks (ANNs)

funding German Research Foundation (DFG)
contact person Martin Bellgardt

Within the Cluster of Excellence: Internet of Production, a major point of focus for us is ANNs. They have sparked great interest in nearly all scientific disciplines in recent years, as they achieve unprecedented accuracy in almost any modeling task they are applied to. Unfortunately, their main downside is the so-called "black box problem": it is not yet possible to extract the knowledge learned by an ANN in order to gain insights into the underlying processes.

We approach this problem through immersive visualization, by drawing the network as a node-link diagram in three dimensions. By investigating techniques to lay out, filter, and interact with ANNs and their data in immersive environments, we hope to achieve interactive visual representations that allow for meaningful insights into their inner workings. Potential benefits are a better understanding of ANNs and their hyperparameters, better explainability of the reasons for their decisions, or even insights into the physical processes underlying the tasks they solve.

ANN as a node-link diagram in three dimensions
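
To give a feel for the scale of the problem, the following Python sketch computes a basic layered 3D layout for a fully connected network: layers are spaced along the x-axis and each layer's neurons are arranged on a grid in the y-z plane. It is a simplified illustration, not our actual implementation, and all names are hypothetical.

    # Hypothetical sketch: layered 3D node-link layout for a fully
    # connected ANN; a simplified illustration only.
    import math

    def layout_ann(layer_sizes, layer_spacing=2.0, neuron_spacing=0.5):
        """Return node positions and edges for a 3D node-link diagram."""
        positions = []  # one list of (x, y, z) tuples per layer
        for i, size in enumerate(layer_sizes):
            side = math.ceil(math.sqrt(size))  # grid side length in the y-z plane
            layer = []
            for n in range(size):
                y = (n % side - side / 2) * neuron_spacing
                z = (n // side - side / 2) * neuron_spacing
                layer.append((i * layer_spacing, y, z))
            positions.append(layer)
        # Fully connected edges between consecutive layers.
        edges = [(i, a, b)
                 for i in range(len(layer_sizes) - 1)
                 for a in range(layer_sizes[i])
                 for b in range(layer_sizes[i + 1])]
        return positions, edges

    nodes, edges = layout_ann([784, 128, 10])
    print(len(edges))  # 101632 edges even for this small network

Even a small network like this yields over 100,000 edges, which is exactly why layout, filtering, and interaction techniques are central to making such visualizations usable.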



The Human Brain Project - Interactive Visualization, Analysis, and Control

funding EU project
contact person Simon Oehrl

The Human Brain Project (HBP) started in October 2013 and is currently in its third and final phase under Specific Grant Agreement 3 (SGA3). The long-term goal of the project is to create infrastructure and tools to support neuroscientific research on the human brain. This includes the development of neuronal network simulators and their deployment in compute centers around Europe. These compute resources enable exascale simulations of complex biological networks.

The Virtual Reality Group at RWTH Aachen University works together with the University of Trier, the Universidad Rey Juan Carlos, and the Universidad Politécnica de Madrid on the visualization of such simulations. The Virtual Reality Group in particular works on a pipeline that extracts live data out of running simulations to support in-transit visualization by connecting to established visualization frameworks. Ongoing work on integrating steering capabilities into the pipeline will enable users to alter running simulations up to the exascale level, creating new and improved workflows for neuroscientists.

The official HBP logo (left) and a sketch of Insite, our generalized pipeline for in-situ visualization of neural network simulations (right).
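
A rough illustration of the consumer side of such a pipeline is the following Python sketch, which polls a running simulation for newly produced spike data over HTTP. The endpoint URL and the JSON layout are assumptions made for illustration, not Insite's actual API.

    # Hypothetical sketch of the consumer side of an in-transit pipeline:
    # periodically poll a running simulation for new spike data.
    # Endpoint and data layout are assumptions, not Insite's actual API.
    import time
    import requests

    SIMULATION_URL = "http://localhost:8080/spikes"  # hypothetical endpoint
    last_time = 0.0

    while True:
        # Only request data produced since the last poll.
        response = requests.get(SIMULATION_URL, params={"from_time": last_time})
        response.raise_for_status()
        spikes = response.json()  # assumed: list of {"time": ..., "neuron_id": ...}
        for spike in spikes:
            last_time = max(last_time, spike["time"])
            # Hand the spike over to the visualization frontend here.
        time.sleep(0.1)  # poll at roughly 10 Hz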



Parallel Particle Advection and Finite-Time Lyapunov Field Extraction on Brain Data

funding internal
contact person Ali C. Demiralp

3D-Polarized Light Imaging (3D-PLI) is a recent neuroimaging technique enabling the study of structural connectivity of human and animal brains at unprecedented resolutions, reaching up to 1.3 micrometers within the sectioning plane. This method is applied to serial microtome sections of entire brains, and exploits the birefringent properties of the myelin sheaths surrounding axons.

Our contribution is a distributed, data-parallel particle advection pipeline to conduct large-scale streamline tractography analyses on 3D-PLI vector fields. This enables the visualization of long-range fiber tracts within the brain at previously unavailable resolutions. We further support the computation of attracting and repelling Finite-Time Lyapunov Exponent (FTLE) fields in brain data, which yield more stable results than traditional topological approaches, revealing insights into the behavior of fiber pathways and the physical structure.

Streamlines extracted from a partial human dataset.
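
The following Python sketch shows the numerical core of such a pipeline under simplifying assumptions (a nearest-neighbor field lookup and a random stand-in field): a particle is traced through the vector field with fourth-order Runge-Kutta integration.

    # Sketch of particle advection for streamline tracing; simplifying
    # assumptions: nearest-neighbor sampling, random stand-in data.
    import numpy as np

    def sample(field, p):
        """Nearest-neighbor lookup in a (X, Y, Z, 3) vector field."""
        i = np.clip(np.round(p).astype(int), 0, np.array(field.shape[:3]) - 1)
        return field[i[0], i[1], i[2]]

    def advect(field, seed, step=0.5, n_steps=1000):
        """Integrate one streamline from a seed point using RK4."""
        p = np.asarray(seed, dtype=float)
        line = [p.copy()]
        for _ in range(n_steps):
            k1 = sample(field, p)
            k2 = sample(field, p + 0.5 * step * k1)
            k3 = sample(field, p + 0.5 * step * k2)
            k4 = sample(field, p + step * k3)
            p += step / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            line.append(p.copy())
        return np.array(line)

    field = np.random.rand(64, 64, 64, 3) - 0.5  # stand-in for a 3D-PLI vector field
    streamline = advect(field, seed=(32.0, 32.0, 32.0))

A production pipeline would additionally use trilinear interpolation, proper termination criteria, and distribute blocks of the field across processes.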



Dynamic Load Balancing for Parallel Particle Advection

funding internal
contact person Ali C. Demiralp

Particle advection is the standard approach for extracting integral curves from vector fields and also forms the basis for advanced feature extraction techniques such as Lagrangian Coherent Structures. Yet, efficient parallelization of particle advection is a challenging task due to the problem of load imbalance, in which processes are assigned unequal workloads, causing some of them to idle while others are still computing.

Our contributions are two distributed load balancing methods for particle advection, based on the general family of diffusive load balancing. We enable each process to access the blocks of its neighboring processes, which are then utilized to dynamically share particles based on a metric defined by the local workload of the neighborhood. The approaches are assessed in terms of performance and are shown to perform particularly well in settings where seed points are localized, which is common in exploratory visualization scenarios.
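
The following Python sketch illustrates the diffusive principle in its simplest serial form; the actual methods operate on distributed blocks and exchange real particles, so the update rule below is only a stand-in.

    # Sketch of diffusive load balancing on a ring of processes: each
    # process repeatedly relaxes its workload toward that of its
    # neighbors. Serial stand-in for the distributed method.
    import numpy as np

    def diffuse(loads, alpha=0.25, iterations=50):
        """One-dimensional diffusive balancing of per-process workloads."""
        loads = np.asarray(loads, dtype=float)
        for _ in range(iterations):
            left = np.roll(loads, 1)    # workload of the left neighbor
            right = np.roll(loads, -1)  # workload of the right neighbor
            # Give or take a fraction of the difference to each neighbor.
            loads += alpha * (left - loads) + alpha * (right - loads)
        return loads

    # Localized seeding: almost all particles start on two processes.
    initial = [0, 0, 9000, 0, 0, 0, 1000, 0]
    print(np.round(diffuse(initial)))  # converges toward 1250 per process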


Interactive Heart Physiology in Virtual Reality

funding ETS Grants für Lehr- und Lern-Innovationen, RWTH Aachen University
contact person Lukas Schröder

A challenge in biology education is teaching cardiac physiology in a practical way. Until now, students have been taught this topic in a lab course with the help of crayfish, which were dissected so that the students could perform experiments on the still-active heart.

In the experiments, the heart was injected with two substances that alter the heart's contraction amplitude and frequency. The substances typically used here are adrenaline and acetylcholine, which have opposite effects. On the open heart, the students then measured the heart's response to the applied substance concentrations. Such animal experiments are becoming less common due to rejection by students, newer guidelines, the amount of paperwork required, and the ethical issues involved.

As an alternative, we are developing a virtual reality application in cooperation with the Chemosensation Laboratory of RWTH Aachen University to mitigate some of these challenges. In the future, this application shall replace the live crayfish and allow easier execution of the experiment. For this purpose, multiple Oculus Quest devices, i.e., standalone VR headsets, are used as the hardware platform. The focus is on an immersive experience with the best possible simulated test results.

In the developed application, the user can execute the different steps of the real experiment on a virtual heart. For this, the virtual heart exists in two instances, one of which shows a cross-section to let the students see the inside of the heart. Since the experiment in its original form was designed to show the effects of the different substances on a crayfish heart, the simulation uses a human heart instead. The virtually measured responses of the heart can be recorded and exported to allow for the usual written report on the experiment.

Overview of the VR-based application (left) and a close-up of the open human heart (right).
The 3D model of the heart is taken from here.
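
As an illustration of how such simulated responses could be modeled, the following Python sketch computes a heart rate from a baseline modulated by Hill-type dose-response curves for the two substances. All constants are illustrative assumptions, not the model used in the application.

    # Hypothetical dose-response model: adrenaline raises and
    # acetylcholine lowers the simulated heart rate. Constants are
    # illustrative assumptions only.

    def hill(concentration, ec50, n=1.5):
        """Fraction of the maximal effect at a given concentration."""
        return concentration**n / (ec50**n + concentration**n)

    def heart_rate(adrenaline, acetylcholine,
                   baseline=60.0, max_increase=80.0, max_decrease=40.0):
        """Simulated beats per minute for given concentrations (mol/l)."""
        return (baseline
                + max_increase * hill(adrenaline, ec50=1e-6)
                - max_decrease * hill(acetylcholine, ec50=1e-6))

    print(heart_rate(adrenaline=1e-6, acetylcholine=0.0))  # 100.0 bpm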



Artistic Scene Authoring in VR

funding internal
contact person Sevinc Eroglu

Artists' demand for VR as an art medium is growing. Artistic creation while being fully immersed in the virtual art piece is an exciting new possibility that opens new research opportunities with challenging user interface design questions. Painting and sculpting applications that resemble common 3D modeling tools are emerging, enabling artists to create their artwork directly in VR. However, to create new 3D compositions in VR out of existing 2D artworks, artists are bound to existing desktop-based content creation tools, which require manual work and sufficient knowledge of expert modeling environments.

To close this gap, we created "Rilievo", a virtual authoring environment that enables the effortless conversion of 2D images into volumetric 3D objects. Artistic elements in the input material are extracted with a convenient VR-based segmentation tool. Relief sculpting is then performed by interactively mixing different height maps. These are automatically generated from the input image structure and appearance. A prototype of the tool is showcased in an analog-virtual artistic workflow in collaboration with a traditional painter. It combines the expressiveness of analog painting and sculpting with the creative freedom of spatial arrangement in VR.
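
The following Python sketch illustrates the underlying idea of mixing automatically generated height maps, assuming a grayscale luminance image as input. The filters and weights are illustrative and not Rilievo's actual algorithm.

    # Sketch: derive one height map from the image's appearance
    # (smoothed luminance) and one from its structure (edge magnitude),
    # then blend them with an interactively chosen weight.
    import numpy as np
    from scipy import ndimage

    def height_maps(image):
        """image: 2D float array of luminance values in [0, 1]."""
        appearance = ndimage.gaussian_filter(image, sigma=2.0)
        structure = ndimage.gaussian_gradient_magnitude(image, sigma=2.0)
        structure /= structure.max() + 1e-9  # normalize the edge response
        return appearance, structure

    def mix(appearance, structure, w=0.5, depth=1.0):
        """Blend the two height maps; w is chosen interactively in VR."""
        return depth * (w * appearance + (1.0 - w) * structure)

    image = np.random.rand(256, 256)  # stand-in for a segmented 2D artwork
    relief = mix(*height_maps(image), w=0.7)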



Immersive Sketching in VR

funding internal
contact person Sevinc Eroglu

Our research investigates the potential of VR as a medium of artistic expression that enables artists to create their work from a new perspective and with new expressive possibilities. To this end, we created "Fluid Sketching", an immersive 3D drawing environment inspired by traditional marbling art. It allows artists to draw 3D fluid-like sketches and manipulate them via six-degrees-of-freedom input devices. Different brush stroke settings are available, varying the characteristics of the fluid. Owing to the fluid's nature, the diffusion of the drawn sketch is animated, and artists have control over altering the fluid properties and stopping the diffusion process whenever they are satisfied with the current result. Furthermore, they can shape the drawn sketch by directly interacting with it, either with their hands or by blowing into the fluid. We rely on particle advection via curl noise as a fast procedural method for animating the fluid flow.
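
The following Python sketch shows the basic principle of curl-noise advection in 2D, with an analytic potential standing in for an actual noise function. Because the velocity field is the perpendicular gradient of a smooth scalar potential, the resulting flow is divergence-free and therefore looks fluid-like without a full fluid simulation.

    # Sketch of 2D curl-noise advection. The analytic potential is a
    # stand-in; a real implementation would use Perlin/simplex noise.
    import numpy as np

    def potential(x, y, t):
        """Smooth, time-varying scalar potential."""
        return np.sin(1.3 * x + 0.7 * t) * np.cos(1.7 * y - 0.4 * t)

    def curl_velocity(x, y, t, eps=1e-4):
        """2D curl of the potential, (dP/dy, -dP/dx), via central differences."""
        dpdy = (potential(x, y + eps, t) - potential(x, y - eps, t)) / (2 * eps)
        dpdx = (potential(x + eps, y, t) - potential(x - eps, y, t)) / (2 * eps)
        return np.stack([dpdy, -dpdx], axis=-1)

    # Advect the particles of a brush stroke through the flow.
    particles = np.random.rand(1000, 2) * 4.0
    dt = 0.01
    for step in range(100):
        v = curl_velocity(particles[:, 0], particles[:, 1], step * dt)
        particles += dt * v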



Advanced Rendering Techniques for Virtual Reality

funding internal
contact person Simon Oehrl

Virtual Reality (VR) systems have high requirements on the resolution and refresh rate of their displays to ensure a pleasant user experience. Rendering at such resolutions and frame rates brings even powerful hardware to its limits. Various techniques exist that exploit the spatial correlation between the pixels rendered for each eye as well as the temporal correlation between the pixels of consecutive frames to enhance rendering performance. These techniques often introduce artifacts due to missing or outdated information, so they always trade rendering performance against visual quality.

We investigate such reprojection methods in order to further improve rendering in VR applications. In particular, such techniques can be used in combination with low-latency compression to enable wireless streaming of VR applications from desktop computers to head-mounted displays, as illustrated here.

The figure shows the problem reprojection techniques try to solve: reconstructing the view for a new camera position, using the image rendered from a slightly different camera position as input.
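
A minimal sketch of the basic mechanism, assuming a pinhole camera model with illustrative intrinsics: each pixel of the rendered image is unprojected using its depth value and then projected into a slightly translated camera.

    # Sketch of depth-based reprojection; intrinsics and the constant
    # depth buffer are illustrative assumptions.
    import numpy as np

    W, H, f = 640, 480, 500.0    # resolution and focal length in pixels
    cx, cy = W / 2.0, H / 2.0    # principal point

    def reproject(depth, dx):
        """New image coordinates after translating the camera by dx
        along the x-axis (e.g., toward the second eye)."""
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        # Unproject each pixel into camera space using its depth.
        x = (u - cx) / f * depth
        y = (v - cy) / f * depth
        # Shift the camera, then project back onto the image plane.
        u_new = f * (x - dx) / depth + cx
        v_new = f * y / depth + cy
        return u_new, v_new

    depth = np.full((H, W), 2.0)               # stand-in: everything 2 m away
    u_new, v_new = reproject(depth, dx=0.032)  # roughly half an eye distance

Regions that become visible only from the new position receive no source pixels; these disocclusion holes are exactly the missing-information artifacts mentioned above.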



Low Latency Streaming and Advanced Reprojection Techniques for Wireless VR Experiences

funding internal
contact person Simon Oehrl

Head-mounted displays (HMDs) are nowadays the preferred way to enjoy Virtual Reality (VR) applications due to their low cost and minimal setup requirements. However, as with all VR display technologies, driving these high-resolution, high-refresh-rate displays places high demands on the hardware. To ensure high visual quality of the scene, the HMD needs to be connected to a desktop computer to utilize its compute power.

The high data rates usually require a cable connecting the HMD and the computer. This cable is generally considered cumbersome, as it restricts the movement of the user. Future wireless technologies will eventually provide enough bandwidth to switch from cable-based to wireless communication between the HMD and the computer. In the meantime, we investigate methods for low-latency image compression as well as adapted reprojection techniques to implement such wireless communication on current consumer hardware.

Concept of the wireless VR setup: the HMD sends the virtual camera position to the desktop computer, which renders the scene and sends the resulting image back to the HMD.
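
The following Python sketch outlines the per-frame round trip on the desktop side. The renderer and codec are stubs and the packet layout is a made-up example, not a protocol specification.

    # Conceptual sketch of the desktop side of the streaming loop.
    # render() and compress_low_latency() are stubs; the pose format
    # and addresses are assumptions for illustration.
    import socket
    import struct

    def render(pose):
        """Stub for the actual renderer; returns a raw frame buffer."""
        return bytes(1280 * 720 * 3)

    def compress_low_latency(frame):
        """Stub for a low-latency codec (e.g., slice-based encoding)."""
        return frame[:4096]

    HMD_ADDRESS = ("192.168.0.42", 9000)  # hypothetical HMD endpoint

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9001))

    while True:
        # 1. Receive the predicted head pose from the HMD: position
        #    (x, y, z) and orientation quaternion (x, y, z, w).
        data, _ = sock.recvfrom(28)
        pose = struct.unpack("7f", data)
        # 2. Render the scene for that pose and compress the result.
        packet = compress_low_latency(render(pose))
        # 3. Send the frame back; the HMD reprojects it to the newest
        #    pose at display time to hide the remaining latency.
        sock.sendto(packet, HMD_ADDRESS)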



Social Locomotion with Virtual Agents

funding internal
contact person Andrea Bönsch

Computer-controlled, embodied, intelligent, and conversational virtual agents (VAs) are increasingly common in immersive virtual environments (IVEs). They can enliven architectural scenes, turning them into plausible, realistic, and thus convincing sceneries. Considering, for instance, the visualization of a virtual production facility, VAs might serve as virtual workers who autonomously operate in the IVE. In addition, VAs can function as assistants by, e.g., guiding users through a scene, training them to perform certain tasks, or acting as interlocutors who answer questions. Thus, VAs can interact with the human user, forming a social group, while standing still in or moving through the virtual scene.

For the latter, social locomotion is of prime importance. This involves aspects such as finding collision-free trajectories, choosing suitable walking constellations based on the situation-dependent interaction (e.g., leading vs. walking side-by-side), respecting the other's personal space, and showing adequate gazing behavior and body posture, among many more. Our research focuses on modeling such social locomotion behaviors for virtual agents in architectural scenarios. The resulting locomotion patterns should be applicable in high-end CAVE-like environments as well as in low-cost head-mounted displays. Furthermore, the behavior's impact on the user's perceived presence, social presence, and comfort is an object of investigation.
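
One building block of such behavior can be sketched as a social-force-style update, shown below in Python: the agent steers toward its goal while an exponentially decaying repulsion keeps it out of the user's personal space. The constants and the repulsion term are illustrative assumptions, not our actual model.

    # Sketch of a social-force-style locomotion step; constants are
    # illustrative assumptions.
    import numpy as np

    def agent_step(agent, goal, user, dt=0.05,
                   preferred_speed=1.3, ps_radius=1.2, repulsion=2.0):
        """Advance the agent's 2D position by one time step."""
        to_goal = goal - agent
        desired = preferred_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
        # Exponentially decaying repulsion from the user's personal space.
        away = agent - user
        dist = np.linalg.norm(away) + 1e-9
        push = repulsion * np.exp(-dist / ps_radius) * away / dist
        return agent + dt * (desired + push)

    agent = np.array([0.0, 0.0])
    goal, user = np.array([10.0, 0.0]), np.array([5.0, 0.2])
    for _ in range(200):
        agent = agent_step(agent, goal, user)  # curves around the user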



Speech and Gestures of Embodied Conversational Agents

funding internal
contact person Jonathan Ehret

When interacting with embodied conversational agents in VR, the conversation has to feel as natural as possible. These agents can function as high-level human-machine interfaces, can guide or teach the user, or can simply enliven the virtual environment. For these interactions to feel believable, speech and co-verbal gestures (incl. backchannels) play an important role. Furthermore, the acoustic reproduction, i.e., the way the speech is presented to the user, is of prime importance.

To that end, we investigate how different acoustic phenomena, like directivity, influence the perception of the virtual agent in terms of realism and believability, e.g., social presence. Furthermore, we look into different ways to generate co-verbal gestures, how they influence the conversation, and whether the user's behavior changes due to these features.
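
As a small computational illustration of speech directivity, the following Python sketch attenuates an agent's speech with a cardioid-like pattern depending on the angle between the agent's facing direction and the direction to the listener. The pattern and its exponent are assumptions for illustration, not a measured directivity of human speech.

    # Sketch: cardioid-like directivity gain; pattern and exponent are
    # illustrative assumptions.
    import numpy as np

    def directivity_gain(agent_forward, to_listener, order=1.0):
        """Gain in [0, 1]; 1 when the agent faces the listener directly."""
        a = agent_forward / np.linalg.norm(agent_forward)
        b = to_listener / np.linalg.norm(to_listener)
        cos_angle = np.clip(np.dot(a, b), -1.0, 1.0)
        return (0.5 * (1.0 + cos_angle)) ** order

    print(directivity_gain(np.array([1.0, 0.0]), np.array([1.0, 0.0])))   # 1.0
    print(directivity_gain(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))  # 0.0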



Personal Space in Social Virtual Reality

funding internal
contact person Andrea Bönsch

One important aspect of social locomotion is respecting each other's personal space (PS). This space is defined as a flexible protective zone maintained around one's own body in real-life situations as well as in virtual environments. The PS is regulated dynamically: its size and shape depend on personal factors like age and gender, on environmental factors like obstacles, on gazing behavior, and many more. Furthermore, the personal space is divided into four segments differing in their distance from the user, reflecting the type of relationship between persons. Violating the PS evokes different levels of discomfort and physiological arousal. Thus, gaining more insight into this phenomenon is important, while letting VAs automatically and authentically respect the PS of a user or of other virtual teammates is challenging.

After conducting an initial study regarding PS and collision avoidance during user navigation through a small-scale IVE (see left figure), we now focus on further PS investigations with respect to emotions shown by virtual agents (see right figure) as well as the number of virtual agents a user is facing. Our goal is to develop an appropriate behavioral setting for autonomous VAs in order to embed them into various architectural scenarios while keeping a high user comfort level, immersion and perceived social presence.

A virtual agent blocking a user's path in a two-man office (left). A user indicating his personal space while being approached by a virtual agent (right).
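
For illustration, the following Python sketch classifies the agent-user distance into the four classic proxemic zones; the boundary values are the commonly cited ones from Hall's proxemics, whereas a dynamically regulated PS would scale them per user and situation.

    # Sketch: Hall's four proxemic zones with their commonly cited
    # boundaries; a dynamic PS model would adapt these per user.
    ZONES = [
        (0.45, "intimate"),        # up to 0.45 m
        (1.20, "personal"),        # 0.45 m to 1.2 m
        (3.60, "social"),          # 1.2 m to 3.6 m
        (float("inf"), "public"),  # beyond 3.6 m
    ]

    def zone(distance_m):
        """Classify the distance between a virtual agent and the user."""
        for boundary, name in ZONES:
            if distance_m <= boundary:
                return name

    print(zone(0.8))  # "personal": close enough to feel intrusive for strangers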



BugWright2 - Autonomous Robotic Inspection and Maintenance on Ship Hulls and Storage Tanks

funding European Union’s Horizon 2020
contact person Sebastian Pape

Due to progressively increasing globalization and the outsourcing of production to lower-wage countries, there is a growing demand for large-scale, long-distance transport of cargo and components. A big part of this transport demand is handled by container ships cruising around the world.

So far, the inspection and maintenance of these container ships is done at fixed intervals in the drydocks of shipyards. However, the inspection downtimes and the corresponding costs of inspections without findings could be avoided if the condition of the ships' hulls were monitored constantly. The intervals could even be extended if small damages and biofouling could be treated locally, in a harbor or at sea.

This idea is the aim of the EU project BugWright2, in which semi-autonomous robots are being developed that can inspect a ship's hull for corrosion without the need for a drydock. Additionally, the robots shall be able to clean the hull of microorganisms and algae to prevent biofouling.

The technical implementation of this project is carried out jointly by a group of 21 European partners under the lead of Prof. Cédric Pradalier from Georgia Tech Lorraine in France. Besides academic partners like the Norwegian University of Science and Technology, the University of Porto, and the World Maritime University in Malmö, research institutions like the Austrian Lakeside Labs and ship owners like Star Bulk Carriers Corp. are part of the group.

We will lead the development of an augmented and virtual reality (AR/VR) steering and monitoring system for the planned robots. To achieve this goal in a sustainable fashion, we work in close collaboration with Prof. Thomas Ellwart, head of the Department for Business Psychology at the University of Trier. In this collaboration, research on innovative interaction techniques with state-of-the-art AR/VR technology will be conducted to enable better interfaces in the later project stages. Special focus is placed on the intuitive usability of the resulting AR and VR systems, in order to provide significant added value over traditional desktop-based approaches.

The project's goal is the development of semi-autonomous robots that clean the outer walls of container ships and scan them for damage. Robot navigation and monitoring of the work progress will be carried out using an AR/VR-based software solution, while the localization of the robots is ensured by drones above and under water.

