Welcome to the Virtual Reality & Immersive Visualization Group at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.

To this end, we are members of / associated with the following institutes and facilities:

Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 sqm visualization chamber that makes it possible to interactively explore virtual worlds, is open for use by any RWTH Aachen research group.

News

Immersive Art: Our Cooperation with Jana Rusch in the Press

Some time ago, the contemporary Belgian painter Jana Rusch approached us to explore our mutual interest in cooperating with her in the area of immersive art. Our colleagues Sevinc Eroglu and Patric Schmitz immediately came up with many ideas. Thus, they teamed up with Jana and created Rilievo, a virtual authoring environment for artistic creation in VR, enabling Jana to convert her 2D drawings effortlessly into 3D volumetric representations, while relief sculpting allows manipulating the volumes. This successful cooperation and the resulting framework have now been presented in the press. Click here for the online article (in German only).

Oct. 5, 2022

WSCG 2022

Ali Can Demiralp presented his research paper on "Performance Assessment of Diffusive Load Balancing for Distributed Particle Advection" during the 30th International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision 2022 (WSCG2022).

May 18, 2022

VHCIE @ IEEE VR 2022

The Virtual Humans and Crowds in Immersive Environments (VHCIE) is a half-day workshop associated with the IEEE VR conference. In 2022, Andrea Bönsch teamed up with colleagues from France to organize the 7th edition of the workshop. During the workshop, her master student Daniel Rupp presented their work-in-progress on "An Embodied Conversational Agent Supporting Scene Exploration by Switching between Guiding and Accompanying", while her colleague Jonathan Ehret presented insights on "Natural Turn-Taking with Embodied Conversational Agents".

March 12, 2022

21st ACM International Conference on Intelligent Virtual Agents (IVA21)

Andrea Bönsch presented a paper at the 21st ACM International Conference on Intelligent Virtual Agents. Additionally, her student David Hashem submitted a GALA video showcasing the respective application: a virtual museum curator who either guides the user or accompanies the user during free exploration. The video won the ACM IVA 2021 GALA Audience Award. Congratulations!

Sept. 17, 2021

ACM Symposium on Applied Perception (SAP2021)

Jonathan Ehret presented joint work with the RWTH Institute for Hearing Technology and Acoustics and the Cologne IfL Phonetik on the Influence of Prosody and Embodiment on the Perceived Naturalness of a Conversational Agent's Speech. During the peer-reviewing process, the paper was invited and accepted to the journal Transactions on Applied Perception (TAP). Congratulations!

Sept. 16, 2021

ICAT-EGVE 2021

Andrea Bönsch presented a poster on Indirect User Guidance by Pedestrians in Virtual Environments during the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE 2021).

Sept. 9, 2021

Recent Publications

Performance Assessment of Diffusive Load Balancing for Distributed Particle Advection

30th International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision 2022 (WSCG2022)

Particle advection is the approach for extracting integral curves from vector fields. Efficient parallelization of particle advection is a challenging task due to the problem of load imbalance, in which processes are assigned unequal workloads, causing some of them to idle while others are still computing. Various approaches to load balancing exist, yet they all involve trade-offs such as increased inter-process communication or the need for central control structures. In this work, we present two local load balancing methods for particle advection based on the family of diffusive load balancing. Each process has access to the blocks of its neighboring processes, which enables dynamic sharing of the particles based on a metric defined by the workload of the neighborhood. The approaches are assessed in terms of strong and weak scaling as well as load imbalance. We show that the methods reduce the total run-time of advection and are promising with regard to scaling, as they operate locally on isolated process neighborhoods.
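The core idea of diffusive load balancing described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the 1D ring topology, the function name, and the parameters are illustrative assumptions; in the actual distributed setting each process would exchange particles with its neighbors via message passing rather than a shared list.

```python
def diffusive_balance(loads, alpha=0.5, iterations=10):
    """Simulate diffusive load balancing on a ring of processes.

    loads: list of particle counts per process
    alpha: diffusion coefficient (fraction of the load difference exchanged)
    Each process compares its workload with its immediate neighbors and
    sends a fraction of any surplus toward less-loaded neighbors.
    """
    n = len(loads)
    for _ in range(iterations):
        new = loads[:]
        for i in range(n):
            for j in ((i - 1) % n, (i + 1) % n):  # local neighborhood
                diff = loads[i] - loads[j]
                if diff > 0:
                    # transfer a fraction of the surplus to the lighter neighbor
                    transfer = alpha * diff / 2
                    new[i] -= transfer
                    new[j] += transfer
        loads = new
    return loads
```

Because every decision uses only neighborhood information, no central coordinator is needed, which is what makes the scheme attractive for scaling.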

Quantitative Mapping of Keratin Networks in 3D

eLife

Mechanobiology requires precise quantitative information on processes taking place in specific 3D microenvironments. Connecting the abundance of microscopical, molecular, biochemical, and cell mechanical data with defined topologies has turned out to be extremely difficult. Establishing such structural and functional 3D maps needed for biophysical modeling is a particular challenge for the cytoskeleton, which consists of long and interwoven filamentous polymers coordinating subcellular processes and interactions of cells with their environment. To date, useful tools are available for the segmentation and modeling of actin filaments and microtubules, but comprehensive tools for the mapping of intermediate filament organization are still lacking. In this work, we describe a workflow to model and examine the complete 3D arrangement of the keratin intermediate filament cytoskeleton in canine, murine, and human epithelial cells, both in vitro and in vivo. Numerical models are derived from confocal Airyscan high-resolution 3D imaging of fluorescence-tagged keratin filaments. They are interrogated and annotated at different length scales using different modes of visualization including immersive virtual reality. In this way, information is provided on network organization at the subcellular level including mesh arrangement, density, and isotropic configuration as well as details on filament morphology such as bundling, curvature, and orientation. We show that the comparison of these parameters helps to identify, in quantitative terms, similarities and differences of keratin network organization in epithelial cell types defining subcellular domains, notably basal, apical, lateral, and perinuclear systems. The described approach and the presented data are pivotal for generating mechanobiological models that can be experimentally tested.
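Filament descriptors such as the curvature mentioned in the abstract can be estimated from polyline models of the filaments. The following is a minimal sketch, not the paper's actual pipeline: it assumes filaments are given as ordered 3D point sequences, and all names are illustrative.

```python
import numpy as np

def discrete_curvature(points):
    """Estimate curvature at interior vertices of a 3D polyline.

    points: (N, 3) array of sampled filament coordinates.
    Returns the turning angle divided by the mean adjacent segment
    length at each interior vertex (a common discrete approximation).
    """
    p = np.asarray(points, dtype=float)
    v1 = p[1:-1] - p[:-2]   # incoming segments
    v2 = p[2:] - p[1:-1]    # outgoing segments
    l1 = np.linalg.norm(v1, axis=1)
    l2 = np.linalg.norm(v2, axis=1)
    cos_ang = np.einsum('ij,ij->i', v1, v2) / (l1 * l2)
    angles = np.arccos(np.clip(cos_ang, -1.0, 1.0))
    return angles / ((l1 + l2) / 2)
```

A straight filament yields curvature near zero, while points sampled from a circle of radius r yield values near 1/r, which makes the measure easy to sanity-check.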

Augmented Reality-Based Surgery on the Human Cadaver Using a New Generation of Optical Head-Mounted Displays: Development and Feasibility Study

JMIR Serious Games 2022

Background: Although nearly one-third of the world's disease burden requires surgical care, only a small proportion of digital health applications are directly used in the surgical field. In the coming decades, the application of augmented reality (AR) with a new generation of optical see-through head-mounted displays (OST-HMDs) like the HoloLens (Microsoft Corp) has the potential to bring digital health into the surgical field. However, for the application to be performed on a living person, proof of performance must first be provided due to regulatory requirements. In this regard, cadaver studies could provide initial evidence. Objective: The goal of the research was to develop an open-source system for AR-based surgery on human cadavers using freely available technologies. Methods: We tested our system using an easy-to-understand scenario in which fractured zygomatic arches of the face had to be repositioned with visual and auditory feedback to the investigators using a HoloLens. Results were verified with postoperative imaging and assessed in a blinded fashion by 2 investigators. The developed system and scenario were qualitatively evaluated by consensus interview and individual questionnaires. Results: The development and implementation of our system was feasible and could be realized in the course of a cadaver study. The AR system was found helpful by the investigators for spatial perception, in addition to the combination of visual as well as auditory feedback. The surgical end point could be determined metrically as well as by assessment. Conclusions: The development and application of an AR-based surgical system using freely available technologies to perform OST-HMD–guided surgical procedures in cadavers is feasible. Cadaver studies are suitable for OST-HMD–guided interventions to measure a surgical end point and provide an initial data foundation for future clinical trials. The availability of free systems for researchers could be helpful for a possible translation process from digital health to AR-based surgery using OST-HMDs in the operating theater via cadaver studies.
