Welcome to the Virtual Reality and Immersive Visualization Group at RWTH Aachen University!
The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the Group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.
In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.
In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.
To this end, we are members of / associated with the following institutes and facilities:
Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 sqm visualization chamber, makes it possible to interactively explore virtual worlds and is open for use by any RWTH Aachen research group.
Cover on the German GI Informatik Spektrum
The cover of the current issue of Informatik Spektrum of the Gesellschaft für Informatik e.V. (GI) presents results of a project between the EON Energy Research Center and us on an important issue. The use of air filters in classrooms to fight the ongoing COVID-19 pandemic has been and continues to be a much-discussed topic. The cover shows a visualization in our aixCAVE, enabling an analysis of the temporal and spatial dynamics of aerosol concentration for each person in the respective room. Virtual reality is proving to be an effective tool for scientists here. It demonstrates the potential risk of aerosol dispersion in enclosed spaces with many people, which can be intuitively experienced even by laypersons.
Additional information on this project is provided in the IT Center Annual Report 2020/2021, page 58f (German only).
|Dec. 16, 2022|
BugWright: Successful on-site Field Tests
In a nutshell, our EU project BugWright2 deals with the development of semi-autonomous robots that can inspect and continuously monitor the hulls of container ships for corrosion. In September, Simon Oehrl and Sebastian Pape travelled with colleagues from the University of Trier to Metz, France, to test their current implementations on-site. A short travel report is now available online.
|Oct. 11, 2022|
Immersive Art: Our Cooperation with Jana Rusch in Press
Some time ago, the contemporary Belgian painter Jana Rusch approached us to explore our mutual interest in a cooperation in the area of immersive art. Our colleagues Sevinc Eroglu and Patric Schmitz immediately came up with many ideas. Thus, they teamed up with Jana and created Rilievo, a virtual authoring environment for artistic creation in VR, enabling Jana to convert her 2D drawings effortlessly into 3D volumetric representations, while relief sculpting allows volume manipulations. This successful cooperation and the resulting framework have now been presented in the press. Click here for the online article (in German only).
|Oct. 5, 2022|
Ali Can Demiralp presented his research paper on "Performance Assessment of Diffusive Load Balancing for Distributed Particle Advection" at the 30th International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision 2022 (WSCG2022).
|May 18, 2022|
VHCIE @ IEEE VR 2022
Virtual Humans and Crowds in Immersive Environments (VHCIE) is a half-day workshop associated with the IEEE VR conference. In 2022, Andrea Bönsch teamed up with colleagues from France to organize the 7th edition of the workshop. During the workshop, her master's student Daniel Rupp presented their work-in-progress on "An Embodied Conversational Agent Supporting Scene Exploration by Switching between Guiding and Accompanying", while her colleague Jonathan Ehret presented insights on "Natural Turn-Taking with Embodied Conversational Agents".
|March 12, 2022|
21st ACM International Conference on Intelligent Virtual Agents (IVA21)
Andrea Bönsch presented a paper at the 21st ACM International Conference on Intelligent Virtual Agents. Additionally, her student David Hashem submitted a GALA video showcasing the respective application: a virtual museum curator who either guides users or accompanies them during their free exploration. The video won the ACM IVA 2021 GALA Audience Award. Congratulations!
|Sept. 17, 2021|
Performance Assessment of Diffusive Load Balancing for Distributed Particle Advection
30th International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision 2022 (WSCG2022)
Particle advection is the approach used to extract integral curves from vector fields. Efficient parallelization of particle advection is a challenging task due to the problem of load imbalance, in which processes are assigned unequal workloads, causing some of them to idle while the others are still computing. Various approaches to load balancing exist, yet they all involve trade-offs such as increased inter-process communication or the need for central control structures. In this work, we present two local load-balancing methods for particle advection based on the family of diffusive load-balancing schemes. Each process has access to the blocks of its neighboring processes, which enables dynamic sharing of the particles based on a metric defined by the workload of the neighborhood. The approaches are assessed in terms of strong and weak scaling as well as load imbalance. We show that the methods reduce the total run-time of advection and are promising with regard to scaling, as they operate locally on isolated process neighborhoods.
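The core idea of diffusive load balancing can be illustrated with a small sketch: each process compares its particle count only with its immediate neighbors and offloads a fraction of any surplus to lighter-loaded neighbors, so the load "diffuses" through the process graph without central coordination. The following is a minimal single-node simulation of one such exchange step; the function name, the diffusion coefficient `alpha`, and the averaging scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of one diffusive load-balancing step. Each process i
# looks only at its neighborhood and sends part of its surplus to every
# lighter-loaded neighbor; no global view or central controller is needed.

def diffusion_step(loads, neighbors, alpha=0.5):
    """Return particle counts after one local diffusion exchange.

    loads     -- particle count per process
    neighbors -- adjacency list: neighbors[i] = processes adjacent to i
    alpha     -- diffusion coefficient: fraction of the imbalance moved
    """
    transfers = [0.0] * len(loads)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            if loads[i] > loads[j]:
                # Send part of the surplus from i to the lighter neighbor j,
                # scaled by the neighborhood size to avoid overshooting.
                moved = alpha * (loads[i] - loads[j]) / (len(nbrs) + 1)
                transfers[i] -= moved
                transfers[j] += moved
    return [l + t for l, t in zip(loads, transfers)]
```

Repeating this step lets the imbalance decay toward a uniform distribution while preserving the total number of particles; in a distributed setting, the neighbor comparison would be realized via point-to-point communication.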
Advantages of a Training Course for Surgical Planning in Virtual Reality in Oral and Maxillofacial Surgery
JMIR Serious Games (forthcoming/in press)
**Background**: As an integral part of computer-assisted surgery, virtual surgical planning (VSP) leads to significantly better surgery results, such as for oral and maxillofacial reconstruction with microvascular grafts of the fibula or iliac crest. It is performed on a 2D computer desktop (DS) based on preoperative medical imaging. However, in this environment, VSP is associated with shortcomings, such as a time-consuming planning process and the requirement of a learning process. Therefore, a virtual reality (VR)-based VSP application has great potential to reduce or even overcome these shortcomings due to the benefits of visuospatial vision, bimanual interaction, and full immersion. However, the efficacy of such a VR environment has not yet been investigated. **Objective**: Does VR offer advantages in the learning process and working speed while providing similarly good results compared to a traditional DS working environment? **Methods**: During a training course, novices were taught how to use a software application in a DS environment (3D Slicer) and in a VR environment (Elucis) for the segmentation of fibulae and os coxae (n = 156), and they were asked to carry out the maneuvers as accurately and quickly as possible. The individual learning processes in both environments were compared using objective criteria (time and segmentation performance) and self-reported questionnaires. The models resulting from the segmentation were compared mathematically (Hausdorff distance and Dice coefficient) and evaluated by two experienced radiologists in a blinded manner (score). **Conclusions**: The more rapid learning process and the ability to work faster in the VR environment could save time and reduce the VSP workload, providing certain advantages over the DS environment.
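The two agreement metrics named in the abstract can be sketched directly. This is an illustrative stand-in for the study's actual evaluation pipeline: it computes the Dice coefficient and the symmetric Hausdorff distance on binary segmentations represented as sets of voxel coordinates; the function names are our own.

```python
# Illustrative implementations of the two segmentation-comparison metrics
# mentioned in the abstract, on voxel masks given as sets of (x, y, z) tuples.

def dice_coefficient(a, b):
    """Dice similarity of two voxel sets: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance: the largest distance from any point
    in one set to its nearest point in the other set (Euclidean)."""
    def directed(src, dst):
        return max(
            min(sum((p - q) ** 2 for p, q in zip(x, y)) ** 0.5 for y in dst)
            for x in src
        )
    return max(directed(a, b), directed(b, a))
```

Dice captures volumetric overlap, while the Hausdorff distance captures the worst-case surface deviation, so the two metrics complement each other when comparing segmentation models.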
Verbal Interactions with Embodied Conversational Agents
Doctoral Consortium at ACM International Conference on Intelligent Virtual Agents (IVA) 2022
Embedding virtual humans into virtual reality (VR) applications can fulfill diverse needs. These so-called embodied conversational agents (ECAs) can simply enliven the virtual environments, act for example as training partners, tutors, or therapists, or serve as advanced (emotional) user interfaces to control immersive systems. The latter case is of special interest, since we as human users are particularly good at interpreting other humans. ECAs can enhance their verbal communication with non-verbal behavior and thereby make communication more efficient. For example, backchannels, such as nodding or signaling a lack of understanding, can be used to give feedback while a user is speaking. Furthermore, gestures, gaze, posture, proxemics, and many more non-verbal behaviors can be applied. Additionally, turn-taking can be streamlined when the ECA understands when to take over the turn and signals willingness to yield it once done. While many of these aspects are already under investigation in very different disciplines, operationalizing them into versatile, virtually embodied human-computer interfaces remains an open challenge. To this end, I conducted several studies investigating acoustical effects of ECAs' speech, both with regard to the auralization in the virtual environment and the speech signals used. Furthermore, I want to find guidelines for expressing both turn-taking and various backchannels that make interactions with such advanced embodied interfaces more efficient and pleasant, both when the ECA is speaking and while it is listening. Additionally, measuring social presence (i.e., the feeling of being there and interacting with a "real" person) is an important instrument for this kind of research, since I want to facilitate exactly those subconscious processes of understanding other humans that we as humans are particularly good at. Therefore, I want to investigate objective measures for social presence.