Welcome to the Virtual Reality & Immersive Visualization Group at RWTH Aachen University!

The Virtual Reality and Immersive Visualization Group started in 1998 as a service team in the RWTH IT Center. Since 2015, we have been a research group (Lehr- und Forschungsgebiet) at i12 within the Computer Science Department. Moreover, the group is a member of the Visual Computing Institute and continues to be an integral part of the RWTH IT Center.

In a unique combination of research, teaching, services, and infrastructure, we provide Virtual Reality technologies and the underlying methodology as a powerful tool for scientific-technological applications.

In terms of basic research, we develop advanced methods and algorithms for multimodal 3D user interfaces and explorative analyses in virtual environments. Furthermore, we focus on application-driven, interdisciplinary research in collaboration with RWTH Aachen institutes, Forschungszentrum Jülich, research institutions worldwide, and partners from business and industry, covering fields like simulation science, production technology, neuroscience, and medicine.

To this end, we are members of / associated with the following institutes and facilities:

Our offices are located in the RWTH IT Center, where we operate one of the largest Virtual Reality labs worldwide. The aixCAVE, a 30 sqm visualization chamber, makes it possible to interactively explore virtual worlds and is open for use by any RWTH Aachen research group.

News

21st ACM International Conference on Intelligent Virtual Agents (IVA21)

Andrea Bönsch presented a paper at the 21st ACM International Conference on Intelligent Virtual Agents. Additionally, her student David Hashem submitted a GALA video showcasing the respective application: a virtual museum's curator who either guides the user or accompanies the user on his or her free exploration. The video won the ACM IVA 2021 GALA Audience Award. Congratulations!

Sept. 17, 2021

ACM Symposium on Applied Perception (SAP2021)

Jonathan Ehret presented joint work with the RWTH Institute for Hearing Technology and Acoustics and the Cologne IfL Phonetik on the influence of prosody and embodiment on the perceived naturalness of a conversational agent's speech. During the peer-reviewing process, the paper was invited and accepted to the journal ACM Transactions on Applied Perception (TAP). Congratulations!

Sept. 16, 2021

ICAT-EGVE 2021

Andrea Bönsch presented a poster on Indirect User Guidance by Pedestrians in Virtual Environments during the International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (ICAT-EGVE 2021).

Sept. 9, 2021

Kármán Conference: European Meeting on Intermediate Filaments

Prof. Reinhard Windoffer of the Institute of Molecular and Cellular Anatomy (MOCA) presented his group's research on keratin intermediate filaments. Our group supported this work with an immersive visualization of the cytoskeleton.

Sept. 8, 2021

Med&BioVis Workshop 2021

Marcel Krüger gave a talk introducing "Insite - A Pipeline for the Interactive Analysis of Neuronal Network Simulations via NEST, TVB, and ARBOR" at the Med&BioVis Workshop of the GI Fachgruppe Visual Computing in Biology and Medicine.

Sept. 3, 2021

DAGA 2021

Our group was involved in three presentations at this year's DAGA, the 47th Annual Conference on Acoustics. While Jonathan Ehret talked about "Speech Source Directivity for Embodied Conversational Agents", our colleagues from the RWTH Institute for Hearing Technology and Acoustics presented joint work on "Prosodic and Visual Naturalness of Dialogs Presented by Conversational Virtual Agents" and our AUDICTIVE project on listening to, and remembering, conversations between two talkers.

Aug. 18, 2021

Recent Publications

Do Prosody and Embodiment Influence the Perceived Naturalness of Conversational Agents' Speech?

Transactions on Applied Perception (TAP) [to be published]
presented at ACM Symposium on Applied Perception (SAP)

For conversational agents' speech, all possible sentences must either be prerecorded by voice actors, or the required utterances can be synthesized. While synthesizing speech is more flexible and economical in production, it also potentially reduces the perceived naturalness of the agents, among other things due to mistakes at various linguistic levels. In our paper, we are interested in the impact of adequate and inadequate prosody, here particularly in terms of accent placement, on the perceived naturalness and aliveness of the agents. We compare (i) inadequate prosody, as generated by off-the-shelf text-to-speech (TTS) engines with synthetic output, (ii) the same inadequate prosody imitated by trained human speakers, and (iii) adequate prosody produced by those speakers. The speech was presented either as audio only or by embodied, anthropomorphic agents, to investigate the potential masking effect of a simultaneous visual representation of those virtual agents. To this end, we conducted an online study with 40 participants listening to four different dialogues, each presented in the three speech levels and the two embodiment levels. Results confirmed that adequate prosody in human speech is perceived as more natural (and the agents as more alive) than inadequate prosody in both human (ii) and synthetic speech (i). Thus, it is not sufficient to just use a human voice for an agent's speech to be perceived as natural; it is decisive whether the prosodic realization is adequate or not. Furthermore, and surprisingly, we found no masking effect by speaker embodiment, since neither a human voice with inadequate prosody nor a synthetic voice was judged as more natural when a virtual agent was visible compared to the audio-only condition. On the contrary, the human voice was even judged as less "alive" when accompanied by a virtual agent.
In sum, our results emphasize on the one hand the importance of adequate prosody for perceived naturalness, especially in terms of accents being placed on important words in the phrase, while showing on the other hand that the embodiment of virtual agents plays a minor role in naturalness ratings of voices.

Being Guided or Having Exploratory Freedom: User Preferences of a Virtual Agent’s Behavior in a Museum

21st ACM International Conference on Intelligent Virtual Agents 2021 (IVA'21)

A virtual guide in an immersive virtual environment allows users a structured experience without missing critical information. However, despite being in an interactive medium, the user is only a passive listener, while the embodied conversational agent (ECA) fulfills the active roles of wayfinding and conveying knowledge. Thus, for the use case of a virtual museum, we investigated whether users prefer a virtual guide or a free exploration accompanied by an ECA who imparts the same information as the guide. Results of a small within-subjects study with a head-mounted display are given and discussed, leading to the idea of combining the benefits of both conditions for higher user acceptance. Furthermore, the study indicated the feasibility of the carefully designed scene and the ECA's appearance. We also submitted a GALA video entitled "An Introduction to the World of Internet Memes by Curator Kate: Guiding or Accompanying Visitors?" by D. Hashem, A. Bönsch, J. Ehret, and T.W. Kuhlen, showcasing our application.
Winner of the IVA 2021 GALA Audience Award!

Compression and Rendering of Textured Point Clouds via Sparse Coding

High-Performance Graphics 2021

Splat-based rendering techniques produce highly realistic renderings from 3D scan data without prior mesh generation. Mapping high-resolution photographs to the splat primitives enables detailed reproduction of surface appearance. However, in many cases these massive datasets do not fit into GPU memory. In this paper, we present a compression and rendering method that is designed for large textured point cloud datasets. Our goal is to achieve compression ratios that outperform generic texture compression algorithms, while still retaining the ability to efficiently render without prior decompression. To achieve this, we resample the input textures by projecting them onto the splats and create a fixed-size representation that can be approximated by a sparse dictionary coding scheme. Each splat has a variable number of codeword indices and associated weights, which define the final texture as a linear combination during rendering. For further reduction of the memory footprint, we compress geometric attributes by careful clustering and quantization of local neighborhoods. Our approach reduces the memory requirements of textured point clouds by one order of magnitude, while retaining the possibility to efficiently render the compressed data.
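To illustrate the sparse dictionary coding idea described in the abstract, a minimal sketch (not the paper's actual implementation; all names and sizes are hypothetical) of how a splat's texture patch can be decoded at render time as a weighted linear combination of a few shared codewords:

```python
import numpy as np

def decode_splat_texture(dictionary, indices, weights):
    """Reconstruct a splat's fixed-size texture patch from its sparse code.

    dictionary: (num_codewords, patch_size) array of shared codewords
    indices:    codeword indices stored for this splat (variable count)
    weights:    matching weights; together they form the sparse code
    """
    # Linear combination of the selected codewords
    return weights @ dictionary[indices]

# Toy example: 4 codewords of 8 texels each; one splat uses 2 of them.
rng = np.random.default_rng(0)
dictionary = rng.standard_normal((4, 8))
indices = np.array([1, 3])
weights = np.array([0.7, 0.3])

patch = decode_splat_texture(dictionary, indices, weights)
```

Because each splat stores only a handful of (index, weight) pairs instead of full texels, memory use drops while the renderer can still reconstruct the texture on the fly without a separate decompression pass.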
