
Profile



Andrea Bönsch, M. Sc.
Room K110
Phone: +49 241 80 24922
Fax: +49 241 80 6 24922
Email: boensch@vr.rwth-aachen.de
Office hours: Please arrange an appointment via e-mail and briefly describe the topic to be discussed.

Lead of the Virtual Reality Team and Deputy Lead of the Research Group

Andrea Bönsch is the representative of Prof. T.W. Kuhlen, the group manager of the IT Center's Virtual Reality Group, as well as head of the service team and a Ph.D. candidate in the Virtual Reality and Immersive Visualization group at RWTH Aachen University.

Research Focus

She received her Master's degree from RWTH Aachen University in 2015 while working part-time in the Virtual Reality and Immersive Visualization group, focusing on navigation techniques for immersive virtual environments. She now conducts research in the field of Social VR, integrating virtual agents as advanced, emotional human interfaces into VR applications. Her focus is the joint locomotion of social groups, i.e., groups of virtual agents and users who interact directly and plausibly with other group members while standing in or moving through the virtual scene. She has a special interest in training simulations and support systems in which virtual agents fulfill the roles of peers, coaches, or communication partners.

Google Scholar | ORCID: orcid.org/0000-0001-5077-3675

Professional Activities


Teaching

  • Seminar Advisor, regularly in the group's seminars and pro-seminars
  • Lecturer for the Virtual Humans part of "Advanced Topics on Virtual Reality (VR II)", since summer 2017
  • Teaching Assistant, regularly for the exercises of Virtual Reality (VR I and VR II)

Advised Internships and Master/Undergraduate Theses

  • Joining a Virtual Tour: Group Navigation in a Single-User-Multi-Agent VR Setting
    (master thesis by Xiang Ye, 2023)
  • Restricted Navigation in Pedestrian Flows
    (bachelor thesis by Lukas Zimmermann, 2022, poster)
  • Walking and Talking Side-by-Side with a Virtual Agent
    (bachelor thesis by Robin Schäfer, 2021)
  • Designing Plausible Pedestrian Flows to Guide Users Comfortably to Their Goal
    (master thesis by Julian Staab, 2022)
  • Novel Approach for Scene Exploration Through Indirect Pedestrian Group Guiding
    (bachelor thesis by Till Sittart, 2022, poster)
  • Impact of Different Sound Spatialization Levels on Presence in Urban Virtual Environments
    (bachelor thesis by Niklas Hartmann, 2022)
  • Group Navigation with Virtual Agents
    (bachelor thesis by Kalina Daskalova, 2022)
  • Exploring a Virtual City with an Accompanying Guide
    (master thesis by Daniel Rupp, 2021)
  • Unaided Scene Exploration while being Guided by Pedestrians-as-Cues
    (master thesis by Xinyu Xia, 2021)
  • Dynamic, Environment-Aware Formations of Mobile Social Groups
    (bachelor thesis by Steffen Krüger, 2021)
  • Automatic Generation of World in Miniatures for Architectural, Social Virtual Reality Environments
    (master thesis by Radu-Andrei Coanda, 2021, poster)
  • Simulation of Realistic Crowds in Architectural Environments by means of Influence Maps and Visitor Patterns
    (bachelor thesis by Daniel Vonk, 2021)
  • Benchmarking Interactive Crowd Simulations for Virtual Environments in HMD and CAVE Settings
    (bachelor thesis by Danny Post, 2021)
  • Supplementing a User’s Free Scene Exploration by Dynamic Pedestrian Flows in Immersive Virtual Environments
    (bachelor thesis by Katharina S. Güths, 2021, poster)
  • Evaluation of Two Exploration Techniques with a Virtual Guide in a Virtual Reality Exhibition
    (bachelor thesis by David Hashem, 2021, paper)
  • Efficiency Comparison of Different Designs of a VR-based Mindfulness Exercise
    (bachelor thesis by Johanna Tolzmann, 2020)
  • Pre-operative Planning in Virtual Reality with Head Mounted Displays for Oral and Maxillofacial Surgery
    (master thesis by Filip Kajzer, 2020)
  • Self-Reliant Virtual Agents: Planning and Conducting Plausible Scene Interactions Alone or with Other Agents
    (master thesis by Marvin Kuhl, 2020)
  • Extending Virtual Reality Crowd Simulations: User-Awareness and Interactive Flow Control
    (bachelor thesis by Sebastian Jan Barton, 2019/2020, extended abstract)
  • Joining or Passing By? Inferring User Intent in Immersive Environments Crowded with Social Groups
    (bachelor thesis by Alexander R. Bluhm, 2019/2020, paper)
  • Advancing a Crowd Simulation Framework with Intelligent and Visual Analysis Methods for the Investigation of Simulation Data
    (master thesis by Marcel Jonda, 2018, workshop paper)
  • Empirical Investigation of Two Behavior Patterns of Virtual Agents in the Context of User-Requested Assistance in Immersive Environments
    (bachelor thesis by Jan Hoffmann, 2016, workshop paper)
  • An Intelligent Recommendation System for an Efficient and Effective Control of Virtual Agents in a Wizard-of-Oz Paradigm
    (master thesis by Robert Trisnadi, 2016, poster)
  • Generating and Animating Virtual Humans
    (internship by Robert Trisnadi, 2016)
  • Do Not Invade - A Virtual Reality Framework to Design Personal Space Studies
    (bachelor thesis by Jan Schnathmeier, 2016, GI presentation)
  • Virtual Sightseeing - A Virtual Reality Framework for Visualizing Cities Based on CityGML
    (bachelor thesis by Timothy A.W. Blut, 2016)
  • Customization of the Appearance of Virtual Characters
    (internship by Yannick Donners, 2016)
  • Clippy revisited – Intelligent, Minimal Virtual Agents in Immersive Virtual Environments
    (bachelor thesis by David Gilbert, 2016)
  • Automated Generation of High-quality Collision-free Paths through Virtual Environments
    (bachelor thesis by Jonathan Wendt, 2014, co-supervised)
  • Concept and Implementation of Techniques for Interactively Experiencing Virtual Works of Art in Immersive Virtual Environments
    (master thesis by Dennis Scully, 2014, co-supervised, publication)
  • Integration of an Efficient Method for the Correct Rendering of Transparent Objects into ViSTA
    (bachelor thesis by Joachim Herber, 2013, co-supervised)

Education

  • Trainer according to the German Trainer Aptitude Ordinance (AEVO), Chamber of Industry and Commerce Aachen, June 2015
  • Master of Science, Computer Science, RWTH Aachen University, January 2015
  • Mathematical and Technical Software Developer (MaTSE), Chamber of Industry and Commerce Aachen, August 2010
  • Bachelor of Science, Scientific Programming, University of Applied Sciences Aachen, July 2010



Publications


Wayfinding in Immersive Virtual Environments as Social Activity Supported by Virtual Agents


Andrea Bönsch, Jonathan Ehret, Daniel Rupp, Torsten Wolfgang Kuhlen
Frontiers in Virtual Reality, Section Virtual Reality and Human Behaviour

Effective navigation and interaction within immersive virtual environments rely on thorough scene exploration. Therefore, wayfinding is essential, assisting users in comprehending their surroundings, planning routes, and making informed decisions. As real-life observations show, wayfinding is not only a cognitive process but also a social activity, profoundly influenced by the presence and behaviors of others. In virtual environments, these 'others' are virtual agents (VAs), defined as anthropomorphic computer-controlled characters, who enliven the environment and can serve as background characters or direct interaction partners. However, little research has been done to explore how to efficiently use VAs as social wayfinding support. In this paper, we aim to assess and contrast user experience, user comfort, and the acquisition of scene knowledge through a between-subjects study involving n = 60 participants across three distinct wayfinding conditions in one slightly populated urban environment: (i) unsupported wayfinding, (ii) strong social wayfinding using a virtual supporter who incorporates guiding and accompanying elements while directly impacting the participants' wayfinding decisions, and (iii) weak social wayfinding using flows of VAs that subtly influence the participants' wayfinding decisions by their locomotion behavior. Our work is the first to compare the impact of VAs' behavior in virtual reality on users' scene exploration, including spatial awareness, scene comprehension, and comfort. The results show the general utility of social wayfinding support, while underscoring the superiority of the strong type. Nevertheless, further exploration of weak social wayfinding as a promising technique is needed. Thus, our work contributes to the enhancement of VAs as advanced user interfaces, increasing user acceptance and usability.

» Show BibTeX

@article{Boensch2024,
title={Wayfinding in Immersive Virtual Environments as Social Activity Supported by Virtual Agents},
author={B{\"o}nsch, Andrea and Ehret, Jonathan and Rupp, Daniel and Kuhlen, Torsten W.},
journal={Frontiers in Virtual Reality},
volume={4},
year={2024},
pages={1334795},
publisher={Frontiers},
doi={10.3389/frvir.2023.1334795}
}





StudyFramework: Comfortably Setting up and Conducting Factorial-Design Studies Using the Unreal Engine


Jonathan Ehret, Andrea Bönsch, Janina Fels, Sabine Janina Schlittmeier, Torsten Wolfgang Kuhlen
To be presented at the Open Access Tools (OAT) and Libraries for Virtual Reality Workshop at IEEE Virtual Reality 2024

Setting up and conducting user studies is fundamental to virtual reality research. Yet, these studies are often developed from scratch, which is time-consuming and especially hard and error-prone for novice developers. In this paper, we introduce the StudyFramework, a framework specifically designed to streamline the setup and execution of factorial-design VR-based user studies within the Unreal Engine, significantly enhancing the overall process. We elucidate core concepts such as setup, randomization, the experimenter view, and logging. After utilizing our framework to set up and conduct their respective studies, 11 study developers provided valuable feedback through a structured questionnaire. This feedback, which was generally positive and highlighted the framework's simplicity and usability, is discussed in detail.
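To illustrate the factorial-design logic such a framework automates, the following minimal Python sketch generates fully crossed conditions and a per-participant randomized order. The factor names and the seeding scheme are assumptions for illustration only; this is not the StudyFramework API, which is implemented in the Unreal Engine.

import itertools
import random

# Hypothetical factors of a 2x3 factorial VR study (illustrative names only).
factors = {
    "locomotion": ["teleport", "steering"],
    "agent_density": ["low", "medium", "high"],
}

def generate_conditions(factors):
    """Fully cross all factor levels into a list of condition dicts."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in itertools.product(*factors.values())]

def randomized_order(conditions, participant_id):
    """Shuffle the condition order reproducibly per participant (seeded)."""
    rng = random.Random(participant_id)
    order = conditions[:]
    rng.shuffle(order)
    return order

if __name__ == "__main__":
    conditions = generate_conditions(factors)
    for trial, condition in enumerate(randomized_order(conditions, participant_id=7), 1):
        print(f"trial {trial}: {condition}")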

» Show BibTeX

@ InProceedings{Ehret2024a,
author={Ehret, Jonathan and Bönsch, Andrea and Fels, Janina and
Schlittmeier, Sabine J. and Kuhlen, Torsten W.},
booktitle={2024 IEEE Conference on Virtual Reality and 3D User Interfaces
Abstracts and Workshops (VRW): Workshop "Open Access Tools and Libraries for
Virtual Reality"},
title={StudyFramework: Comfortably Setting up and Conducting
Factorial-Design Studies Using the Unreal Engine},
year={2024}
}





Audiovisual Coherence: Is Embodiment of Background Noise Sources a Necessity?


Jonathan Ehret, Andrea Bönsch, Isabel Sarah Schiller, Carolin Breuer, Lukas Aspöck, Janina Fels, Sabine Janina Schlittmeier, Torsten Wolfgang Kuhlen
To be presented at the Workshop on Virtual Humans and Crowds in Immersive Environments (VHCIE) at IEEE Virtual Reality 2024

Exploring the synergy between visual and acoustic cues in virtual reality (VR) is crucial for elevating user engagement and perceived (social) presence. We present a study exploring the necessity and design impact of background sound source visualizations to guide the design of future soundscapes. To this end, we immersed n = 27 participants using a head-mounted display (HMD) within a virtual seminar room with six virtual peers and a virtual female professor. Participants engaged in a dual-task paradigm involving simultaneously listening to the professor and performing a secondary vibrotactile task, followed by recalling the heard speech content. We compared three types of background sound source visualizations in a within-subject design: no visualization, static visualization, and animated visualization. Participants’ subjective ratings indicate the importance of animated background sound source visualization for an optimal coherent audiovisual representation, particularly when embedding peer-emitted sounds. However, despite this subjective preference, audiovisual coherence did not affect participants’ performance in the dual-task paradigm measuring their listening effort.

» Show BibTeX

@ InProceedings{Ehret2024b,
author={Ehret, Jonathan and Bönsch, Andrea and Schiller, Isabel S. and
Breuer, Carolin and Aspöck, Lukas and Fels, Janina and Schlittmeier, Sabine
J. and Kuhlen, Torsten W.},
booktitle={2024 IEEE Conference on Virtual Reality and 3D User Interfaces
Abstracts and Workshops (VRW): "Workshop on Virtual Humans and Crowds in
Immersive Environments (VHCIE)"},
title={Audiovisual Coherence: Is Embodiment of Background Noise Sources a
Necessity?},
year={2024}
}





Late-Breaking Report: VR-CrowdCraft: Coupling and Advancing Research in Pedestrian Dynamics and Social Virtual Reality


Andrea Bönsch, Maik Boltes, Anna Sieben, Torsten Wolfgang Kuhlen
To be presented at IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2024

VR-CrowdCraft is a newly formed interdisciplinary initiative, dedicated to the convergence and advancement of two distinct yet interconnected research fields: pedestrian dynamics (PD) and social virtual reality (VR). The initiative aims to establish foundational workflows for a systematic integration of PD data obtained from real-life experiments, encompassing scenarios ranging from smaller clusters of approximately ten individuals to larger groups comprising several hundred pedestrians, into immersive virtual environments (IVEs), addressing the following two crucial goals: (1) Advancing pedestrian dynamic analysis and (2) Advancing virtual pedestrian behavior: authentic populated IVEs and new PD experiments. The LBR presentation will focus on goal 1.




Who's next? Integrating Non-Verbal Turn-Taking Cues for Embodied Conversational Agents


Jonathan Ehret, Andrea Bönsch, Patrick Nossol, Cosima A. Ermert, Chinthusa Mohanathasan, Sabine Janina Schlittmeier, Janina Fels, Torsten Wolfgang Kuhlen
ACM International Conference on Intelligent Virtual Agents (IVA ’23)

Taking turns in a conversation is a delicate interplay of various signals, which we as humans can easily decipher. Embodied conversational agents (ECAs) communicating with humans should leverage this ability for smooth and enjoyable conversations. Extensive research has analyzed human turn-taking cues, and attempts have been made to predict turn-taking based on observed cues. These cues vary from prosodic, semantic, and syntactic modulation over adapted gesture and gaze behavior to actively used respiration. However, when generating such behavior for social robots or ECAs, often only single modalities were considered, e.g., gazing. We strive to design a comprehensive system that produces cues for all non-verbal modalities: gestures, gaze, and breathing. The system provides valuable cues without requiring speech content adaptation. We evaluated our system in a VR-based user study with N = 32 participants executing two subsequent tasks. First, we asked them to listen to two ECAs taking turns in several conversations. Second, participants engaged in taking turns with one of the ECAs directly. We examined the system’s usability and the perceived social presence of the ECAs' turn-taking behavior, both with respect to each individual non-verbal modality and their interplay. While we found effects of gesture manipulation in interactions with the ECAs, no effects on social presence were found.
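As a purely conceptual illustration of speech-independent, multimodal cue generation, the short Python sketch below schedules gaze, gesture, and breathing cues around a predicted turn boundary. The cue names and timing offsets are invented for this example and do not reflect the system described in the paper.

from dataclasses import dataclass

@dataclass
class Cue:
    modality: str    # "gaze", "gesture", or "breathing"
    action: str
    offset_s: float  # seconds relative to the predicted end of the agent's turn

# Illustrative yield-turn cue set: the agent signals that the interlocutor may speak.
YIELD_TURN_CUES = [
    Cue("gaze", "look_at_interlocutor", -0.4),
    Cue("gesture", "retract_hands", -0.2),
    Cue("breathing", "audible_exhale", 0.0),
]

def schedule_cues(turn_end_time, cues=YIELD_TURN_CUES):
    """Return (absolute_time, modality, action) triples sorted by time."""
    return sorted((turn_end_time + c.offset_s, c.modality, c.action) for c in cues)

for t, modality, action in schedule_cues(turn_end_time=12.5):
    print(f"{t:5.1f}s  {modality:9s} -> {action}")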




This work is licensed under a Creative Commons Attribution 4.0 International License

» Show BibTeX

@InProceedings{Ehret2023,
author = {Jonathan Ehret and Andrea Bönsch and Patrick Nossol and Cosima A. Ermert and Chinthusa Mohanathasan and Sabine J. Schlittmeier and Janina Fels and Torsten W. Kuhlen},
booktitle = {ACM International Conference on Intelligent Virtual Agents (IVA ’23)},
title = {Who's next? Integrating Non-Verbal Turn-Taking Cues for Embodied Conversational Agents},
year = {2023},
organization = {ACM},
pages = {8},
doi = {10.1145/3570945.3607312},
}





Effect of Head-Mounted Displays on Students’ Acquisition of Surgical Suturing Techniques Compared to an E-Learning and Tutor-Led Course: A Randomized Controlled Trial


Philipp Peters, Martin Lemos, Andrea Bönsch, Mark Ooms, Max Ulbrich, Ashkan Rashad, Felix Krause, Myriam Lipprandt, Torsten Wolfgang Kuhlen, Rainer Röhrig, Frank Hölzle, Behrus Puladi
International Journal of Surgery

Background: Although surgical suturing is one of the most important basic skills, many medical school graduates do not acquire sufficient knowledge of it due to its lack of integration into the curriculum or a shortage of tutors. E-learning approaches attempt to address this issue but still rely on the involvement of tutors. Furthermore, the learning experience and visual-spatial ability appear to play a critical role in surgical skill acquisition. Virtual reality head-mounted displays (HMDs) could address this, but the benefits of immersive and stereoscopic learning of surgical suturing techniques are still unclear.

Material and Methods: In this multi-arm randomized controlled trial, 150 novices participated. Three teaching modalities were compared: an e-learning course (monoscopic), an HMD-based course (stereoscopic, immersive), both self-directed, and a tutor-led course with feedback. Suturing performance was recorded by video camera both before and after course participation (>26 hours of video material) and assessed in a blinded fashion using the OSATS Global Rating Score (GRS). Furthermore, the optical flow of the videos was determined using an algorithm. The number of sutures performed was counted, visual-spatial ability was measured with the mental rotation test (MRT), and courses were assessed with questionnaires.

Results: Students' self-assessment in the HMD-based course was comparable to that of the tutor-led course and significantly better than in the e-learning course (P=0.003). Course suitability was rated best for the tutor-led course (x=4.8), followed by the HMD-based (x=3.6) and e-learning (x=2.5) courses. The median GRS between courses was comparable (P=0.15) at 12.4 (95% CI 10.0–12.7) for the e-learning course, 14.1 (95% CI 13.0-15.0) for the HMD-based course, and 12.7 (95% CI 10.3-14.2) for the tutor-led course. However, the GRS was significantly correlated with the number of sutures performed during the training session (P=0.002), but not with visual-spatial ability (P=0.626). Optical flow (R2=0.15, P<0.001) and the number of sutures performed (R2=0.73, P<0.001) can be used as additional measures to GRS.

Conclusion: The use of HMDs with stereoscopic and immersive video provides advantages in the learning experience and should be preferred over a traditional web application for e-learning. Contrary to expectations, feedback is not necessary for novices to achieve a sufficient level in suturing; only the number of surgical sutures performed during training is a good determinant of competence improvement. Nevertheless, feedback still enhances the learning experience. Therefore, automated assessment as an alternative feedback approach could further improve self-directed learning modalities. As a next step, the data from this study could be used to develop such automated AI-based assessments.

» Show BibTeX

@Article{Peters2023,
author = {Philipp Peters and Martin Lemos and Andrea Bönsch and Mark Ooms and Max Ulbrich and Ashkan Rashad and Felix Krause and Myriam Lipprandt and Torsten Wolfgang Kuhlen and Rainer Röhrig and Frank Hölzle and Behrus Puladi},
journal = {International Journal of Surgery},
title = {Effect of head-mounted displays on students' acquisition of surgical suturing techniques compared to an e-learning and tutor-led course: A randomized controlled trial},
year = {2023},
month = {may},
volume = {Publish Ahead of Print},
creationdate = {2023-05-12T11:00:37},
doi = {10.1097/js9.0000000000000464},
modificationdate = {2023-05-12T11:00:37},
publisher = {Ovid Technologies (Wolters Kluwer Health)},
}





Voice Quality and its Effects on University Students' Listening Effort in a Virtual Seminar Room


Isabel Sarah Schiller, Lukas Aspöck, Carolin Breuer, Jonathan Ehret, Andrea Bönsch, Janina Fels, Torsten Wolfgang Kuhlen, Sabine Janina Schlittmeier
Acoustics 2023, The Journal of the Acoustical Society of America

A teacher’s poor voice quality may increase listening effort in pupils, but it is unclear whether this effect persists in adult listeners. Thus, the goal of this study is to examine the impact of vocal hoarseness on university students' listening effort in a virtual seminar room. An audio-visual immersive virtual reality environment is utilized to simulate a typical seminar room with common background sounds and fellow students represented as wooden mannequins. Participants wear a head-mounted display and are equipped with two controllers to engage in a dual-task paradigm. The primary task is to listen to a virtual professor reading short texts and retain relevant content information to be recalled later. The texts are presented either in a normal or an imitated hoarse voice. In parallel, participants perform a secondary task which is responding to tactile vibration patterns via the controllers. It is hypothesized that listening to the hoarse voice induces listening effort, resulting in more cognitive resources needed for primary task performance while secondary task performance is hindered. Results are presented and discussed in light of students’ cognitive performance and listening challenges in higher education learning environments.

» Show BibTeX

@INPROCEEDINGS{Schiller:977871,
author = {Schiller, Isabel Sarah and Aspöck, Lukas and Breuer,
Carolin and Ehret, Jonathan and Bönsch, Andrea and Fels,
Janina and Kuhlen, Torsten and Schlittmeier, Sabine Janina},
title = {{V}oice Quality and its Effects on University
Students' Listening Effort in a Virtual Seminar Room},
year = {2023},
month = {Dec},
date = {2023-12-04},
organization = {Acoustics 2023, Sydney (Australia), 4
Dec 2023 - 8 Dec 2023},
doi = {10.1121/10.0022982}
}





Advantages of a Training Course for Surgical Planning in Virtual Reality in Oral and Maxillofacial Surgery


Max Ulbrich, Vincent Van den Bosch, Andrea Bönsch, Lennart Gruber, Mark Ooms, Claire Melchior, Ila Motmaen, Caroline Wilpert, Ashkan Rashad, Torsten Wolfgang Kuhlen, Frank Hölzle, Behrus Puladi
JMIR Serious Games

Background: As an integral part of computer-assisted surgery, virtual surgical planning (VSP) leads to significantly better surgery results, such as for oral and maxillofacial reconstruction with microvascular grafts of the fibula or iliac crest. It is performed on a 2D computer desktop (DS) based on preoperative medical imaging. However, in this environment, VSP is associated with shortcomings, such as a time-consuming planning process and the requirement of a learning process. Therefore, a virtual reality (VR)-based VSP application has great potential to reduce or even overcome these shortcomings due to the benefits of visuospatial vision, bimanual interaction, and full immersion. However, the efficacy of such a VR environment has not yet been investigated.

Objective: Does VR offer advantages in the learning process and working speed while providing similarly good results compared to a traditional DS working environment?

Methods: During a training course, novices were taught how to use a software application in a DS environment (3D Slicer) and in a VR environment (Elucis) for the segmentation of fibulae and os coxae (n = 156), and they were asked to carry out the maneuvers as accurately and quickly as possible. The individual learning processes in both environments were compared using objective criteria (time and segmentation performance) and self-reported questionnaires. The models resulting from the segmentation were compared mathematically (Hausdorff distance and Dice coefficient) and evaluated by two experienced radiologists in a blinded manner (score).


Conclusions: The more rapid learning process and the ability to work faster in the VR environment could save time and reduce the VSP workload, providing certain advantages over the DS environment.

» Show BibTeX

@article{Ulbrich2022,
title={Advantages of a Training Course for Surgical Planning in Virtual Reality in Oral and Maxillofacial Surgery},
author={Ulbrich, M. and Van den Bosch, V. and Bönsch, A. and Gruber, L. J. and Ooms, M. and Melchior, C. and Motmaen, I. and Wilpert, C. and Rashad, A. and Kuhlen, T. W. and Hölzle, F. and Puladi, B.},
journal={JMIR Serious Games},
volume={28/11/2022:40541 (forthcoming/in press)},
year={2022},
publisher={JMIR Publications Inc., Toronto, Canada}
}





Poster: Enhancing Proxy Localization in World in Miniatures Focusing on Virtual Agents


Andrea Bönsch, Radu-Andrei Coanda, Torsten Wolfgang Kuhlen
Virtuelle und Erweiterte Realität, Workshop der GI-Fachgruppe VR/AR (2023)

Virtual agents (VAs) are increasingly utilized in large-scale architectural immersive virtual environments (LAIVEs) to enhance user engagement and presence. However, challenges persist in effectively localizing these VAs for user interactions and optimally orchestrating them for an interactive experience. To address these issues, we propose to extend world in miniatures (WIMs) through different localization and manipulation techniques as these 3D miniature scene replicas embedded within LAIVEs have already demonstrated effectiveness for wayfinding, navigation, and object manipulation. The contribution of our ongoing research is thus the enhancement of manipulation and localization capabilities within WIMs, focusing on the use case of VAs.

» Show BibTeX

@InProceedings{Boensch2023c,
author = {Andrea Bönsch and Radu-Andrei Coanda and Torsten W. Kuhlen},
booktitle = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 14.
{W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
title = {Enhancing Proxy Localization in World in
Miniatures Focusing on Virtual Agents},
year = {2023},
organization = {Gesellschaft für Informatik e.V.},
doi = {10.18420/vrar2023_3381}
}





Poster: Whom Do You Follow? Pedestrian Flows Constraining the User’s Navigation during Scene Exploration


Andrea Bönsch, Lukas B. Zimmermann, Jonathan Ehret, Torsten Wolfgang Kuhlen
23rd ACM International Conference on Intelligent Virtual Agents

In this work-in-progress, we strive to combine two wayfinding techniques supporting users in gaining scene knowledge, namely (i) the River Analogy, in which users are considered as boats automatically floating down predefined rivers, e.g., streets in an urban scene, and (ii) virtual pedestrian flows as social cues indirectly guiding users through the scene. In our combined approach, the pedestrian flows function as rivers. To navigate through the scene, users leash themselves to a pedestrian of choice, considered as boat, and are dragged along the flow towards an area of interest. Upon arrival, users can detach themselves to freely explore the site without navigational constraints. We briefly outline our approach, and discuss the results of an initial study focusing on various leashing visualizations.

» Show BibTeX

@InProceedings{Boensch2023b,
author = {Andrea Bönsch and Lukas B. Zimmermann and Jonathan Ehret and Torsten W. Kuhlen},
booktitle = {ACM International Conference on Intelligent Virtual Agents (IVA ’23)},
title = {Whom Do You Follow? Pedestrian Flows Constraining the User’s Navigation during Scene Exploration},
year = {2023},
organization = {ACM},
pages = {3},
doi = {10.1145/3570945.3607350},
}





Poster: Where Do They Go? Overhearing Conversing Pedestrian Groups during Scene Exploration


Andrea Bönsch, Till Sittart, Jonathan Ehret, Torsten Wolfgang Kuhlen
23rd ACM International Conference on Intelligent Virtual Agents

On entering an unknown immersive virtual environment, a user’s first task is gaining knowledge about the respective scene, termed scene exploration. While many techniques for aided scene exploration exist, such as virtual guides, or maps, unaided wayfinding through pedestrians-as-cues is still in its infancy. We contribute to this research by indirectly guiding users through pedestrian groups conversing about their target location. A user who overhears the conversation without being a direct addressee can consciously decide whether to follow the group to reach an unseen point of interest. We outline our approach and give insights into the results of a first feasibility study in which we compared our new approach to non-talkative groups and groups conversing about random topics.

» Show BibTeX

@InProceedings{Boensch2023a,
author = {Andrea Bönsch and Till Sittart and Jonathan Ehret and Torsten W. Kuhlen},
booktitle = {ACM International Conference on Intelligent Virtual Agents (IVA ’23)},
title = {Where Do They Go? Overhearing Conversing Pedestrian Groups during Scene Exploration},
year = {2023},
pages = {3},
publisher = {ACM},
doi = {10.1145/3570945.3607351},
}





Poster: Hoarseness among university professors and how it can influence students’ listening impression: an audio-visual immersive VR study


Isabel Sarah Schiller, Lukas Aspöck, Carolin Breuer, Jonathan Ehret, Andrea Bönsch
AUDICTIVE Conference 2023

For university students, following a lecture can be challenging when room acoustic conditions are poor or when their professor suffers from a voice disorder. Related to the high vocal demands of teaching, university professors develop voice disorders quite frequently. The key symptom is hoarseness. The aim of this study is to investigate the effect of hoarseness on university students’ subjective listening effort and listening impression using audio-visual immersive virtual reality (VR) including a real-time room simulation of a typical seminar room. Equipped with a head-mounted display, participants are immersed in the virtual seminar room, with typical binaural background sounds, where they perform a listening task. This task involves comprehending and recalling information from text, read aloud by a female virtual professor positioned in front of the seminar room. Texts are presented in two experimental blocks, one of them read aloud in a normal (modal) voice, the other one in a hoarse voice. After each block, participants fill out a questionnaire to evaluate their perceived listening effort and overall listening impression under the respective voice quality, as well as the human-likeness of, and preferences towards, the virtual professor. Results are presented and discussed regarding voice quality design for virtual tutors and potential implications for students’ motivation and performance in academic learning spaces.

» Show BibTeX

@InProceedings{Schiller2023Audictive,
author = {Isabel S. Schiller and Lukas Aspöck and Carolin Breuer and Jonathan Ehret and Andrea Bönsch},
booktitle = {Proceedings of the 1st AUDICTIVE Conference},
title = {Hoarseness among university professors and how it can
influence students’ listening impression: an audio-visual immersive VR
study},
year = {2023},
pages = {134-137},
doi = { 10.18154/RWTH-2023-08885},
}





Does a Talker's Voice Quality Affect University Students' Listening Effort in a Virtual Seminar Room?


Isabel Sarah Schiller, Andrea Bönsch, Jonathan Ehret, Carolin Breuer, Lukas Aspöck
Forum Acusticum 2023

A university professor's voice quality can either facilitate or impede effective listening in students. In this study, we investigated the effect of hoarseness on university students’ listening effort in seminar rooms using audio-visual virtual reality (VR). During the experiment, participants were immersed in a virtual seminar room with typical background sounds and performed a dual-task paradigm involving listening to and answering questions about short stories, narrated by a female virtual professor, while responding to tactile vibration patterns. In a within-subject design, the professor's voice quality was varied between normal and hoarse. Listening effort was assessed based on performance and response time measures in the dual-task paradigm and participants’ subjective evaluation. It was hypothesized that listening to a hoarse voice leads to higher listening effort. While the analysis is still ongoing, our preliminary results show that listening to the hoarse voice significantly increased perceived listening effort. In contrast, the effect of voice quality was not significant in the dual-task paradigm. These findings indicate that, even if students' performance remains unchanged, listening to hoarse university professors may still require more effort.

» Show BibTeX

@INBOOK{Schiller:977866,
author = {Schiller, Isabel Sarah and Bönsch, Andrea and Ehret,
Jonathan and Breuer, Carolin and Aspöck, Lukas},
title = {{D}oes a talker's voice quality affect university
students' listening effort in a virtual seminar room?},
address = {Turin},
publisher = {European Acoustics Association},
pages = {2813-2816},
year = {2024},
booktitle = {Proceedings of the 10th Convention of
the European Acoustics Association :
Forum Acusticum 2023. Politecnico di
Torino, Torino, Italy, September 11 -
15, 2023 / Editors: Arianna Astolfi,
Francesco Asdrudali, Louena Shtrepi},
month = {Sep},
date = {2023-09-11},
organization = {10. Convention of the European
Acoustics Association : Forum
Acusticum, Turin (Italy), 11 Sep 2023 -
15 Sep 2023},
doi = {10.61782/fa.2023.0320},
}





Quantitative Mapping of Keratin Networks in 3D


Reinhard Windoffer, Nicole Schwarz, Sungjun Yoon, Teodora Piskova, Michael Scholkemper, Johannes Stegmaier, Andrea Bönsch, Jacopo Di Russo, Rudolf E. Leube
eLife

Mechanobiology requires precise quantitative information on processes taking place in specific 3D microenvironments. Connecting the abundance of microscopical, molecular, biochemical, and cell mechanical data with defined topologies has turned out to be extremely difficult. Establishing such structural and functional 3D maps needed for biophysical modeling is a particular challenge for the cytoskeleton, which consists of long and interwoven filamentous polymers coordinating subcellular processes and interactions of cells with their environment. To date, useful tools are available for the segmentation and modeling of actin filaments and microtubules but comprehensive tools for the mapping of intermediate filament organization are still lacking. In this work, we describe a workflow to model and examine the complete 3D arrangement of the keratin intermediate filament cytoskeleton in canine, murine, and human epithelial cells both, in vitro and in vivo. Numerical models are derived from confocal Airyscan high-resolution 3D imaging of fluorescence-tagged keratin filaments. They are interrogated and annotated at different length scales using different modes of visualization including immersive virtual reality. In this way, information is provided on network organization at the subcellular level including mesh arrangement, density, and isotropic configuration as well as details on filament morphology such as bundling, curvature, and orientation. We show that the comparison of these parameters helps to identify, in quantitative terms, similarities and differences of keratin network organization in epithelial cell types defining subcellular domains, notably basal, apical, lateral, and perinuclear systems. The described approach and the presented data are pivotal for generating mechanobiological models that can be experimentally tested.
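To give a rough idea of the kind of per-filament statistics such a numerical network representation enables, the following Python sketch computes segment lengths and end-to-end orientations from polyline vertex coordinates. The toy data layout is an assumption for illustration; it is not the format produced by the published workflow.

import numpy as np

# Assumed toy representation: each filament segment is a polyline of XYZ vertices (in µm).
segments = [
    np.array([[0.0, 0.0, 0.0], [0.1, 0.05, 0.0], [0.25, 0.1, 0.02]]),
    np.array([[1.0, 0.2, 0.3], [1.1, 0.2, 0.35]]),
]

def segment_length(vertices):
    """Total polyline length as the sum of consecutive vertex distances."""
    return float(np.linalg.norm(np.diff(vertices, axis=0), axis=1).sum())

def segment_orientation(vertices):
    """Unit vector from first to last vertex (end-to-end orientation)."""
    v = vertices[-1] - vertices[0]
    return v / np.linalg.norm(v)

lengths = [segment_length(s) for s in segments]
orientations = [segment_orientation(s) for s in segments]
print("mean segment length:", np.mean(lengths))
print("orientations:", orientations)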

» Show BibTeX

@article {Windoffer2022,
article_type = {journal},
title = {{Quantitative Mapping of Keratin Networks in 3D}},
author = {Windoffer, Reinhard and Schwarz, Nicole and Yoon, Sungjun and Piskova, Teodora and Scholkemper, Michael and Stegmaier, Johannes and Bönsch, Andrea and Di Russo, Jacopo and Leube, Rudolf},
editor = {Coulombe, Pierre},
volume = 11,
year = 2022,
month = {feb},
pub_date = {2022-02-18},
pages = {e75894},
citation = {eLife 2022;11:e75894},
doi = {10.7554/eLife.75894},
url = {https://doi.org/10.7554/eLife.75894},
journal = {eLife},
issn = {2050-084X},
publisher = {eLife Sciences Publications, Ltd},
}





Late-Breaking Report: Natural Turn-Taking with Embodied Conversational Agents


Jonathan Ehret, Andrea Bönsch, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2022

Adding embodied conversational agents (ECAs) to immersive virtual environments (IVEs) becomes relevant in various application scenarios, for example, conversational systems. For successful interactions with these ECAs, they have to behave naturally, i.e. in the way a user would expect a real human to behave. Teaming up with acousticians and psychologists, we strive to explore turn-taking in VR-based interactions between either two ECAs or an ECA and a human user.




Late-Breaking Report: An Embodied Conversational Agent Supporting Scene Exploration by Switching between Guiding and Accompanying


Andrea Bönsch, Daniel Rupp, Jonathan Ehret, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2022

In this late-breaking report, we first motivate the requirement of an embodied conversational agent (ECA) who combines characteristics of a virtual tour guide and a knowledgeable companion in order to allow users an interactive and adaptable, yet structured, exploration of an unknown immersive, architectural environment. Second, we roughly outline our proposed ECA’s behavioral design, followed by a teaser of the planned user study.




Do Prosody and Embodiment Influence the Perceived Naturalness of Conversational Agents' Speech?


Jonathan Ehret, Andrea Bönsch, Lukas Aspöck, Christine T. Röhr, Stefan Baumann, Martine Grice, Janina Fels, Torsten Wolfgang Kuhlen
Transactions on Applied Perception (TAP)
presented at ACM Symposium on Applied Perception (SAP)

For conversational agents’ speech, all possible sentences have to be either prerecorded by voice actors or the required utterances can be synthesized. While synthesizing speech is more flexible and economic in production, it also potentially reduces the perceived naturalness of the agents amongst others due to mistakes at various linguistic levels. In our paper, we are interested in the impact of adequate and inadequate prosody, here particularly in terms of accent placement, on the perceived naturalness and aliveness of the agents. We compare (i) inadequate prosody, as generated by off-the-shelf text-to-speech (TTS) engines with synthetic output, (ii) the same inadequate prosody imitated by trained human speakers and (iii) adequate prosody produced by those speakers. The speech was presented either as audio-only or by embodied, anthropomorphic agents, to investigate the potential masking effect by a simultaneous visual representation of those virtual agents. To this end, we conducted an online study with 40 participants listening to four different dialogues each presented in the three Speech levels and the two Embodiment levels. Results confirmed that adequate prosody in human speech is perceived as more natural (and the agents are perceived as more alive) than inadequate prosody in both human (ii) and synthetic speech (i). Thus, it is not sufficient to just use a human voice for an agent’s speech to be perceived as natural - it is decisive whether the prosodic realisation is adequate or not. Furthermore, and surprisingly, we found no masking effect by speaker embodiment, since neither a human voice with inadequate prosody nor a synthetic voice was judged as more natural, when a virtual agent was visible compared to the audio-only condition. On the contrary, the human voice was even judged as less “alive” when accompanied by a virtual agent. In sum, our results emphasize on the one hand the importance of adequate prosody for perceived naturalness, especially in terms of accents being placed on important words in the phrase, while showing on the other hand that the embodiment of virtual agents plays a minor role in naturalness ratings of voices.

» Show BibTeX

@article{Ehret2021a,
author = {Ehret, Jonathan and B\"{o}nsch, Andrea and Asp\"{o}ck, Lukas and R\"{o}hr, Christine T. and Baumann, Stefan and Grice, Martine and Fels, Janina and Kuhlen, Torsten W.},
title = {Do Prosody and Embodiment Influence the Perceived Naturalness of Conversational Agents’ Speech?},
journal = {ACM transactions on applied perception},
year = {2021},
issue_date = {October 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {18},
number = {4},
articleno = {21},
issn = {1544-3558},
url = {https://doi.org/10.1145/3486580},
doi = {10.1145/3486580},
numpages = {15},
keywords = {speech, audio, accentuation, prosody, text-to-speech, Embodied conversational agents (ECAs), virtual acoustics, embodiment}
}





Being Guided or Having Exploratory Freedom: User Preferences of a Virtual Agent’s Behavior in a Museum


Andrea Bönsch, David Hashem, Jonathan Ehret, Torsten Wolfgang Kuhlen
21st ACM International Conference on Intelligent Virtual Agents 2021 (IVA'21)

A virtual guide in an immersive virtual environment allows users a structured experience without missing critical information. However, although being in an interactive medium, the user is only a passive listener, while the embodied conversational agent (ECA) fulfills the active roles of wayfinding and conveying knowledge. Thus, we investigated for the use case of a virtual museum, whether users prefer a virtual guide or a free exploration accompanied by an ECA who imparts the same information compared to the guide. Results of a small within-subjects study with a head-mounted display are given and discussed, resulting in the idea of combining benefits of both conditions for a higher user acceptance. Furthermore, the study indicated the feasibility of the carefully designed scene and ECA’s appearance.

We also submitted a GALA video entitled "An Introduction to the World of Internet Memes by Curator Kate: Guiding or Accompanying Visitors?" by D. Hashem, A. Bönsch, J. Ehret, and T.W. Kuhlen, showcasing our application.
IVA 2021 GALA Audience Award!

» Show BibTeX

@inproceedings{Boensch2021b,
author = {B\"{o}nsch, Andrea and Hashem, David and Ehret, Jonathan and Kuhlen, Torsten W.},
title = {{Being Guided or Having Exploratory Freedom: User Preferences of a Virtual Agent's Behavior in a Museum}},
year = {2021},
isbn = {9781450386197},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3472306.3478339},
doi = {10.1145/3472306.3478339},
booktitle = {{Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents}},
pages = {33–40},
numpages = {8},
keywords = {virtual agents, enjoyment, guiding, virtual reality, free exploration, museum, embodied conversational agents},
location = {Virtual Event, Japan},
series = {IVA '21}
}





Poster: Indirect User Guidance by Pedestrians in Virtual Environments


Andrea Bönsch, Katharina Güths, Jonathan Ehret, Torsten Wolfgang Kuhlen
ICAT-EGVE 2021 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments

Scene exploration allows users to acquire scene knowledge on entering an unknown virtual environment. To support users in this endeavor, aided wayfinding strategies intentionally influence the user’s wayfinding decisions through, e.g., signs or virtual guides.

Our focus, however, is an unaided wayfinding strategy, in which we use virtual pedestrians as social cues to indirectly and subtly guide users through virtual environments during scene exploration. We shortly outline the required pedestrians’ behavior and results of a first feasibility study indicating the potential of the general approach.

» Show BibTeX

@inproceedings {Boensch2021a,
booktitle = {ICAT-EGVE 2021 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments - Posters and Demos},
editor = {Maiero, Jens and Weier, Martin and Zielasko, Daniel},
title = {{Indirect User Guidance by Pedestrians in Virtual Environments}},
author = {Bönsch, Andrea and Güths, Katharina and Ehret, Jonathan and Kuhlen, Torsten W.},
year = {2021},
publisher = {The Eurographics Association},
ISSN = {1727-530X},
ISBN = {978-3-03868-159-5},
DOI = {10.2312/egve.20211336}
}





Poster: Prosodic and Visual Naturalness of Dialogs Presented by Conversational Virtual Agents


Lukas Aspöck, Jonathan Ehret, Stefan Baumann, Andrea Bönsch, Christine T. Röhr, Martine Grice, Torsten Wolfgang Kuhlen, Janina Fels
DAGA 2021 - 47. Jahrestagung für Akustik

Conversational virtual agents, with and without visual representation, are becoming more present in our daily life, e.g. as intelligent virtual assistants on smart devices. To investigate the naturalness of both the speech and the nonverbal behavior of embodied conversational agents (ECAs), an interdisciplinary research group was initiated, consisting of phoneticians, computer scientists, and acoustic engineers. For a web-based pilot experiment, simple dialogs between a male and a female speaker were created, with three prosodic conditions. For condition 1, the dialog was created synthetically using a text-to-speech engine. In the other two prosodic conditions (2,3) human speakers were recorded with 2) the erroneous accentuation of the text-to-speech synthesis of condition 1, and 3) with a natural accentuation. Face tracking data of the recorded speakers was additionally obtained and applied as input data for the facial animation of the ECAs. Based on the recorded data, auralizations in a virtual acoustic environment were generated and presented as binaural signals to the participants either in combination with the visual representation of the ECAs as short videos or without any visual feedback. A preliminary evaluation of the participants’ responses to questions related to naturalness, presence, and preference is presented in this work.

» Show BibTeX

@inproceedings{Aspoeck2021,
author = {Asp\"{o}ck, Lukas and Ehret, Jonathan and Baumann, Stefan and B\"{o}nsch, Andrea and R\"{o}hr, Christine T. and Grice, Martine and Kuhlen, Torsten W. and Fels, Janina},
title = {Prosodic and Visual Naturalness of Dialogs Presented by Conversational Virtual Agents},
year = {2021},
note = {Hybride Konferenz},
month = {Aug},
date = {2021-08-15},
organization = {47. Jahrestagung für Akustik, Wien (Austria), 15 Aug 2021 - 18 Aug 2021},
url = {https://vr.rwth-aachen.de/publication/02207/}
}





Talk: Numerical Analysis of Keratin Networks in Selected Cell Types


Reinhard Windoffer, Nicole Schwarz, Sungjun Yoon, Teodora Piskova, Michael Scholkemper, Michael Thomas Schaub, Michael Anhuth, Andrea Bönsch, Till Petersen-Krauß, Johannes Stegmaier, Jacopo Di Russo, Rudolf E. Leube
Kármán Conference: European Meeting on Intermediate Filaments

Keratin intermediate filaments make up the main intracellular cytoskeletal network of epithelia and provide, together with their associated desmosomal cell-cell adhesions, mechanical resilience. Remarkable differences in keratin network topology have been noted in different epithelial cell types ranging from a well-defined subapical network in enterocytes to pancytoplasmic networks in keratinocytes. In addition, functional states and biophysical, biochemical, and microbial stress have been shown to affect network organization. To gain insight into the importance of network topology for cellular function and resilience, quantification of 3D keratin network topology is needed.

We used Airyscan superresolution microscopy to record image stacks with an x/y resolution of 120 nm and axial resolution of 350 nm in canine kidney-derived MDCK cells, human epidermal keratinocytes, and murine retinal pigment epithelium (RPE) cells. Established segmentation algorithms (TSOAX) were implemented in combination with additional analysis tools to create a numerical representation of the keratin network topology in the different cell types. The resulting representation contains the XYZ position of all filament segment vertices together with data on filament thickness and information on the connecting nodes. This allows the statistical analysis of network parameters such as length, density, orientation, and mesh size. Furthermore, the network can be rendered in standard 3D software, which makes it accessible at hitherto unattained quality in 3D. Comparison of the three analyzed cell types reveals significant numerical differences in various parameters.



Listening to, and remembering conversations between two talkers: Cognitive research using embodied conversational agents in audiovisual virtual environments


Janina Fels, Cosima A. Ermert, Jonathan Ehret, Chinthusa Mohanathasan, Andrea Bönsch, Torsten Wolfgang Kuhlen, Sabine Janina Schlittmeier
DAGA 2021 - 47. Jahrestagung für Akustik
Fortschritte der Akustik - DAGA 2021
Published by: Deutsche Gesellschaft für Akustik e.V. (DEGA), Berlin, 2021
Scientific editors: Holger Waubke and Peter Balazs
ISBN: 978-3-939296-18-8
Online publication; access credentials available on request from tagungen@dega-akustik.de

In the AUDICTIVE project about listening to, and remembering the content of conversations between two talkers we aim to investigate the combined effects of potentially performance-relevant but scarcely addressed audiovisual cues on memory and comprehension for running speech. Our overarching methodological approach is to develop an audiovisual Virtual Reality testing environment that includes embodied Virtual Agents (VAs). This testing environment will be used in a series of experiments to research the basic aspects of audiovisual cognitive performance in a close(r)-to-real-life setting. We aim to provide insights into the contribution of acoustical and visual cues on the cognitive performance, user experience, and presence as well as quality and vibrancy of VR applications, especially those with a social interaction focus. We will study the effects of variations in the audiovisual ’realism’ of virtual environments on memory and comprehension of multi-talker conversations and investigate how fidelity characteristics in audiovisual virtual environments contribute to the realism and liveliness of social VR scenarios with embodied VAs. Additionally, we will study the suitability of text memory, comprehension measures, and subjective judgments to assess the quality of experience of a VR environment. First steps of the project with respect to the general idea of AUDICTIVE are presented.

» Show BibTeX

@inproceedings{Fels2021,
author = {Fels, Janina and Ermert, Cosima A. and Ehret, Jonathan and Mohanathasan, Chinthusa and B\"{o}nsch, Andrea and Kuhlen, Torsten W. and Schlittmeier, Sabine J.},
title = {Listening to, and Remembering Conversations between Two Talkers: Cognitive Research using Embodied Conversational Agents in Audiovisual Virtual Environments},
address = {Berlin},
publisher = {Deutsche Gesellschaft für Akustik e.V. (DEGA)},
pages = {1328-1331},
year = {2021},
booktitle = {[Fortschritte der Akustik - DAGA 2021, DAGA 2021, 2021-08-15 - 2021-08-18, Wien, Austria]},
month = {Aug},
date = {2021-08-15},
organization = {47. Jahrestagung für Akustik, Wien (Austria), 15 Aug 2021 - 18 Aug 2021},
url = {https://vr.rwth-aachen.de/publication/02206/}
}





Talk: Speech Source Directivity for Embodied Conversational Agents


Jonathan Ehret, Lukas Aspöck, Andrea Bönsch, Janina Fels, Torsten Wolfgang Kuhlen
DAGA 2021 - 47. Jahrestagung für Akustik

Embodied conversational agents (ECAs) are computer-controlled characters who communicate with a human using natural language. Being represented as virtual humans, ECAs are often utilized in domains such as training, therapy, or guided tours while being embedded in an immersive virtual environment. Plausible speech sound is thus desirable to improve the overall plausibility of these virtual-reality-based simulations. In an audiovisual VR experiment, we investigated the impact of directional radiation of the produced speech on the perceived naturalism. Furthermore, we examined how directivity filters influence the perceived social presence of participants in interactions with an ECA. To this end, we varied the source directivity between 1) being omnidirectional, 2) featuring the average directionality of a human speaker, and 3) dynamically adapting to the currently produced phonemes. Our results indicate that directionality of speech is noticed and rated as more natural. However, no significant change of perceived naturalness could be found when adding dynamic, phoneme-dependent directivity. Furthermore, no significant differences in social presence were measurable between any of the three conditions.

» Show BibTeX

@misc{Ehret2021b,
author = {Ehret, Jonathan and Aspöck, Lukas and B\"{o}nsch, Andrea and Fels, Janina and Kuhlen, Torsten W.},
title = {Speech Source Directivity for Embodied Conversational Agents},
publisher = {IHTA, Institute for Hearing Technology and Acoustics},
year = {2021},
note = {Hybride Konferenz},
month = {Aug},
date = {2021-08-15},
organization = {47. Jahrestagung für Akustik, Wien (Austria), 15 Aug 2021 - 18 Aug 2021},
subtyp = {Video},
url = {https://vr.rwth-aachen.de/publication/02205/}
}





Inferring a User’s Intent on Joining or Passing by Social Groups


Andrea Bönsch, Alexander R. Bluhm, Jonathan Ehret, Torsten Wolfgang Kuhlen
20th ACM International Conference on Intelligent Virtual Agents 2020 (IVA'20)

Modeling the interactions between users and social groups of virtual agents (VAs) is vital in many virtual-reality-based applications. However, little research on group encounters has been conducted so far. We intend to close this gap by focusing on the distinction between joining and passing by a group. To enhance the interactive capacity of VAs in these situations, knowing the user’s objective is required to show reasonable reactions. To this end, we propose a classification scheme which infers the user’s intent based on social cues such as proxemics, gazing, and orientation, followed by triggering believable, non-verbal actions on the VAs. We tested our approach in a pilot study with overall promising results and discuss possible improvements for further studies.
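A simple rule-based variant of such a classification scheme could look like the Python sketch below, which infers the intent from distance, gaze angle, body orientation, and approach speed. The features and thresholds are illustrative assumptions, not the scheme evaluated in the pilot study.

from dataclasses import dataclass

@dataclass
class UserState:
    distance_to_group: float   # meters to the group's center
    gaze_angle: float          # degrees between gaze direction and group center
    body_angle: float          # degrees between body orientation and group center
    approach_speed: float      # m/s towards the group (negative = moving away)

def infer_intent(state: UserState) -> str:
    """Classify whether the user intends to join or pass by the group.

    Purely illustrative thresholds: joining is assumed when the user slows
    down close to the group while gaze and body are oriented towards it.
    """
    oriented_towards = state.gaze_angle < 30 and state.body_angle < 45
    slowing_near_group = state.distance_to_group < 2.0 and state.approach_speed < 0.5
    if oriented_towards and slowing_near_group:
        return "join"
    return "pass_by"

print(infer_intent(UserState(1.5, 12.0, 20.0, 0.3)))   # -> join
print(infer_intent(UserState(3.5, 70.0, 80.0, 1.2)))   # -> pass_by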

» Show BibTeX

@inproceedings{10.1145/3383652.3423862,
author = {B\"{o}nsch, Andrea and Bluhm, Alexander R. and Ehret, Jonathan and Kuhlen, Torsten W.},
title = {Inferring a User's Intent on Joining or Passing by Social Groups},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423862},
doi = {10.1145/3383652.3423862},
abstract = {Modeling the interactions between users and social groups of virtual agents (VAs) is vital in many virtual-reality-based applications. However, only little research on group encounters has been conducted yet. We intend to close this gap by focusing on the distinction between joining and passing-by a group. To enhance the interactive capacity of VAs in these situations, knowing the user's objective is required to show reasonable reactions. To this end, we propose a classification scheme which infers the user's intent based on social cues such as proxemics, gazing and orientation, followed by triggering believable, non-verbal actions on the VAs. We tested our approach in a pilot study with overall promising results and discuss possible improvements for further studies.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {10},
numpages = {8},
keywords = {virtual agents, joining a group, social groups, virtual reality},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}





Evaluating the Influence of Phoneme-Dependent Dynamic Speaker Directivity of Embodied Conversational Agents’ Speech


Jonathan Ehret, Jonas Stienen, Chris Brozdowski, Andrea Bönsch, Irene Mittelberg, Michael Vorländer, Torsten Wolfgang Kuhlen
20th ACM International Conference on Intelligent Virtual Agents 2020 (IVA'20)
pubimg

Generating natural embodied conversational agents within virtual spaces crucially depends on speech sounds and their directionality. In this work, we simulated directional filters to not only add directionality, but also directionally adapt each phoneme. We therefore mimic reality where changing mouth shapes have an influence on the directional propagation of sound. We conducted a study (n = 32) evaluating naturalism ratings, preference and distinguishability of omnidirectional speech auralization compared to static and dynamic, phoneme-dependent directivities. The results indicated that participants cannot distinguish dynamic from static directivity. Furthermore, participants’ preference ratings aligned with their naturalism ratings. There was no unanimity, however, with regards to which auralization is the most natural.
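As a purely illustrative sketch (not the auralization pipeline used in the study), the snippet below shows the basic idea of phoneme-dependent directivity: pick a directivity pattern per phoneme class and attenuate the speech signal depending on the listener's angle relative to the agent's head. The phoneme classes and gain values are invented placeholders; the actual study applies directional filters, i.e., frequency-dependent filtering, rather than a single broadband gain.

```python
import numpy as np

# Hypothetical directivity table: per phoneme class, linear gain at 0°, 90°, 180°
# relative to the agent's facing direction (placeholder values, not measured data).
DIRECTIVITY = {
    "open_vowel": np.array([1.00, 0.70, 0.40]),   # e.g., /a/ radiates broadly
    "fricative":  np.array([1.00, 0.45, 0.15]),   # e.g., /s/ is more directional
    "neutral":    np.array([1.00, 0.60, 0.30]),
}
ANGLES = np.array([0.0, 90.0, 180.0])  # degrees

def directional_gain(phoneme_class: str, listener_angle_deg: float) -> float:
    """Interpolate a broadband gain for the current phoneme and listener angle."""
    gains = DIRECTIVITY.get(phoneme_class, DIRECTIVITY["neutral"])
    angle = abs(listener_angle_deg) % 360.0
    angle = 360.0 - angle if angle > 180.0 else angle  # symmetry around the front axis
    return float(np.interp(angle, ANGLES, gains))

def auralize(block: np.ndarray, phoneme_class: str, listener_angle_deg: float) -> np.ndarray:
    """Apply the (static-per-block) directional gain to one audio block."""
    return block * directional_gain(phoneme_class, listener_angle_deg)

# Example: a fricative heard from 120° off-axis is strongly attenuated.
print(directional_gain("fricative", 120.0))
```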

» Show Videos
» Show BibTeX

@inproceedings{10.1145/3383652.3423863,
author = {Ehret, Jonathan and Stienen, Jonas and Brozdowski, Chris and B\"{o}nsch, Andrea and Mittelberg, Irene and Vorl\"{a}nder, Michael and Kuhlen, Torsten W.},
title = {Evaluating the Influence of Phoneme-Dependent Dynamic Speaker Directivity of Embodied Conversational Agents' Speech},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423863},
doi = {10.1145/3383652.3423863},
abstract = {Generating natural embodied conversational agents within virtual spaces crucially depends on speech sounds and their directionality. In this work, we simulated directional filters to not only add directionality, but also directionally adapt each phoneme. We therefore mimic reality where changing mouth shapes have an influence on the directional propagation of sound. We conducted a study (n = 32) evaluating naturalism ratings, preference and distinguishability of omnidirectional speech auralization compared to static and dynamic, phoneme-dependent directivities. The results indicated that participants cannot distinguish dynamic from static directivity. Furthermore, participants' preference ratings aligned with their naturalism ratings. There was no unanimity, however, with regards to which auralization is the most natural.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {17},
numpages = {8},
keywords = {phoneme-dependent directivity, directional 3D sound, speech, embodied conversational agents, virtual acoustics},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}





The Impact of a Virtual Agent’s Non-Verbal Emotional Expression on a User’s Personal Space Preferences


Andrea Bönsch, Sina Radke, Jonathan Ehret, Ute Habel, Torsten Wolfgang Kuhlen
20th ACM International Conference on Intelligent Virtual Agents 2020 (IVA'20)
pubimg

Virtual-reality-based interactions with virtual agents (VAs) are likely subject to similar influences as human-human interactions. In either real or virtual social interactions, interactants try to maintain their personal space (PS), a ubiquitous, situative, flexible safety zone. Building upon larger PS preferences to humans and VAs with angry facial expressions, we extend the investigations to whole-body emotional expressions. In two immersive settings, HMD and CAVE, 66 males were approached by an either happy, angry, or neutral male VA. Subjects preferred a larger PS to the angry VA when being able to stop him at their convenience (Sample task), replicating previous findings, and when being able to actively avoid him (Pass By task). In the latter task, we also observed larger distances in the CAVE than in the HMD.

» Show Videos
» Show BibTeX

@inproceedings{10.1145/3383652.3423888,
author = {B\"{o}nsch, Andrea and Radke, Sina and Ehret, Jonathan and Habel, Ute and Kuhlen, Torsten W.},
title = {The Impact of a Virtual Agent's Non-Verbal Emotional Expression on a User's Personal Space Preferences},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423888},
doi = {10.1145/3383652.3423888},
abstract = {Virtual-reality-based interactions with virtual agents (VAs) are likely subject to similar influences as human-human interactions. In either real or virtual social interactions, interactants try to maintain their personal space (PS), a ubiquitous, situative, flexible safety zone. Building upon larger PS preferences to humans and VAs with angry facial expressions, we extend the investigations to whole-body emotional expressions. In two immersive settings (HMD and CAVE), 66 males were approached by an either happy, angry, or neutral male VA. Subjects preferred a larger PS to the angry VA when being able to stop him at their convenience (Sample task), replicating previous findings, and when being able to actively avoid him (Pass By task). In the latter task, we also observed larger distances in the CAVE than in the HMD.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {12},
numpages = {8},
keywords = {personal space, virtual reality, emotions, virtual agents},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}





The 19 Unifying Questionnaire Constructs of Artificial Social Agents: An IVA Community Analysis


Siska Fitrianie, Merijn Bruijnes, Deborah Richards, Andrea Bönsch, Willem-Paul Brinkman
20th ACM International Conference on Intelligent Virtual Agents 2020 (IVA'20)
pubimg

In this paper, we report on the multi-year Intelligent Virtual Agents (IVA) community effort, involving more than 80 researchers worldwide, researching the IVA community's interests and practices in evaluating human interaction with an artificial social agent (ASA). The effort is driven by previous IVA workshops and plenary IVA discussions related to the methodological crisis on the evaluation of ASAs. A previous literature review showed a continuous practice of creating new questionnaires instead of reusing validated questionnaires. We address this issue by examining questionnaire measurement constructs used in empirical studies published at the IVA conference between 2013 and 2018. We identified 189 constructs used in 89 questionnaires that are reported across 81 studies. Although these constructs have different names, they often measure the same thing. In this paper, we therefore present a unifying set of 19 constructs that captures more than 80% of the 189 constructs initially identified. We established this set in two steps. First, 49 researchers classified the constructs in broad theoretically based categories. Next, 23 researchers grouped the constructs in each category on their similarity. The resulting 19 groups form a unifying set of constructs, which will be the basis for the future questionnaire instrument of human-ASA interaction.

Nominated for the Best Paper Award.

» Show Videos
» Show BibTeX

@inproceedings{10.1145/3383652.3423873,
author = {Fitrianie, Siska and Bruijnes, Merijn and Richards, Deborah and B\"{o}nsch, Andrea and Brinkman, Willem-Paul},
title = {The 19 Unifying Questionnaire Constructs of Artificial Social Agents: An IVA Community Analysis},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423873},
doi = {10.1145/3383652.3423873},
abstract = {In this paper, we report on the multi-year Intelligent Virtual Agents (IVA) community effort, involving more than 80 researchers worldwide, researching the IVA community interests and practises in evaluating human interaction with an artificial social agent (ASA). The effort is driven by previous IVA workshops and plenary IVA discussions related to the methodological crisis on the evaluation of ASAs. A previous literature review showed a continuous practise of creating new questionnaires instead of reusing validated questionnaires. We address this issue by examining questionnaire measurement constructs used in empirical studies between 2013 to 2018 published in the IVA conference. We identified 189 constructs used in 89 questionnaires that are reported across 81 studies. Although these constructs have different names, they often measure the same thing. In this paper, we, therefore, present a unifying set of 19 constructs that captures more than 80% of the 189 constructs initially identified. We established this set in two steps. First, 49 researchers classified the constructs in broad theoretically based categories. Next, 23 researchers grouped the constructs in each category on their similarity. The resulting 19 groups form a unifying set of constructs, which will be the basis for the future questionnaire instrument of human-ASA interaction.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {21},
numpages = {8},
keywords = {evaluation instrument, user study, Artificial social agent, questionnaire, measurement construct},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}





Immersive Sketching to Author Crowd Movements in Real-time


Andrea Bönsch, Sebastian J. Barton, Jonathan Ehret, Torsten Wolfgang Kuhlen
20th ACM International Conference on Intelligent Virtual Agents 2020 (IVA'20)
pubimg

Sketch-based interfaces in 2D screen space allow to efficiently author the flow of virtual crowds in a direct and interactive manner. Here, options to redirect a flow by sketching barriers, or guiding entities based on a sketched network of connected sections, are provided. As virtual crowds are increasingly often embedded into virtual reality (VR) applications, 3D authoring is of interest.

In this preliminary work, we thus present a sketch-based approach for VR. First promising results of a proof-of-concept are summarized and improvement suggestions, extensions, and future steps are discussed.

» Show Videos
» Show BibTeX

@inproceedings{10.1145/3383652.3423883,
author = {B\"{o}nsch, Andrea and Barton, Sebastian J. and Ehret, Jonathan and Kuhlen, Torsten W.},
title = {Immersive Sketching to Author Crowd Movements in Real-Time},
year = {2020},
isbn = {9781450375863},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383652.3423883},
doi = {10.1145/3383652.3423883},
abstract = {Sketch-based interfaces in 2D screen space allow to efficiently author the flow of virtual crowds in a direct and interactive manner. Here, options to redirect a flow by sketching barriers, or guiding entities based on a sketched network of connected sections are provided. As virtual crowds are increasingly often embedded into virtual reality (VR) applications, 3D authoring is of interest. In this preliminary work, we thus present a sketch-based approach for VR. First promising results of a proof-of-concept are summarized and improvement suggestions, extensions, and future steps are discussed.},
booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
articleno = {11},
numpages = {3},
keywords = {virtual crowds, virtual reality, sketch-based interface, authoring},
location = {Virtual Event, Scotland, UK},
series = {IVA '20}
}





Joint Dual-Tasking in VR: Outlining the Behavioral Design of Interactive Human Companions Who Walk and Talk with a User


Andrea Bönsch, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2020
pubimg

To resemble realistic and lively places, virtual environments are increasingly often enriched by virtual populations consisting of computer-controlled, human-like virtual agents. While the applications often provide limited user-agent interaction based on, e.g., collision avoidance or mutual gaze, complex user-agent dynamics such as joint locomotion combined with a secondary task, e.g., conversing, are rarely considered yet. These dual-tasking situations, however, are beneficial for various use-cases: guided tours and social simulations will become more realistic and engaging if a user is able to traverse a scene as a member of a social group, while platforms to study crowd and walking behavior will become more powerful and informative. To this end, this presentation deals with different areas of interaction dynamics, which need to be combined for modeling dual-tasking with virtual agents. Areas covered are kinematic parameters for the navigation behavior, group shapes in static and mobile situations as well as verbal and non-verbal behavior for conversations.

» Show Videos
» Show BibTeX

@InProceedings{Boensch2020a,
author = {Andrea B\"{o}nsch and Torsten W. Kuhlen},
title = {{Joint Dual-Tasking in VR: Outlining the Behavioral Design of Interactive Human Companions Who Walk and Talk with a User}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2020},
month={March}
}





Towards a Graphical User Interface for Exploring and Fine-Tuning Crowd Simulations


Andrea Bönsch, Marcel Jonda, Jonathan Ehret, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2020
pubimg

Simulating a realistic navigation of virtual pedestrians through virtual environments is a recurring subject of investigations. The various mathematical approaches used to compute the pedestrians’ paths result, inter alia, in different computation times and varying path characteristics. Customizable parameters, e.g., maximal walking speed or minimal interpersonal distance, add another level of complexity. Thus, choosing the best-fitting approach for a given environment and use case is non-trivial, especially for novice users.

To facilitate the informed choice of a specific algorithm with a certain parameter set, crowd simulation frameworks such as Menge provide an extendable collection of approaches with a unified interface for usage. However, they often miss an elaborated visualization with high informative value, accompanied by visual analysis methods to explore the complete simulation data in more detail, which is required for an informed choice. Benchmarking suites such as SteerBench are a helpful approach as they objectively analyze crowd simulations; however, they are too tailored to specific behavior details. To this end, we propose a preliminary design of an advanced graphical user interface providing a 2D and 3D visualization of the crowd simulation data as well as features for time navigation and overall data exploration.

» Show Videos
» Show BibTeX

@InProceedings{Boensch2020b,
author = {Andrea B\"{o}nsch and Marcel Jonda and Jonathan Ehret and Torsten W. Kuhlen},
title = {{Towards a Graphical User Interface for Exploring and Fine-Tuning Crowd Simulations}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2020},
month={March}
}





Talk: Proximity in Social VR - Interpersonal Distance between a User and Virtual Agents


Andrea Bönsch
3rd Workshop on "Person-to-Person Interaction: From Analysis to Applications", 2020

Proxemics is a well-known social behavioral measure, in which the interpersonal distance between interactants is evaluated, either in real or in virtual social encounters. Given the prominent role of emotional expressions in our everyday social interactions, we investigated how emotions of a virtual agent affect proxemic adaptations, taking into account the spatial constellation between user and agent as well as the user’s level of dynamics.



Influence of Directivity on the Perception of Embodied Conversational Agents' Speech


Jonathan Wendt, Benjamin Weyers, Jonas Stienen, Andrea Bönsch, Michael Vorländer, Torsten Wolfgang Kuhlen
19th ACM International Conference on Intelligent Virtual Agents (IVA), 2019
pubimg

Embodied conversational agents become more and more important in various virtual reality applications, e.g., as peers, trainers or therapists. Besides their appearance and behavior, appropriate speech is required for them to be perceived as human-like and realistic. In addition to the voice signal used, its auralization in the immersive virtual environment also has to be believable. Therefore, we investigated the effect of adding directivity to the speech sound source. Directivity simulates the orientation-dependent auralization with regard to the agent's head orientation. We performed a one-factorial user study with two levels (n=35) to investigate the effect directivity has on the perceived social presence and realism of the agent's voice. Our results do not indicate any significant effects of directivity on either variable. We attribute this partly to an overall too low realism of the virtual agent, a scenario that was not overly social, and the generally high variance of the examined measures. These results are critically discussed, and potential further research questions and study designs are identified.

» Show Videos
» Show BibTeX

@inproceedings{Wendt2019,
author = {Wendt, Jonathan and Weyers, Benjamin and Stienen, Jonas and B\"{o}nsch, Andrea and Vorl\"{a}nder, Michael and Kuhlen, Torsten W.},
title = {Influence of Directivity on the Perception of Embodied Conversational Agents' Speech},
booktitle = {Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents},
series = {IVA '19},
year = {2019},
isbn = {978-1-4503-6672-4},
location = {Paris, France},
pages = {130--132},
numpages = {3},
url = {http://doi.acm.org/10.1145/3308532.3329434},
doi = {10.1145/3308532.3329434},
acmid = {3329434},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {directional 3d sound, social presence, virtual acoustics, virtual agents},
}





Evaluation of Omnipresent Virtual Agents Embedded as Temporarily Required Assistants in Immersive Environments


Andrea Bönsch, Jan Hoffmann, Jonathan Wendt, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2019
pubimg

When designing the behavior of embodied, computer-controlled, human-like virtual agents (VAs) serving as temporarily required assistants in virtual reality applications, two linked factors have to be considered: the time the VA is visible in the scene, defined as presence time (PT), and the time until the VA is actually available for support on a user’s calling, defined as approaching time (AT).

Complementing previous research on behaviors with a low PT, we present the results of a controlled within-subjects study investigating behaviors by which the VA is always visible, i.e., behaviors with a high PT. The two behaviors affecting the AT tested are: following, a design in which the VA is omnipresent and constantly follows the users, and busy, a design in which the VA is self-reliantly spending time near the users and approaches them only if explicitly asked for. The results indicate that subjects prefer the following VA, a behavior which also leads to slightly lower execution times compared to busy.

» Show Videos
» Show BibTeX

@InProceedings{Boensch2019c,
author = {Andrea B\"{o}nsch and Jan Hoffmann and Jonathan Wendt and Torsten W. Kuhlen},
title = {{Evaluation of Omnipresent Virtual Agents Embedded as Temporarily Required Assistants in Immersive Environments}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2019},
doi={10.1109/VHCIE.2019.8714726},
month={March}
}





An Empirical Lab Study Investigating If Higher Levels of Immersion Increase the Willingness to Donate


Andrea Bönsch, Alexander Kies, Moritz Jörling, Stefanie Paluch, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2019
pubimg

Technological innovations have a growing relevance for charitable donations, as new technologies shape the way we perceive and approach digital media. In a between-subjects study with sixty-one volunteers, we investigated whether a higher degree of immersion for the potential donor can yield more donations for non-governmental organizations. To this end, we compared the donations given after experiencing a video-based, an augmented-reality-based, or a virtual-reality-based scenery with a virtual agent representing a war-victimized Syrian boy talking about his losses. Our initial results indicate that immersion has no impact. However, the donor’s perceived innovativeness of the used technology might be an influencing factor.

» Show Videos
» Show BibTeX

@InProceedings{Boensch2019b,
author = {Andrea B\"{o}nsch and Alexander Kies and Moritz Jörling and Stefanie Paluch and Torsten W. Kuhlen},
title = {{An Empirical Lab Study Investigating If Higher Levels of Immersion Increase the Willingness to Donate}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2019},
pages={1-4},
doi={10.1109/VHCIE.2019.8714622},
month={March}
}





Volumetric Video Capture Using Unsynchronized, Low-Cost Cameras


Andrea Bönsch, Andrew Feng, Parth Patel, Ari Shapiro
14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019)
pubimg

Volumetric video can be used in virtual and augmented reality applications to show detailed animated performances by human actors. In this paper, we describe a volumetric capture system based on a photogrammetry cage with unsynchronized, low-cost cameras which is able to generate high-quality geometric data for animated avatars. This approach requires, inter alia, a subsequent synchronization of the captured videos.
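The paper's own synchronization approach is not detailed here; purely as an illustration of the general problem, the sketch below aligns two unsynchronized recordings by cross-correlating their audio tracks to estimate a frame offset. The function name and the audio-based method are assumptions for this sketch, not the authors' pipeline.

```python
import numpy as np

def estimate_offset_frames(audio_a: np.ndarray, audio_b: np.ndarray,
                           sample_rate: int, fps: float) -> float:
    """Estimate by how many video frames stream B lags behind stream A.

    Cross-correlates the two (mono) audio tracks recorded alongside the videos;
    the position of the correlation peak yields the time shift between them.
    """
    a = audio_a - audio_a.mean()
    b = audio_b - audio_b.mean()
    corr = np.correlate(a, b, mode="full")
    lag_samples = (len(b) - 1) - int(np.argmax(corr))  # > 0: B starts later than A
    return lag_samples / sample_rate * fps

# Example with synthetic signals: B is A delayed by 0.25 s, i.e., 7.5 frames at 30 fps.
sample_rate, fps = 8_000, 30.0
rng = np.random.default_rng(0)
a = rng.standard_normal(sample_rate)
b = np.concatenate([np.zeros(sample_rate // 4), a])[:sample_rate]
print(round(estimate_offset_frames(a, b, sample_rate, fps), 1))  # ~ 7.5
```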




» Show Videos
» Show BibTeX

@InProceedings{Boensch2019a,
author = {Andrea B\"{o}nsch and Andrew Feng and Parth Patel and Ari Shapiro},
title = {{Volumetric Video Capture using Unsynchronized, Low-cost Cameras}},
booktitle = {Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019)},
year = {2019},
volume = {1},
pages = {255--261},
publisher = {SciTePress},
organization = {INSTICC},
doi = {10.5220/0007373202550261},
isbn = {978-989-758-354-4}
}





Joint Locomotion with Virtual Agents in Immersive Environments


Andrea Bönsch
Doctoral Consortium at IEEE Virtual Reality Conference 2019
pubimg

Many applications in the realm of social virtual reality require reasonable locomotion patterns for their embedded, intelligent virtual agents (VAs). The two main research areas covered in the literature are pure inter-agent-dynamics for crowd simulations and user-agent-dynamics in, e.g., pedestrian scenarios. However, social locomotion, defined as a joint locomotion of a social group consisting of a human user and one to several VAs in the role of accompanying interaction partners, has not been carefully investigated yet. I intend to close this gap by contributing locomotion models for the social group’s VAs. Thereby, I plan to evaluate the effects of the VAs’ locomotion patterns on a user’s perceived degree of immersion, comfort, and social presence.

» Show BibTeX

@InProceedings{Boensch2019d,
author = {Andrea B\"{o}nsch},
title = {Locomotion with Virtual Agents in the Realm of Social Virtual Reality},
booktitle = {Doctoral Consortium at IEEE Virtual Reality Conference 2019},
year = {2019}
}





Virtual Humans as Co-Workers: A Novel Methodology to Study Peer Effects


Özgür Gürerk, Andrea Bönsch, Thomas Kittsteiner, Andreas Staffeldt
Journal of Behavioral and Experimental Economics
pubimg

We introduce a novel methodology to study peer effects. Using virtual reality technology, we create a naturalistic work setting in an immersive virtual environment where we embed a computer-generated virtual human as the co-worker of a human subject, both performing a sorting task at a conveyor belt. In our setup, subjects observe the virtual peer, while the virtual human is not observing them. In two treatments, human subjects observe either a low productive or a high productive virtual peer. We find that human subjects rate their presence feeling of "being there" in the immersive virtual environment as natural. Subjects also recognize that virtual peers in our two treatments showed different productivities. We do not find a general treatment effect on productivity. However, we find that competitive subjects display higher performance when they are in the presence of a highly productive peer - compared to when they observe a low productive peer. We use tracking data to learn about the subjects' body movements. Analyzing hand and head data, we show that competitive subjects are more careful in the sorting task than non-competitive subjects. We also discuss some methodological issues related to virtual reality experiments.

» Show BibTeX

@article{gurerk2018,
title={{Virtual Humans as Co-Workers: A Novel Methodology to Study Peer Effects}},
author={G{\"u}rerk, Ozg{\"u}r and B{\"o}nsch, Andrea and Kittsteiner, Thomas and Staffeldt, Andreas},
year={2018},
journal = {{Journal of Behavioral and Experimental Economics}},
issn = {2214-8043},
doi = {https://doi.org/10.1016/j.socec.2018.11.003},
url = {http://www.sciencedirect.com/science/article/pii/S221480431830140X}
}





Social VR: How Personal Space is Affected by Virtual Agents’ Emotions


Andrea Bönsch, Sina Radke, Heiko Overath, Laura Marie Aschè, Jonathan Wendt, Tom Vierjahn, Ute Habel, Torsten Wolfgang Kuhlen
Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2018
pubimg

Personal space (PS), the flexible protective zone maintained around oneself, is a key element of everyday social interactions. It, e.g., affects people's interpersonal distance and is thus largely involved when navigating through social environments. However, the PS is regulated dynamically: its size depends on numerous social and personal characteristics, and its violation evokes different levels of discomfort and physiological arousal. Thus, gaining more insight into this phenomenon is important.

We contribute to the PS investigations by presenting the results of a controlled experiment in a CAVE, focusing on German males aged 18 to 30 years. The PS preferences of 27 participants were sampled while they were approached by either a single embodied, computer-controlled virtual agent (VA) or by a group of three VAs. In order to investigate the influence of a VA's emotions, we altered their facial expression between angry and happy. Our results indicate that the emotion as well as the number of approaching VAs influence the PS: larger distances are chosen to angry VAs compared to happy ones; single VAs are allowed closer compared to the group. Thus, our study is a foundation for social and behavioral studies investigating PS preferences.

» Show BibTeX

@InProceedings{Boensch2018c,
author = {Andrea B\"{o}nsch and Sina Radke and Heiko Overath and Laura M. Asch\'{e} and Jonathan Wendt and Tom Vierjahn and Ute Habel and Torsten W. Kuhlen},
title = {{Social VR: How Personal Space is Affected by Virtual Agents’ Emotions}},
booktitle = {Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR) 2018},
year = {2018}
}





Does the Directivity of a Virtual Agent’s Speech Influence the Perceived Social Presence?


Jonathan Wendt, Benjamin Weyers, Andrea Bönsch, Jonas Stienen, Tom Vierjahn, Michael Vorländer, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2018
pubimg

When interacting and communicating with virtual agents in immersive environments, the agents’ behavior should be believable and authentic. One important aspect thereby is a convincing auralization of their speech. In this work-in-progress paper, a study design to evaluate the effect of adding directivity to the speech sound source on the perceived social presence of a virtual agent is presented. We describe the study design and discuss first results of a prestudy as well as consequential improvements of the design.


» Show Videos
» Show BibTeX

@InProceedings{Wendt2018,
author = {Jonathan Wendt and Benjamin Weyers and Andrea B\"{o}nsch and Jonas Stienen and Tom Vierjahn and Michael Vorl\"{a}nder and Torsten W. Kuhlen},
title = {{Does the Directivity of a Virtual Agent’s Speech Influence the Perceived Social Presence?}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2018}
}





Towards Understanding the Influence of a Virtual Agent’s Emotional Expression on Personal Space


Andrea Bönsch, Sina Radke, Jonathan Wendt, Tom Vierjahn, Ute Habel, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2018
pubimg

The concept of personal space is a key element of social interactions. As such, it is a recurring subject of investigations in the context of research on proxemics. Using virtual-reality-based experiments, we contribute to this area by evaluating the direct effects of emotional expressions of an approaching virtual agent on an individual’s behavioral and physiological responses. As a pilot study focusing on the emotion expressed solely by facial expressions gave promising results, we now present a study design to gain more insight.

» Show BibTeX

@InProceedings{Boensch2018b,
author = {Andrea B\"{o}nsch and Sina Radke and Jonathan Wendt and Tom Vierjahn and Ute Habel and Torsten W. Kuhlen},
title = {{Towards Understanding the Influence of a Virtual Agent’s Emotional Expression on Personal Space}},
booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE)},
year = {2018}
}





Talk: Influence of Emotions on Personal Space Preferences


Andrea Bönsch, Sina Radke, Heiko Overath, Laura Marie Aschè, Jonathan Wendt, Tom Vierjahn, Ute Habel, Torsten Wolfgang Kuhlen
Virtual Environments: Current Topics in Psychological Research (VECTOR) workshop, 2018

Personal Space (PS) is regulated dynamically by choosing an appropriate interpersonal distance when navigating through social environments. This key element in social interactions is influenced by numerous social and personal characteristics, e.g., the nature of the relationship between the interaction partners and the other’s sex and age. Moreover, affective contexts and expressions of interaction partners influence PS preferences, evident, e.g., in larger distances to others in threatening situations or when confronted with angry-looking individuals. Given the prominent role of emotional expressions in our everyday social interactions, we investigate how emotions affect PS adaptions.




Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments


Andrea Bönsch, Tom Vierjahn, Torsten Wolfgang Kuhlen
Proceedings of the IEEE Symposium on 3D User Interfaces 2017
pubimg

Embodied virtual agents provide users with assistance in agent-based support systems. To this end, two closely linked factors have to be considered for the agents’ behavioral design: their presence time (PT), i.e., the time in which the agents are visible, and the approaching time (AT), i.e., the time span between the user’s calling for an agent and the agent’s actual availability.

This work focuses on human-like assistants that are embedded in immersive scenes but that are required only temporarily. To the best of our knowledge, guidelines for a suitable trade-off between PT and AT of these assistants do not yet exist. We address this gap by presenting the results of a controlled within-subjects study in a CAVE. While keeping a low PT so that the agent is not perceived as annoying, three strategies affecting the AT, namely fading, walking, and running, are evaluated by 40 subjects. The results indicate no clear preference for either behavior. Instead, the necessity of a better trade-off between a low AT and an agent’s realistic behavior is demonstrated.
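To make the PT/AT trade-off concrete, here is a small, purely illustrative calculation comparing the approaching times the three strategies would yield for an agent waiting at some distance from the user. All speeds and distances are invented placeholders, not values from the study.

```python
# Purely illustrative AT comparison for the three strategies; all numbers are
# invented placeholders, not values from the study.
def approaching_time(strategy: str, distance_m: float) -> float:
    """Seconds until the assistant is available after being called."""
    if strategy == "fading":
        return 0.5                      # appears almost instantly next to the user
    if strategy == "walking":
        return distance_m / 1.4         # assumed walking speed ~1.4 m/s
    if strategy == "running":
        return distance_m / 3.5         # assumed running speed ~3.5 m/s
    raise ValueError(f"unknown strategy: {strategy}")

for s in ("fading", "walking", "running"):
    print(s, round(approaching_time(s, distance_m=8.0), 1), "s")
```

A lower AT (fading) comes at the cost of less realistic behavior, which is exactly the trade-off the study examines.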

» Show BibTeX

@InProceedings{Boensch2017b,
Title = {Evaluation of Approaching-Strategies of Temporarily Required Virtual Assistants in Immersive Environments},
Author = {Andrea B\"{o}nsch and Tom Vierjahn and Torsten W. Kuhlen},
Booktitle = {IEEE Symposium on 3D User Interfaces},
Year = {2017},
Pages = {69-72}
}





Turning Anonymous Members of a Multiagent System into Individuals


Andrea Bönsch, Tom Vierjahn, Ari Shapiro, Torsten Wolfgang Kuhlen
IEEE Virtual Humans and Crowds for Immersive Environments (VHCIE), 2017
pubimg

It is increasingly common to embed embodied, human-like, virtual agents into immersive virtual environments for either of the two use cases: (1) populating architectural scenes as anonymous members of a crowd and (2) meeting or supporting users as individual, intelligent and conversational agents. However, the new trend towards intelligent cyber-physical systems inherently combines both use cases. Thus, we argue for the necessity of multiagent systems consisting of anonymous and autonomous agents, who temporarily turn into intelligent individuals. Besides purely enlivening the scene, each agent can thus be engaged by the user in a situation-dependent interaction, e.g., a conversation or a joint task. To this end, we devise components for an agent’s behavioral design modeling the transition between an anonymous and an individual agent when a user approaches.

» Show BibTeX

@InProceedings{Boensch2017c,
Title = {{Turning Anonymous Members of a Multiagent System into Individuals}},
Author = {Andrea B\"{o}nsch and Tom Vierjahn and Ari Shapiro and Torsten W. Kuhlen},
Booktitle = {IEEE Virtual Humans and Crowds for Immersive Environments},
Year = {2017},
Keywords = {Virtual Humans; Virtual Reality; Intelligent Agents; Multiagent System},
DOI = {10.1109/VHCIE.2017.7935620},
Owner = {ab280112},
Timestamp = {2017.02.28}
}





Poster: Score-Based Recommendation for Efficiently Selecting Individual Virtual Agents in Multi-Agent Systems


Andrea Bönsch, Robert Trisnadi, Jonathan Wendt, Tom Vierjahn, Torsten Wolfgang Kuhlen
Proceedings of 23rd ACM Symposium on Virtual Reality Software and Technology (VRST) 2017
pubimg

Controlling user-agent-interactions by means of an external operator includes selecting the virtual interaction partners fast and faultlessly. However, especially in immersive scenes with a large number of potential partners, this task is non-trivial.

Thus, we present a score-based recommendation system supporting an operator in the selection task. Agents are recommended as potential partners based on two parameters: the user’s distance to the agents and the user’s gazing direction. An additional graphical user interface (GUI) provides elements for configuring the system and for applying actions to those agents which the operator has confirmed as interaction partners.
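The exact scoring function is not given in the poster abstract; as a minimal illustrative sketch, the snippet below combines the two named parameters, distance and gazing direction, into a single score and recommends the top-ranked agents. Weights, normalization, and all names are hypothetical.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def score_agent(user_pos: Vec3, user_gaze: Vec3, agent_pos: Vec3,
                w_dist: float = 0.5, w_gaze: float = 0.5, max_dist: float = 10.0) -> float:
    """Score one agent as a potential interaction partner (higher = more likely)."""
    to_agent = tuple(a - u for a, u in zip(agent_pos, user_pos))
    dist = math.sqrt(sum(c * c for c in to_agent))
    if dist < 1e-6:
        return 1.0
    # Distance term: 1 when the agent is next to the user, 0 at/after max_dist.
    dist_term = max(0.0, 1.0 - dist / max_dist)
    # Gaze term: cosine between gaze direction and direction towards the agent.
    gaze_norm = math.sqrt(sum(c * c for c in user_gaze)) or 1.0
    cos_angle = sum(g * t for g, t in zip(user_gaze, to_agent)) / (gaze_norm * dist)
    gaze_term = max(0.0, cos_angle)
    return w_dist * dist_term + w_gaze * gaze_term

def recommend(user_pos: Vec3, user_gaze: Vec3,
              agents: List[Tuple[str, Vec3]], k: int = 3) -> List[str]:
    """Return the ids of the k best-scoring agents for the operator's GUI."""
    ranked = sorted(agents, key=lambda entry: score_agent(user_pos, user_gaze, entry[1]),
                    reverse=True)
    return [agent_id for agent_id, _ in ranked[:k]]

# Example: the user looks along +x; the agent straight ahead is ranked first.
agents = [("agent_front", (3.0, 0.0, 0.0)), ("agent_side", (0.0, 3.0, 0.0)),
          ("agent_far", (9.0, 1.0, 0.0))]
print(recommend((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), agents, k=2))
```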

» Show BibTeX

@InProceedings{Boensch2017d,
Title = {Score-Based Recommendation for Efficiently Selecting Individual Virtual Agents in Multi-Agent Systems},
Author = {Andrea B\"{o}nsch and Robert Trisnadi and Jonathan Wendt and Tom Vierjahn and Torsten W. Kuhlen},
Booktitle = {Proceedings of 23rd ACM Symposium on Virtual Reality Software and Technology},
Year = {2017},
Pages = {tba},
DOI = {10.1145/3139131.3141215}
}





Poster: Peers At Work: Economic Real-Effort Experiments In The Presence of Virtual Co-Workers


Andrea Bönsch, Jonathan Wendt, Heiko Overath, Özgür Gürerk, Christine Harbring, Christian Grund, Thomas Kittsteiner, Torsten Wolfgang Kuhlen
Proceedings of IEEE Virtual Reality Conference 2017
pubimg

Traditionally, experimental economics uses controlled and incentivized field and lab experiments to analyze economic behavior. However, investigating peer effects in the classic settings is challenging due to the reflection problem: Who is influencing whom?

To overcome this, we enlarge the methodological toolbox of these experiments by means of Virtual Reality. After introducing and validating a real-effort sorting task, we embed a virtual agent as peer of a human subject, who independently performs an identical sorting task. We conducted two experiments investigating (a) the subject’s productivity adjustment due to peer effects and (b) the incentive effects on competition. Our results indicate a great potential for Virtual-Reality-based economic experiments.

» Show BibTeX

@InProceedings{Boensch2017a,
Title = {Peers At Work: Economic Real-Effort Experiments In The Presence of Virtual Co-Workers},
Author = {Andrea B\"{o}nsch and Jonathan Wendt and Heiko Overath and Özgür Gürerk and Christine Harbring and Christian Grund and Thomas Kittsteiner and Torsten W. Kuhlen},
Booktitle = {IEEE Virtual Reality Conference Poster Proceedings},
Year = {2017},
Pages = {301-302},
DOI = {10.1109/VR.2017.7892296}
}





Do Not Invade: A Virtual-Reality-Framework to Study Personal Space


Jan Schnathmeier, Heiko Overath, Sina Radke, Andrea Bönsch, Ute Habel, Torsten Wolfgang Kuhlen
Virtuelle und Erweiterte Realität, 14. Workshop der GI-Fachgruppe VR/AR (2017)

The bachelor thesis’ aim was to develop a framework that allows designing and conducting virtual-reality-based user studies to gain insight into the concept of personal space.

» Show BibTeX

@Article{Schnathmeier2017,
Title = {Do Not Invade: A Virtual-Reality-Framework to Study Personal Space},
Author = {Jan Schnathmeier and Heiko Overath and Sina Radke and Andrea B\"{o}nsch and Ute Habel and Torsten W. Kuhlen},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 14. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2017},
Pages = {203-204},
ISBN = {978-3-8440-5606-8},
Publisher = {Shaker Verlag}
}





Collision Avoidance in the Presence of a Virtual Agent in Small-Scale Virtual Environments


Andrea Bönsch, Benjamin Weyers, Jonathan Wendt, Sebastian Freitag, Torsten Wolfgang Kuhlen
Proceedings of the IEEE Symposium on 3D User Interfaces (2016)
pubimg

Computer-controlled, human-like virtual agents (VAs) are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet.

Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked either by a male or female VA, representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion. Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.

Honorable Mention for Best Technote!

» Show Videos
» Show BibTeX

@InProceedings{Boensch2016a,
Title = {Collision Avoidance in the Presence of a Virtual Agent in Small-Scale Virtual Environments},
Author = {Andrea B\"{o}nsch and Benjamin Weyers and Jonathan Wendt and Sebastian Freitag and Torsten W. Kuhlen},
Booktitle = {IEEE Symposium on 3D User Interfaces},
Year = {2016},
Pages = {145-148},

Abstract = {Computer-controlled, human-like virtual agents (VAs), are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain
their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet. Our goal is to close this gap by presenting the results of a controlled
user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked either by a male or female VA, representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion.
Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.}
}





Poster: Evaluating Presence Strategies of Temporarily Required Virtual Assistants


Andrea Bönsch, Tom Vierjahn, Torsten Wolfgang Kuhlen
16th International Conference on Intelligent Virtual Agents (IVA), 2016
pubimg

Computer-controlled virtual humans can serve as assistants in virtual scenes. Here, they are usually in almost constant contact with the user. Nonetheless, in some applications assistants are required only temporarily. Consequently, presenting them only when needed, i.e., minimizing their presence time, might be advisable.

To the best of our knowledge, there do not yet exist any design guidelines for such agent-based support systems. Thus, we plan to close this gap by a controlled qualitative and quantitative user study in a CAVE-like environment. We expect users to prefer assistants with a low presence time as well as a low fallback time to get quick support. However, as both factors are linked, a suitable trade-off needs to be found. Thus, we plan to test four different strategies, namely fading, moving, omnipresent and busy. This work presents our hypotheses and our planned within-subject design.

» Show BibTeX

@InBook{Boensch2016c,
Title = {Evaluating Presence Strategies of Temporarily Required Virtual Assistants},
Author = {Andrea B\"{o}nsch and Tom Vierjahn and Torsten W. Kuhlen},
Pages = {387 - 391},
Publisher = {Springer International Publishing},
Year = {2016},
Month = {September},

Abstract = {Computer-controlled virtual humans can serve as assistants in virtual scenes. Here, they are usually in an almost constant contact with the user. Nonetheless, in some applications assistants are required only
temporarily. Consequently, presenting them only when needed, i.e., minimizing their presence time, might be advisable.
To the best of our knowledge, there do not yet exist any design guidelines for such agent-based support systems. Thus, we plan to close this gap by a controlled qualitative and quantitative user study in a CAVE-like environment. We expect users to prefer assistants with a low presence time as well as a low fallback time to get quick support. However, as both factors are linked, a suitable trade-off needs to be found. Thus, we plan to test four different strategies, namely fading, moving, omnipresent and busy. This work presents our hypotheses and our planned within-subject design.},
Booktitle = {Intelligent Virtual Agents: 16th International Conference, IVA 2016. Proceedings},
Doi = {10.1007/978-3-319-47665-0_39},
Keywords = {Virtual agent, Assistive technology, Immersive virtual environments, User study design},
Owner = {ab280112},
Timestamp = {2016.08.01},
Url = {http://dx.doi.org/10.1007/978-3-319-47665-0_39}
}





Poster: Automatic Generation of World in Miniatures for Realistic Architectural Immersive Virtual Environments


Andrea Bönsch, Sebastian Freitag, Torsten Wolfgang Kuhlen
Proceedings of IEEE Virtual Reality Conference (2016)
pubimg

Orientation and wayfinding in architectural Immersive Virtual Environments (IVEs) are non-trivial, accompanying tasks which generally support the users’ main task. World in Miniatures (WIMs), essentially 3D maps containing a scene replica, are an established approach to gain survey knowledge about the virtual world, as well as information about the user’s relation to it. However, for large-scale, information-rich scenes, scaling and occlusion issues result in diminishing returns. Since there typically is a lack of standardized information regarding scene decompositions, presenting the inside of self-contained scene extracts is challenging.

Therefore, we present an automatic WIM generation workflow for arbitrary, realistic in- and outdoor IVEs in order to support users with meaningfully selected and scaled extracts of the IVE as well as corresponding context information. Additionally, a 3D user interface is provided to manually manipulate the represented extract.

» Show Videos
» Show BibTeX

@InProceedings{Boensch2016b,
Title = {Automatic Generation of World in Miniatures for Realistic Architectural Immersive Virtual Environments},
Author = {Andrea B\"{o}nsch and Sebastian Freitag and Torsten W. Kuhlen},
Booktitle = {IEEE Virtual Reality Conference Poster Proceedings},
Year = {2016},
Pages = {155-156},

Abstract = {Orientation and wayfinding in architectural Immersive Virtual Environments (IVEs) are non-trivial, accompanying tasks which generally support the users’ main task. World in Miniatures (WIMs), essentially 3D maps containing a scene replica, are an established approach to gain survey knowledge about the virtual world, as well as information about the user’s relation to it. However, for large-scale, information-rich scenes, scaling and occlusion issues result in diminishing returns. Since there typically is a lack of standardized information regarding scene decompositions, presenting the inside of self-contained scene extracts is challenging.
Therefore, we present an automatic WIM generation workflow for arbitrary, realistic in- and outdoor IVEs in order to support users with meaningfully selected and scaled extracts of the IVE as well as corresponding context information. Additionally, a 3D user interface is provided to manually manipulate the represented extract.}
}





Avatars as Peers at Work: An Experimental Study in Virtual Reality


Özgür Gürerk, Thomas Kittsteiner, Andrea Bönsch, Andreas Staffeldt
Social Science Research Network (SSRN) eLibrary
pubimg

Identification of peer effects is often complicated by the reflection problem: Does agent i influence agent j, or vice versa? To be able to identify a clear causality, we embed a virtual human (avatar) as co-worker of a human subject into an immersive virtual environment. We observe that low-productive human subjects increase their work performance more when they observe a low-productive avatar compared to a high-productive avatar. This result is in line with the predictions of social comparison theory, inasmuch as we observe stronger peer effects when the perceived similarity in abilities between the peers is high.

» Show BibTeX

@Article{Guererk2016,
Title = {Avatars as Peers at Work: An Experimental Study in Virtual Reality},
Author = {Özgür Gürerk and Thomas Kittsteiner and Andrea Bönsch and Andreas Staffeldt},
Year = {2016},
DOI = {10.2139/ssrn.2842411},
Publisher = {SSRN}
}





Talk: Two Basic Aspects of Virtual Agents’ Behavior: Collision Avoidance and Presence Strategies


Andrea Bönsch, Tom Vierjahn, Torsten Wolfgang Kuhlen
Virtual Environments: Current Topics in Psychological Research (VECTOR) workshop, 2016

Virtual Agents (VAs) are embedded in virtual environments for two reasons: they enliven architectural scenes by representing more realistic situations, and they serve as dialogue partners. They can function as training partners, e.g., representing students in a teaching scenario, or as assistants, e.g., guiding users through a scene or performing certain tasks either individually or in collaboration with the user. However, designing such VAs is challenging as various requirements have to be met. Two relevant factors will be briefly discussed in the talk: collision avoidance and presence strategies.




Experimental Economics in Virtual Reality


Özgür Gürerk, Andrea Bönsch, Lucas Braun, Christian Grund, Christine Harbring, Thomas Kittsteiner, Andreas Staffeldt
Munich Personal RePEc Archive (2016)
pubimg

Experimental economics uses controlled and incentivized lab and field experiments to learn about economic behavior. By means of three examples, we illustrate how experiments conducted in immersive virtual environments emerge as a new methodological tool that can benefit behavioral economic research.

» Show BibTeX

@TechReport{Gurerk2016,
Title = {{Experimental Economics in Virtual Reality}},
Author = {G{\"u}rerk, {\"O}zg{\"u}r and B{\"o}nsch, Andrea and Braun, Lucas and Grund, Christian and Harbring, Christine and Kittsteiner, Thomas and Staffeldt, Andreas},
Institution = {{M}unich {P}ersonal {R}e{PE}c {A}rchive ({MPRA}), Paper No. 71409},
Year = {2016},
Number = {Paper No. 71409},
Url = {http://mpra.ub.uni-muenchen.de/71409/}
}





Comparison and Evaluation of Viewpoint Quality Estimation Algorithms for Immersive Virtual Environments


Sebastian Freitag, Benjamin Weyers, Andrea Bönsch, Torsten Wolfgang Kuhlen
Proceedings of the 25th International Conference on Artificial Reality and Telexistence and the 20th Eurographics Symposium on Virtual Environments (ICAT-EGVE), 2015
pubimg

The knowledge of which places in a virtual environment are interesting or informative can be used to improve user interfaces and to create virtual tours. Viewpoint Quality Estimation algorithms approximate this information by calculating quality scores for viewpoints. However, even though several such algorithms exist and have also been used, e.g., in virtual tour generation, they have never been comparatively evaluated on virtual scenes. In this work, we introduce three new Viewpoint Quality Estimation algorithms, and compare them against each other and six existing metrics, by applying them to two different virtual scenes. Furthermore, we conducted a user study to obtain a quantitative evaluation of viewpoint quality. The results reveal strengths and limitations of the metrics on actual scenes, and provide recommendations on which algorithms to use for real applications.
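The specific metrics introduced and compared in the paper are not reproduced here. As a generic, hedged illustration of what a viewpoint quality estimation algorithm computes, the sketch below implements the classic viewpoint entropy idea: a viewpoint scores higher when many objects are visible and their projected areas are balanced. How the projected areas are obtained (e.g., an item-buffer render pass) is application-specific and stubbed out; all names are placeholders.

```python
import math
from typing import Dict

def viewpoint_entropy(projected_areas: Dict[str, float]) -> float:
    """Viewpoint entropy: higher when many objects are visible with balanced areas.

    projected_areas maps each visible object id to its projected area (in pixels)
    as seen from the candidate viewpoint.
    """
    total = sum(projected_areas.values())
    if total <= 0.0:
        return 0.0
    entropy = 0.0
    for area in projected_areas.values():
        if area > 0.0:
            p = area / total
            entropy -= p * math.log2(p)
    return entropy

# Example: a view showing four objects evenly beats one dominated by a single wall.
balanced  = {"chair": 2500, "table": 2500, "lamp": 2500, "door": 2500}
dominated = {"wall": 9700, "chair": 300}
print(viewpoint_entropy(balanced), viewpoint_entropy(dominated))  # 2.0 vs. ~0.19
```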

» Show BibTeX

@InProceedings{Freitag2015,
Title = {{Comparison and Evaluation of Viewpoint Quality Estimation Algorithms for Immersive Virtual Environments}},
Author = {Freitag, Sebastian and Weyers, Benjamin and B\"{o}nsch, Andrea and Kuhlen, Torsten W.},
Booktitle = {ICAT-EGVE 2015 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
Year = {2015},
Pages = {53-60},
Doi = {10.2312/egve.20151310}
}





MRI Visualisation by Digitally Reconstructed Radiographs


Antoine Serrurier, Andrea Bönsch, Robert Lau, Thomas Deserno
Proc. SPIE 9418 Medical Imaging 2015: PACS and Imaging Informatics: Next Generation and Innovations (2015)
pubimg

Visualising volumetric medical images such as computed tomography and magnetic resonance imaging (MRI) on picture archiving and communication systems (PACS) clients is often achieved by image browsing in sagittal, coronal or axial views or by three-dimensional (3D) rendering. The latter technique requires fine thresholding for MRI. On the other hand, computing virtual radiograph images, also referred to as digitally reconstructed radiographs (DRR), provides a complete overview of the 3D data in a single two-dimensional (2D) image. It therefore appears as a powerful alternative for MRI visualisation and preview in PACS. This study describes a method to compute DRR from T1-weighted MRI. After segmentation of the background, a histogram distribution analysis is performed and each foreground MRI voxel is labeled as one of three tissues: cortical bone, the principal absorber of X-rays, muscle, and fat. An intensity level is attributed to each voxel according to the Hounsfield scale, linearly related to the X-ray attenuation coefficient. Each DRR pixel is computed as the accumulation of the new intensities of the MRI dataset along the corresponding X-ray. The method has been tested on 16 T1-weighted MRI sets. Anterior-posterior and lateral DRR have been computed with reasonable quality, avoiding any manual tissue segmentation.
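As a highly simplified, hedged sketch of this pipeline (real implementations use proper background segmentation, a calibrated Hounsfield mapping, and ray casting along arbitrary projection geometries), the snippet below labels voxels of a T1-weighted volume by intensity, assigns rough attenuation values, and sums them along one axis to form a parallel-projection DRR. Thresholds and attenuation values are invented placeholders.

```python
import numpy as np

def simple_drr(mri: np.ndarray, axis: int = 1) -> np.ndarray:
    """Compute a toy digitally reconstructed radiograph from a T1-weighted volume.

    1) background voxels are masked out,
    2) foreground voxels are labeled bone / muscle / fat by intensity
       (in T1-weighted MRI, cortical bone tends to be dark and fat bright),
    3) each label receives a rough, Hounsfield-inspired attenuation value,
    4) attenuations are accumulated along the projection axis (parallel rays).
    """
    foreground = mri > 0.05 * mri.max()                  # placeholder background threshold
    t_low, t_high = np.percentile(mri[foreground], [30, 80])
    attenuation = np.zeros_like(mri, dtype=np.float32)
    attenuation[foreground & (mri <= t_low)] = 2.0                   # bone-like (dark in T1)
    attenuation[foreground & (mri > t_low) & (mri <= t_high)] = 1.0  # muscle-like
    attenuation[foreground & (mri > t_high)] = 0.9                   # fat-like (bright in T1)
    drr = attenuation.sum(axis=axis)                     # accumulate along the rays
    return drr / drr.max() if drr.max() > 0 else drr     # normalize for display

# Example with a synthetic 64^3 volume: soft-tissue noise around a dark "bone" block.
rng = np.random.default_rng(0)
volume = rng.uniform(0.4, 1.0, size=(64, 64, 64)).astype(np.float32)
volume[24:40, 24:40, 24:40] = 0.1
print(simple_drr(volume).shape)  # (64, 64) projection image
```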

» Show BibTeX

@Article{Serrurier2015,
Title = {{MRI} {V}isualisation by {D}igitally {R}econstructed {R}adiographs},
Author = {Antoine Serrurier and Andrea B\"{o}nsch and Robert Lau and Thomas M. Deserno (n\'{e} Lehmann)},
Journal = {Proceeding of SPIE 9418, Medical Imaging 2015: PACS and Imaging Informatics: Next Generation and Innovations},
Year = {2015},
Pages = {94180I-94180I-7},
Volume = {9418},
Doi = {10.1117/12.2081845},
Url = {http://rasimas.imib.rwth-aachen.de/output_publications.php}
}





Immersive Art: Using a CAVE-like Virtual Environment for the Presentation of Digital Works of Art


Sebastian Pick, Andrea Bönsch, Dennis Scully, Torsten Wolfgang Kuhlen
Virtuelle und Erweiterte Realität, 12. Workshop der GI-Fachgruppe VR/AR (2015)
pubimg

Digital works of art are often created using some kind of modeling software, like Cinema4D. Usually they are presented in a non-interactive form, like large Diasecs, and can thus only be experienced by passive viewing. To explore alternative, more captivating presentation channels, we investigate the use of a CAVE virtual reality (VR) system as an immersive and interactive presentation platform in this paper. To this end, in a collaboration with an artist, we built an interactive VR experience from one of his existing works. We provide details on our design and report on the results of a qualitative user study.

» Show Videos
» Show BibTeX

@Article{Pick2015,
Title = {{Immersive Art: Using a CAVE-like Virtual Environment for the Presentation of Digital Works of Art}},
Author = {Pick, Sebastian and B\"{o}nsch, Andrea and Scully, Dennis and Kuhlen, Torsten W.},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 12. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2015},
Pages = {10-21},
ISBN = {978-3-8440-3868-2},
Publisher = {Shaker Verlag}
}





Poster: Guided Tour Creation in Immersive Virtual Environments


Sebastian Pick, Andrea Bönsch, Irene Tedjo-Palczynski, Bernd Hentschel, Torsten Wolfgang Kuhlen
IEEE Symposium on 3D User Interfaces (3DUI) (2014)
pubimg

Guided tours have been found to be a good approach to introducing users to previously unknown virtual environments and to allowing them access to relevant points of interest. Two important tasks during the creation of guided tours are the definition of views onto relevant information and their arrangement into an order in which they are to be visited. To allow a maximum of flexibility, an interactive approach to these tasks is desirable. To this end, we present and evaluate two approaches to the mentioned interaction tasks in this paper. The first approach is a hybrid 2D/3D interaction metaphor in which a tracked tablet PC is used as a virtual digital camera that allows to specify and order views onto the scene. The second one is a purely 3D version of the first one, which does not require a tablet PC. Both approaches were compared in an initial user study, whose results indicate a superiority of the 3D over the hybrid approach.

» Show BibTeX

@InProceedings{Pick2014,
Title = {{P}oster: {G}uided {T}our {C}reation in {I}mmersive {V}irtual {E}nvironments},
Author = {Sebastian Pick and Andrea B\"{o}nsch and Irene Tedjo-Palczynski and Bernd Hentschel and Torsten Kuhlen},
Booktitle = {IEEE Symposium on 3D User Interfaces (3DUI), 2014},
Year = {2014},
Month = {March},
Pages = {151-152},
Doi = {10.1109/3DUI.2014.6798865},
Url = {http://ieeexplore.ieee.org/xpl/abstractReferences.jsp?arnumber=6798865}
}





CAVIR: Correspondence Analysis in Virtual Reality. Ways to a Valid Interpretation of Correspondence Analytical Point Clouds in Virtual Environments


Frederik Graff, Andrea Bönsch, Daniel Bündgens, Torsten Wolfgang Kuhlen
International Masaryk Conference for Ph.D. Students and Young Researchers (2012)
pubimg

Correspondence Analysis (CA) is frequently used to interpret correlations between categorical variables in the area of market research. To do so, coherences of variables are converted to a three-dimensional point cloud and plotted as three different 2D mappings. The major challenge is to correctly interpret these plots. Due to a missing axis, distances can easily be under- or overestimated. This can lead to a misclustering and misinterpretation of data and thus to faulty conclusions. To address this problem we present CAVIR, an approach for CA in Virtual Reality. It supports users with a virtual three-dimensional representation of the point cloud and different options to show additional information, to measure Euclidean distances, and to cluster points. Besides, the free rotation of the entire point cloud enables the CA user to always have a correct view of the data.
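For readers unfamiliar with CA, the following minimal sketch (standard textbook CA via an SVD of the standardized residual matrix, not CAVIR's own code) shows how such a three-dimensional point cloud of row categories can be computed from a contingency table; the example table is invented.

```python
import numpy as np

def ca_row_coordinates(table: np.ndarray, dims: int = 3) -> np.ndarray:
    """Correspondence analysis: principal coordinates of the row categories.

    Standard recipe: normalize the contingency table, remove the independence
    model, whiten by row/column masses, take the SVD, and scale the left
    singular vectors back by the row masses.
    """
    P = table / table.sum()                               # correspondence matrix
    r = P.sum(axis=1)                                     # row masses
    c = P.sum(axis=0)                                     # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))    # standardized residuals
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    coords = (U * sigma) / np.sqrt(r)[:, None]            # principal row coordinates
    return coords[:, :dims]                               # first axes -> 3D point cloud

# Example: a small category-by-attribute frequency table mapped to 3D points.
counts = np.array([[30,  5, 12,  8],
                   [10, 25,  7, 14],
                   [ 6,  9, 28, 11],
                   [12, 14,  9, 30]], dtype=float)
print(ca_row_coordinates(counts).round(3))
```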

» Show BibTeX

@Article{Graff2012,
Title = {{CAVIR}: {C}orrespondence {A}nalysis in {V}irtual {R}eality. {W}ays to a {V}alid {I}nterpretation of {C}orrespondence {A}nalytical {P}oint {C}louds in {V}irtual {E}nvironments},
Author = {Frederik Graff and Andrea B\"{o}nsch and Daniel B\"{u}ndgens and Torsten Kuhlen},
Journal = {{C}onference {P}roceedings: {I}nternational {M}asaryk {C}onference for {P}h.{D}. {S}tudents and {Y}oung {R}esearchers},
Year = {2012},
Pages = {653-662},
Volume = {3},
Url = {http://www.vedeckekonference.cz/library/proceedings/mmk_2012.pdf}
}





CAVIR: Correspondence Analysis in Virtual Reality


Andrea Bönsch, Frederik Graff, Daniel Bündgens, Torsten Wolfgang Kuhlen
Virtuelle und Erweiterte Realität, 9. Workshop der GI-Fachgruppe VR/AR (2012)
pubimg

Correspondence Analysis (CA) is used to interpret correlations between categorical variables in the areas of social science and market research. To do so, coherences of variables are converted to a three-dimensional point cloud and plotted as several different 2D mappings, each containing two axes. The major challenge is to correctly interpret these plots. Due to a missing axis, distances can easily be under- or overestimated. This can lead to a misinterpretation and thus a misclustering of data. To address this problem we present CAVIR, an approach for CA in Virtual Reality. It supports users with a three-dimensional representation of the point cloud and different options to show additional information, to measure Euclidean distances, and to cluster points. Besides, the motion parallax and a free rotation of the entire point cloud enable the CA expert to always have a correct view of the data.

Best Presentation Award!

» Show BibTeX

@Article{Boensch2012,
Title = {{CAVIR}: {C}orrespondence {A}nalysis in {V}irtual {R}eality},
Author = {Andrea B\"{o}nsch and Frederik Graff and Daniel B\"{u}ndgens and Torsten Kuhlen},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 9. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2012},
Pages = {49-60},
ISBN = {978-3-8440-1309-2},
Publisher = {Shaker Verlag},
}





Efficiently Navigating Data Sets Using the Hierarchy Browser


Andrea Bönsch, Sebastian Pick, Bernd Hentschel, Torsten Wolfgang Kuhlen
Virtuelle und Erweiterte Realität, 8. Workshop der GI-Fachgruppe VR/AR (2011)
pubimg

A major challenge in Virtual Reality is to enable users to efficiently explore virtual environments, regardless of prior knowledge. This is particularly true for complex virtual scenes containing a huge amount of potential areas of interest. Providing the user convenient access to these areas is of prime importance, as is supporting her in orienting herself in the virtual scene. There exist techniques for either aspect, but combining these techniques into one holistic system is not trivial. To address this issue, we present the Hierarchy Browser. It supports the user in creating a mental image of the scene. This is done by offering a well-arranged, hierarchical visual representation of the scene structure as well as interaction techniques to browse it. Additional interaction allows triggering a scene manipulation, e.g. an automated travel to a desired area of interest. We evaluate the Hierarchy Browser by means of an expert walkthrough.

» Show BibTeX

@Article{Boensch2011,
Title = {{E}fficiently {N}avigating {D}ata {S}ets {U}sing the {H}ierarchy {B}rowser},
Author = {Andrea B\"{o}nsch and Sebastian Pick and Bernd Hentschel and Torsten Kuhlen},
Journal = {{V}irtuelle und {E}rweiterte {R}ealit\"at, 8. {W}orkshop der {GI}-{F}achgruppe {VR}/{AR}},
Year = {2011},
Pages = {37-48},
ISBN = {978-3-8440-0394-9},
Publisher = {Shaker Verlag}
}




