
Full author list
Krug, K.; Fink, D.; Ellenberg, M.; Reinschluessel, A.; Büschel, W.; Feuchtner, T.; Dachselt, R.
Abstract
This paper investigates avatar self-views in virtual reality (VR) and their impact on multi-user communication scenarios. Self-views in video conferencing have been shown to affect our self-perception and mental load, so we explore whether similar effects occur in VR, as personal and professional gatherings progressively move to virtual spaces with 3D avatars. We identify the following key design dimensions for self-representations in VR: spatiality, anchoring and size, and self-visibility. Based on these, we designed three variants (Remote Perspective View, Personal Mirror, and Miniature Avatar), which we compare to a baseline (No Additional Self-View) in a user study. Our analysis of sixteen dyads playing a word-guessing game requiring verbal and non-verbal communication (i.e., explaining and charades) in VR confirms that self-views are beneficial for communication scenarios that require expressive body language or facial expressions, as they allow users to monitor their own avatar’s animations. Further, the Miniature Avatar was preferred overall due to its spatiality, world-anchoring, and full self-visibility. Based on our results, we offer design recommendations for self-views covering the key design dimensions in VR.
Research Article
Accompanying Video
Publications
@article{krug2025mirrorme,
author = {Katja Krug and Daniel Immanuel Fink and Mats Ole Ellenberg and Anke V. Reinschluessel and Wolfgang B\"{u}schel and Tiare Feuchtner and Raimund Dachselt},
title = {Mirror Me: Exploring Avatar Self-Views in Multi-User Virtual Reality},
journal = {Proceedings of the ACM on Human-Computer Interaction (PACMHCI'25)},
volume = {9},
number = {8},
year = {2025},
month = {12},
numpages = {26},
doi = {10.1145/3773058},
publisher = {Association for Computing Machinery (ACM)}
}
List of additional material
Acknowledgments
This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy: EXC 2117 – 422037984 (Centre for the Advanced Study of Collective Behaviour (CASCB)), as well as EXC 2050/1 – Project ID 390696704 – Cluster of Excellence “Centre for Tactile Internet with Human-in-the-Loop” (CeTI) of Technische Universität Dresden, and by DFG grant 389792660 as part of TRR 248 – CPEC (see https://cpec.science).
The authors acknowledge the financial support by the Federal Ministry of Research, Technology and Space of Germany and by the Sächsisches Staatsministerium für Wissenschaft, Kultur und Tourismus in the programme Center of Excellence for AI Research “Center for Scalable Data Analytics and Artificial Intelligence Dresden/Leipzig”, project identification number: ScaDS.AI. The authors further acknowledge the financial support by the Federal Ministry of Research, Technology and Space of Germany in the programme “Souverän. Digital. Vernetzt.”, joint project 6G-life, project identification number: 16KISK001K. This work was also supported by the Alexander von Humboldt Foundation, funded by the German Federal Ministry of Research, Technology and Space. Lastly, the authors acknowledge the financial support of the Hector Foundation II. We thank Marc Satkowski for his insights and support and Uta Wagner for the inspiring discussions and valuable assistance with data analysis. We further thank Paul Rinau for his support in conducting the study.
