Inaugural Lecture by Stefanie Zollmann
Information about the event
Time
Location
InCuba Store Aud., building 5510-103, Aabogade 15, 8200 Aarhus N
Title:
Visual Computing for the Reality-Virtuality Continuum: From Situated Visualisation to Immersive Replay
Abstract:
Loading the game Paperboy from cassette tapes as a child and watching pixels come to life, and later experimenting with electronics at my stepdad's workplace, are experiences that shaped how I think about computation as a visual and spatial medium.
A chance conversation with a friend, who was frustrated by her computer science assignments, unexpectedly redirected my path from physics toward computing. This trajectory crystallised during my early work on projection-based Augmented Reality and smart projectors, where the challenge of aligning digital content with the physical world first became central to my research.
That early fascination with making digital content appear in the right place, at the right time, and in a perceptually convincing way continues to shape my research today. In this inaugural lecture, I present my work along the Reality–Virtuality Continuum, starting from the technical challenges of Augmented Reality, where spatial accuracy, temporal consistency, and perceptual coherence are achieved by grounding graphics and rendering in observed scene structure, appearance, and tracking.
I first address situated visualisation in AR, where accurate tracking and occlusion-aware rendering are essential for integrating virtual content into real-world environments. These foundations are examined in demanding XR applications in sports and industrial contexts, where real-time constraints, dynamic scenes, and limited observability drive the development of robust tracking and visualisation techniques. Building on these challenges, I extend this work toward view synthesis and rendering for AR and VR, enabling immersive replay and perceptually grounded exploration of captured environments beyond the original viewpoint.
Across this continuum, my work tightly couples computer vision methods for localisation and tracking with computer graphics techniques for rendering, view synthesis, and perceptual integration, supporting immersive systems that allow users to visualise information in the moment and re-experience complex environments over time.
----------------
Everyone is welcome!
There will be refreshments after the lecture.