Gestural Interaction in Ubiquitous Computing Environments

Largely driven by Moore's law and similar exponential trends in storage, networking, displays, and cameras, computing (microcontrollers, systems-on-chip, and the like) is increasingly being integrated into everyday objects and our environment. In the not-too-distant future, all surfaces in our environments may become displays, users and objects may be tracked in 3D, and many of our objects and furniture may become shape-changing and robotic. How we might interact with and control such environments is an open question. It poses significant challenges for the field of human-computer interaction, pushing it beyond the personal computing/GUI and phone/tablet/multitouch paradigms. Mid-air gestures with continuous feedback are a promising modality for such environments.

In this talk, I will present the challenges that this development poses for human-computer interaction. I will also sketch possible roads towards addressing these challenges. Finally, I will illustrate my approach through three case studies. The first involves gestural interaction with virtual sound sources hovering in mid-air. The second addresses transparency-controlled displays for collaborative settings. The third involves content presentation on very large (many-meter) displays.