SPP2199 Research Areas

To make interaction with larger device ensembles scalable in pervasive environments, three areas of interest must be addressed and their research questions answered within the scope of this priority programme.

Research Area 1: Designing Scalable Interaction Paradigms for Pervasive Environments

  • Design of efficient and meaningful scalable interaction paradigms: How do existing interaction paradigms scale to pervasive computing environments? What are the characteristics of interaction paradigms that can be used across devices and domains? How can we ensure that interaction paradigms are usable independently of the context while still accounting for context-induced restrictions? Are there fundamental limitations that prevent the adoption of a single pervasive interaction paradigm? How can these interaction paradigms address issues of efficiency as well as broader aspects of meaning?

Research Area 2: Methods to Study Interaction Paradigms in Pervasive Computing Environments

  • Rigorous and robust evaluation of scalable interaction paradigms: How can we evaluate interaction techniques that are supposed to work across a range of devices and domains? Can there be standardized study methods for evaluating interaction paradigms in pervasive computing environments? What are suitable methods for evaluating interaction paradigms in situ? How far can modern sensor technology extend unsupervised observation techniques toward a reliable understanding of how pervasive computing environments are actually used? Can model-based simulation of user interaction speed up the design phase and enable designers to select promising interaction designs early in the design process?
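To illustrate the last question, a minimal sketch of model-based simulation: one established predictive model such simulations could build on is Fitts' law, which predicts pointing time from target distance and width. The layouts, distances, and the fitted constants `a` and `b` below are illustrative assumptions, not measured values.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Fitts' law (Shannon formulation): MT = a + b * log2(D/W + 1).

    a and b are device- and user-specific constants obtained by
    regression on empirical data; the defaults here are assumed
    placeholder values for illustration only.
    """
    return a + b * math.log2(distance / width + 1)

# Two hypothetical button layouts, each a list of (distance, width) in px.
layout_compact = [(80, 20), (120, 20), (200, 20)]
layout_spread = [(300, 40), (450, 40), (600, 40)]

for name, layout in [("compact", layout_compact), ("spread", layout_spread)]:
    total = sum(fitts_movement_time(d, w) for d, w in layout)
    print(f"{name}: predicted total movement time {total:.2f} s")
```

Such a model lets a designer rank candidate layouts before any user study, which is the kind of early-phase selection the question above asks about; whether comparable models exist for cross-device pervasive interaction is precisely the open research issue.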

Research Area 3: Metrics and Models to Evaluate Future Pervasive Interactive Systems

  • Assessment of the success of interaction paradigms: What metrics measure and describe actual success, effectiveness, and satisfaction in pervasive computing environments? By what score or threshold do we rate a design not only as effective and efficient, but also as meaningful and pleasant for an individual? What is a good balance between traditional performance metrics, such as task completion time and error rate, and user experience, joy of use, and well-being? What are meaningful testbeds for verifying the results?
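One simple way to frame the balancing question above is a weighted composite of normalized metrics. The sketch below assumes all metrics are already normalized to [0, 1] with higher meaning better; the metric names, session values, and weights are hypothetical illustrations, not values proposed by the programme.

```python
def composite_score(metrics, weights):
    """Weighted sum of normalized metrics in [0, 1]; higher is better.

    The choice of weights encodes the performance-vs-experience
    trade-off that Research Area 3 asks about; there is no
    established default, so these must be justified per study.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * metrics[name] for name in weights)

# Hypothetical session, normalized so 1.0 = best observed, 0.0 = worst.
session = {
    "task_speed": 0.85,   # 1 - normalized completion time
    "accuracy": 0.92,     # 1 - normalized error rate
    "experience": 0.70,   # normalized UX questionnaire score
    "well_being": 0.60,   # normalized self-reported well-being
}
weights = {"task_speed": 0.3, "accuracy": 0.3,
           "experience": 0.25, "well_being": 0.15}

print(f"composite score: {composite_score(session, weights):.3f}")
```

The open questions remain which metrics belong in such a composite at all, how to normalize them comparably across devices and domains, and whether a single scalar can meaningfully capture effectiveness alongside pleasure and well-being.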