Conversations on Scalable Interaction Paradigms

The proliferation of everyday computing will accelerate until almost every object is equipped with processors, sensors, and actuators. We are moving from dedicated “computers” to distributed ensembles of computational devices, which we call pervasive computing environments (PCEs). The shape of what is to come can already be seen in modern office spaces, on production lines, in hospital operating theatres, and in smart homes.

In this program, we discuss how users can efficiently, effectively, and enjoyably act through and interact with PCEs. As the boundaries between devices and the physical world melt away, users must be enabled to control whole ensembles of devices. And how can users successfully transfer knowledge from one PCE to another? Since their sheer number will make learning to interact with individual devices extremely difficult, users must be enabled to transfer interaction knowledge from one setting to another.

In this conversation series, we will discuss these challenges with international experts in the field. The conversations will take place in the winter semester 2021/2022 every Tuesday from 17:00-18:00 via an online conference tool. After a short presentation of our guest, the audience is invited to join the conversation moderated by a member of the Priority Programme.

Upcoming conversations

 

Digital fabrication of new materials
Stefanie Mueller, Tuesday, Dec. 14th, 1700 CET

Bio: Stefanie Mueller is an assistant professor in the MIT EECS department and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). In her research, she develops novel hardware and software systems that advance personal fabrication technologies. Stefanie publishes her work at the most selective HCI venues, CHI and UIST, and has received a best paper award and two best paper nominations. She also serves on the CHI and UIST program committees as an associate chair. In addition, Stefanie has been an invited speaker at universities and research labs such as Harvard, Stanford, UC Berkeley, CMU, Microsoft Research, Disney Research, and Adobe Research.

Stefanie directs the HCI Engineering group at CSAIL and is actively recruiting Postdocs, PhD students, and interns interested in helping to kickstart this new lab. Interested Postdocs can email her directly. For a PhD position please apply through MIT’s PhD admission page.

Meeting link: TBC

Past Events

Multimodal Interaction for Immersive Analytics
Francisco R. Ortega, Tuesday, Nov. 30th, 1700 CET

Abstract: Dr. Ortega’s motivation comes from Dr. Weiser’s article on computing in the 21st century, whose idea is to allow users to concentrate on daily tasks without the barriers posed by technology, in effect making the computer invisible. Dr. Ortega’s early work concentrated on gestures, which he will describe during this talk. Building on this knowledge, Dr. Ortega’s Natural User Interaction lab (NUILAB) has been working on gestures and multimodal interaction. One important question is which domain would provide the best-case scenario for generalizing AR multimodal input modalities. Dr. Ortega and his lab have concentrated on immersive analytics (i.e., 3D visualization in stereoscopic rendering). During the talk, Dr. Ortega will discuss the existing challenges, preliminary results, and the way forward. Today, more than ever, the saying “there is no single silver bullet” for interaction remains at the heart of Dr. Ortega’s research on multimodal interaction.

Bio: Francisco R. Ortega is an Assistant Professor at Colorado State University (CSU) and Director of the Natural User Interaction lab (NUILAB). Dr. Ortega earned his Ph.D. in Computer Science (CS) in the field of Human-Computer Interaction (HCI) and 3D User Interfaces (3DUI) from Florida International University (FIU). He also held the positions of Post-Doc and Visiting Assistant Professor at FIU from February 2015 to July 2018. Broadly speaking, his research has focused on multimodal and unimodal interaction (gesture-centric), which includes gesture recognition and elicitation (e.g., a form of participatory design). His main research area focuses on improving user interaction by (a) multimodal elicitation, (b) developing interactive techniques, and (c) improving augmented reality visualization techniques. The primary domains for interaction include immersive analytics, assembly, Navy use cases, and collaborative environments using augmented reality headsets.

Dr. Ortega’s funding record includes the National Science Foundation (NSF), DARPA, and the Office of Naval Research (ONR), among others. He was a co-PI for the DARPA Communicating with Computers project (over 4 million dollars). He is currently PI of a 3-year ONR effort (PM: Dr. Peter Squire) titled Perceptual/Cognitive Aspects of Augmented Reality: Experimental Research and a Computational Model, alongside co-PIs Dr. Chris Wickens and Dr. Ben Clegg.

Interaction in 3D Spaces
Robert J. Teather, Tuesday, Nov. 23rd, 1700 CET

Abstract: Virtual reality (VR) has recently become popular again with the release of low-cost and effective consumer-grade head-mounted displays such as the Oculus Rift. The longstanding dream of VR has users interacting with virtual objects as naturally as real ones. In practice, despite technological advances, numerous technical and human factors make this difficult. Modern VR interaction continues to employ naturally-inspired interaction techniques that have changed little since their introduction in the late 80s. Similarly, cybersickness and the lack of tactile feedback when interacting with virtual objects are well-known to limit the effectiveness of VR systems, yet these issues persist today. In this talk, I will discuss my research addressing these three interrelated areas of virtual reality interaction. I will first describe my studies comparing 3D selection interfaces between 3D and desktop systems, and my work in extending a standardized methodology to support fair and direct comparison between these two different modalities. I will then discuss my research group’s recent work employing this standardized methodology for evaluating novel 3D selection methods, as well as other projects aimed at enhancing the usability of VR systems through evaluating the effectiveness of cybersickness reduction techniques and novel approaches to VR haptics that employ shape-changing devices and perceptual illusions.

Bio: Robert Teather is an associate professor in the School of Information Technology at Carleton University. He previously worked as a postdoctoral fellow on the G-Scale project with Dr. Jacques Carette. He is currently recruiting graduate students in human-computer interaction and interactive digital media at Carleton.

 

Real-Time Eye Tracking: Applications, Analytics, Implementation
Andrew Duchowski, Tuesday, Nov. 16th, 1700 CET

Abstract: Starting with an overview of eye-tracking applications, interactive applications are reviewed: assistive (gaze-responsive), active (selection, look to shoot), passive (foveated rendering, a.k.a. gaze-contingent displays), and expressive (gaze synthesis). These applications expose the need for advanced real-time eye movement processing beyond the current state of the art. Offline event detection via velocity-based filtering or position-variance approaches seems inadequate in online settings. Suggestions are made regarding possible approaches to event detection, ambient/focal attention modeling, smooth pursuit tracking and interaction, and real-time pupillometric measures, e.g., of cognitive load. Interactive Python demonstrations will be given of potential real-time approaches to illustrate present limitations and directions for future development.
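The velocity-based filtering mentioned in the abstract is typically a velocity-threshold identification (I-VT) scheme: samples whose point-to-point gaze velocity exceeds a cutoff are labeled saccades, the rest fixations. A minimal NumPy sketch (the function name, default threshold, and units are illustrative assumptions, not taken from the talk):

```python
import numpy as np

def ivt_classify(x, y, t, vel_threshold=100.0):
    """Label gaze samples as fixation (0) or saccade (1) with a
    simple velocity-threshold (I-VT) scheme.

    x, y: gaze coordinates in degrees of visual angle
    t:    timestamps in seconds
    vel_threshold: velocity cutoff in deg/s (illustrative default;
                   tune per tracker and setup)
    """
    x, y, t = map(np.asarray, (x, y, t))
    # point-to-point angular velocity between consecutive samples
    vel = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    labels = (vel > vel_threshold).astype(int)
    # copy the first label so the output matches the input length
    return np.concatenate(([labels[0]], labels))
```

This offline version classifies a whole recording at once; a real-time variant would instead update labels sample by sample over a short sliding window, which is precisely where the online difficulties the abstract alludes to arise.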

Bio: Dr. Duchowski is a professor of Computer Science at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Canada, and doctorate (1997) from Texas A&M University, USA, both in Computer Science. His research and teaching interests include visual attention and perception, computer vision, and computer graphics. He is a noted research leader in the field of eye tracking, having produced a corpus of related papers and a monograph on eye tracking methodology, and has delivered courses and seminars on the subject at international conferences. He maintains the eyeCU, Clemson’s eye tracking laboratory, and teaches a regular course on eye tracking methodology that attracts students from a variety of disciplines across campus.

My Work In Gesture: Design, Recognition & Open Questions

Jacob Wobbrock, Tuesday, Nov. 9th, 1700 CET

Jacob Wobbrock is a Professor of human-computer interaction (HCI) in The Information School and, by courtesy, in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he directs the ACE Lab and co-directs the CREATE Center. He is also a founding member of the DUB Group and of the MHCI+D degree program. His Ph.D. students come from information science and computer science.

His research seeks to scientifically understand people’s experiences of computers and information, and to improve those experiences by inventing and evaluating new interactive technologies, especially for people with disabilities. His specific research topics include input and interaction techniques, human performance measurement and modeling, HCI research and design methods, mobile computing, and accessible computing.
https://faculty.washington.edu/wobbrock/
https://twitter.com/wobbrockjo

Gaze for Interaction with Ubiquitous Computing Systems: Basics, Foundations, Advances, and Challenges

Enkelejda Kasneci and Hans Gellersen, Tuesday, Nov. 2nd, 1700 CET

Enkelejda Kasneci is a full professor of Computer Science at the University of Tübingen, Germany, where she leads the Human-Computer Interaction Group. As a BOSCH-scholar, she received her M.Sc. degree in Computer Science from the University of Stuttgart in 2007. In 2013, she received her PhD in Computer Science from the University of Tübingen. For her PhD research, she was awarded the research prize of the Federation Südwestmetall in 2014. From 2013 to 2015, she was a Margarete-von-Wrangell Fellow. Her main research interests are applied machine learning, eye-tracking technology and applications. She serves as a reviewer and PC member for several journals and major conferences. In 2016, she founded LeadersLikeHer, the world’s first open career network for women from industrial, research and public organizations.
https://www.hci.uni-tuebingen.de/chair/team/enkelejda-kasneci
https://twitter.com/enkelejdakasne1

Hans Gellersen is a full professor in the Department of Computing and Communications at Lancaster University. His interests are in HCI, human interface technology, and the design of novel sensing and interaction techniques for anything from smart devices to AR/VR. In particular, he is interested in eye movement, and he recently won an ERC Advanced Grant to investigate new foundations for gaze and gestural interaction. Over the last ten years, his group has contributed major innovations on gaze in HCI, notably on smooth pursuit interfaces and techniques, gaze-supported manual input, and eye-head interaction. Recently, he has also investigated interaction in 3D, but he maintains long-standing interests in ubiquitous computing, cross-device interaction, and interfaces that blend the digital and the virtual.
https://www.lancaster.ac.uk/scc/about-us/people/hans-gellersen
https://twitter.com/HansGellersen

Digital Touch: Somatic Symbiosis, Correspondence, Alterity, or Monster

Kristina Höök, Tuesday, Oct. 26th, 1700 CEST

Kristina Höök is a professor in Interaction Design at KTH Royal Institute of Technology in Stockholm and was formerly director of the Mobile Life centre. Her research interests include affective interaction, somaesthetic design, the internet of things, and anything that makes life with technology more meaningful, enjoyable, creative, and aesthetically appealing.

Abstract
Three recent technical and societal developments are challenging the existing ideals of interaction design: the move towards hybrid physical/digital materials, the emergence of an increasingly complex and fluid digital ecology, and the increasing proportion of autonomous or partially autonomous systems that change their behavior over time and with use. These challenges in turn motivate three directions in which new ideals for interaction design might be sought: the first is to go beyond the language-body divide that implicitly frames most of our current understandings of experience and meaning; the second is to extend the scope of interaction design from individual interfaces to the complex socio-technical fabric of human and nonhuman actors; and the third is to go beyond predictability by learning to design with machine learning. This has led to a rise of novel theoretical positions on how to define our relationship to smart objects, autonomous technologies, infrastructures, or wearables. Postphenomenological, pragmatist, somaesthetic, feminist, and sociological theories propose other engagements and other ways of interacting, sometimes decentring the “human”, sometimes enriching or questioning what can be understood as the category “human”. The postphenomenological position is that embodiment is but one of the possible interactions: engaging with an alterity or entering into a hermeneutic relationship could be other ways of framing our interactions with technology. In somaesthetics and various feminist theories, a pluralist, soma-grounded understanding of the human condition leads to richer, more complex somasensory entanglements with technologies. Theories of correspondence help us shift beyond the idea of dialogue or interaction. Sometimes, the experience is best framed as in the humanities: as a monster.

The talk is based on:

  • Höök, K. (2018). Designing with the body: Somaesthetic interaction design. MIT Press.
  • Höök, K., & Löwgren, J. (2021). Characterizing interaction design by its ideals: A discipline in transition. She Ji: The Journal of Design, Economics, and Innovation. Accepted for publication.
  • Karpashevich, P., Sanches, P., Cotton, K., Garrett, R., Luft, Y., Tsaknaki, V., & Höök, K. (forthcoming, 2022). Touching Our Breathing through Shape-Change: Other, Cyborg or Twisted Mirror. Accepted to TOCHI’s special issue on Digital Touch.
  • Höök, K., Benford, S., Tennent, P., Tsaknaki, V., Alfaras, M., Martinez Avila, J., Li, C., Marshall, J., Daudén Roquet, C., Sanches, P., Ståhl, A., Umair, M., Windlin, C., & Zhou, F. (2021). Unpacking non-dualistic design: The soma design case. (December 2021), 35 pages.