Conversations on Scalable Interaction Paradigms

The proliferation of everyday computing will accelerate until almost every object is equipped with processors, sensors, and actuators. We are moving from dedicated “computers” to distributed ensembles of computational devices, which we call pervasive computing environments (PCEs). The shape of what is to come can already be seen in modern office spaces, production lines, hospital operating theatres, and smart homes.

In this programme, we aim to discuss how users can efficiently, effectively, and enjoyably act through and interact with PCEs. As the boundaries between devices and the physical world melt away, users must be enabled to control whole ensembles of devices. We also ask how users can successfully transfer knowledge from one PCE to another: because the sheer number of devices makes learning to interact with each one individually extremely difficult, users must be enabled to transfer interaction knowledge from one setting to another.

In this conversation series, we will discuss these challenges with international experts in the field. The conversations will take place in the winter semester 2021/2022 every Tuesday from 17:00-18:00 via an online conference tool. After a short presentation of our guest, the audience is invited to join the conversation moderated by a member of the Priority Programme.

 

Usable is not enough: towards human-centred, positive and inclusive security
Prof. Dr. Angela Sasse, Tuesday, Feb. 1st, 1700 CET 

Bio: Dr. Angela Sasse is a computer scientist whose research spans the areas of human-computer interaction and computer security. She is Horst Görtz Endowed Professor of Human-Centred Security at Ruhr University Bochum and has a part-time position as Professor of Human-Centred Technology at University College London. Angela will conduct a conversation titled “Usable is not enough: towards human-centred, positive and inclusive security”.

Abstract: Usable security as a research area can be traced back to 1999, the year when the seminal papers “Why Johnny Can’t Encrypt” and “Users Are Not the Enemy” were published. But in the practitioner discourse, humans are still portrayed as the “weakest link”, and much of the research effort has focused on “fixing” people through education and persuasion, or on motivating them through fear. Human-centred security is changing the focus: paying more attention to marginalised user groups, giving users more agency, and engaging them in the design process. As important as good design is fostering the development of secure habits that work across contexts and empower users, rather than leaving them scared and helpless.

 

Photo credit: © Thorsten Mohr | Saarland Informatics Campus
Social Acceptability in HCI – Past, Present and Future
Marion Koelle, Tuesday, Jan. 25th, 1700 CET 

Bio: Dr. Marion Koelle is a creative technologist and researcher in Human-Computer Interaction. Marion is currently a Post-Doc at University of Duisburg-Essen (UDE Campus Essen) and has a passion for designing and developing hardware, software and user experiences that adapt to social and societal needs and challenges.

Marion will conduct a conversation centering on methods for evaluating social acceptability and strategies for designing socially acceptable human-machine interactions.

HCI in Safety-Critical Spaces
Margareta Holtensdotter Lützhöft & Philippe Palanque, Tuesday, Jan. 18th, 1700 CET 

Bio: Dr. Margareta Lützhöft is a master mariner, trained at Kalmar Maritime Academy in Sweden. After leaving the sea, she studied for a Bachelor’s degree in Cognitive Science and a Master’s in Computer Science. In 2004 she received a PhD in Human-Machine Interaction, and she has been Associate Professor at Chalmers University of Technology and Professor of Nautical Studies at the University of Tasmania, Australia. Presently, she holds a position as Professor in the MarSafe group at the Western Norway University of Applied Sciences and is leader of the MarCATCH Research Centre. Her research interests include human-centered design and the effects of new technology, and she has published in these and other areas relating to maritime safety.

Bio: Dr. Philippe Palanque is Professor in Computer Science at the University Toulouse 3 “Paul Sabatier” and is head of the Interactive Critical Systems group at the Institut de Recherche en Informatique de Toulouse (IRIT) in France. He is involved in the research network HALA! (Higher Automation Levels in Aviation), funded by the SESAR programme, which aims at building the future European air traffic management system. He has worked for more than 10 years on research projects to improve interactive Ground Segment Systems at the Centre National d’Etudes Spatiales (CNES) and is also involved in the development of software architectures and user interface modeling for interactive cockpits in large civil aircraft (funded by Airbus). Philippe received the ACM Lifetime Service Award in 2021.

Personality for Things
Michael Coldewey, Tuesday, Jan. 11th, 1700 CET 

Bio: Michael Coldewey is a Professor of Visual Effects (VFX) at the University of Television and Film Munich (HFF München). Aside from his work in academia, he is also an executive producer, producer, and CEO of COLD ‘N’ TOWN Productions GmbH, based in Munich and Los Angeles.

Michael uses his vast experience from working in the local and international film industry to teach VFX to students at HFF München.

Michael has been credited as Executive Producer VFX on popular films including White House Down, Iron Man 3, Marvel’s The Avengers, and Captain America: The First Avenger.

Digital fabrication of new materials
Stefanie Mueller, Tuesday, Dec. 14th, 1700 CET 

Bio: Stefanie Mueller is an assistant professor in the MIT EECS department and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). In her research, she develops novel hardware and software systems that advance personal fabrication technologies. Stefanie publishes her work at the most selective HCI venues, CHI and UIST, and has received a best paper award and two best paper nominations. She also serves on the CHI and UIST program committees as an associate chair. In addition, Stefanie has been an invited speaker at universities and research labs such as Harvard, Stanford, UC Berkeley, CMU, Microsoft Research, Disney Research, and Adobe Research.

Stefanie directs the HCI Engineering group at CSAIL and is actively recruiting postdocs, PhD students, and interns interested in helping to kickstart this new lab. Interested postdocs can email her directly. For a PhD position, please apply through MIT’s PhD admissions page.

Multimodal Interaction for Immersive Analytics
Francisco R. Ortega, Tuesday, Nov. 30th, 1700 CET

Bio: Francisco R. Ortega is an Assistant Professor at Colorado State University (CSU) and Director of the Natural User Interaction Lab (NUILAB). Dr. Ortega earned his Ph.D. in Computer Science (CS) in the field of Human-Computer Interaction (HCI) and 3D User Interfaces (3DUI) from Florida International University (FIU). He also held positions as a Post-Doc and Visiting Assistant Professor at FIU from February 2015 to July 2018. Broadly speaking, his research has focused on multimodal and unimodal interaction (gesture-centric), which includes gesture recognition and elicitation (e.g., a form of participatory design). His main research area focuses on improving user interaction by (a) multimodal elicitation, (b) developing interaction techniques, and (c) improving augmented reality visualization techniques. The primary domains for interaction include immersive analytics, assembly, Navy use cases, and collaborative environments using augmented reality headsets.

Dr. Ortega’s funding record includes the National Science Foundation (NSF), DARPA, and the Office of Naval Research (ONR), among others. He was a co-PI on the DARPA Communicating with Computers project (over 4 million dollars). He is currently PI of a 3-year ONR effort (PM: Dr. Peter Squire) titled “Perceptual/Cognitive Aspects of Augmented Reality: Experimental Research and a Computational Model”, alongside co-PIs Dr. Chris Wickens and Dr. Ben Clegg.

Abstract: Dr. Ortega’s motivation comes from Dr. Weiser’s article about the computing of the 21st century, where the idea is to allow users to concentrate on daily tasks without the barriers posed by technology, hence making the computer invisible. Dr. Ortega’s early approach concentrated on gestures, which will be described during this talk. Building on this knowledge, Dr. Ortega’s Natural User Interaction Lab (NUILAB) has been working on gestures and multimodal interaction. One important question is which domain provides the best-case scenario for generalizing AR multimodal input modalities. Dr. Ortega and his lab have concentrated on Immersive Analytics (i.e., 3D visualization in stereoscopic rendering). During the talk, Dr. Ortega will discuss the existing challenges, preliminary results, and the way forward. Today, more than ever, the saying that “there is no single silver bullet” for interaction remains at the heart of Dr. Ortega’s research on multimodal interaction.

Interaction in 3D Spaces
Robert J. Teather, Tuesday, Nov. 23rd, 1700 CET

Bio: Robert Teather is an associate professor in the School of Information Technology at Carleton University. He previously worked as a postdoctoral fellow on the G-Scale project with Dr. Jacques Carette. He is currently recruiting graduate students in human-computer interaction and interactive digital media at Carleton.

Abstract: Virtual reality (VR) has recently become popular again with the release of low-cost and effective consumer-grade head-mounted displays such as the Oculus Rift. The longstanding dream of VR has users interacting with virtual objects as naturally as real ones. In practice, despite technological advances, numerous technical and human factors make this difficult. Modern VR interaction continues to employ naturally-inspired interaction techniques that have changed little since their introduction in the late 80s. Similarly, cybersickness and the lack of tactile feedback when interacting with virtual objects are well-known to limit the effectiveness of VR systems, yet these issues persist today. In this talk, I will discuss my research addressing these three interrelated areas of virtual reality interaction. I will first describe my studies comparing 3D selection interfaces between 3D and desktop systems, and my work in extending a standardized methodology to support fair and direct comparison between these two different modalities. I will then discuss my research group’s recent work employing this standardized methodology for evaluating novel 3D selection methods, as well as other projects aimed at enhancing the usability of VR systems through evaluating the effectiveness of cybersickness reduction techniques and novel approaches to VR haptics that employ shape-changing devices and perceptual illusions.
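The abstract does not spell out which standardized methodology is meant; one widely used candidate for fair, direct comparison of selection performance across input modalities is an ISO 9241-9-style effective throughput measure. The following is a minimal Python sketch of that measure under this assumption; the function name, the 4.133 constant convention, and the sample numbers are illustrative, not taken from the talk.

    # Hedged sketch: ISO 9241-9-style effective throughput for one pointing/selection
    # condition. All names and numbers below are illustrative assumptions.
    from math import log2
    from statistics import mean, stdev

    def effective_throughput(amplitude, movement_times_s, endpoint_errors):
        """Throughput in bits/s for one amplitude x width condition.

        amplitude        -- nominal target distance (same unit as the errors)
        movement_times_s -- per-trial selection times in seconds
        endpoint_errors  -- per-trial signed deviations from the target centre
        """
        w_e = 4.133 * stdev(endpoint_errors)   # effective target width
        id_e = log2(amplitude / w_e + 1.0)     # effective index of difficulty (bits)
        return id_e / mean(movement_times_s)   # bits per second

    # Illustrative numbers only:
    print(effective_throughput(200.0, [0.61, 0.58, 0.65, 0.63], [3.1, -2.4, 1.8, -0.9]))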

 

Real-Time Eye Tracking: Applications, Analytics, Implementation
Andrew Duchowski, Tuesday, Nov. 16th, 1700 CET

Bio: Dr. Duchowski is a professor of Computer Science at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Canada, and doctorate (1997) from Texas A&M University, USA, both in Computer Science. His research and teaching interests include visual attention and perception, computer vision, and computer graphics. He is a noted research leader in the field of eye tracking, having produced a corpus of related papers and a monograph on eye tracking methodology, and has delivered courses and seminars on the subject at international conferences. He maintains the eyeCU, Clemson’s eye tracking laboratory, and teaches a regular course on eye tracking methodology that attracts students from various disciplines across campus.

Abstract: The talk begins with an overview of eye-tracking applications and reviews four interactive classes: assistive (gaze-responsive), active (selection, look to shoot), passive (foveated rendering, a.k.a. gaze-contingent displays), and expressive (gaze synthesis). These applications expose the need for advanced real-time eye movement processing beyond the current state of the art. Offline event detection via velocity-based filtering or position-variance approaches seems inadequate in online settings. Suggestions are made regarding possible approaches to event detection, ambient/focal attention modeling, smooth pursuit tracking and interaction, and real-time pupillometric measures, e.g., of cognitive load. Interactive Python demonstrations of potential real-time approaches will illustrate present limitations and directions for future development.
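As a concrete illustration of the kind of real-time, velocity-based event detection mentioned above, here is a minimal Python sketch of an online velocity-threshold (I-VT-style) classifier. The class name, default threshold, and sample format are assumptions made for this page, not the approach demonstrated in the talk.

    # Minimal sketch of online velocity-threshold (I-VT-style) event detection.
    # Class name, threshold value, and sample format are illustrative assumptions.
    from math import hypot

    class VelocityEventDetector:
        """Label incoming gaze samples as 'fixation' or 'saccade' one at a time."""

        def __init__(self, saccade_threshold_deg_per_s=30.0):
            self.threshold = saccade_threshold_deg_per_s  # common heuristic; tune per setup
            self.prev = None  # last sample as (t_seconds, x_deg, y_deg)

        def feed(self, t, x, y):
            """Return a label for the newest sample, or None if it cannot be classified."""
            if self.prev is None:
                self.prev = (t, x, y)
                return None
            t0, x0, y0 = self.prev
            self.prev = (t, x, y)
            dt = t - t0
            if dt <= 0:
                return None  # duplicate or out-of-order timestamp
            velocity = hypot(x - x0, y - y0) / dt  # deg/s, assuming gaze angles in degrees
            return "saccade" if velocity > self.threshold else "fixation"

    # Usage with a stream of (timestamp_s, x_deg, y_deg) samples:
    detector = VelocityEventDetector()
    for sample in [(0.000, 1.00, 1.00), (0.004, 1.02, 1.01), (0.008, 3.50, 2.90)]:
        print(detector.feed(*sample))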

My Work In Gesture: Design, Recognition & Open Questions

Jacob Wobbrock, Tuesday, Nov. 9th, 1700 CET

Bio: Jacob Wobbrock is a Professor of human-computer interaction (HCI) in The Information School and, by courtesy, in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he directs the ACE Lab and co-directs the CREATE Center. He is also a founding member of the DUB Group and of the MHCI+D degree program. His Ph.D. students come from information science and computer science.

His research seeks to scientifically understand people’s experiences of computers and information, and to improve those experiences by inventing and evaluating new interactive technologies, especially for people with disabilities. His specific research topics include input and interaction techniques, human performance measurement and modeling, HCI research and design methods, mobile computing, and accessible computing.

Gaze for Interaction with Ubiquitous Computing Systems: Basics, Foundations, Advances, and Challenges

Enkelejda Kasneci and Hans Gellersen, Tuesday, Nov. 2nd, 1700 CET

Bio: Enkelejda Kasneci is a full professor of Computer Science at the University of Tübingen, Germany, where she leads the Human-Computer Interaction Group. As a BOSCH-scholar, she received her M.Sc. degree in Computer Science from the University of Stuttgart in 2007. In 2013, she received her PhD in Computer Science from the University of Tübingen. For her PhD research, she was awarded the research prize of the Federation Südwestmetall in 2014. From 2013 to 2015, she was a Margarete-von-Wrangell Fellow. Her main research interests are applied machine learning, eye-tracking technology and applications. She is a reviewer and PC member for several journals and major conferences. In 2016, she founded LeadersLikeHer, the world’s first open career network for women from industrial, research and public organizations.

Bio: Hans Gellersen is a full professor in the department of computing and communications at Lancaster University. His interests are in HCI, human interface technology, and the design of novel sensing and interaction techniques for anything from smart devices to AR/VR. In particular, he is interested in eye movement and recently won an ERC Advanced Grant to investigate new foundations for gaze and gestural interaction. Over the last ten years, his group has contributed major innovations on gaze in HCI, notably smooth pursuit interfaces and techniques, gaze-supported manual input, and eye-head interaction. Recently he has also investigated interaction in 3D, but he maintains long-standing interests in ubiquitous computing, cross-device interaction, and interfaces that blend the digital and the virtual.

Digital Touch: Somatic Symbiosis, Correspondence, Alterity, or Monster

Kristina Höök, Tuesday, Oct. 26th, 1700 CEST

Bio: Kristina Höök is a professor in Interaction Design at KTH Royal Institute of Technology in Stockholm (and used to be the director of the Mobile Life centre). Her research interests include affective interaction, somaesthetic design, internet of things and anything that makes life with technology more meaningful, enjoyable, creative and aesthetically appealing.

Abstract: Three recent technical and societal developments are challenging the existing ideals of interaction design, namely the move towards hybrid physical/digital materials, the emergence of an increasingly complex and fluid digital ecology, and the increasing proportion of autonomous or partially autonomous systems changing their behavior over time and with use. These challenges in turn motivate us to propose three directions in which new ideals for interaction design might be sought: the first is to go beyond the language-body divide that implicitly frames most of our current understandings of experience and meaning, the second is to extend the scope of interaction design from individual interfaces to the complex socio-technical fabric of human and nonhuman actors, and the third is to go beyond predictability by learning to design with machine learning. This has led to a rise of novel theoretical positions on how to define our relationship to smart objects, autonomous technologies, infrastructures or wearables. Postphenomenological, pragmatist, somaesthetic, feminist and sociological theories propose other engagements, other ways of interacting, sometimes decentring the “human”, sometimes enriching or questioning what can be understood as the category “human”. The postphenomenological position is that embodiment is but one of the possible interactions: engaging with an alterity or entering into a hermeneutic relationship could be other ways of framing our interactions with technology. In somaesthetics and different feminist theories, a pluralist, soma-grounded understanding of the human condition leads to richer, more complex somasensory entanglements with technologies. Theories of correspondence help us shift beyond the idea of dialogue or interaction. Sometimes, the experience is best framed as in the humanities: as a monster.

The talk is based on:

  • Höök, K. (2018). Designing with the body: Somaesthetic interaction design. MIT Press.
  • Höök, K. and Löwgren, J. (2021). Characterizing interaction design by its ideals: A discipline in transition. She Ji: The Journal of Design, Economics, and Innovation. Accepted for publication.
  • Karpashevich, P., Sanches, P., Cotton, K., Garrett, R., Luft, Y., Tsaknaki, V., and Höök, K. (forthcoming, 2022). Touching Our Breathing through Shape-Change: Other, Cyborg or Twisted Mirror. Accepted to TOCHI’s special issue on Digital Touch.
  • Höök, K., Benford, S., Tennent, P., Tsaknaki, V., Alfaras, M., Martinez Avila, J., Li, C., Marshall, J., Daudén Roquet, C., Sanches, P., Ståhl, A., Umair, M., Windlin, C., and Zhou, F. (2021). Unpacking non-dualistic design: The soma design case. ACM Transactions on Computer-Human Interaction (TOCHI), December 2021, 35 pages.