SimGest -- Simulation of scalable gesture-based human-machine interaction

Principal Investigators

Dr. Marc Hesenius, University of Duisburg-Essen
Prof. Dr. Stefan Schneegass, University of Duisburg-Essen

Gesture control is a form of human-computer interaction in which no pointer is moved to a virtual object; instead, the user's movement itself constitutes the interaction. Today, surface gestures are a common interaction modality on mobile phones and large interactive whiteboards, but with advances in virtual and augmented reality and the increasing availability of head-mounted displays, spatial gestures are becoming increasingly important. In particular, use cases covering unusual scenarios impose additional constraints on gesture design and require an interaction concept that takes contextual factors and other aspects into account.

However, software developers face a variety of challenges when developing gesture-based applications. Gestures must be recognizable, i.e. the user's movements must be matched against known gestures, and the application must react accordingly. Gestures must be robust, meaning that the application's response must not be affected by variations in how different users perform them. Gestures also need to fit the device, with factors such as how the device is held playing a role. Above all, gestures must fit the user: ergonomics, memorability, and semantic ambiguity of the gestures, as well as motor skills, must be taken into account during development, as they strongly influence the individual user experience. Finally, gestures must scale and adapt to the user's context and environment, i.e. the current situation the user is in, e.g. social environment, location, or current task.

These challenges cannot be met with suitable development tools alone: above all, they require intensive testing of the application in general and of the interaction modalities in particular. Interaction testing involves generating inputs and examining outputs, i.e. performing real gestures and checking whether the application responds as expected.
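To illustrate what "matching the user's movements against known gestures" means in practice, the following sketch shows a minimal template-based stroke recognizer in the spirit of classic resample-and-compare approaches (e.g. the $1 recognizer). All function names, the point count of 32, and the gesture templates are illustrative assumptions, not part of the project:

```python
import math

def resample(points, n=32):
    """Resample a stroke (list of (x, y) tuples) to n roughly equidistant points."""
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    interval = total / (n - 1)
    out = [points[0]]
    pts = list(points)
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= interval and seg > 0:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the interpolated point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(points[-1])
    return out[:n]

def normalize(points):
    """Translate the stroke to its centroid and scale it to a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = (max(p[0] for p in pts) - min(p[0] for p in pts)) or 1.0
    h = (max(p[1] for p in pts) - min(p[1] for p in pts)) or 1.0
    s = max(w, h)
    return [(x / s, y / s) for x, y in pts]

def recognize(stroke, templates):
    """Return the name of the template with the smallest average point distance."""
    probe = normalize(resample(stroke))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        ref = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(probe, ref)) / len(probe)
        if d < best_d:
            best, best_d = name, d
    return best
```

A noisy horizontal swipe would then match a stored "line" template rather than a "vee" template, because after resampling and normalization its average point-to-point distance to the line template is smaller. Robustness against user variation is exactly what the normalization step buys here.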
Since manual testing is time-consuming and expensive, automating these tests is desirable, but this requires gesture simulation capabilities that are not yet available to the necessary extent. Since gestures are inherently fuzzy, gesture simulation must generate gestures that are distorted to reflect the variations arising from different user groups and the aspects mentioned above. And as the state of the art advances towards more sophisticated forms of gesture-based interaction, e.g. with smart textiles or other devices, testing capabilities must scale with the gestures.
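One simple way to generate such distorted gesture variants for automated tests is to perturb a recorded template stroke with random jitter, scaling, and rotation. The sketch below is a hypothetical illustration of this idea; the function name and default parameters are assumptions, not the project's actual simulation approach:

```python
import math
import random

def distort(stroke, jitter=0.05, scale_var=0.1, rot_var=0.1, rng=None):
    """Produce a distorted variant of a gesture stroke to mimic user variation.

    stroke    -- list of (x, y) points of a recorded template gesture
    jitter    -- std. dev. of per-point Gaussian noise, relative to stroke size
    scale_var -- maximum relative deviation of a uniform random scaling
    rot_var   -- maximum random rotation (radians) around the stroke centroid
    """
    rng = rng or random.Random()
    cx = sum(p[0] for p in stroke) / len(stroke)
    cy = sum(p[1] for p in stroke) / len(stroke)
    size = max(max(abs(x - cx), abs(y - cy)) for x, y in stroke) or 1.0
    s = 1.0 + rng.uniform(-scale_var, scale_var)   # one scale factor per variant
    a = rng.uniform(-rot_var, rot_var)             # one rotation angle per variant
    out = []
    for x, y in stroke:
        dx, dy = (x - cx) * s, (y - cy) * s
        rx = dx * math.cos(a) - dy * math.sin(a)   # rotate around the centroid
        ry = dx * math.sin(a) + dy * math.cos(a)
        out.append((cx + rx + rng.gauss(0, jitter * size),
                    cy + ry + rng.gauss(0, jitter * size)))
    return out
```

Feeding many such variants of each template gesture into the application under test approximates the spread of real user performances; widening the parameters (or sampling them per user profile) would model different user groups.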