How can computers react better to their users in the future? How could they determine what their attention is focused on and what they intend to do next? This is to be achieved through the use of intelligent user interfaces such as those developed at the Institute for Visualization and Interactive Systems (VIS) at the University of Stuttgart.
In computer science, the term “user interface” describes the point at which a person comes into contact with a computer and performs an action. This applies to all operations involving the keyboard, screen or mouse, as well as to other forms of program control, whether graphical, textual, auditory or otherwise. Equipping these interfaces with artificial intelligence (AI) and extending them with perceptual competences similar to those of humans makes computers better able to understand their users and interact with them in a human-like way. As attentive helpers, they could help people keep better control of their situation and avoid mistakes. Intelligent user interfaces are already being used in the automotive industry to detect driver drowsiness and remind drivers to take a break in good time. But even in the office or in everyday activities, computers will soon know where their users' attention is focused, what they need help with, or what they intend to do next.
What distinguishes human beings is their ability to put themselves in the shoes of others and, for example, to assess their emotional state, intentions or focus of attention based on their body language. For computers to be able to do this in the future, they will need intelligent user interfaces that precisely measure and interpret users' intentions and attention. Machine learning and computer vision methods are used, and are being further developed, for this purpose.
New generation of interfaces
Prof. Andreas Bulling, head of the Department of Human-Computer Interaction and Cognitive Systems at the Institute for Visualization and Interactive Systems (VIS) at the University of Stuttgart, is currently developing such intelligent user interfaces. The aim of his research is to develop a new generation of intelligent user interfaces that are closely modeled on interpersonal interaction. The rapid developments in computer technology over the past few years have opened up many new possibilities for the design of human-computer interfaces. His research benefits, for example, from the very high computing power available in supercomputing and exploits the revolutionary developments in the field of AI.
Recognizing user behavior
For computers to be able to recognize the behavior and focus of attention of their users, they need as broad a base of data as possible. This data is recorded by cameras or other sensors, then automatically analyzed and interpreted. The better the computer understands how attentive the user is, the better it can react to them.
Machine learning is used to evaluate these vast data volumes: intensive training enables computers to process hundreds of thousands of data sets on powerful graphics architectures and to make increasingly accurate predictions based on recurring patterns. One method by which the computer learns to correctly classify the behavior and attention state of users is gaze estimation, in whose development Andreas Bulling's department is playing a leading role. Using cameras, the computer measures the viewing direction and movements of users' eyes. The eye movements are recorded either by cameras permanently installed on the computer or by mobile cameras integrated into laptops or mobile devices.
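A common building block of such camera-based systems is a calibration step: the user fixates known on-screen targets while the camera records pupil positions, and a simple model is fitted to map pupil coordinates to screen coordinates. The sketch below illustrates this idea with a per-axis linear least-squares fit on made-up calibration data; it is an assumption for illustration, not the method developed at VIS.

```python
# Illustrative calibration sketch for camera-based gaze estimation.
# The linear model and all data points below are hypothetical.

def fit_linear(inputs, targets):
    """Ordinary least-squares fit of target = a * input + b."""
    n = len(inputs)
    mean_in = sum(inputs) / n
    mean_t = sum(targets) / n
    var = sum((x - mean_in) ** 2 for x in inputs)
    cov = sum((x - mean_in) * (t - mean_t) for x, t in zip(inputs, targets))
    a = cov / var
    b = mean_t - a * mean_in
    return a, b

# Hypothetical calibration data: pupil x/y positions (pixels in the eye
# camera) recorded while the user fixated four known screen targets.
pupil_x  = [310.0, 350.0, 390.0, 430.0]
pupil_y  = [205.0, 215.0, 225.0, 235.0]
screen_x = [0.0, 640.0, 1280.0, 1920.0]
screen_y = [0.0, 360.0,  720.0, 1080.0]

ax, bx = fit_linear(pupil_x, screen_x)
ay, by = fit_linear(pupil_y, screen_y)

def estimate_gaze(px, py):
    """Map a new pupil position to an estimated on-screen gaze point."""
    return (ax * px + bx, ay * py + by)

print(estimate_gaze(370.0, 220.0))  # → (960.0, 540.0), the screen center
```

Real gaze estimators replace this linear mapping with learned models (for example, deep networks trained on large image data sets), but the calibrate-then-map structure is the same.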
“Gaze estimation allows us to measure, for example, how often someone looks away from the screen during work, is distracted by a notification, or makes eye contact during a conversation. This gives us important clues about a person’s ability to concentrate, how well a person gets on with their conversation partners or even what personal attitudes they hold,” explains Andreas Bulling. Maintaining eye contact for a long time during a conversation, for instance, signals interest or understanding to the other person.
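The measures Bulling describes can be derived from a stream of estimated gaze points. The toy sketch below computes two such quantities: the fraction of time the gaze stays on the screen and the number of distinct glances away. The screen size and sample data are invented for the example.

```python
# Toy attention metrics from a stream of (x, y) gaze points.
# Screen dimensions and gaze samples are hypothetical.

SCREEN_W, SCREEN_H = 1920, 1080

def on_screen(point):
    x, y = point
    return 0 <= x < SCREEN_W and 0 <= y < SCREEN_H

def attention_stats(gaze_samples):
    """Return (fraction of samples on screen, number of look-away events)."""
    on_count = 0
    look_aways = 0
    previously_on = True
    for p in gaze_samples:
        if on_screen(p):
            on_count += 1
            previously_on = True
        else:
            if previously_on:
                look_aways += 1  # gaze just left the screen: a new glance away
            previously_on = False
    return on_count / len(gaze_samples), look_aways

samples = [(100, 200), (500, 400), (-50, 300),    # first glance away
           (700, 600), (2100, 900), (2200, 950),  # second glance away
           (900, 500), (950, 520)]
ratio, glances = attention_stats(samples)
print(ratio, glances)  # → 0.625 2
```

In a deployed system, the sampling rate of the eye tracker would turn the on-screen fraction into actual dwell time, and the glance count into a distraction frequency.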
Sensitive like a human
The next important step on the path to intelligent user interfaces is that they must also be able to recognize users' intentions, especially in everyday situations. Computers should be able to put themselves in the user's shoes and know what he or she intends to do or will need next. Again, the model for this is the human being. Our ability to make assumptions about the thought processes of others by analyzing their focus of attention and their intentions, i.e., to form a “theory of mind”, is fundamental to interpersonal interactions.
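A very rough intuition for anticipating a user's next step can be given with a frequency model over observed action sequences: if "save" has usually followed "edit" so far, the interface can prepare for "save" the next time the user edits. The bigram model below is an illustrative assumption, far simpler than the theory-of-mind models the research aims at.

```python
# Toy anticipatory model: predict the likely next user action from
# bigram frequencies in the observed action history.  The action names
# and history are hypothetical.
from collections import Counter, defaultdict

def train_bigrams(action_history):
    """Count how often each action is followed by each subsequent action."""
    model = defaultdict(Counter)
    for current, nxt in zip(action_history, action_history[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, last_action):
    """Return the most frequently observed follow-up action, or None."""
    followers = model.get(last_action)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

history = ["open file", "edit", "save", "open file", "edit", "save",
           "open file", "edit"]
model = train_bigrams(history)
print(predict_next(model, "edit"))  # → save
```

Richer anticipatory interfaces condition such predictions not only on past actions but also on attention cues like the gaze measures described above.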
Andreas Bulling is pursuing this goal in his new project, “Anticipate: Anticipatory Human-Computer Interaction”, for which he received an ERC Starting Grant, one of the most prestigious research awards in Europe, at the same time as he started at the University of Stuttgart. The European Research Council is providing a total of 1.5 million euros in funding for the project over a five-year period. A project is also being prepared within the framework of the “Simulation Technology” Cluster of Excellence (SimTech) at the University of Stuttgart, the aim of which is to simulate human thought and perception processes on computers and to further improve the accuracy of predictions using deep-learning architectures.