Towards interfaces that recognize users' visual attention
What is your research about?
My research involves designing and evaluating human–computer interfaces that are sensitive to their users' visual attention. As humans, we are constrained by what we can attend to in the environment at any given time. For example, when reading a document we might only have time to read parts of it, or we may find ourselves in situations where we cannot properly attend to a computer screen because we need to look elsewhere in the environment. However, current user interfaces often do not register such contextual information and assume that all user actions are performed with the user's full attention.
During my dissertation work, I created interfaces that can detect when a user is not visually attending to a portion of the interface, and I proposed interaction techniques that compensate for this lack of attention. I used head orientation and eye movement data provided by eye tracking devices, which offer much more granular information about visual attention than can be inferred from mouse or finger actions.
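To make this concrete, here is a minimal sketch in Python of how gaze samples from an eye tracker could be mapped onto interface regions to estimate which portions of a UI have gone unattended. The GazeSample format, the region layout, and the 300 ms dwell threshold are illustrative assumptions, not the dissertation's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float      # screen x coordinate in pixels
    y: float      # screen y coordinate in pixels
    t_ms: float   # timestamp in milliseconds

@dataclass
class Region:
    name: str
    left: float
    top: float
    width: float
    height: float
    dwell_ms: float = 0.0  # accumulated gaze time inside this region

    def contains(self, s: GazeSample) -> bool:
        return (self.left <= s.x < self.left + self.width
                and self.top <= s.y < self.top + self.height)

def accumulate_dwell(samples: list[GazeSample], regions: list[Region]) -> None:
    """Attribute the time between consecutive samples to the region under gaze."""
    for prev, cur in zip(samples, samples[1:]):
        for region in regions:
            if region.contains(prev):
                region.dwell_ms += cur.t_ms - prev.t_ms

def unattended(regions: list[Region], min_dwell_ms: float = 300.0) -> list[str]:
    """Names of regions whose accumulated gaze dwell stays below the threshold."""
    return [r.name for r in regions if r.dwell_ms < min_dwell_ms]

# Example: the user reads the document area but never looks at the sidebar.
regions = [Region("document", 0, 0, 800, 600),
           Region("sidebar", 800, 0, 200, 600)]
samples = [GazeSample(400, 300, t) for t in range(0, 1000, 20)]  # 50 Hz gaze data
accumulate_dwell(samples, regions)
print(unattended(regions))  # -> ['sidebar']
```

In practice, raw dwell time is a coarse proxy; fixation detection and the head orientation data mentioned above would refine such an estimate.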
What is important about it?
My main contribution is a framework for handling user input in low-attention situations, grounded in two use cases: interaction with large touch screens and collocated collaborative interaction. I have observed that interaction methods that compensate for users' low visual attention can allow them to split their attention more easily and perform tasks concurrently. Yet, in some cases the advantage of concurrent action can be partly offset by the decreased performance of a single action, or by the uncertainty introduced by adaptive system behavior, which users can find unpredictable.
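As an illustration of such a compensating technique (my own sketch, not the framework itself), consider a policy in which a touch landing outside the user's current gaze area is held for confirmation rather than executed immediately; the 150-pixel gaze radius and the callback names are assumptions.

```python
import math

GAZE_RADIUS_PX = 150.0  # assumed radius of the visually attended area

def handle_touch(touch_xy, gaze_xy, execute, request_confirmation):
    """Run attended touches immediately; hold unattended ones for confirmation."""
    if math.dist(touch_xy, gaze_xy) <= GAZE_RADIUS_PX:
        execute()               # the user is looking at the touch target
    else:
        request_confirmation()  # the user is looking elsewhere on the screen

# Example: deleting an item while the user's gaze is on another part of the UI.
handle_touch(touch_xy=(900, 500), gaze_xy=(120, 80),
             execute=lambda: print("item deleted"),
             request_confirmation=lambda: print("confirm deletion?"))
```

The extra confirmation step illustrates exactly the trade-off described above: it makes unattended input safer while making each individual action slower.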
What benefit could it bring in practice?
Interfaces that adapt to users' visual attention have many potential applications, ranging from mobile notifications to search systems and human–AI collaboration. To give a few examples: in a human–AI collaboration scenario, the system can understand the extent to which the user is aware of the information displayed and can, before a decision is made, point to important pieces of information the user might have missed. Or, in a mobile interface, the system can have a much better sense of whether a user has actually read a notification, and can choose to display or dismiss the notification accordingly.
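In its simplest form, such read detection might reduce to comparing gaze dwell over the notification's bounds against an estimated reading time. The sketch below assumes a 250 ms-per-word reading rate purely for illustration.

```python
WORD_MS = 250.0  # assumed average reading time per word

def notification_read(dwell_ms: float, word_count: int) -> bool:
    """Treat the notification as read once gaze dwell over its bounds
    reaches the estimated time needed to read its text."""
    return dwell_ms >= word_count * WORD_MS

# Example: an 8-word notification needs roughly 2 s of gaze dwell.
print(notification_read(dwell_ms=1200, word_count=8))  # False: keep displaying
print(notification_read(dwell_ms=2400, word_count=8))  # True: safe to dismiss
```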
To generalize from these individual examples, I consider my work part of an ongoing shift in human–computer interaction towards interfaces that more accurately model the cognitive and perceptual processes behind user actions. This shift brings many opportunities for improving the user experience. At the same time, it makes questions around privacy, as well as the ownership and ethical use of data, more important than ever.
Baris Serim defended his thesis, Adapting Interaction Based on Users' Visual Attention, at Aalto University on 11 June.