On the interplay between spontaneous spoken instructions and human visual behaviour in an indoor guidance task

Nikolina Koleva, Sabrina Hoppe, Mohammed Mehdi Moniri, Maria Staudte, Andreas Bulling

Proc. Annual Meeting of the Cognitive Science Society (CogSci), 2015.


Abstract

We report on an indoor guidance study to explore the interplay between spontaneous spoken instructions and listeners’ eye movement behaviour. In the study, a remote speaker (the instructor) verbally guides a listener (the walker) to complete nine everyday tasks in different locations inside a room. We collect a multi-modal dataset of 12 pairs of users consisting of egocentric videos from the listener’s perspective, their gaze data, and the instructors’ verbal instructions. We analyse the impact on instructions and listener gaze when the speaker can see 1) only the egocentric video, 2) the video and the point of gaze, or 3) the video and gaze with artificial noise. Our results show that gaze behaviour varies significantly after (but hardly before) instructions and that speakers give more negative feedback when listener gaze is available. These findings suggest that although speakers use gaze information as an indication of which referent the listener is actually considering, this does not lead listeners to deliberately use their gaze as a pointer, even when doing so is potentially beneficial for the task.

BibTeX

@inproceedings{koleva15_cogsci,
  title     = {On the interplay between spontaneous spoken instructions and human visual behaviour in an indoor guidance task},
  author    = {Koleva, Nikolina and Hoppe, Sabrina and Moniri, Mohammed Mehdi and Staudte, Maria and Bulling, Andreas},
  year      = {2015},
  booktitle = {Proc. Annual Meeting of the Cognitive Science Society (CogSci)}
}