Robotic gaze control using reinforcement learning

Martin Rothbucher, Christian Denk, Klaus Diepold

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This work examines how an adaptive controller can learn to point a camera at the active speaker in a conversation, using a Reinforcement Learning approach on audio and video data. A motivating scenario for this problem is a robotic platform that interacts with people in its environment. With Reinforcement Learning, the task is specified through an observable objective referred to as the reward signal, which enables an adaptive controller to improve its performance with experience. The reward for this task is generated by visual feedback from the conversation participants, detected by the robot's camera system. Multiple experiments have been performed on a robot system with audiovisual data to examine the feasibility and potential of this approach. Our experimental results demonstrate that the system quickly learns to identify the active speakers. Furthermore, our approach inherently learns to cope with ego-noise originating from the robot's motors as well as background noise from the environment.
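The abstract describes learning from a scalar reward derived from visual feedback. As a minimal illustration of that idea (not the authors' actual method, whose details are in the paper), the sketch below frames gaze selection as an epsilon-greedy bandit: each arm is a candidate gaze direction, and a simulated noisy reward stands in for the visual feedback signal, including occasional false positives as a stand-in for ego-noise or background noise. All names and parameter values here are illustrative assumptions.

```python
import random

def gaze_bandit(n_directions=4, active=2, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit sketch: each arm is a candidate gaze
    direction; the reward simulates visual feedback from participants
    (mostly 1 when gazing at the active speaker, occasionally 1
    otherwise, mimicking noisy detections)."""
    rng = random.Random(seed)
    q = [0.0] * n_directions   # estimated reward per direction
    n = [0] * n_directions     # visit counts per direction
    for _ in range(steps):
        # Explore with probability eps, otherwise exploit the best estimate.
        if rng.random() < eps:
            a = rng.randrange(n_directions)
        else:
            a = max(range(n_directions), key=lambda i: q[i])
        # Noisy binary reward: feedback fires 90% of the time when aimed
        # at the active speaker, 10% otherwise (false positives).
        r = 1.0 if rng.random() < (0.9 if a == active else 0.1) else 0.0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean update
    return q

estimates = gaze_bandit()
best = max(range(len(estimates)), key=lambda i: estimates[i])
print(best)  # direction the controller has learned to prefer
```

Because noisy rewards are averaged per direction rather than trusted individually, spurious detections caused by noise are averaged out over time, which loosely mirrors the abstract's claim that the approach inherently copes with ego-noise and background noise.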

Original language: English
Title of host publication: Proceedings - 2012 IEEE Symposium on Haptic Audio-Visual Environments and Games, HAVE 2012
Pages: 83-88
Number of pages: 6
State: Published - 2012
Event: 11th International Symposium on Haptic Audio-Visual Environments and Games, HAVE 2012 - Munich, Germany
Duration: 8 Oct 2012 - 9 Oct 2012

Publication series

Name: Proceedings - 2012 IEEE Symposium on Haptic Audio-Visual Environments and Games, HAVE 2012

Conference

Conference: 11th International Symposium on Haptic Audio-Visual Environments and Games, HAVE 2012
Country/Territory: Germany
City: Munich
Period: 8/10/12 - 9/10/12
