Abstract
We address self-perception in robots as the key to world understanding and causality interpretation. We present a self-perception mechanism that enables a humanoid robot to understand certain sensory changes caused by naive actions during interaction with objects. Visual, proprioceptive, and tactile cues are combined via artificial attention and probabilistic reasoning, allowing the robot to discern between in-body and out-body sources in the scene. With that support, and by exploiting intermodal sensory contingencies, the robot can infer simple concepts, such as discovering potentially 'usable' objects. Theoretically and through experimentation with a real humanoid robot, we show that self-perception is a foundational ability for higher-order cognitive skills. Moreover, we present a novel model for self-detection that does not require tracking the body parts. Furthermore, results show that the proposed approach successfully discovers objects in the reaching space, improving scene understanding by discriminating real objects from visual artifacts.
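The core self-detection idea can be illustrated with a toy sketch: sources whose observed motion is contingent on the robot's own motor activity are labeled in-body, the rest out-body. This is only a minimal illustration under our own assumptions (the function name, the binary traces, and the simple matching score are ours, not the paper's probabilistic model):

```python
def classify_sources(motor_active, motion_traces, threshold=0.8):
    """Label each visual motion source 'inbody' if its motion
    co-occurs with the robot's own motor commands, else 'outbody'.

    motor_active:  list of 0/1 flags, one per time step.
    motion_traces: dict mapping source name -> list of 0/1 motion flags.
    """
    labels = {}
    for name, trace in motion_traces.items():
        # Contingency score: fraction of time steps where the source's
        # motion state matches the robot's motor activity.
        matches = sum(1 for m, v in zip(motor_active, trace) if m == v)
        score = matches / len(motor_active)
        labels[name] = "inbody" if score >= threshold else "outbody"
    return labels


# Hypothetical traces: the arm moves exactly when motors are active,
# while an external object moves independently.
motor = [1, 1, 0, 0, 1, 0, 1, 0]
traces = {"arm": [1, 1, 0, 0, 1, 0, 1, 0],
          "lamp": [0, 1, 1, 0, 0, 1, 0, 1]}
print(classify_sources(motor, traces))
# → {'arm': 'inbody', 'lamp': 'outbody'}
```

In the paper this discrimination is done with probabilistic reasoning over visual, proprioceptive, and tactile cues; the binary matching score above merely conveys the contingency principle.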
| Original language | English |
|---|---|
| Article number | 7740933 |
| Pages (from-to) | 100-112 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Cognitive and Developmental Systems |
| Volume | 9 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jun 2017 |
Keywords
- conceptual inference
- embodied cognition
- multimodal integration
- self-detection
- self-perception
- sensorimotor contingencies (SMCs)
Article title: Yielding Self-Perception in Robots Through Sensorimotor Contingencies