Cross-modal visuo-tactile object recognition using robotic active exploration

Pietro Falco, Shuang Lu, Andrea Cirillo, Ciro Natale, Salvatore Pirozzi, Dongheui Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

45 Scopus citations

Abstract

In this work, we propose a framework for cross-modal visuo-tactile object recognition. By cross-modal visuo-tactile object recognition, we mean that the recognition algorithm is trained only with visual data yet is able to recognize objects using only tactile perception. The proposed cross-modal framework consists of three main elements. The first is a unified representation of visual and tactile data that is suitable for cross-modal perception. The second is a set of features able to encode the chosen representation for classification. The third is a supervised learning algorithm that takes advantage of the chosen descriptor. To demonstrate the approach, we performed experiments with 15 objects common in domestic and industrial environments. Moreover, we compared the performance of the proposed framework with that of 10 humans in a simple cross-modal recognition task.
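
The abstract does not report implementation details. The sketch below illustrates only the general idea it describes (train a classifier on descriptors computed from visual data, then classify descriptors computed from tactile data mapped into the same shared representation). The point-set representation, the pairwise-distance descriptor, and the synthetic data are assumptions for illustration; they are not the representation, features, or learning algorithm used in the paper.

```python
# Minimal, hypothetical sketch of cross-modal (vision-to-touch) recognition.
# NOT the authors' method: the shared representation, descriptor, and data are invented.
import numpy as np
from sklearn.svm import SVC

def point_set_descriptor(points, bins=8):
    """Hypothetical shared descriptor: histogram of pairwise distances in a 3-D point set."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists = dists[np.triu_indices(len(points), k=1)]
    hist, _ = np.histogram(dists, bins=bins, range=(0.0, 2.0), density=True)
    return hist

rng = np.random.default_rng(0)

# Synthetic "visual" training data: dense point clouds, 10 views per object, 15 objects.
visual_clouds = [rng.random((200, 3)) * (obj + 1) / 15
                 for obj in range(15) for _ in range(10)]
train_labels = [obj for obj in range(15) for _ in range(10)]
X_train = np.array([point_set_descriptor(p) for p in visual_clouds])

clf = SVC(kernel="rbf").fit(X_train, train_labels)

# Synthetic "tactile" test data: sparse contact points per object, mapped into the
# same point-set representation before computing the shared descriptor.
tactile_points = [rng.random((40, 3)) * (obj + 1) / 15 for obj in range(15)]
X_test = np.array([point_set_descriptor(p) for p in tactile_points])
print(clf.predict(X_test))  # one predicted object label per tactile exploration
```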

Original language: English
Title of host publication: ICRA 2017 - IEEE International Conference on Robotics and Automation
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5273-5280
Number of pages: 8
ISBN (Electronic): 9781509046331
DOIs
State: Published - 21 Jul 2017
Externally published: Yes
Event: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017 - Singapore, Singapore
Duration: 29 May 2017 - 3 Jun 2017

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

Conference

Conference: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Country/Territory: Singapore
City: Singapore
Period: 29/05/17 - 03/06/17
