TY - GEN
T1 - Cross-modal visuo-tactile object recognition using robotic active exploration
AU - Falco, Pietro
AU - Lu, Shuang
AU - Cirillo, Andrea
AU - Natale, Ciro
AU - Pirozzi, Salvatore
AU - Lee, Dongheui
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/21
Y1 - 2017/7/21
N2 - In this work, we propose a framework for cross-modal visuo-tactile object recognition. By cross-modal visuo-tactile object recognition, we mean that the object recognition algorithm is trained only with visual data and is able to recognize objects leveraging only tactile perception. The proposed cross-modal framework consists of three main elements. The first is a unified representation of visual and tactile data, which is suitable for cross-modal perception. The second is a set of features able to encode the chosen representation for classification applications. The third is a supervised learning algorithm, which takes advantage of the chosen descriptor. To show the results of our approach, we performed experiments with 15 objects common in domestic and industrial environments. Moreover, we compared the performance of the proposed framework with that of 10 humans in a simple cross-modal recognition task.
AB - In this work, we propose a framework for cross-modal visuo-tactile object recognition. By cross-modal visuo-tactile object recognition, we mean that the object recognition algorithm is trained only with visual data and is able to recognize objects leveraging only tactile perception. The proposed cross-modal framework consists of three main elements. The first is a unified representation of visual and tactile data, which is suitable for cross-modal perception. The second is a set of features able to encode the chosen representation for classification applications. The third is a supervised learning algorithm, which takes advantage of the chosen descriptor. To show the results of our approach, we performed experiments with 15 objects common in domestic and industrial environments. Moreover, we compared the performance of the proposed framework with that of 10 humans in a simple cross-modal recognition task.
UR - http://www.scopus.com/inward/record.url?scp=85027987921&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2017.7989619
DO - 10.1109/ICRA.2017.7989619
M3 - Conference contribution
AN - SCOPUS:85027987921
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 5273
EP - 5280
BT - ICRA 2017 - IEEE International Conference on Robotics and Automation
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Y2 - 29 May 2017 through 3 June 2017
ER -