TY - GEN
T1 - Object-Centric Grasping Transferability
T2 - 2022 IEEE-RAS 21st International Conference on Humanoid Robots, Humanoids 2022
AU - Hidalgo-Carvajal, Diego
AU - Valle, Carlos Magno C.O.
AU - Naceri, Abdeldjallil
AU - Haddadin, Sami
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Attaining human hand manipulation capabilities is a sought-after goal of robotic manipulation. Several works have focused on understanding and applying human manipulation insights in robotic applications. However, few have considered objects as central pieces to increase the generalization properties of existing methods. In this study, we explore context-based grasping information transferability between objects by using mesh-based object representations. To do so, we empirically labeled, in a mesh point-wise manner, 10 grasping postures onto a set of 12 purposely selected objects. Subsequently, we trained our convolutional neural network (CNN)-based architecture with the mesh representation of a single object, associating grasping postures to its local regions. We tested our network across multiple objects with distinct similarity values. Results show that our network can successfully estimate non-feasible grasping regions as well as feasible grasping postures. Our results suggest the existence of an abstract relation between the predicted context-based grasping postures and the geometrical properties of both the training and test objects. Our proposed approach aims to expand grasp learning research by linking local segmented meshes to postures. Such a concept can be applied to grasp new objects using anthropomorphic robot hands.
AB - Attaining human hand manipulation capabilities is a sought-after goal of robotic manipulation. Several works have focused on understanding and applying human manipulation insights in robotic applications. However, few have considered objects as central pieces to increase the generalization properties of existing methods. In this study, we explore context-based grasping information transferability between objects by using mesh-based object representations. To do so, we empirically labeled, in a mesh point-wise manner, 10 grasping postures onto a set of 12 purposely selected objects. Subsequently, we trained our convolutional neural network (CNN)-based architecture with the mesh representation of a single object, associating grasping postures to its local regions. We tested our network across multiple objects with distinct similarity values. Results show that our network can successfully estimate non-feasible grasping regions as well as feasible grasping postures. Our results suggest the existence of an abstract relation between the predicted context-based grasping postures and the geometrical properties of both the training and test objects. Our proposed approach aims to expand grasp learning research by linking local segmented meshes to postures. Such a concept can be applied to grasp new objects using anthropomorphic robot hands.
UR - http://www.scopus.com/inward/record.url?scp=85146319232&partnerID=8YFLogxK
U2 - 10.1109/Humanoids53995.2022.10000192
DO - 10.1109/Humanoids53995.2022.10000192
M3 - Conference contribution
AN - SCOPUS:85146319232
T3 - IEEE-RAS International Conference on Humanoid Robots
SP - 659
EP - 666
BT - 2022 IEEE-RAS 21st International Conference on Humanoid Robots, Humanoids 2022
PB - IEEE Computer Society
Y2 - 28 November 2022 through 30 November 2022
ER -