Fusing Joint Measurements and Visual Features for In-Hand Object Pose Estimation

Martin Pfanne, Maxime Chalon, Freek Stulp, Alin Albu-Schäffer

Research output: Contribution to journal › Article › peer-review

34 Scopus citations

Abstract

For a robot to perform complex manipulation tasks, such as in-hand manipulation, knowledge about the state of the grasp is required at all times. Moreover, even simple pick-and-place tasks may fail because unexpected motions of the object during the grasp are not accounted for. This letter proposes an approach that estimates the grasp state by combining finger measurements, i.e., joint positions and torques, with visual features extracted from monocular camera images. The different sensor modalities are fused using an extended Kalman filter. While the finger measurements make it possible to detect contacts and resolve collisions between the fingers and the estimated object, the visual features are used to align the object with the camera view. Experiments with the DLR robot David demonstrate the wide range of objects and manipulation scenarios to which the method can be applied. They also provide insight into the strengths and limitations of the different complementary types of measurements.
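The following is a minimal sketch of the fusion scheme the abstract describes, not the authors' implementation: an extended Kalman filter that sequentially applies two measurement updates, one derived from the fingers and one from an image feature. Everything here is an illustrative assumption rather than the paper's model: the object state is reduced to a 3-D position in the camera frame, the finger modality is modeled as a direct noisy point measurement (e.g., a contact point from forward kinematics), the visual modality as a pinhole projection, and all names, focal length, and noise values are made up.

```python
# Hypothetical sketch of multi-modality EKF fusion; not the paper's model.
import numpy as np

FOCAL = 500.0  # assumed pinhole focal length in pixels


def predict(x, P, Q):
    """Random-walk motion model: the pose persists, uncertainty grows."""
    return x, P + Q


def update(x, P, z, h, H, R):
    """Generic EKF measurement update with model h and Jacobian H."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P


def h_finger(x):
    """Finger modality (assumed): joints -> 3-D contact point measurement."""
    return x                              # linear: measures the position itself


def h_visual(x):
    """Visual modality (assumed): pinhole projection of the object point."""
    return FOCAL * np.array([x[0] / x[2], x[1] / x[2]])


def H_visual(x):
    """Jacobian of the pinhole projection w.r.t. the 3-D position."""
    X, Y, Z = x
    return FOCAL * np.array([[1 / Z, 0.0, -X / Z**2],
                             [0.0, 1 / Z, -Y / Z**2]])


# Toy run: both modalities pull the estimate toward the true position.
x_true = np.array([0.05, -0.02, 0.40])
x_est, P = np.full(3, 0.1), np.eye(3) * 0.01
Q = np.eye(3) * 1e-5
R_finger, R_visual = np.eye(3) * 1e-4, np.eye(2) * 4.0

for _ in range(20):
    x_est, P = predict(x_est, P, Q)
    # Finger update: a detected contact localizes the object.
    z_f = x_true + np.random.randn(3) * 1e-2
    x_est, P = update(x_est, P, z_f, h_finger, np.eye(3), R_finger)
    # Visual update: an extracted image feature aligns the object with the view.
    z_v = h_visual(x_true) + np.random.randn(2) * 2.0
    x_est, P = update(x_est, P, z_v, h_visual, H_visual(x_est), R_visual)

print("estimate:", x_est, " true:", x_true)
```

In the paper the state also covers orientation and the finger-side updates come from contact detection and collision resolution rather than a direct point measurement; the sketch only illustrates the core mechanism of fusing heterogeneous measurements through sequential EKF updates.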

Original language: English
Pages (from-to): 3497-3504
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 3
Issue number: 4
State: Published - Oct 2018

Keywords

  • Perception for grasping and manipulation
  • Dexterous manipulation
  • Sensor fusion
