Articulated object modeling based on visual and haptic observations

Wei Wang, Vasiliki Koropouli, Dongheui Lee, Kolja Kühnlenz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Manipulation of articulated objects poses an important and difficult challenge for robots. This paper proposes an approach to modeling articulated objects by integrating visual and haptic information. Line-shaped skeletonization based on depth-image data is performed to extract the skeleton of an object in different configurations. From observations of the extracted skeleton's topology, the kinematic joints of the object are characterized and localized. Haptic data, in the form of the task-space force required to manipulate the object, are collected by kinesthetic teaching and learned by Gaussian Mixture Regression in the object's joint-state space. After modeling, manipulation of the object is realized by first identifying the current object joint states from visual observations and then generalizing the learned force to accomplish the new task.
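The force-learning step described in the abstract relies on Gaussian Mixture Regression (GMR): a joint Gaussian mixture is fitted over (joint state, force) pairs, and the force is then predicted by conditioning on the observed joint state. The sketch below illustrates that conditioning on toy one-dimensional data; the variable names, the synthetic force model, and the use of scikit-learn are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical demonstration data: joint state q (rad) vs. task-space force f (N).
rng = np.random.default_rng(0)
q = rng.uniform(0.0, 1.5, 200)
f = 2.0 * np.sin(q) + 0.05 * rng.standard_normal(200)  # toy force profile
X = np.column_stack([q, f])

# Fit a joint GMM over (q, f).
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(X)

def gmr_predict(x):
    """GMR: condition the joint GMM on the input dimension q to regress f."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query input x.
    h = np.array([
        w[k] * np.exp(-0.5 * (x - means[k, 0]) ** 2 / covs[k, 0, 0])
             / np.sqrt(2.0 * np.pi * covs[k, 0, 0])
        for k in range(len(w))
    ])
    h /= h.sum()
    # Per-component conditional means, blended by responsibility.
    cond = [means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (x - means[k, 0])
            for k in range(len(w))]
    return float(np.dot(h, cond))

print(gmr_predict(0.8))  # close to the noiseless 2*sin(0.8)
```

At manipulation time, the identified joint state would play the role of `x`, and the conditioned mean gives the force to generalize to the new task.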

Original language: English
Title of host publication: VISAPP 2013 - Proceedings of the International Conference on Computer Vision Theory and Applications
Pages: 253-259
Number of pages: 7
State: Published - 2013
Externally published: Yes
Event: 8th International Conference on Computer Vision Theory and Applications, VISAPP 2013 - Barcelona, Spain
Duration: 21 Feb 2013 - 24 Feb 2013

Publication series

Name: VISAPP 2013 - Proceedings of the International Conference on Computer Vision Theory and Applications
Volume: 2

Conference

Conference: 8th International Conference on Computer Vision Theory and Applications, VISAPP 2013
Country/Territory: Spain
City: Barcelona
Period: 21/02/13 - 24/02/13

Keywords

  • Articulated object modeling
  • Object skeletonization
  • Vision-based articulated object manipulation
