Adaptable Distributed Vision System for Robot Manipulation Tasks

Marko Pavlic, Darius Burschka

Research output: Contribution to journal › Conference article › peer-review

Abstract

Existing robotic manipulation systems use stationary depth cameras to observe the workspace, but they are limited by their fixed field of view (FOV), workspace coverage, and depth accuracy. This also limits the performance of robot manipulation tasks, especially in occluded workspace areas or highly cluttered environments where a single view is insufficient. We propose an adaptable distributed vision system for better scene understanding. The system integrates a global RGB-D camera connected to a powerful computer and a monocular camera mounted on an embedded system at the robot’s end-effector. The monocular camera facilitates the exploration and 3D reconstruction of new workspace areas. This configuration provides enhanced flexibility, featuring a dynamic FOV and an extended depth range achievable through the adjustable base length, controlled by the robot’s movements. The reconstruction process can be distributed between the two processing units as needed, allowing for flexibility in system configuration. This work evaluates various configurations regarding reconstruction accuracy, speed, and latency. The results demonstrate that the proposed system achieves precise 3D reconstruction while providing significant advantages for robotic manipulation tasks.
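The claim that an adjustable base length extends the depth range can be illustrated with standard stereo triangulation: depth error grows quadratically with distance but shrinks as the baseline increases, so moving the end-effector camera farther from the global camera improves far-field accuracy. The following is a minimal sketch assuming a rectified pinhole stereo model; the focal length, baselines, and one-pixel disparity noise are illustrative values, not parameters from the paper.

```python
# Sketch: why a larger (robot-controlled) baseline extends the usable depth range.
# Assumes a rectified pinhole stereo model; all numeric values are illustrative.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    return f_px * baseline_m / disparity_px

def depth_uncertainty(f_px: float, baseline_m: float, depth_m: float,
                      disparity_noise_px: float = 1.0) -> float:
    """First-order depth error dZ ~= Z^2 / (f * B) * dd for disparity noise dd."""
    return depth_m ** 2 / (f_px * baseline_m) * disparity_noise_px

if __name__ == "__main__":
    f_px = 600.0      # assumed focal length in pixels
    depth_m = 2.0     # object at two metres
    # Small baseline mimics a fixed RGB-D sensor; larger ones mimic the
    # baseline obtained by displacing the end-effector-mounted camera.
    for baseline_m in (0.05, 0.20, 0.60):
        err = depth_uncertainty(f_px, baseline_m, depth_m)
        print(f"baseline {baseline_m:.2f} m -> depth error ~{err * 100:.1f} cm at {depth_m:.1f} m")
```

Running this sketch shows the depth error dropping roughly proportionally to the inverse of the baseline, which is the effect the adjustable base length exploits.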

Keywords

  • 3D Reconstruction
  • Feature Extraction
  • Optical Flow
  • Scene Understanding
  • Vision for Robotics
