Abstract
Advanced manipulation skills that enable cooperating robots in a work cell to recognise, handle and assemble arbitrarily placed objects require sensory information on both the environment and the assembly process. With standard approaches it becomes difficult to coordinate sensor usage and data fusion for a given task when the number of sensors is large or changes at run-time. We present a solution to both coordination and fusion based on the multi-(sensor-)agent paradigm: each agent implements a sensory skill, a negotiation protocol and a physical communication interface to all other agents. Within this sensor network, teams cooperate on a common task; they are formed dynamically, after a negotiation phase, in response to a specific task formulation. Our framework consists of a formal specification of the requirements that an agent's sensory skill has to meet and a comprehensive library of C++ objects encapsulating all of the negotiation protocol and communications. This separation makes it very easy to implement individual sensory skills. We show how the abstract concepts of this approach and the metaphor of "negotiation" work in a real-world network: several uncalibrated cameras are used to guide a manipulator towards a target. We also show how agent teams may easily (self-)reconfigure during task execution when unexpected events occur. The framework is distributed free of charge and can be obtained over the Internet at http://magic.uni-bielefeld.de.
Original language | English
---|---
Pages (from-to) | 18-29
Number of pages | 12
Journal | Proceedings of SPIE - The International Society for Optical Engineering
Volume | 3523
State | Published - 1998
Externally published | Yes
Event | Sensor Fusion and Decentralized Control in Robotic Systems IV - Boston, MA, United States
Event duration | 2 Nov 1998 → 3 Nov 1998
Keywords
- Contract net
- Distributed sensing
- Multi-agents
- Multi-sensor network
- Sensor-fusion
- Uncalibrated visual servoing