A Weakly Supervised Semi-Automatic Image Labeling Approach for Deformable Linear Objects

Alessio Caporali, Matteo Pantano, Lucas Janisch, Daniel Regulin, Gianluca Palli, Dongheui Lee

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

Deformable Linear Objects (DLOs) such as wires, cables, and ropes are ubiquitous in everyday life. However, the applicability of robotic solutions to DLOs is still marginal due to the many challenges involved in their perception. In this letter, a methodology to generate datasets from a mixture of synthetic and real samples for training DLO segmentation approaches is therefore presented. The method consists of two steps. First, key-points along a real-world DLO are labeled with a VR tracker operated by a user. Second, synthetic and real-world datasets are mixed for training semantic and instance segmentation deep learning algorithms, in order to study the benefit of real-world data in DLO segmentation. To validate the method, a user study and a parameter study are conducted. The results show that VR tracker labeling is as usable as other labeling techniques while reducing the number of clicks required. Moreover, mixing real-world and synthetic DLO data can improve the IoU score of a semantic segmentation algorithm by approximately 5%. This work therefore demonstrates that labeling real-world data via a VR tracker can be done quickly and that, when the real-world data are mixed with synthetic data, the performance of segmentation algorithms for DLOs can be improved.
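
As an illustration of the dataset-mixing step described in the abstract, the following is a minimal Python sketch assuming a PyTorch-based pipeline. The directory layout (data/synthetic, data/real_vr_labeled), the Dataset class, and the batch size are hypothetical choices for illustration, not the letter's actual implementation; the IoU function reflects the metric cited in the results.

# Minimal sketch, assuming a PyTorch pipeline; directory names and the
# Dataset class below are hypothetical, not the paper's implementation.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import ConcatDataset, DataLoader, Dataset


class DLOSegmentationDataset(Dataset):
    """Loads (image, binary mask) pairs for DLO semantic segmentation."""

    def __init__(self, root: str):
        self.images = sorted(Path(root, "images").glob("*.png"))
        self.masks = sorted(Path(root, "masks").glob("*.png"))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Images as float CHW tensors in [0, 1]; masks as integer class maps.
        image = np.asarray(Image.open(self.images[idx]).convert("RGB"), dtype=np.float32) / 255.0
        mask = np.asarray(Image.open(self.masks[idx]), dtype=np.int64)
        return torch.from_numpy(image).permute(2, 0, 1), torch.from_numpy(mask)


def binary_iou(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Intersection-over-Union for binary DLO masks (the metric used in the letter)."""
    pred_b, target_b = pred.bool(), target.bool()
    union = (pred_b | target_b).sum().item()
    return (pred_b & target_b).sum().item() / union if union > 0 else 1.0


# Mix synthetic renders with the VR-tracker-labeled real samples into one
# training set, as in the second step of the method.
synthetic = DLOSegmentationDataset("data/synthetic")
real = DLOSegmentationDataset("data/real_vr_labeled")
loader = DataLoader(ConcatDataset([synthetic, real]), batch_size=8, shuffle=True)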

Original language: English
Pages (from-to): 1013-1020
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 8
Issue number: 2
State: Published - 1 Feb 2023
Externally published: Yes

Keywords

  • Deformable linear objects
  • dataset generation
  • image segmentation
  • spatial labeling
  • usability
