Introspective classification for robot perception

Hugo Grimmett, Rudolph Triebel, Rohan Paul, Ingmar Posner

Research output: Contribution to journal › Article › peer-review

Abstract

In robotics, the use of a classification framework which produces scores with inappropriate confidences will ultimately lead to the robot making dangerous decisions. In order to select a framework which will make the best decisions, we should pay careful attention to the ways in which it generates scores. Precision and recall have been widely adopted as canonical metrics to quantify the performance of learning algorithms, but for robotics applications involving mission-critical decision making, good performance in relation to these metrics is insufficient. We introduce and motivate the importance of a classifier's introspective capacity: the ability to associate an appropriate assessment of confidence with any test case. We propose that a key ingredient for introspection is a framework's potential to increase its uncertainty with the distance between a test datum and its training data. We compare the introspective capacities of a number of commonly used classification frameworks in both classification and detection tasks, and show that better introspection leads to improved decision making in the context of tasks such as autonomous driving or semantic map generation.
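The property described above — uncertainty that grows with distance from the training data — can be illustrated with a toy sketch. This is not the paper's method; it is a minimal, hypothetical distance-based confidence score (a Gaussian kernel over the distance to the nearest training example, with an assumed `length_scale` parameter) that merely demonstrates the introspective behaviour the abstract argues for:

```python
import math

def introspective_confidence(x, training_data, length_scale=1.0):
    """Toy confidence in (0, 1] that shrinks as x moves away from the
    training data: a Gaussian kernel over the distance to the nearest
    training example. Illustrative only, not the paper's framework."""
    d_min = min(abs(x - t) for t in training_data)
    return math.exp(-(d_min ** 2) / (2 * length_scale ** 2))

train = [0.0, 1.0, 2.0]

# A query near the training data should yield high confidence,
# while a distant query should yield low confidence (high uncertainty).
near = introspective_confidence(1.1, train)
far = introspective_confidence(6.0, train)
assert near > far
```

A classifier with this property would, for example, report low confidence on a road scene unlike anything seen in training, prompting a more cautious driving decision rather than an overconfident misclassification.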

Original language: English
Pages (from-to): 743-762
Number of pages: 20
Journal: International Journal of Robotics Research
Volume: 35
Issue number: 7
DOIs
State: Published - 1 Jun 2016

Keywords

  • Robotics
  • classification
  • decisions
  • introspection
  • perception
  • uncertainty
