eXplainable Cooperative Machine Learning with NOVA

Tobias Baur, Alexander Heimerl, Florian Lingenfelser, Johannes Wagner, Michel F. Valstar, Björn Schuller, Elisabeth André

Research output: Contribution to journal › Article › peer-review

31 Scopus citations

Abstract

In this article, we introduce a novel workflow, which we subsume under the term “explainable cooperative machine learning”, and show its practical application in a data annotation and model training tool called NOVA. The main idea of our approach is to interactively incorporate the ‘human in the loop’ when training classification models from annotated data. In particular, NOVA offers a collaborative annotation backend where multiple annotators can combine their efforts. A key aspect is the ability to apply semi-supervised active learning techniques already during the annotation process: data can be pre-labelled automatically, drastically accelerating annotation. Furthermore, the user interface implements recent eXplainable AI techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. We show in a use-case evaluation that our workflow speeds up the annotation process, and we further argue that the additional visual explanations help annotators understand both the decision-making process and the trustworthiness of their trained machine learning models.
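The pre-labelling step described in the abstract can be illustrated with a minimal sketch: a model proposes labels with confidence scores, high-confidence predictions are accepted as automatic annotations, and low-confidence ones are routed back to human annotators. The function names, threshold, and toy model below are illustrative assumptions, not NOVA's actual API.

```python
# A minimal sketch of confidence-based pre-labelling in a
# human-in-the-loop annotation workflow. The classifier, threshold,
# and data are hypothetical, chosen only to show the control flow.

def pre_label(samples, predict_proba, threshold=0.8):
    """Split samples into auto-labelled and human-review queues.

    predict_proba(sample) -> (label, confidence in [0, 1]).
    Predictions at or above `threshold` are accepted automatically;
    the rest go back to human annotators (the 'human in the loop').
    """
    auto, review = [], []
    for s in samples:
        label, conf = predict_proba(s)
        if conf >= threshold:
            auto.append((s, label, conf))
        else:
            review.append((s, conf))
    return auto, review


# Toy stand-in model: longer "utterances" get higher confidence.
def toy_model(sample):
    conf = min(1.0, len(sample) / 10)
    return ("speech", conf)


auto, review = pre_label(
    ["hello there friend", "hm", "okay sure thing"], toy_model
)
```

After each round, the human-corrected labels from the `review` queue would be added to the training set and the model retrained, which is what makes the loop "cooperative" rather than one-shot.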

Original language: English
Pages (from-to): 143-164
Number of pages: 22
Journal: KI - Künstliche Intelligenz
Volume: 34
Issue number: 2
DOIs
State: Published - 1 Jun 2020
Externally published: Yes

Keywords

  • Annotation
  • Cooperative machine learning
  • Explainable AI
