Explainable Machine Learning for Scientific Insights and Discoveries

Ribana Roscher, Bastian Bohn, Marco F. Duarte, Jochen Garcke

Research output: Contribution to journal › Article › peer-review

680 Scopus citations

Abstract

Machine learning methods have been remarkably successful across a wide range of application areas in extracting essential information from data. An exciting and relatively recent development is the uptake of machine learning in the natural sciences, where the major goal is to obtain novel scientific insights and discoveries from observational or simulated data. A prerequisite for obtaining a scientific outcome is domain knowledge, which is needed not only to gain explainability but also to enhance scientific consistency. In this article, we review explainable machine learning in view of applications in the natural sciences and discuss three core elements that we identified as relevant in this context: transparency, interpretability, and explainability. With respect to these core elements, we survey recent scientific works that incorporate machine learning and examine how explainable machine learning is used in combination with domain knowledge from the application areas.
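
The contrast between a transparent model and post-hoc interpretability can be made concrete with a small sketch. The example below is illustrative only and is not taken from the paper; it assumes scikit-learn and uses synthetic data in place of scientific observations. It compares a linear model, whose coefficients can be read off directly, with an opaque ensemble whose behavior is interpreted after the fact via permutation feature importance.

```python
# Illustrative sketch (not from the paper): transparency vs. post-hoc
# interpretability, using scikit-learn on synthetic data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for observational or simulated scientific data.
X, y = make_regression(n_samples=500, n_features=5, n_informative=3,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent model: the learned coefficients are directly readable.
ridge = Ridge().fit(X_train, y_train)
print("Ridge coefficients:", ridge.coef_)

# Opaque model: interpretability is recovered post hoc by measuring how
# much shuffling each feature degrades held-out performance.
forest = RandomForestRegressor(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10,
                                random_state=0)
print("Permutation importances:", result.importances_mean)
```

In a scientific setting, such feature importances would then be checked against domain knowledge to assess whether the model's behavior is scientifically consistent.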

Original language: English
Article number: 9007737
Pages (from-to): 42200-42216
Number of pages: 17
Journal: IEEE Access
Volume: 8
DOIs:
State: Published - 2020
Externally published: Yes

Keywords

  • Explainable machine learning
  • Informed machine learning
  • Interpretability
  • Scientific consistency
  • Transparency
