Explainable Machine Learning Reveals Capabilities, Redundancy, and Limitations of a Geospatial Air Quality Benchmark Dataset

Scarlet Stadtler, Clara Betancourt, Ribana Roscher

Research output: Contribution to journal › Article › peer-review


Abstract

Air quality is highly relevant to society because poor air quality poses environmental risks to humans and nature. We apply explainable machine learning to air quality research by analyzing model predictions in relation to the underlying training data. The data originate from worldwide ozone observations paired with geospatial data. We train two different architectures, a neural network and a random forest, on various geospatial data to predict multi-year averages of the air pollutant ozone. To understand how both models function, we explain how they represent the training data and derive their predictions. By focusing on inaccurate predictions and explaining why they fail, we can (i) identify underrepresented samples, (ii) flag unexpectedly inaccurate predictions, and (iii) point to training samples that are irrelevant for predictions on the test set. Based on the underrepresented samples, we suggest where to build new measurement stations. We also show which training samples do not substantially contribute to model performance. This study demonstrates the application of explainable machine learning beyond simply explaining the trained model.
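
The workflow described above can be illustrated with a short, self-contained sketch. The code below is a minimal example assuming scikit-learn and synthetic tabular data; the feature matrix, model hyperparameters, and the 95th-percentile cutoff are illustrative assumptions, not the authors' actual configuration. It trains the two architectures named in the abstract and then uses k-nearest-neighbor distances in feature space (cf. the keywords) to flag test samples that lie far from the training data, i.e., candidates for underrepresented regions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestNeighbors
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Stand-in for geospatial features (e.g., altitude, population density)
    # paired with multi-year ozone averages; purely synthetic placeholder data.
    X = rng.normal(size=(500, 6))
    y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    scaler = StandardScaler().fit(X_train)
    X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

    # The two architectures named in the abstract.
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                      random_state=0).fit(X_train_s, y_train)
    print("RF R^2:", rf.score(X_test, y_test), "NN R^2:", nn.score(X_test_s, y_test))

    # k-nearest-neighbor distances in feature space: test samples far from
    # all training samples are candidates for underrepresented regions,
    # i.e., locations where new measurement stations might be most useful.
    knn = NearestNeighbors(n_neighbors=5).fit(X_train_s)
    dist, _ = knn.kneighbors(X_test_s)
    mean_dist = dist.mean(axis=1)
    cutoff = np.quantile(mean_dist, 0.95)  # illustrative threshold
    print("underrepresented test samples:", np.where(mean_dist > cutoff)[0])

In the paper's spirit, such distance-based flags could then be cross-checked against each model's largest prediction errors to separate underrepresented samples from unexpectedly inaccurate predictions.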

Original language: English
Pages (from-to): 150-171
Number of pages: 22
Journal: Machine Learning and Knowledge Extraction
Volume: 4
Issue number: 1
State: Published - Mar 2022

Keywords

  • air quality
  • explainable machine learning
  • k-nearest neighbors
  • neural network
  • random forest
