Abstract
Air quality is relevant to society because poor air quality poses environmental risks to humans and nature. We use explainable machine learning in air quality research by analyzing model predictions in relation to the underlying training data. The data originate from worldwide ozone observations, paired with geospatial data. We use two different architectures, a neural network and a random forest, both trained on various geospatial data to predict multi-year averages of the air pollutant ozone. To understand how both models function, we explain how they represent the training data and derive their predictions. By focusing on inaccurate predictions and explaining why these predictions fail, we can (i) identify underrepresented samples, (ii) flag unexpected inaccurate predictions, and (iii) point to training samples irrelevant for predictions on the test set. Based on the underrepresented samples, we suggest where to build new measurement stations. We also show which training samples do not substantially contribute to the model performance. This study demonstrates the application of explainable machine learning beyond simply explaining the trained model.
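The sketch below illustrates, in broad strokes, the kind of workflow the abstract describes: train one of the two mentioned architectures (a random forest) on geospatial features to predict multi-year ozone averages, then use k-nearest-neighbor distances in feature space to flag test samples that are underrepresented in the training data. This is not the authors' code; the synthetic features, model settings, and the 90th-percentile cut-off are illustrative assumptions.

```python
# Minimal sketch, assuming scikit-learn and synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic stand-in for station-level geospatial features
# (e.g. altitude, population density, vegetation index) and ozone targets.
n_stations, n_features = 500, 6
X = rng.normal(size=(n_stations, n_features))
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n_stations)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One of the two architectures mentioned in the abstract: a random forest.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
errors = np.abs(model.predict(X_test) - y_test)

# k-nearest-neighbor distances from each test sample to the training set:
# large distances indicate regions of feature space with little training
# coverage, i.e. candidate locations for new measurement stations.
nn = NearestNeighbors(n_neighbors=5).fit(X_train)
dist, _ = nn.kneighbors(X_test)
mean_dist = dist.mean(axis=1)

# Illustrative cut-off: call the 10% most distant test samples "underrepresented".
underrepresented = mean_dist > np.quantile(mean_dist, 0.9)
print(f"flagged {underrepresented.sum()} underrepresented test samples; "
      f"mean abs. error on them: {errors[underrepresented].mean():.2f} "
      f"vs. overall: {errors.mean():.2f}")
```

Comparing the error on the flagged samples with the overall error is one simple way to check whether poor feature-space coverage coincides with inaccurate predictions, which is the link the abstract draws between underrepresentation and prediction failures.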
| Original language | English |
| --- | --- |
| Pages (from-to) | 150-171 |
| Number of pages | 22 |
| Journal | Machine Learning and Knowledge Extraction |
| Volume | 4 |
| Issue number | 1 |
| DOIs | |
| State | Published - Mar 2022 |
Keywords
- air quality
- explainable machine learning
- k-nearest neighbors
- neural network
- random forest