Can Requirements Engineering Support Explainable Artificial Intelligence? Towards a User-Centric Approach for Explainability Requirements

Umm E. Habiba, Justus Bogner, Stefan Wagner

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Scopus citations

Abstract

With the recent proliferation of artificial intelligence systems, there has been a surge in the demand for explainability of these systems. Explanations help to reduce system opacity, support transparency, and increase stakeholder trust. In this position paper, we discuss synergies between requirements engineering (RE) and Explainable AI (XAI). We highlight challenges in the field of XAI, and propose a framework and research directions on how RE practices can help to mitigate these challenges.

Original language: English
Title of host publication: Proceedings - 30th IEEE International Requirements Engineering Conference Workshops, REW 2022
Editors: Eric Knauss, Gunter Mussbacher, Chetan Arora, Muneera Bano, Jean-Guy Schneider
Publisher: IEEE Computer Society
Pages: 162-165
Number of pages: 4
ISBN (Electronic): 9781665460002
DOIs
State: Published - 2022
Externally published: Yes
Event: 30th IEEE International Requirements Engineering Conference Workshops, REW 2022 - Virtual, Online, Australia
Duration: 15 Aug 2022 - 19 Aug 2022

Publication series

Name: Proceedings of the IEEE International Conference on Requirements Engineering
ISSN (Print): 1090-705X
ISSN (Electronic): 2332-6441

Conference

Conference: 30th IEEE International Requirements Engineering Conference Workshops, REW 2022
Country/Territory: Australia
City: Virtual, Online
Period: 15/08/22 - 19/08/22

Keywords

  • Explainability
  • Requirements Engineering
  • XAI

