Explainability Tool for NLP-based AI Systems

Project: Research

Project Details

Description

This project investigates the current state of Explainable AI for NLP-based AI systems in both academic literature and practical applications. Its goal is to help different stakeholders understand the decisions and predictions made by such systems according to their needs: for developers, this could mean debugging a system, while for end users, it could mean explaining decisions so they can trust the system. The project aims to build an interactive framework in which different explainability methods are identified and implemented, providing explanations that complement each other to deliver fine-grained explanations and analysis of NLP-based AI systems' decisions and behavior for various stakeholders.
Acronym: XplainNLP
Status: Finished
Effective start/end date: 1/02/24 → 31/01/26

Collaborative partners

  • Software AG (lead)
