Attention Mechanisms Don’t Learn Additive Models: Rethinking Feature Importance for Transformers

Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci

Research output: Contribution to journal › Article › peer-review

Abstract

We address the critical challenge of applying feature attribution methods to the transformer architecture, which dominates current applications in natural language processing and beyond. Traditional attribution methods in explainable AI (XAI) explicitly or implicitly rely on linear or additive surrogate models to quantify the impact of input features on a model's output. In this work, we formally prove an alarming incompatibility: transformers are structurally incapable of representing the linear or additive surrogate models used for feature attribution, undermining the grounding of these conventional explanation methodologies. To address this discrepancy, we introduce the Softmax-Linked Additive Log Odds Model (SLALOM), a novel surrogate model specifically designed to align with the transformer framework. SLALOM demonstrates the capacity to deliver a range of insightful explanations on both synthetic and real-world datasets. We highlight SLALOM's unique efficiency-quality curve by showing that SLALOM can produce explanations with substantially higher fidelity than competing surrogate models, or provide explanations of comparable quality at a fraction of their computational cost. We release code for SLALOM as an open-source project online at https://github.com/tleemann/slalom_explanations.
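To make the model class concrete, below is a minimal, self-contained sketch of what a softmax-linked additive log-odds surrogate could look like, assuming each token is assigned an importance score and a value score, and the surrogate's log-odds output is the softmax-of-importance weighted sum of the value scores. The function name and exact parameterization are illustrative assumptions inferred from the model's name, not the API of the released slalom_explanations package; consult the repository and the paper for the authoritative definition.

```python
# Hypothetical sketch of a softmax-linked additive log-odds surrogate.
# The functional form below is an assumption inferred from the model's name;
# see the paper and https://github.com/tleemann/slalom_explanations for the
# authoritative definition and API.
import numpy as np

def slalom_like_logodds(importance_scores: np.ndarray, value_scores: np.ndarray) -> float:
    """Combine per-token scores into a single log-odds prediction.

    importance_scores: shape (n_tokens,), controls how strongly each token is weighted.
    value_scores:      shape (n_tokens,), each token's contribution in log-odds space.
    """
    # Softmax over importance scores yields non-negative weights summing to 1.
    weights = np.exp(importance_scores - importance_scores.max())
    weights /= weights.sum()
    # Weighted sum of value scores gives the surrogate's log-odds output.
    return float(weights @ value_scores)

# Example: three tokens, the second one both salient and strongly positive (illustrative numbers).
s = np.array([0.1, 2.0, -0.5])   # importance scores
v = np.array([-0.3, 1.5, 0.2])   # value scores (log-odds contributions)
log_odds = slalom_like_logodds(s, v)
probability = 1.0 / (1.0 + np.exp(-log_odds))
print(f"log-odds={log_odds:.3f}, p={probability:.3f}")
```

Unlike a purely additive surrogate, the softmax coupling lets the weight of one token depend on the scores of all others, which is the kind of interaction the abstract argues transformers exhibit and additive models cannot capture.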

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2024
State: Published - 2024
