TY - GEN
T1 - Understanding Knowledge Drift in LLMs Through Misinformation
AU - Fastowski, Alina
AU - Kasneci, Gjergji
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - Large Language Models (LLMs) have revolutionized numerous applications, making them an integral part of our digital ecosystem. However, their reliability becomes critical, especially when these models are exposed to misinformation. This paper primarily analyzes the susceptibility of state-of-the-art LLMs to factual inaccuracies when they encounter false information in a Q&A scenario, an issue that can lead to a phenomenon we refer to as knowledge drift, which significantly undermines the trustworthiness of these models. We evaluate the factuality and the uncertainty of the models’ responses using Entropy, Perplexity, and Token Probability metrics. Our experiments reveal that an LLM’s uncertainty can increase by up to 56.6% when a question is answered incorrectly due to exposure to false information. At the same time, repeated exposure to the same false information can decrease the models’ uncertainty again (-52.8% w.r.t. the answers on the untainted prompts), potentially manipulating the underlying model’s beliefs and introducing a drift from its original knowledge. These findings provide insights into LLMs’ robustness and vulnerability to adversarial inputs, paving the way for developing more reliable LLM applications across various domains. The code is available at https://github.com/afastowski/knowledge_drift.
AB - Large Language Models (LLMs) have revolutionized numerous applications, making them an integral part of our digital ecosystem. However, their reliability becomes critical, especially when these models are exposed to misinformation. This paper primarily analyzes the susceptibility of state-of-the-art LLMs to factual inaccuracies when they encounter false information in a Q&A scenario, an issue that can lead to a phenomenon we refer to as knowledge drift, which significantly undermines the trustworthiness of these models. We evaluate the factuality and the uncertainty of the models’ responses using Entropy, Perplexity, and Token Probability metrics. Our experiments reveal that an LLM’s uncertainty can increase by up to 56.6% when a question is answered incorrectly due to exposure to false information. At the same time, repeated exposure to the same false information can decrease the models’ uncertainty again (-52.8% w.r.t. the answers on the untainted prompts), potentially manipulating the underlying model’s beliefs and introducing a drift from its original knowledge. These findings provide insights into LLMs’ robustness and vulnerability to adversarial inputs, paving the way for developing more reliable LLM applications across various domains. The code is available at https://github.com/afastowski/knowledge_drift.
KW - Knowledge Drift
KW - Large Language Models
KW - Uncertainty
UR - http://www.scopus.com/inward/record.url?scp=86000460140&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-82346-6_5
DO - 10.1007/978-3-031-82346-6_5
M3 - Conference contribution
AN - SCOPUS:86000460140
SN - 9783031823459
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 74
EP - 85
BT - Discovering Drift Phenomena in Evolving Landscapes - 1st International Workshop, DELTA 2024, Proceedings
A2 - Piangerelli, Marco
A2 - Prenkaj, Bardh
A2 - Rotalinti, Ylenia
A2 - Joshi, Ananya
A2 - Stilo, Giovanni
PB - Springer Science and Business Media Deutschland GmbH
T2 - 1st International Workshop on Discovering Drift Phenomena in Evolving Landscapes, DELTA 2024
Y2 - 26 August 2024 through 26 August 2024
ER -