TY - GEN
T1 - PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics
T2 - 2024 Findings of the Association for Computational Linguistics: NAACL 2024
AU - Zhu, Derui
AU - Chen, Dingfan
AU - Li, Qing
AU - Chen, Zongxiong
AU - Ma, Lei
AU - Grossklags, Jens
AU - Fritz, Mario
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - Despite tremendous advancements in large language models (LLMs) over recent years, a notably urgent challenge for their practical deployment is the phenomenon of “hallucination”, where the model fabricates facts and produces non-factual statements. In response, we propose PoLLMgraph, a Polygraph for LLMs, as an effective model-based white-box detection and forecasting approach. PoLLMgraph distinctly differs from the large body of existing research that concentrates on addressing such challenges through black-box evaluations. In particular, we demonstrate that hallucination can be effectively detected by analyzing the LLM's internal state transition dynamics during generation via tractable probabilistic models. Experimental results on various open-source LLMs confirm the efficacy of PoLLMgraph, outperforming state-of-the-art methods by a considerable margin, evidenced by over 20% improvement in AUC-ROC on common benchmarking datasets like TruthfulQA. Our work paves a new way for model-based white-box analysis of LLMs, motivating the research community to further explore, understand, and refine the intricate dynamics of LLM behaviors.
UR - http://www.scopus.com/inward/record.url?scp=85197645498&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85197645498
T3 - Findings of the Association for Computational Linguistics: NAACL 2024 - Findings
SP - 4737
EP - 4751
BT - Findings of the Association for Computational Linguistics: NAACL 2024
A2 - Duh, Kevin
A2 - Gomez, Helena
A2 - Bethard, Steven
PB - Association for Computational Linguistics (ACL)
Y2 - 16 June 2024 through 21 June 2024
ER -