Evaluating Large Language Models in Semantic Parsing for Conversational Question Answering over Knowledge Graphs

Phillip Schneider, Manuel Klettner, Kristiina Jokinen, Elena Simperl, Florian Matthes

Publication: Contribution to journal › Conference article › Peer-reviewed

3 citations (Scopus)

Abstract

Conversational question answering systems often rely on semantic parsing to enable interactive information retrieval, which involves generating structured database queries from natural language input. For information-seeking conversations about facts stored in a knowledge graph, dialogue utterances are transformed into graph queries, a task known as knowledge-based conversational question answering. This paper evaluates the performance of large language models that have not been explicitly pre-trained on this task. Through a series of experiments on an extensive benchmark dataset, we compare models of varying sizes with different prompting techniques and identify common issue types in the generated output. Our results demonstrate that large language models are capable of generating graph queries from dialogues, with significant improvements achievable through few-shot prompting and fine-tuning techniques, especially for smaller models that exhibit lower zero-shot performance.
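The few-shot prompting setup described in the abstract can be illustrated as prompt assembly: in-context examples pairing dialogue turns with graph queries are prepended to the current conversation before it is sent to the model. The following is a minimal sketch; the function name, example dialogue, and SPARQL queries are illustrative assumptions, not the paper's actual prompts or benchmark data.

```python
# Hypothetical sketch of few-shot prompt construction for dialogue-to-SPARQL
# semantic parsing. All example content below is made up for illustration.

FEW_SHOT_EXAMPLES = [
    {
        "dialogue": "User: Who directed Inception?",
        "query": "SELECT ?d WHERE { wd:Q25188 wdt:P57 ?d . }",
    },
    {
        "dialogue": "User: Who directed Inception?\n"
                    "System: Christopher Nolan.\n"
                    "User: When was he born?",
        "query": "SELECT ?b WHERE { wd:Q25191 wdt:P569 ?b . }",
    },
]

def build_prompt(dialogue_history, question, examples=FEW_SHOT_EXAMPLES):
    """Assemble a few-shot prompt asking an LLM to translate the latest
    user question, given prior dialogue turns, into a graph query."""
    parts = [
        "Translate the user's question into a SPARQL query "
        "over the knowledge graph.\n"
    ]
    # In-context demonstrations: dialogue followed by its gold query.
    for ex in examples:
        parts.append(f"{ex['dialogue']}\nQuery: {ex['query']}\n")
    # Current conversation; the model is expected to complete "Query:".
    context = "\n".join(dialogue_history)
    parts.append(f"{context}\nUser: {question}\nQuery:")
    return "\n".join(parts)
```

In a zero-shot setting, `examples` would simply be empty, which matches the abstract's finding that smaller models benefit most from supplying such demonstrations.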

Original language: English
Pages (from–to): 807-814
Number of pages: 8
Journal: International Conference on Agents and Artificial Intelligence
Volume: 3
DOIs
Publication status: Published - 2024
Event: 16th International Conference on Agents and Artificial Intelligence, ICAART 2024 - Rome, Italy
Duration: 24 Feb 2024 – 26 Feb 2024
