Abstract
Employees at large companies often face long waiting times when they need company-specific information, and someone on the other end must manually address those queries. Most companies are trying to incorporate LLM-powered conversational agents to speed up this process but often struggle to find appropriate training data, especially domain-specific data. This paper introduces a semi-automatic approach for generating domain-specific training data that leverages a domain expert as a human-in-the-loop for quality control. We test this approach on an HR use case of a large organization through a retrieval-based question-answering pipeline. Additionally, we test the effect of long context on the performance of the FAQ chatbot, for which we employ LongT5, an Efficient Transformer. Our experiments using LongT5 show that including the generated training data improves the performance of the FAQ chatbot during inference.
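To make the described retrieval-based question-answering setup concrete, the following is a minimal sketch, not the paper's implementation: it assumes a simple TF-IDF retriever, the publicly available google/long-t5-tglobal-base checkpoint, and illustrative HR-style FAQ passages and prompt format.

```python
# Minimal sketch (assumptions, not the paper's code): retrieve relevant FAQ passages,
# then let LongT5 generate an answer from the (potentially long) concatenated context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical domain-specific FAQ passages (stand-ins for internal HR documents).
passages = [
    "Parental leave requests must be submitted at least four weeks in advance.",
    "Travel expenses are reimbursed within 30 days after submitting the expense report.",
    "Remote work is possible up to three days per week with manager approval.",
]

# 1) Retrieve the passages most relevant to the employee's question (TF-IDF for simplicity).
question = "How far in advance do I have to request parental leave?"
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(passages)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
top_passages = [passages[i] for i in scores.argsort()[::-1][:2]]

# 2) Feed question + retrieved context to LongT5, an efficient long-context Transformer.
model_name = "google/long-t5-tglobal-base"  # assumed checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "question: " + question + " context: " + " ".join(top_passages)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the paper's setting, the retriever and the fine-tuned LongT5 model would operate over the organization's HR documents and the semi-automatically generated training data; the snippet above only illustrates the overall pipeline shape.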
Original language | English
---|---
Pages (from - to) | 42-49
Number of pages | 8
Journal | International Conference on Agents and Artificial Intelligence
Volume | 3
DOIs |
Publication status | Published - 2024
Event | 16th International Conference on Agents and Artificial Intelligence, ICAART 2024 - Rome, Italy. Duration: 24 Feb. 2024 → 26 Feb. 2024