Assessing student errors in experimentation using artificial intelligence and large language models: A comparative study with human raters

Arne Bewersdorff, Kathrin Seßler, Armin Baur, Enkelejda Kasneci, Claudia Nerdel

Research output: Contribution to journal › Article › peer-review

31 Scopus citations

Abstract

Identifying logical errors in complex, incomplete, or even contradictory and overall heterogeneous data, such as students’ experimentation protocols, is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 student protocols, an Artificial Intelligence (AI) system based on the GPT-3.5 and GPT-4 series was developed and tested against human raters. Our results indicate varying levels of accuracy in error detection between the AI system and human raters. The AI system can reliably identify many fundamental student errors: for instance, it identifies when a student focuses a hypothesis not on the dependent variable but solely on an expected observation (acc. = 0.90), when a student modifies the trials in an ongoing investigation (acc. = 1.00), and whether a student conducts valid test trials (acc. = 0.82). The identification of other, usually more complex errors, such as whether a student conducts a valid control trial (acc. = 0.60), poses a greater challenge. This research not only explores the utility of AI in educational settings but also contributes to the understanding of the capabilities of LLMs in error detection in inquiry-based learning such as experimentation.
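As an illustration of the per-category accuracy metric reported above, the following minimal sketch compares AI-predicted error flags against human-rater labels for one error category. The labels here are hypothetical placeholders, not the study's data, and the function is an illustrative reading of "accuracy" as simple agreement rate:

```python
# Illustrative only: hypothetical binary error labels, not the study's dataset.
# For one error category, each protocol gets an AI judgement and a human-rater
# judgement (True = error present); accuracy is the share of agreements.

def accuracy(ai_labels, human_labels):
    """Fraction of protocols on which the AI and the human rater agree."""
    assert len(ai_labels) == len(human_labels) and ai_labels
    agreements = sum(a == h for a, h in zip(ai_labels, human_labels))
    return agreements / len(ai_labels)

# Hypothetical ratings for 10 student protocols on one error category
ai    = [True, False, True, True,  False, True, False, False, True, True]
human = [True, False, True, False, False, True, False, False, True, True]

print(f"acc. = {accuracy(ai, human):.2f}")  # agreement on 9 of 10 protocols
```

In the study itself, accuracies such as 0.90 or 0.60 would be computed per error category across all 65 protocols.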

Original language: English
Article number: 100177
Journal: Computers and Education: Artificial Intelligence
Volume: 5
DOIs
State: Published - Jan 2023

Keywords

  • Artificial intelligence
  • Experimentation
  • Formative assessment
  • Large language models
  • Science education
  • Scientific inquiry
  • Student errors

