TY - JOUR
T1 - Assessing student errors in experimentation using artificial intelligence and large language models
T2 - A comparative study with human raters
AU - Bewersdorff, Arne
AU - Seßler, Kathrin
AU - Baur, Armin
AU - Kasneci, Enkelejda
AU - Nerdel, Claudia
N1 - Publisher Copyright:
© 2023 The Authors
PY - 2023/1
Y1 - 2023/1
N2 - Identifying logical errors in complex, incomplete, or even contradictory and heterogeneous data such as students’ experimentation protocols is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 student protocols, an Artificial Intelligence (AI) system based on the GPT-3.5 and GPT-4 series was developed and tested against human raters. Our results indicate varying levels of accuracy in error detection between the AI system and human raters. The AI system reliably identifies many fundamental student errors, for instance when a student focuses the hypothesis not on the dependent variable but solely on an expected observation (acc. = 0.90), when a student modifies the trials in an ongoing investigation (acc. = 1.00), and whether a student conducts valid test trials (acc. = 0.82). Identifying other, usually more complex errors, such as whether a student conducts a valid control trial (acc. = 0.60), poses a greater challenge. This research not only explores the utility of AI in educational settings but also contributes to the understanding of the capabilities of LLMs for error detection in inquiry-based learning such as experimentation.
AB - Identifying logical errors in complex, incomplete, or even contradictory and heterogeneous data such as students’ experimentation protocols is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 student protocols, an Artificial Intelligence (AI) system based on the GPT-3.5 and GPT-4 series was developed and tested against human raters. Our results indicate varying levels of accuracy in error detection between the AI system and human raters. The AI system reliably identifies many fundamental student errors, for instance when a student focuses the hypothesis not on the dependent variable but solely on an expected observation (acc. = 0.90), when a student modifies the trials in an ongoing investigation (acc. = 1.00), and whether a student conducts valid test trials (acc. = 0.82). Identifying other, usually more complex errors, such as whether a student conducts a valid control trial (acc. = 0.60), poses a greater challenge. This research not only explores the utility of AI in educational settings but also contributes to the understanding of the capabilities of LLMs for error detection in inquiry-based learning such as experimentation.
KW - Artificial intelligence
KW - Experimentation
KW - Formative assessment
KW - Large language models
KW - Science education
KW - Scientific inquiry
KW - Student errors
UR - http://www.scopus.com/inward/record.url?scp=85174600919&partnerID=8YFLogxK
U2 - 10.1016/j.caeai.2023.100177
DO - 10.1016/j.caeai.2023.100177
M3 - Article
AN - SCOPUS:85174600919
SN - 2666-920X
VL - 5
JO - Computers and Education: Artificial Intelligence
JF - Computers and Education: Artificial Intelligence
M1 - 100177
ER -