Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry

Research output: Contribution to journal › Article › peer-review

64 Scopus citations

Abstract

This study explores the cognitive load and learning outcomes associated with using large language models (LLMs) versus traditional search engines for information gathering during learning. A total of 91 university students were randomly assigned to use either ChatGPT-3.5 or Google to research the socio-scientific issue of nanoparticles in sunscreen and to derive valid recommendations and justifications. The study investigated potential differences in cognitive load, as well as in the quality and homogeneity of the students' recommendations and justifications. Results indicated that students using the LLM experienced significantly lower cognitive load. Despite this reduction, however, these students demonstrated lower-quality reasoning and argumentation in their final recommendations than those who used the traditional search engine. Further, the homogeneity of the recommendations and justifications did not differ significantly between the two groups, suggesting that the LLM did not restrict the diversity of students' perspectives. These findings highlight the nuanced implications of digital tools for learning: while LLMs can decrease the cognitive burden of information gathering during a learning task, they do not in themselves promote the deeper engagement with content that high-quality learning requires.

Original language: English
Article number: 108386
Journal: Computers in Human Behavior
Volume: 160
State: Published - Nov 2024
