Critical Thinking Assessment in Higher Education: A Mixed-Methods Comparative Analysis of AI and Human Evaluator

Research output: Contribution to journal › Article › peer-review

Abstract

This article investigates the potential of using Artificial Intelligence (AI) to assess students’ critical thinking skills in higher education. With the growing adoption of AI technologies in educational assessment, there are prospects for streamlining evaluation processes; however, the integration of AI into critical thinking assessment remains underexplored. To address this gap, we compare an educator’s grading with that generated by ChatGPT on a critical thinking test for university students. We employ a mixed-methods approach: (a) a quantitative comparison of scores and (b) a thematic analysis exploring the rationale behind the scores. The findings suggest that while AI offers broader contextual feedback, human evaluators provide precision and adherence to grading rubrics, and that universities should therefore consider a hybrid human and AI evaluation approach. This study contributes to the discourse on integrating AI into assessment practices in higher education while addressing issues of transparency and interpretability.

Original language: English
Journal: International Journal of Human-Computer Interaction
DOIs
State: Accepted/In press - 2025

Keywords

  • Artificial intelligence in education (AIEd)
  • ChatGPT
  • critical thinking
  • higher education assessment
