Automated Hand-Raising Detection in Classroom Videos: A View-Invariant and Occlusion-Robust Machine Learning Approach

Babette Bühler, Ruikun Hou, Efe Bozkir, Patricia Goldberg, Peter Gerjets, Ulrich Trautwein, Enkelejda Kasneci

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review

3 Scopus citations

Abstract

Hand-raising signals students’ willingness to participate actively in classroom discourse. It has been linked to academic achievement and cognitive engagement and constitutes an observable indicator of behavioral engagement. However, because manual annotation of hand-raising by human observers is highly labor-intensive, research on this phenomenon, which could enable teachers to understand and foster active classroom participation, is still scarce. Automated detection of hand-raising events in classroom videos can offer a time- and cost-effective substitute for manual coding. From a technical perspective, the main challenges for automated detection in the classroom setting are diverse camera angles and student occlusions. In this work, we propose utilizing and further extending a novel view-invariant, occlusion-robust machine learning approach with long short-term memory networks for hand-raising detection in classroom videos based on body pose estimation. We employed a dataset of 36 real-world classroom videos capturing 127 students in grades 5 to 12, with 2442 manually annotated authentic hand-raising events. Our temporal model trained on body pose embeddings achieved an F1 score of 0.76. When this approach was used to annotate hand-raising instances automatically, it achieved a mean absolute error of 3.76 for the number of detected hand-raisings per student and lesson. We demonstrate its application by investigating the relationship between hand-raising events and self-reported cognitive engagement, situational interest, and involvement, using both manually annotated and automatically detected hand-raising instances. Furthermore, we discuss the potential of our approach to enable future large-scale research on student participation, as well as privacy-preserving data collection in the classroom context.
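The abstract describes a temporal model in which per-frame body pose estimates are converted into pose embeddings and a long short-term memory (LSTM) network classifies short windows of these embeddings as hand-raising or not. The following is a minimal, hypothetical sketch of such a classifier in PyTorch; the feature dimension, window length, and class names are illustrative assumptions and do not reflect the authors' actual implementation.

    # Minimal sketch (not the authors' code): a binary LSTM classifier over
    # per-frame body-pose features, as one might build for window-level
    # hand-raising detection. Pose keypoints are assumed to come from an
    # off-the-shelf pose estimator; dimensions below are illustrative.
    import torch
    import torch.nn as nn


    class HandRaiseLSTM(nn.Module):
        """Classifies a short window of pose-embedding frames as hand-raising or not."""

        def __init__(self, feature_dim: int = 34, hidden_dim: int = 64, num_layers: int = 1):
            super().__init__()
            self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)  # single logit: hand-raising vs. not

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, feature_dim), e.g. normalized 2D keypoints per frame
            _, (h_n, _) = self.lstm(x)
            return self.head(h_n[-1]).squeeze(-1)  # logits, shape (batch,)


    if __name__ == "__main__":
        model = HandRaiseLSTM()
        window = torch.randn(8, 30, 34)  # 8 windows of 30 frames, 17 keypoints x (x, y)
        probs = torch.sigmoid(model(window))
        print(probs.shape)  # torch.Size([8])

In such a setup, thresholding the per-window probabilities and merging consecutive positive windows would yield discrete hand-raising events, which could then be counted per student and lesson, in the spirit of the mean-absolute-error evaluation reported in the abstract.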

Original language: English
Title of host publication: Artificial Intelligence in Education - 24th International Conference, AIED 2023, Proceedings
Editors: Ning Wang, Genaro Rebolledo-Mendez, Noboru Matsuda, Olga C. Santos, Vania Dimitrova
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 102-113
Number of pages: 12
ISBN (Print): 9783031362712
DOIs
State: Published - 2023
Event: 24th International Conference on Artificial Intelligence in Education, AIED 2023 - Tokyo, Japan
Duration: 3 Jul 2023 – 7 Jul 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13916 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 24th International Conference on Artificial Intelligence in Education, AIED 2023
Country/Territory: Japan
City: Tokyo
Period: 3/07/23 – 7/07/23

Keywords

  • AI in Education
  • Educational Technologies
  • Hand-raising detection
  • Student Engagement
