TY - GEN
T1 - Person independent, privacy preserving, and real time assessment of cognitive load using eye tracking in a virtual reality setup
AU - Bozkir, Efe
AU - Geisler, David
AU - Kasneci, Enkelejda
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/3
Y1 - 2019/3
N2 - Eye tracking is regarded as a key enabling technology for VR and AR for multiple reasons: it not only helps to massively reduce computational costs through gaze-based optimization of graphics and rendering, but also offers a unique opportunity to design gaze-based personalized interfaces and applications. Additionally, the analysis of eye tracking data allows the cognitive load, intentions, and actions of the user to be assessed. In this work, we propose a person-independent, privacy-preserving, and gaze-based cognitive load recognition scheme for drivers in critical situations, based on driving data previously collected in a VR driving experiment that included a safety-critical situation. Using carefully annotated ground-truth information, we trained multiple classifiers on pupillary information and performance measures (accelerator, brake, and steering-wheel inputs) to assess the cognitive load of the driver. Our results show that incorporating eye tracking data into the VR setup allows the cognitive load of the user to be predicted with an accuracy above 80%. Beyond this specific setup, the proposed framework can be used in any adaptive and intelligent VR/AR application.
AB - Eye tracking is regarded as a key enabling technology for VR and AR for multiple reasons: it not only helps to massively reduce computational costs through gaze-based optimization of graphics and rendering, but also offers a unique opportunity to design gaze-based personalized interfaces and applications. Additionally, the analysis of eye tracking data allows the cognitive load, intentions, and actions of the user to be assessed. In this work, we propose a person-independent, privacy-preserving, and gaze-based cognitive load recognition scheme for drivers in critical situations, based on driving data previously collected in a VR driving experiment that included a safety-critical situation. Using carefully annotated ground-truth information, we trained multiple classifiers on pupillary information and performance measures (accelerator, brake, and steering-wheel inputs) to assess the cognitive load of the driver. Our results show that incorporating eye tracking data into the VR setup allows the cognitive load of the user to be predicted with an accuracy above 80%. Beyond this specific setup, the proposed framework can be used in any adaptive and intelligent VR/AR application.
KW - Cognitive load recognition
KW - Driving simulation
KW - Eye tracking
KW - Virtual reality
UR - http://www.scopus.com/inward/record.url?scp=85071843887&partnerID=8YFLogxK
U2 - 10.1109/VR.2019.8797758
DO - 10.1109/VR.2019.8797758
M3 - Conference contribution
AN - SCOPUS:85071843887
T3 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
SP - 1834
EP - 1837
BT - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019
Y2 - 23 March 2019 through 27 March 2019
ER -