TY - JOUR
T1 - U-HAR
AU - Meyer, Johannes
AU - Frank, Adrian
AU - Schlebusch, Thomas
AU - Kasneci, Enkelejda
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/5
Y1 - 2022/5
AB - After the success of smartphones and smartwatches, smart glasses are expected to be the next smart wearable. While novel display technology allows content to be seamlessly embedded into the field of view (FOV), current interaction methods for glasses still require active user input, limiting the user experience. One way to improve this and drive immersive augmentation is to reduce user interactions to a necessary minimum by adding context awareness to smart glasses. To this end, we propose an approach based on human activity recognition that incorporates features derived from the user's head and eye movements. Towards this goal, we combine a commercial eye tracker and an IMU to capture eye- and head-movement features for 7 activities performed by 20 participants. From a methodological perspective, we introduce U-HAR, a convolutional network optimized for activity recognition. By applying few-shot learning, our model reaches a macro-F1-score of 86.59%, allowing us to derive contextual information.
KW - context awareness
KW - head and eye movements
KW - human activity recognition
KW - smart glasses
KW - ubiquitous computing
UR - http://www.scopus.com/inward/record.url?scp=85130503672&partnerID=8YFLogxK
U2 - 10.1145/3530884
DO - 10.1145/3530884
M3 - Article
AN - SCOPUS:85130503672
SN - 2573-0142
VL - 6
JO - Proceedings of the ACM on Human-Computer Interaction
JF - Proceedings of the ACM on Human-Computer Interaction
IS - ETRA
M1 - 143
ER -