Abstract
After the success of smartphones and smartwatches, smart glasses are expected to be the next smart wearable. While novel display technology allows content to be embedded seamlessly into the field of view (FOV), current interaction methods for glasses require active user input and thereby limit the user experience. One way to improve this and drive immersive augmentation is to reduce user interactions to the necessary minimum by adding context awareness to smart glasses. To this end, we propose an approach based on human activity recognition that incorporates features derived from the user's head and eye movements. We combine a commercial eye tracker and an IMU to capture eye- and head-movement features for 7 activities performed by 20 participants. From a methodological perspective, we introduce U-HAR, a convolutional network optimized for activity recognition. By applying few-shot learning, our model reaches a macro-F1-score of 86.59%, allowing us to derive contextual information.
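Activity-recognition pipelines like the one described typically segment the fused head- and eye-movement streams into fixed-length windows before feeding them to a convolutional classifier. The sketch below shows this standard windowing step under assumed parameters (the channel layout, 50 Hz sampling rate, and 2-second windows are illustrative choices, not details taken from the paper):

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (T, C) multichannel signal into overlapping windows.

    Returns an array of shape (n_windows, win_len, C), the usual input
    layout for 1D convolutional activity-recognition models.
    """
    T = signal.shape[0]
    starts = range(0, T - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# Hypothetical sensor layout (not from the paper): 6 IMU channels
# (3-axis accelerometer + 3-axis gyroscope) for head movement,
# plus 2 gaze channels (x, y) for eye movement.
rng = np.random.default_rng(0)
head = rng.standard_normal((1000, 6))        # head movement (IMU)
eye = rng.standard_normal((1000, 2))         # eye movement (gaze)
fused = np.concatenate([head, eye], axis=1)  # shape (1000, 8)

# 2-second windows at an assumed 50 Hz sampling rate, 50% overlap.
windows = sliding_windows(fused, win_len=100, step=50)
print(windows.shape)  # (19, 100, 8)
```

Each window would then be labeled with one of the 7 activities and passed to the network; the 50% overlap is a common choice that increases the number of training examples without discarding signal.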
Original language | English |
---|---|
Article number | 143 |
Journal | Proceedings of the ACM on Human-Computer Interaction |
Volume | 6 |
Issue number | ETRA |
DOIs | |
State | Published - May 2022 |
Externally published | Yes |
Keywords
- context awareness
- head and eye movements
- human activity recognition
- smart glasses
- ubiquitous computing