Abstract
The lack of large-scale annotated real-world datasets makes transfer learning a necessity for video activity understanding. We aim to develop an effective method for few-shot transfer learning for first-person action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain, which provides primitive action labels, to a different target domain, using only a handful of examples. The visual cues we employ include object-object interactions, hand grasps, and motion within regions determined by hand locations. We use a meta-learning framework to extract the distinctive and domain-invariant components of these cues, which enables the transfer of action classification models across public datasets captured with diverse scene and action configurations. We present comparative evaluations of our transfer learning methodology and report superior performance over state-of-the-art action classification approaches for both inter-class and inter-dataset transfer.
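To make the episodic few-shot setup concrete, the sketch below shows a minimal prototypical-network-style training step that fuses several independently extracted cue embeddings (e.g., object interactions, hand grasps, hand-region motion) into a single representation. This is an illustrative assumption, not the paper's implementation; all names (`CueFusion`, `CUE_DIM`, `episode_loss`) and dimensions are hypothetical.

```python
# Illustrative sketch only: one N-way K-shot episode over pre-extracted
# visual-cue features. Hypothetical names and sizes throughout.
import torch
import torch.nn.functional as F

CUE_DIM = 128   # assumed dimensionality of each cue embedding
NUM_CUES = 3    # e.g., object interactions, hand grasps, hand-region motion

class CueFusion(torch.nn.Module):
    """Learnable fusion of independently trained cue embeddings."""
    def __init__(self, cue_dim=CUE_DIM, num_cues=NUM_CUES, out_dim=64):
        super().__init__()
        self.proj = torch.nn.Linear(cue_dim * num_cues, out_dim)

    def forward(self, cues):            # cues: (batch, num_cues, cue_dim)
        return self.proj(cues.flatten(1))

def episode_loss(model, support, support_y, query, query_y, n_way):
    """Prototypical loss for a single N-way K-shot episode."""
    z_s = model(support)                # (n_way * k_shot, out_dim)
    z_q = model(query)                  # (n_query, out_dim)
    # Class prototypes: mean support embedding per class.
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    # Negative squared Euclidean distance to prototypes as logits.
    logits = -torch.cdist(z_q, protos) ** 2
    return F.cross_entropy(logits, query_y)

# Toy usage: a 5-way 1-shot episode with random cue features.
model = CueFusion()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
support = torch.randn(5, NUM_CUES, CUE_DIM)
query = torch.randn(15, NUM_CUES, CUE_DIM)
loss = episode_loss(model, support, torch.arange(5),
                    query, torch.arange(5).repeat_interleave(3), n_way=5)
opt.zero_grad(); loss.backward(); opt.step()
```

Training over many such episodes, sampled from the label-rich source domain, is what pushes the fused representation toward the domain-invariant components that transfer with only a few target examples.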
| Original language | English |
| --- | --- |
| Pages (from-to) | 6659–6673 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 45 |
| Issue number | 6 |
| DOIs | |
| State | Published - 1 Jun 2023 |
Keywords
- Meta-learning
- Action recognition
- Attention
- Few-shot learning