Domain-Specific Priors and Meta Learning for Few-Shot First-Person Action Recognition

Huseyin Coskun, M. Zeeshan Zia, Bugra Tekin, Federica Bogo, Nassir Navab, Federico Tombari, Harpreet S. Sawhney

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

The lack of large-scale real datasets with annotations makes transfer learning a necessity for video activity understanding. We aim to develop an effective method for few-shot transfer learning for first-person action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain, which provides primitive action labels, to a different target domain using only a handful of examples. The visual cues we employ include object-object interactions, hand grasps, and motion within regions that are a function of hand locations. We employ a framework based on meta-learning to extract the distinctive and domain-invariant components of the deployed visual cues. This enables transfer of action classification models across public datasets captured with diverse scene and action configurations. We present comparative evaluations of our transfer learning methodology and report superior results over state-of-the-art action classification approaches for both inter-class and inter-dataset transfer.
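
To make the few-shot setting described in the abstract concrete, the sketch below shows one episodic few-shot classification step over fused cue features. It uses a prototypical-network-style nearest-prototype classifier as a stand-in for the paper's meta-learning framework; the cue encoders, feature dimensions, and the choice of prototypical networks are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed setup, not the paper's code): one N-way K-shot episode
    # over concatenated first-person cue features (object interactions, hand grasps,
    # hand-centered motion), classified by distance to class prototypes.
    import torch

    def fuse_cues(object_feats, grasp_feats, motion_feats):
        """Concatenate per-cue features from independently trained encoders."""
        return torch.cat([object_feats, grasp_feats, motion_feats], dim=-1)

    def prototypical_episode(support_feats, support_labels, query_feats, n_way):
        """Build class prototypes from the support set and score queries by
        negative squared Euclidean distance to each prototype."""
        prototypes = torch.stack(
            [support_feats[support_labels == c].mean(dim=0) for c in range(n_way)]
        )  # shape: (n_way, feature_dim)
        logits = -torch.cdist(query_feats, prototypes) ** 2  # (n_query, n_way)
        return logits

    if __name__ == "__main__":
        n_way, k_shot, n_query, d = 5, 5, 15, 128  # hypothetical episode sizes
        # Random tensors stand in for features produced by the cue encoders.
        sup = fuse_cues(*[torch.randn(n_way * k_shot, d) for _ in range(3)])
        qry = fuse_cues(*[torch.randn(n_query, d) for _ in range(3)])
        sup_y = torch.arange(n_way).repeat_interleave(k_shot)
        print(prototypical_episode(sup, sup_y, qry, n_way).shape)  # (15, 5)

In this reading, meta-learning amounts to training the cue encoders (or a fusion head) across many such episodes drawn from the source domain, so that the fused representation transfers to the target domain from only a handful of labeled examples.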

Original language: English
Pages (from-to): 6659-6673
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 6
DOIs
State: Published - 1 Jun 2023

Keywords

  • Meta learning
  • action recognition
  • attention
  • few-shot learning
