PiGraphs: Learning interaction snapshots from observations

Manolis Savva, Angel X. Chang, Pat Hanrahan, Matthew Fisher, Matthias Nießner

Research output: Contribution to journal › Conference article › peer-review



We learn a probabilistic model connecting human poses and arrangements of object geometry from real-world observations of interactions collected with commodity RGB-D sensors. This model is encoded as a set of prototypical interaction graphs (PiGraphs), a human-centric representation capturing physical contact and visual attention linkages between 3D geometry and human body parts. We use this encoding of the joint probability distribution over pose and geometry during everyday interactions to generate interaction snapshots, which are static depictions of human poses and relevant objects during human-object interactions. We demonstrate that our model enables a novel human-centric understanding of 3D content and allows for jointly generating 3D scenes and interaction poses given terse high-level specifications, natural language, or reconstructed real-world scene constraints.
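The abstract describes PiGraphs as graphs linking human body parts to 3D object geometry via observed contact and attention relationships, with a probability distribution learned from RGB-D recordings. The following is a minimal illustrative sketch of that kind of representation, not the authors' implementation; all class and field names here are hypothetical, and the scoring is a simplified empirical-frequency stand-in for the paper's learned joint distribution.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: nodes pair human body parts with object categories,
# edges record how often a contact or attention link was observed, which
# yields a simple empirical probability for scoring candidate snapshots.

@dataclass(frozen=True)
class Link:
    body_part: str   # e.g. "hips", "gaze"
    object_cat: str  # e.g. "chair", "monitor"
    kind: str        # "contact" or "attention"

@dataclass
class PiGraph:
    observations: int = 0
    counts: dict = field(default_factory=dict)  # Link -> observation count

    def observe(self, links):
        """Accumulate one observed interaction frame."""
        self.observations += 1
        for link in links:
            self.counts[link] = self.counts.get(link, 0) + 1

    def prob(self, link):
        """Empirical probability that a link is active in this interaction."""
        if self.observations == 0:
            return 0.0
        return self.counts.get(link, 0) / self.observations

    def score(self, links):
        """Score a candidate snapshot as the product of its link probabilities."""
        p = 1.0
        for link in links:
            p *= self.prob(link)
        return p

# Build a toy "sitting at a desk" PiGraph from two observed frames.
sitting = PiGraph()
sitting.observe({Link("hips", "chair", "contact"),
                 Link("gaze", "monitor", "attention")})
sitting.observe({Link("hips", "chair", "contact")})

print(sitting.prob(Link("hips", "chair", "contact")))      # → 1.0
print(sitting.prob(Link("gaze", "monitor", "attention")))  # → 0.5
```

A generator along these lines could rank candidate pose-and-object arrangements by their score under the graph, preferring snapshots whose contact and attention links were frequently observed in the training recordings.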

Original language: English
Article number: a139
Journal: ACM Transactions on Graphics
Issue number: 4
State: Published - 11 Jul 2016
Externally published: Yes
Event: ACM SIGGRAPH 2016 - Anaheim, United States
Duration: 24 Jul 2016 – 28 Jul 2016


Keywords:
  • 3D content generation
  • Human pose modeling
  • Object semantics
  • Person-object interactions


