Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections

Zhenzhang Ye, Tarun Yenamandra, Florian Bernard, Daniel Cremers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Graph matching aims to establish correspondences between the vertices of graphs such that both node and edge attributes agree. Various learning-based methods have recently been proposed for finding correspondences between image keypoints based on deep graph matching formulations. While these approaches mainly focus on learning node and edge attributes, they completely ignore the 3D geometry of the underlying objects depicted in the 2D images. We fill this gap by proposing a trainable framework that takes advantage of graph neural networks for learning a deformable 3D geometry model from inhomogeneous image collections, i.e., sets of images that depict different instances of objects from the same category. Experimentally, we demonstrate that our method outperforms recent learning-based approaches for graph matching in terms of both accuracy and cycle-consistency error, while additionally recovering the underlying 3D geometry of the objects depicted in the 2D images.
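
For reference, pairwise graph matching is commonly posed as a quadratic assignment problem (QAP); the sketch below uses the standard Lawler-style notation (affinity matrix \(\mathbf{W}\), matching matrix \(\mathbf{X}\)) as an illustration of the general formulation, not as the exact objective used in this paper.

\[
\max_{\mathbf{X}\in\{0,1\}^{n\times m}} \; \operatorname{vec}(\mathbf{X})^{\top}\,\mathbf{W}\,\operatorname{vec}(\mathbf{X})
\quad \text{s.t.} \quad \mathbf{X}\mathbf{1}_m \le \mathbf{1}_n, \;\; \mathbf{X}^{\top}\mathbf{1}_n \le \mathbf{1}_m ,
\]

where the diagonal entries of \(\mathbf{W}\) score node-attribute agreement and the off-diagonal entries score edge-attribute agreement. In the multi-graph setting, the cycle-consistency error mentioned in the abstract measures violations of the transitivity condition \(\mathbf{X}_{ik} = \mathbf{X}_{ij}\mathbf{X}_{jk}\) over triples of graphs \(i, j, k\).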

Original language: English
Title of host publication: AAAI-22 Technical Tracks 3
Publisher: Association for the Advancement of Artificial Intelligence
Pages: 3125-3133
Number of pages: 9
ISBN (Electronic): 1577358767, 9781577358763
DOIs
State: Published - 30 Jun 2022
Event: 36th AAAI Conference on Artificial Intelligence, AAAI 2022 - Virtual, Online
Duration: 22 Feb 2022 – 1 Mar 2022

Publication series

Name: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
Volume: 36

Conference

Conference: 36th AAAI Conference on Artificial Intelligence, AAAI 2022
City: Virtual, Online
Period: 22/02/22 – 01/03/22
