Learning generative models across incomparable spaces

Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka

Publication: Contribution to book/report › Conference contribution › Peer-reviewed

44 citations (Scopus)

Abstract

Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety. However, in some cases, we may want to only learn some aspects (e.g., cluster or manifold structure), while modifying others (e.g., style, orientation or dimension). In this work, we propose an approach to learn generative models across such incomparable spaces, and demonstrate how to steer the learned distribution towards target properties. A key component of our model is the Gromov-Wasserstein distance, a notion of discrepancy that compares distributions relationally rather than absolutely. While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain learning.
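To make the "relational rather than absolute" comparison concrete, the following is a minimal NumPy sketch of the Gromov-Wasserstein objective (the names, sample sizes, and couplings are illustrative assumptions, not the paper's implementation). Two sample sets of different dimension are compared only through their intra-space distance matrices, so an isometric copy incurs zero discrepancy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.normal(size=(n, 2))           # samples in a 2-D space
Y = np.hstack([X, np.zeros((n, 1))])  # isometric copy embedded in 3-D

def pairwise(Z):
    """Euclidean distance matrix within one space."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def gw_cost(C1, C2, T):
    """GW discrepancy: sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]."""
    diff = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2  # indexed [i,j,k,l]
    return np.einsum("ijkl,ij,kl->", diff, T, T)

C1, C2 = pairwise(X), pairwise(Y)
T_matched = np.eye(n) / n                # couple each point with its copy
T_uniform = np.full((n, n), 1.0 / n**2)  # uninformative coupling

print(gw_cost(C1, C2, T_matched))  # ~0: the copy is isometric
print(gw_cost(C1, C2, T_uniform))  # > 0: relational structure is ignored
```

Because only the distance matrices `C1` and `C2` enter the cost, the two spaces never need a shared ambient metric, which is what allows generative modeling across spaces of different dimension.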

Original language: English
Title: 36th International Conference on Machine Learning, ICML 2019
Publisher: International Machine Learning Society (IMLS)
Pages: 1374-1389
Number of pages: 16
ISBN (electronic): 9781510886988
Publication status: Published - 2019
Externally published: Yes
Event: 36th International Conference on Machine Learning, ICML 2019 - Long Beach, United States
Duration: 9 June 2019 - 15 June 2019

Publication series

Name: 36th International Conference on Machine Learning, ICML 2019
Volume: 2019-June

Conference

Conference: 36th International Conference on Machine Learning, ICML 2019
Country/Territory: United States
City: Long Beach
Period: 9/06/19 - 15/06/19
