Max-Margin Contrastive Learning

Anshul Shah, Suvrit Sra, Rama Chellappa, Anoop Cherian

Publication: Contribution to book/report › Conference contribution › Peer-reviewed

29 citations (Scopus)

Abstract

Standard contrastive learning approaches usually require a large number of negatives for effective unsupervised learning and often exhibit slow convergence. We suspect this behavior is due to the suboptimal selection of negatives used for offering contrast to the positives. We counter this difficulty by taking inspiration from support vector machines (SVMs) to present max-margin contrastive learning (MMCL). Our approach selects negatives as the sparse support vectors obtained via a quadratic optimization problem, and contrastiveness is enforced by maximizing the decision margin. As SVM optimization can be computationally demanding, especially in an end-to-end setting, we present simplifications that alleviate the computational burden. We validate our approach on standard vision benchmark datasets, demonstrating better performance in unsupervised representation learning over state-of-the-art, while having better empirical convergence properties.
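The core idea sketched in the abstract — selecting negatives as the sparse support vectors of an SVM separating the positive from candidate negatives — can be illustrated as follows. This is a minimal, hypothetical sketch using scikit-learn's `SVC` on random embeddings; it is not the paper's end-to-end formulation or its computational simplifications, and all names (`anchor`, `negatives`) are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim = 16
anchor = rng.normal(size=dim)           # embedding of the positive example
negatives = rng.normal(size=(64, dim))  # candidate negative embeddings

# One-vs-rest SVM: the anchor is the positive class, candidates the negative class.
X = np.vstack([anchor, negatives])
y = np.array([1] + [-1] * len(negatives))

svm = SVC(kernel="linear", C=1.0)
svm.fit(X, y)

# Support vectors with label -1 are the (sparse) negatives selected for contrast;
# the margin of this classifier is what MMCL-style training would maximize.
sv_idx = svm.support_[y[svm.support_] == -1]
selected = negatives[sv_idx - 1]  # shift by 1 to skip the anchor row in X
print(f"{len(sv_idx)} of {len(negatives)} negatives selected as support vectors")
```

Because the dual solution of an SVM is sparse, only a small subset of the candidates is typically selected, which is the mechanism the abstract credits for better contrast than using all negatives.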

Original language: English
Title: AAAI-22 Technical Tracks 8
Publisher: Association for the Advancement of Artificial Intelligence
Pages: 8220-8230
Number of pages: 11
ISBN (electronic): 1577358767, 9781577358763
DOIs
Publication status: Published - 30 June 2022
Externally published: Yes
Event: 36th AAAI Conference on Artificial Intelligence, AAAI 2022 - Virtual, Online
Duration: 22 Feb 2022 - 1 Mar 2022

Publication series

Name: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
Volume: 36

Conference

Conference: 36th AAAI Conference on Artificial Intelligence, AAAI 2022
Location: Virtual, Online
Period: 22/02/22 - 1/03/22
