CONTRASTIVE LEARNING WITH HARD NEGATIVE SAMPLES

Joshua Robinson, Ching Yao Chuang, Suvrit Sra, Stefanie Jegelka

Research output: Contribution to conference › Paper › peer-review


Abstract

How can you sample good negative examples for contrastive learning? We argue that, as with metric learning, contrastive learning of representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge toward using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative sampling strategies that use true similarity information. In response, we develop a new family of unsupervised sampling methods for selecting hard negative samples where the user can control the hardness. A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
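The hardness-controlled sampling described in the abstract can be folded into a standard InfoNCE-style objective by importance-reweighting the negative similarity scores and debiasing for false negatives. The following is a minimal PyTorch sketch of that idea; the function name and default values for the concentration parameter `beta` and class prior `tau_plus` are illustrative assumptions, not taken verbatim from the authors' released code.

```python
import math
import torch
import torch.nn.functional as F

def hard_negative_loss(z1, z2, temperature=0.5, beta=1.0, tau_plus=0.1):
    """Contrastive loss with importance-reweighted (hard) negatives.

    z1, z2: (N, d) embeddings of two augmented views of the same N inputs.
    beta: hardness concentration; beta = 0 recovers the unweighted (debiased) loss.
    tau_plus: assumed positive-class prior used for debiasing.
    """
    N = z1.size(0)
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                            # (2N, d)
    sim = torch.exp(z @ z.t() / temperature)                  # (2N, 2N) similarity scores

    # Mask out self-similarities and each anchor's positive pair;
    # the remaining entries are the negatives.
    mask = torch.ones(2 * N, 2 * N, dtype=torch.bool, device=z.device)
    mask.fill_diagonal_(False)
    idx = torch.arange(N, device=z.device)
    mask[idx, idx + N] = False
    mask[idx + N, idx] = False
    neg = sim.masked_select(mask).view(2 * N, 2 * N - 2)

    # Positive scores for the two views of each input.
    pos = torch.exp((z1 * z2).sum(dim=1) / temperature)
    pos = torch.cat([pos, pos], dim=0)                        # (2N,)

    # Importance weights concentrate mass on hard (high-similarity) negatives.
    imp = (beta * neg.log()).exp()
    reweighted_neg = (imp * neg).sum(dim=1) / imp.mean(dim=1)

    # Debias by subtracting the expected false-negative contribution, then clamp.
    n_neg = 2 * N - 2
    Ng = (-tau_plus * n_neg * pos + reweighted_neg) / (1 - tau_plus)
    Ng = torch.clamp(Ng, min=n_neg * math.e ** (-1 / temperature))

    return (-torch.log(pos / (pos + Ng))).mean()
```

Relative to a standard NT-Xent loss, only the computation of the negative term changes, which is consistent with the abstract's claim that the method requires only a few additional lines of code.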

Original language: English
State: Published - 2021
Externally published: Yes
Event: 9th International Conference on Learning Representations, ICLR 2021 - Virtual, Online
Duration: 3 May 2021 – 7 May 2021

Conference

Conference: 9th International Conference on Learning Representations, ICLR 2021
City: Virtual, Online
Period: 3/05/21 – 7/05/21

