Learning determinantal point processes by corrective negative sampling

Zelda Mariet, Mike Gartrell, Suvrit Sra

Research output: Contribution to conference › Paper › peer-review

Abstract

Determinantal Point Processes (DPPs) have attracted significant interest from the machine-learning community due to their ability to elegantly and tractably model the delicate balance between quality and diversity of sets. DPPs are commonly learned from data using maximum likelihood estimation (MLE). While fitting observed sets well, MLE for DPPs may also assign high likelihoods to unobserved sets that are far from the true generative distribution of the data. To address this issue, which reduces the quality of the learned model, we introduce a novel optimization problem, Contrastive Estimation (CE), which encodes information about “negative” samples into the basic learning model. CE is grounded in the successful use of negative information in machine-vision and language modeling. Depending on the chosen negative distribution (which may be static or evolve during optimization), CE assumes two different forms, which we analyze theoretically and experimentally. We evaluate our new model on real-world datasets; on a challenging dataset, CE learning delivers a considerable improvement in predictive performance over a DPP learned without using contrastive information.
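
To make the contrastive idea concrete, below is a minimal PyTorch sketch (not from the paper). It parameterizes the DPP kernel as a low-rank product L = VVᵀ and maximizes the log-likelihood of observed ("positive") sets while penalizing the log-likelihood assigned to "negative" sets. The function names, the `alpha` trade-off weight, and the simple subtractive penalty are illustrative assumptions; the paper's two CE formulations are not reproduced here.

```python
import torch

def dpp_log_likelihood(V, subsets):
    """Mean log-likelihood of `subsets` under the L-ensemble DPP with
    low-rank kernel L = V @ V.T:  log P(Y) = logdet(L_Y) - logdet(L + I)."""
    n = V.shape[0]
    L = V @ V.T
    log_norm = torch.logdet(L + torch.eye(n))  # normalizer, shared by all sets
    total = 0.0
    for Y in subsets:
        idx = torch.as_tensor(Y)
        L_Y = L[idx][:, idx]
        # Small jitter keeps logdet stable if L_Y is near-singular.
        total = total + torch.logdet(L_Y + 1e-6 * torch.eye(len(Y))) - log_norm
    return total / len(subsets)

def contrastive_objective(V, positives, negatives, alpha=1.0):
    """Illustrative contrastive objective: reward positive sets, penalize
    likelihood mass on negative sets. `alpha` is a hypothetical weight."""
    return dpp_log_likelihood(V, positives) - alpha * dpp_log_likelihood(V, negatives)

# Toy usage: 10 items, rank-4 kernel, hand-picked positive/negative sets.
torch.manual_seed(0)
V = torch.randn(10, 4, requires_grad=True)
positives = [[0, 2, 5], [1, 7], [3, 4, 8]]
negatives = [[0, 1], [5, 6, 7]]  # e.g. drawn from a chosen negative distribution

opt = torch.optim.Adam([V], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -contrastive_objective(V, positives, negatives, alpha=0.5)
    loss.backward()
    opt.step()
```

In this sketch the negative sets are static; the paper also considers negative distributions that evolve during optimization, which would correspond to re-drawing `negatives` across iterations.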

Original language: English
State: Published - 2020
Externally published: Yes
Event: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan
Duration: 16 Apr 2019 – 18 Apr 2019

Conference

Conference: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019
Country/Territory: Japan
City: Naha
Period: 16/04/19 – 18/04/19
