Adversarially robust optimization with Gaussian processes

Ilija Bogunovic, Stefanie Jegelka, Jonathan Scarlett, Volkan Cevher

Research output: Contribution to journal › Conference article › peer-review

77 Scopus citations

Abstract

In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: The returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation. This problem is motivated by settings in which the underlying functions during optimization and implementation stages are different, or when one is interested in finding an entire region of good inputs rather than only a single point. We show that standard GP optimization algorithms do not exhibit the desired robustness properties, and provide a novel confidence-bound based algorithm STABLEOPT for this purpose. We rigorously establish the required number of samples for STABLEOPT to find a near-optimal point, and we complement this guarantee with an algorithm-independent lower bound. We experimentally demonstrate several potential applications of interest using real-world data sets, and we show that STABLEOPT consistently succeeds in finding a stable maximizer where several baseline methods fail.
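The abstract describes a max-min robustness criterion: choose a point whose function value stays high even after an adversarial perturbation, using GP confidence bounds. The following is a minimal, illustrative sketch of that idea in one dimension, not the paper's exact STABLEOPT procedure; the RBF kernel, the toy objective, and all parameter values (lengthscale, `beta`, perturbation radius `epsilon`) are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch only: robust (max-min) point selection with GP
# confidence bounds, in the spirit of the abstract. Kernel, objective,
# and parameters are assumed for this toy example.

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    # Standard GP posterior mean and standard deviation.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train
    # Prior variance is 1 for this kernel; subtract the explained part.
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def robust_select(x_grid, mean, std, epsilon, beta=2.0):
    # Max-min rule: maximize the worst-case upper confidence bound
    # over all perturbations within radius epsilon.
    ucb = mean + beta * std
    robust_ucb = np.array([
        ucb[np.abs(x_grid - x) <= epsilon].min() for x in x_grid
    ])
    return x_grid[np.argmax(robust_ucb)]

# Toy objective: a sharp (non-robust) peak near 0.2 and a broad,
# slightly lower plateau near 0.7.
def f(x):
    return np.exp(-((x - 0.2) / 0.02) ** 2) \
        + 0.8 * np.exp(-((x - 0.7) / 0.2) ** 2)

x_train = np.linspace(0.0, 1.0, 15)
y_train = f(x_train)
x_grid = np.linspace(0.0, 1.0, 201)
mean, std = gp_posterior(x_train, y_train, x_grid)
x_robust = robust_select(x_grid, mean, std, epsilon=0.1)
```

Under this max-min rule, the sharp peak is penalized because a perturbation of size up to `epsilon` can push the point off it, so the selection gravitates toward the broad plateau; a standard UCB maximizer has no such preference, which is the failure mode the abstract attributes to standard GP optimization algorithms.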

Original language: English
Pages (from-to): 5760-5770
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
State: Published - 2018
Externally published: Yes
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: 2 Dec 2018 - 8 Dec 2018

