Blind justice: Fairness with encrypted sensitive attributes

Niki Kilbertus, Adrià Gascón, Matt Kusner, Michael Veale, Krishna P. Gummadi, Adrian Weller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

28 Scopus citations

Abstract

Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race. To avoid disparate treatment, sensitive attributes should not be considered. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined, e.g., in order to learn a fair model, or to check if a given model is fair. We introduce methods from secure multi-party computation which allow us to avoid both. By encrypting sensitive attributes, we show how an outcome-based fair model may be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes.
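The abstract rests on secure multi-party computation over secret-shared data. The following Python sketch illustrates only the basic additive secret-sharing primitive that such protocols build on, not the paper's actual protocol (which additionally needs secure multiplication and fixed-point arithmetic); all names here (P, share, sensitive) are illustrative assumptions.

# Didactic sketch of additive secret sharing: each user splits a private
# binary attribute z into two random-looking shares, one per non-colluding
# party (e.g., a modeller and a regulator). Neither party learns any
# individual z, yet a linear statistic (here, a group count) can be
# computed locally on shares and opened only in aggregate.
import secrets

P = 2**61 - 1  # large prime modulus for arithmetic secret sharing

def share(value: int) -> tuple[int, int]:
    """Split value into shares s1, s2 with s1 + s2 = value (mod P)."""
    s1 = secrets.randbelow(P)
    return s1, (value - s1) % P

sensitive = [1, 0, 1, 0, 1]  # each user's private binary attribute z

# Each user sends one share to party 1 and the other to party 2.
shares = [share(z) for z in sensitive]
party1 = [s1 for s1, _ in shares]
party2 = [s2 for _, s2 in shares]

# Each party sums its shares locally; only the two partial sums are
# exchanged, revealing the group count but no individual attribute.
count_z1 = (sum(party1) + sum(party2)) % P
assert count_z1 == sum(sensitive)
print(f"users with z = 1 (learned only in aggregate): {count_z1}")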

Original language: English
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Editors: Jennifer Dy, Andreas Krause
Publisher: International Machine Learning Society (IMLS)
Pages: 4123-4137
Number of pages: 15
ISBN (Electronic): 9781510867963
State: Published - 2018
Externally published: Yes
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018

Publication series

Name: 35th International Conference on Machine Learning, ICML 2018
Volume: 6

Conference

Conference: 35th International Conference on Machine Learning, ICML 2018
Country/Territory: Sweden
City: Stockholm
Period: 10/07/18 - 15/07/18
