TY - GEN
T1 - Adversarial Reweighting Guided by Wasserstein Distance to Achieve Demographic Parity
AU - Zhao, Xuan
AU - Fabbrizzi, Simone
AU - Lobo, Paula Reyero
AU - Ghodsi, Siamak
AU - Broelemann, Klaus
AU - Staab, Steffen
AU - Kasneci, Gjergji
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - To address bias issues, fair machine learning usually jointly optimizes two (or more) metrics targeting predictive utility and fairness. However, the inherent under-representation of minorities in the data often makes the disparate impact on subpopulations less noticeable and more difficult to address during learning. In this paper, we propose a novel adversarial reweighting method to address such disparate impact. To balance the data distribution between the majority and the minority groups, our approach prefers samples from the majority group that are closer to the minority group, as measured by the Wasserstein distance. Theoretical analysis shows the effectiveness of our adversarial reweighting approach. Experiments demonstrate that our approach mitigates disparate impact without sacrificing classification accuracy, outperforming related state-of-the-art methods on image and tabular benchmark datasets. Code is available at https://github.com/zhaoxuan00707/wasserstein-reweight.
AB - To address bias issues, fair machine learning usually jointly optimizes two (or more) metrics targeting predictive utility and fairness. However, the inherent under-representation of minorities in the data often makes the disparate impact on subpopulations less noticeable and more difficult to address during learning. In this paper, we propose a novel adversarial reweighting method to address such disparate impact. To balance the data distribution between the majority and the minority groups, our approach prefers samples from the majority group that are closer to the minority group, as measured by the Wasserstein distance. Theoretical analysis shows the effectiveness of our adversarial reweighting approach. Experiments demonstrate that our approach mitigates disparate impact without sacrificing classification accuracy, outperforming related state-of-the-art methods on image and tabular benchmark datasets. Code is available at https://github.com/zhaoxuan00707/wasserstein-reweight.
KW - Wasserstein distance
KW - adversarial reweight
KW - disparate impact
KW - fairness
KW - under-representation
UR - http://www.scopus.com/inward/record.url?scp=85218012629&partnerID=8YFLogxK
U2 - 10.1109/BigData62323.2024.10825191
DO - 10.1109/BigData62323.2024.10825191
M3 - Conference contribution
AN - SCOPUS:85218012629
T3 - Proceedings - 2024 IEEE International Conference on Big Data, BigData 2024
SP - 1605
EP - 1614
BT - Proceedings - 2024 IEEE International Conference on Big Data, BigData 2024
A2 - Ding, Wei
A2 - Lu, Chang-Tien
A2 - Wang, Fusheng
A2 - Di, Liping
A2 - Wu, Kesheng
A2 - Huan, Jun
A2 - Nambiar, Raghu
A2 - Li, Jundong
A2 - Ilievski, Filip
A2 - Baeza-Yates, Ricardo
A2 - Hu, Xiaohua
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Big Data, BigData 2024
Y2 - 15 December 2024 through 18 December 2024
ER -