Towards Fairness and Privacy: A Novel Data Pre-processing Optimization Framework for Non-binary Protected Attributes
author/s: | Manh Khoi Duong, Stefan Conrad |
type: | Inproceedings |
editor: | Diana Benavides Prado, Sarah Monazam Erfani, Philippe Fournier-Viger, Yee Ling Boo, Yun Sing Koh |
booktitle: | The 21st Australasian Data Science and Machine Learning Conference (AusDM’23) |
publisher: | Springer Nature |
address: | Auckland, New Zealand |
month: | December |
year: | 2023 |
Unfair outcomes of AI systems are often rooted in biased datasets. This work therefore presents a framework that addresses fairness by debiasing datasets containing a (non-)binary protected attribute. The framework formulates fairness as a combinatorial optimization problem: it searches for a data subset that minimizes a given discrimination measure, and heuristics such as genetic algorithms can be used to solve for the stated fairness objectives. Depending on a user-defined setting, the framework supports different use cases, such as data removal, the addition of synthetic data, or the exclusive use of synthetic data; the latter in particular enables the framework to preserve privacy while optimizing for fairness. In a comprehensive evaluation, we demonstrate that genetic algorithms under our framework effectively yield fairer datasets than the original data. In contrast to prior work, the framework exhibits a high degree of flexibility: it is metric- and task-agnostic, applies to both binary and non-binary protected attributes, and demonstrates efficient runtime.
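The core mechanism described in the abstract, searching for a data subset that minimizes a discrimination measure with a genetic algorithm, can be illustrated with a short sketch. The code below is an illustrative assumption rather than the authors' implementation: it encodes subsets as binary inclusion masks, uses the maximum pairwise difference in positive label rates across the groups of a non-binary protected attribute as the discrimination measure, and evolves the masks with uniform crossover and bit-flip mutation. All function names and parameters (`max_statistical_parity_diff`, `genetic_subset_selection`, population size, mutation rate) are hypothetical.

```python
# Minimal sketch: debias a dataset by selecting a subset (binary mask)
# that minimizes a discrimination measure, using a simple genetic
# algorithm. Illustrative only; not the paper's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

def max_statistical_parity_diff(z, y, mask):
    """Max pairwise difference in positive rates across the groups of a
    (non-binary) protected attribute z, restricted to the subset mask."""
    rates = []
    for g in np.unique(z):
        sel = mask & (z == g)
        if sel.sum() == 0:       # empty group: treat as maximally unfair
            return 1.0
        rates.append(y[sel].mean())
    return max(rates) - min(rates)

def genetic_subset_selection(z, y, pop_size=50, generations=200,
                             mutation_rate=0.01):
    n = len(y)
    # Population of random inclusion masks (~80% of points kept).
    pop = rng.random((pop_size, n)) < 0.8
    for _ in range(generations):
        fitness = np.array([max_statistical_parity_diff(z, y, m) for m in pop])
        order = np.argsort(fitness)              # lower discrimination wins
        parents = pop[order[: pop_size // 2]]
        # Uniform crossover between randomly paired parents.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        cross = rng.random((pop_size, n)) < 0.5
        pop = np.where(cross, parents[idx[:, 0]], parents[idx[:, 1]])
        # Bit-flip mutation.
        pop ^= rng.random((pop_size, n)) < mutation_rate
    fitness = np.array([max_statistical_parity_diff(z, y, m) for m in pop])
    return pop[np.argmin(fitness)]

# Toy data: a 3-valued protected attribute with biased positive rates.
z = rng.integers(0, 3, size=500)
y = (rng.random(500) < 0.3 + 0.2 * z).astype(int)

mask = genetic_subset_selection(z, y)
print("discrimination before:", max_statistical_parity_diff(z, y, np.ones(500, bool)))
print("discrimination after: ", max_statistical_parity_diff(z, y, mask))
print("subset size:", mask.sum(), "of", len(y))
```

Because the fitness function is a black box here, the same search could, under the framework's other settings, run over candidate pools that include synthetic points instead of (or in addition to) the original data, which is what would enable the privacy-preserving, synthetic-only use case; it is also what makes the approach metric-agnostic, since any discrimination measure can be swapped in.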