(Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers
author/s: | Manh Khoi Duong, Stefan Conrad |
type: | Inproceedings |
booktitle: | ECAI 2024 - 27th European Conference on Artificial Intelligence, Santiago de Compostela, Spain |
publisher: | IOS Press |
month: | August |
year: | 2024 |
location: | Santiago de Compostela, Spain |
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains, from machine learning models to real-world applications. This involves calculating the disparities between probabilistic outcomes among social groups, such as acceptance rates between male and female applicants. However, traditional fairness metrics do not account for the uncertainty in these processes and lack comparability when two decision-makers exhibit the same disparity. Using Bayesian statistics, we quantify the uncertainty of the disparity to enhance discrimination assessments. We represent each decision-maker by its disparity and the corresponding uncertainty in that disparity. We define preferences over decision-makers and use brute force to choose the optimal decision-maker according to a utility function that ranks decision-makers based on these preferences. The decision-maker with the highest utility score can be interpreted as the one we are most certain is fair.
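The abstract does not spell out the implementation, but a minimal sketch of the idea could look like the following, assuming Beta posteriors over per-group acceptance rates, posterior standard deviation as the uncertainty measure, and a hypothetical utility that penalizes both disparity and its uncertainty (the function names, weights, and counts are illustrative, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def disparity_posterior(accept_a, total_a, accept_b, total_b, samples=100_000):
    """Monte Carlo samples of the acceptance-rate disparity between two groups,
    using Beta(1 + accepts, 1 + rejects) posteriors (uniform priors; an assumption)."""
    p_a = rng.beta(1 + accept_a, 1 + total_a - accept_a, samples)
    p_b = rng.beta(1 + accept_b, 1 + total_b - accept_b, samples)
    return p_a - p_b

def summarize(disparity_samples):
    """Represent a decision-maker by its expected absolute disparity and the
    uncertainty in that disparity (here: posterior standard deviation)."""
    return np.mean(np.abs(disparity_samples)), np.std(disparity_samples)

def utility(disparity, uncertainty, w=0.5):
    """Hypothetical utility: lower disparity and lower uncertainty are preferred."""
    return -(disparity + w * uncertainty)

# Brute-force selection over candidate decision-makers,
# each given as (accepted_a, total_a, accepted_b, total_b).
decision_makers = {
    "model_1": (48, 100, 40, 100),
    "model_2": (480, 1000, 400, 1000),
}
scores = {
    name: utility(*summarize(disparity_posterior(*counts)))
    for name, counts in decision_makers.items()
}
best = max(scores, key=scores.get)
print(scores, "->", best)
```

With these example counts, both candidates show the same observed disparity (0.08), but the larger sample yields a tighter posterior, so the utility prefers the second one, mirroring the comparability issue the abstract raises for decision-makers with equal disparity.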