The FairCeptron: A Framework for Measuring Human Perceptions of
Algorithmic Fairness
- URL: http://arxiv.org/abs/2102.04119v1
- Date: Mon, 8 Feb 2021 10:47:24 GMT
- Title: The FairCeptron: A Framework for Measuring Human Perceptions of
Algorithmic Fairness
- Authors: Georg Ahnert, Ivan Smirnov, Florian Lemmerich, Claudia Wagner, Markus
Strohmaier
- Abstract summary: The FairCeptron framework is an approach for studying perceptions of fairness in algorithmic decision making such as in ranking or classification.
The framework includes fairness scenario generation, fairness perception elicitation and fairness perception analysis.
An implementation of the FairCeptron framework is openly available, and it can easily be adapted to study perceptions of algorithmic fairness in other application contexts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Measures of algorithmic fairness often do not account for human perceptions
of fairness that can substantially vary between different sociodemographics and
stakeholders. The FairCeptron framework is an approach for studying perceptions
of fairness in algorithmic decision making such as in ranking or
classification. It supports (i) studying human perceptions of fairness and (ii)
comparing these human perceptions with measures of algorithmic fairness. The
framework includes fairness scenario generation, fairness perception
elicitation and fairness perception analysis. We demonstrate the FairCeptron
framework by applying it to a hypothetical university admission context where
we collect human perceptions of fairness in the presence of minorities. An
implementation of the FairCeptron framework is openly available, and it can
easily be adapted to study perceptions of algorithmic fairness in other
application contexts. We hope our work paves the way towards elevating the role
of studies of human fairness perceptions in the process of designing
algorithmic decision making systems.
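The three stages named in the abstract (fairness scenario generation, fairness perception elicitation, and fairness perception analysis) could be sketched roughly as follows. This is a minimal illustration only: all function names, the 0-100 rating scale, the admission rule, and the heuristic stand-in for a human rating are assumptions, not the authors' actual implementation.

```python
import random
from statistics import mean

def generate_scenario(n=8, minority_share=0.25, seed=0):
    """Stage 1 (hypothetical): build an admission scenario of candidates
    with a test score and a minority/majority group label."""
    rng = random.Random(seed)
    return [{"id": i,
             "score": rng.randint(50, 100),
             "minority": rng.random() < minority_share}
            for i in range(n)]

def admit_top_k(candidates, k=3):
    """An example decision rule: admit the k highest-scoring candidates."""
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    return {c["id"] for c in ranked[:k]}

def elicit_rating(scenario, admitted):
    """Stage 2 placeholder: in the real framework a human respondent rates
    the outcome; here a toy heuristic (share of minority candidates
    admitted, scaled to 0-100) stands in for that rating."""
    minorities = [c for c in scenario if c["minority"]]
    if not minorities:
        return 0
    share = sum(c["id"] in admitted for c in minorities) / len(minorities)
    return round(100 * share)

def analyze(ratings):
    """Stage 3: aggregate elicited ratings, e.g. the mean per decision rule."""
    return mean(ratings)

scenarios = [generate_scenario(seed=s) for s in range(5)]
ratings = [elicit_rating(sc, admit_top_k(sc)) for sc in scenarios]
print(analyze(ratings))
```

In the actual framework the ratings would come from survey participants, and the analysis stage would compare them against formal fairness measures rather than simply averaging them.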
Related papers
- A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work aims at assessing up to what point we can assure legal fairness through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established research area in machine learning (ML).
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Understanding Relations Between Perception of Fairness and Trust in Algorithmic Decision Making [8.795591344648294]
We aim to understand the relationship between induced algorithmic fairness and its perception in humans.
We also study how induced algorithmic fairness affects user trust in algorithmic decision making.
arXiv Detail & Related papers (2021-09-29T11:00:39Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Fairness Perception from a Network-Centric Perspective [12.261689483681147]
We introduce a novel yet intuitive function known as network-centric fairness perception.
We show how the function can be extended to a group fairness metric known as fairness visibility.
We illustrate a potential pitfall of the fairness visibility measure that can be exploited to mislead individuals into perceiving that the algorithmic decisions are fair.
arXiv Detail & Related papers (2020-10-07T06:35:03Z)
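Several of the entries above compare human perceptions against formal group-fairness measures such as demographic parity. As a reference point, here is a minimal sketch of computing the demographic parity difference between two groups; the function name and input format are illustrative assumptions (established libraries such as fairlearn provide production implementations).

```python
def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: iterable of 0/1 outcomes (1 = positive decision)
    groups:    iterable of group labels, parallel to `decisions`
    """
    by_group = {}
    for d, g in zip(decisions, groups):
        by_group.setdefault(g, []).append(d)
    if len(by_group) != 2:
        raise ValueError("expects exactly two groups")
    rates = [sum(v) / len(v) for v in by_group.values()]
    return abs(rates[0] - rates[1])

# e.g. 3/4 positive decisions in group "a" vs 1/2 in group "b":
# |0.75 - 0.5| = 0.25
print(demographic_parity_difference(
    [1, 1, 1, 0, 1, 0], ["a", "a", "a", "a", "b", "b"]))
```

A value of 0 means both groups receive positive decisions at the same rate; the FairCeptron line of work asks whether humans actually perceive such parity as fair.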
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.