Understanding Relations Between Perception of Fairness and Trust in
Algorithmic Decision Making
- URL: http://arxiv.org/abs/2109.14345v1
- Date: Wed, 29 Sep 2021 11:00:39 GMT
- Title: Understanding Relations Between Perception of Fairness and Trust in
Algorithmic Decision Making
- Authors: Jianlong Zhou, Sunny Verma, Mudit Mittal and Fang Chen
- Abstract summary: We aim to understand the relationship between induced algorithmic fairness and its perception in humans.
We also study how induced algorithmic fairness affects user trust in algorithmic decision making.
- Score: 8.795591344648294
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Algorithmic processes are increasingly employed to perform managerial
decision making, especially following the tremendous success of Artificial
Intelligence (AI). This paradigm shift is occurring because these sophisticated
AI techniques promise optimal performance on key metrics. However,
this adoption is currently under scrutiny due to various concerns such as
fairness, and how the fairness of an AI algorithm affects users' trust is
a legitimate question to pursue. In this regard, we aim to understand the
relationship between induced algorithmic fairness and its perception in humans.
In particular, we are interested in whether these two are positively correlated
and reflect substantive fairness. Furthermore, we study how induced
algorithmic fairness affects user trust in algorithmic decision making. To
understand this, we perform a user study that simulates candidate shortlisting
by introducing (manipulating mathematical) fairness in a human resource
recruitment setting. Our experimental results demonstrate that the level of
introduced fairness is positively related to the human perception of fairness,
and is simultaneously positively related to user trust in algorithmic
decision making. Interestingly, we also found that users are more sensitive to
higher levels of introduced fairness than to lower levels. Finally, we
summarize the theoretical and practical implications of this research with a
discussion of the perception of fairness.
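The abstract does not name the exact fairness notion that was manipulated; demographic (statistical) parity is a common choice in shortlisting studies. A minimal sketch under that assumption (function names and data are illustrative, not from the paper): the parity gap of a shortlist quantifies the "level of introduced fairness" an experimenter could control.

```python
import numpy as np

def parity_gap(shortlisted, group):
    """Statistical parity difference: the gap in shortlisting rates between
    two candidate groups. 0.0 is perfectly fair under demographic parity."""
    shortlisted = np.asarray(shortlisted, dtype=bool)
    group = np.asarray(group)
    rate_a = shortlisted[group == "A"].mean()
    rate_b = shortlisted[group == "B"].mean()
    return abs(rate_a - rate_b)

# Toy data: 10 candidates, 5 per group; 3 of group A and 1 of group B shortlisted.
group = np.array(["A"] * 5 + ["B"] * 5)
shortlisted = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
print(parity_gap(shortlisted, group))  # 0.4 -> a "low introduced fairness" condition
```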
Related papers
- Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing [0.0]
The European Union's Artificial Intelligence Act takes effect on 1 August 2024.
High-risk AI applications must adhere to stringent transparency and fairness standards.
We propose a novel framework which combines the strengths of counterfactual fairness and a peer comparison strategy.
arXiv Detail & Related papers (2024-08-05T15:35:34Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
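In one dimension the Wasserstein-2 barycenter has a closed form: its quantile function is the average of the groups' quantile functions. Below is a minimal sketch of a barycenter-based "repair" under that simplification (equal group weights assumed; function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def barycenter_repair(scores, groups):
    """Push each group's score distribution onto their common 1-D
    Wasserstein-2 barycenter (equal group weights assumed). In 1-D the
    barycenter's quantile function is the average of the groups' quantile
    functions, so repaired scores no longer depend on group membership."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    repaired = np.empty_like(scores)
    qs = np.linspace(0.0, 1.0, 101)  # common quantile grid
    bary = np.mean([np.quantile(scores[groups == g], qs)
                    for g in np.unique(groups)], axis=0)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        # Each score's within-group rank, expressed as a quantile level.
        ranks = scores[idx].argsort().argsort() / max(len(idx) - 1, 1)
        repaired[idx] = np.interp(ranks, qs, bary)
    return repaired
```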
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established area of research in machine learning (ML).
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in these settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
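The paper's axioms and LP formulation are richer than this summary conveys; the toy sketch below, with made-up utilities and a single individual-fairness constraint, only illustrates the general shape of a utility-maximizing distribution over allocations computed by linear programming:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 3 candidates, 2 positions; u[i, j] is a made-up utility of
# assigning candidate i to position j.
u = np.array([[0.9, 0.4],
              [0.8, 0.5],
              [0.3, 0.7]])
n, m = u.shape
eps = 0.1  # fairness tolerance between candidates 0 and 1

c = -u.ravel()  # linprog minimizes, so negate utilities
A_ub, b_ub = [], []
for i in range(n):  # each candidate assigned with total probability <= 1
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
    A_ub.append(row); b_ub.append(1.0)
for j in range(m):  # each position filled with total probability <= 1
    col = np.zeros(n * m); col[j::m] = 1
    A_ub.append(col); b_ub.append(1.0)
# Individual fairness: candidates 0 and 1, assumed equally meritorious, get
# overall selection probabilities within eps of each other.
diff = np.zeros(n * m); diff[:m] = 1; diff[m:2 * m] = -1
A_ub.append(diff); b_ub.append(eps)
A_ub.append(-diff); b_ub.append(eps)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, 1)] * (n * m))
print(res.x.reshape(n, m))  # fair utility-maximizing assignment probabilities
```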
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
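D-BIAS learns the causal network from data and uses its own simulation method; the toy linear structural causal model below only illustrates the core interaction of weakening or deleting a biased causal edge and regenerating the outcome (all variables and weights are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy linear SCM: gender -> hiring_score and skill -> hiring_score.
gender = rng.integers(0, 2, n)   # sensitive attribute
skill = rng.normal(0, 1, n)      # legitimate cause
w_gender, w_skill = 0.8, 1.0     # causal edge weights

def simulate(w_g):
    """Regenerate the outcome from the SCM with the gender edge set to w_g."""
    return w_g * gender + w_skill * skill + rng.normal(0, 0.1, n)

biased = simulate(w_gender)  # original data
debiased = simulate(0.0)     # "delete" the gender -> score edge
print(np.corrcoef(gender, biased)[0, 1], np.corrcoef(gender, debiased)[0, 1])
```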
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
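Quantification methods estimate group prevalences directly instead of classifying individuals; a classic example is Adjusted Classify & Count. A minimal sketch (illustrative, not necessarily the paper's exact estimator): correct a noisy sensitive-attribute proxy's raw prevalence estimate using error rates measured on a small attribute-labelled sample.

```python
import numpy as np

def adjusted_classify_and_count(proxy_preds, tpr, fpr):
    """Adjusted Classify & Count: correct the raw prevalence estimate of a
    noisy sensitive-attribute proxy classifier using its true/false positive
    rates (tpr, fpr), estimated once on a small attribute-labelled sample."""
    raw = np.mean(proxy_preds)  # fraction flagged as group A
    return float(np.clip((raw - fpr) / (tpr - fpr), 0.0, 1.0))

# Comparing the estimated prevalence of group A among approved applicants with
# its prevalence overall approximates demographic parity without requiring
# per-individual sensitive labels.
```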
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
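BALD scores unlabelled points by the mutual information between the predicted label and the model parameters, commonly approximated with Monte Carlo dropout. A minimal sketch of the acquisition score (the stochastic forward passes producing prob_samples are assumed to come from elsewhere):

```python
import numpy as np

def bald_scores(prob_samples):
    """BALD acquisition: mutual information between the predicted label and
    the model parameters. prob_samples has shape (T, N, C): T stochastic
    forward passes (e.g. MC dropout), N candidate points, C classes."""
    eps = 1e-12
    mean_p = prob_samples.mean(axis=0)                           # (N, C)
    pred_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=1)  # H[E[p]]
    exp_entropy = -(prob_samples * np.log(prob_samples + eps)).sum(axis=2).mean(axis=0)  # E[H[p]]
    return pred_entropy - exp_entropy  # high score = most informative point
```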
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- The FairCeptron: A Framework for Measuring Human Perceptions of Algorithmic Fairness [1.4449464910072918]
The FairCeptron framework is an approach for studying perceptions of fairness in algorithmic decision making such as in ranking or classification.
The framework includes fairness scenario generation, fairness perception elicitation and fairness perception analysis.
An implementation of the FairCeptron framework is openly available, and it can easily be adapted to study perceptions of algorithmic fairness in other application contexts.
arXiv Detail & Related papers (2021-02-08T10:47:24Z)
- Fairness Perception from a Network-Centric Perspective [12.261689483681147]
We introduce a novel yet intuitive function known as network-centric fairness perception.
We show how the function can be extended to a group fairness metric known as fairness visibility.
We illustrate a potential pitfall of the fairness visibility measure that can be exploited to mislead individuals into perceiving that the algorithmic decisions are fair.
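The summary does not reproduce the paper's exact perception function; one plausible toy reading is that a node perceives the outcome as fair when the positive-decision rate in its neighborhood is close to the global rate, with fairness visibility being the fraction of such nodes. A sketch under that assumed semantics (networkx graph; not the paper's definition):

```python
import networkx as nx
import numpy as np

def fairness_visibility(G, decision, tol=0.1):
    """Toy reading of network-centric fairness perception: a node perceives
    the algorithm as fair if the positive-decision rate among its neighbors
    is within tol of the global rate; fairness visibility is the fraction
    of nodes that perceive fairness. (Assumed semantics for illustration.)"""
    global_rate = np.mean([decision[v] for v in G])
    perceives_fair = [
        abs(np.mean([decision[u] for u in G[v]]) - global_rate) <= tol
        for v in G if len(G[v]) > 0
    ]
    return float(np.mean(perceives_fair))
```

Under this reading the pitfall above is immediate: concentrating positive decisions on well-connected nodes shifts neighborhood rates, so visibility can be inflated without changing the overall decision rate.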
arXiv Detail & Related papers (2020-10-07T06:35:03Z)