Perceptions of Fairness and Trustworthiness Based on Explanations in
Human vs. Automated Decision-Making
- URL: http://arxiv.org/abs/2109.05792v1
- Date: Mon, 13 Sep 2021 09:14:15 GMT
- Title: Perceptions of Fairness and Trustworthiness Based on Explanations in
Human vs. Automated Decision-Making
- Authors: Jakob Schoeffer, Yvette Machowski, Niklas Kuehl
- Abstract summary: Automated decision systems (ADS) have become ubiquitous in many high-stakes domains.
We conduct an online study with 200 participants to examine people's perceptions of fairness and trustworthiness towards ADS.
We find that people perceive ADS as fairer than human decision-makers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated decision systems (ADS) have become ubiquitous in many high-stakes
domains. Those systems typically involve sophisticated yet opaque artificial
intelligence (AI) techniques that seldom allow for full comprehension of their
inner workings, particularly for affected individuals. As a result, ADS are
prone to deficient oversight and calibration, which can lead to undesirable
(e.g., unfair) outcomes. In this work, we conduct an online study with 200
participants to examine people's perceptions of fairness and trustworthiness
towards ADS in comparison to a scenario where a human instead of an ADS makes a
high-stakes decision -- and we provide identical, thorough explanations of the decisions in both cases. Surprisingly, we find that people perceive
ADS as fairer than human decision-makers. Our analyses also suggest that
people's AI literacy affects their perceptions: people with higher AI literacy favor ADS over human decision-makers more strongly, whereas people with low AI literacy exhibit no significant differences in their perceptions.
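The study design in the abstract (a between-subjects comparison of fairness ratings for an ADS versus a human decision-maker, with AI literacy as a potential moderator) maps onto a standard two-group analysis. The sketch below is a minimal illustration in Python, not the authors' actual analysis code; the file name, column names, and the median split on AI literacy are all assumptions made for the example.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant, with the decision-maker
# condition ("ADS" or "human"), a fairness rating, and an AI-literacy score.
df = pd.read_csv("perceptions.csv")  # columns: condition, fairness, ai_literacy

# Compare fairness ratings between the ADS and human conditions.
ads = df.loc[df["condition"] == "ADS", "fairness"]
human = df.loc[df["condition"] == "human", "fairness"]
u, p = stats.mannwhitneyu(ads, human, alternative="two-sided")
print(f"ADS vs. human: U = {u:.1f}, p = {p:.4f}")

# Probe the AI-literacy moderation with a simple median split.
high_lit = df["ai_literacy"] > df["ai_literacy"].median()
for is_high, grp in df.groupby(high_lit):
    a = grp.loc[grp["condition"] == "ADS", "fairness"]
    h = grp.loc[grp["condition"] == "human", "fairness"]
    u, p = stats.mannwhitneyu(a, h, alternative="two-sided")
    label = "high AI literacy" if is_high else "low AI literacy"
    print(f"{label}: U = {u:.1f}, p = {p:.4f}")
```

A regression with a condition-by-literacy interaction term would be the more rigorous test of the moderation; the median split here is only to mirror the high- versus low-literacy contrast described in the abstract.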
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z)
- Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support [37.03030554731032]
We present findings from a series of interviews with workers at a child welfare agency to understand how they currently make AI-assisted child maltreatment screening decisions.
We observe how workers' reliance upon the ADS is guided by (1) their knowledge of rich, contextual information beyond what the AI model captures, (2) their beliefs about the ADS's capabilities and limitations relative to their own, and (3) their awareness of misalignments between algorithmic predictions and their own decision-making objectives.
arXiv Detail & Related papers (2022-04-05T16:10:49Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence [0.0]
We propose a new class of decision support systems (DSS), namely Intelligent Decision Assistance (IDA).
IDA supports knowledge workers without influencing them through automated decision-making.
Specifically, we propose to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations.
arXiv Detail & Related papers (2021-09-28T15:57:21Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and of its outcomes in order to build a relationship with the system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making (a toy sketch of confidence-thresholded reliance appears after this list).
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
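As a toy illustration of the trust-calibration idea in the last entry above, the sketch below implements one simple reliance rule: follow the AI's advice only when its reported confidence clears a threshold, and otherwise keep the human's own judgment. This is a hypothetical example, not code from any of the listed papers; the `AiAdvice` type, its fields, and the 0.8 threshold are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class AiAdvice:
    prediction: str    # the model's recommended decision
    confidence: float  # model-reported confidence in [0, 1]

def assisted_decision(advice: AiAdvice, human_choice: str,
                      threshold: float = 0.8) -> str:
    """Follow the AI only when its confidence clears the threshold;
    otherwise keep the human's own judgment."""
    return advice.prediction if advice.confidence >= threshold else human_choice

# Low-confidence advice is ignored; high-confidence advice is followed.
print(assisted_decision(AiAdvice("approve", 0.55), "reject"))  # -> reject
print(assisted_decision(AiAdvice("approve", 0.92), "reject"))  # -> approve
```

Note that this rule embodies exactly the caveat of that paper: a confidence score can shift when people defer to the AI, but thresholded reliance alone does not guarantee that the joint human-AI decision becomes more accurate.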
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.