Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions
- URL: http://arxiv.org/abs/2106.16122v2
- Date: Tue, 2 Nov 2021 12:27:45 GMT
- Title: Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions
- Authors: Sebastian Krügel, Andreas Ostermaier, Matthias Uhl
- Abstract summary: We find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data.
We suggest digital literacy as a potential remedy to ensure the responsible use of AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Departing from the claim that AI needs to be trustworthy, we find that
ethical advice from an AI-powered algorithm is trusted even when its users know
nothing about its training data and when they learn information about it that
warrants distrust. We conducted online experiments where the subjects took the
role of decision-makers who received advice from an algorithm on how to deal
with an ethical dilemma. We manipulated the information about the algorithm and
studied its influence. Our findings suggest that AI is overtrusted rather than
distrusted. We suggest digital literacy as a potential remedy to ensure the
responsible use of AI.
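The experiment's logic invites a small simulation. Below is a minimal Python sketch of such a between-subjects design: subjects receive an algorithm's advice on an ethical dilemma under different information conditions, and the weight they put on the advice proxies trust. The condition names, effect sizes, and sample size are illustrative assumptions, not the authors' materials or data.

```python
import random
import statistics

# Hypothetical illustration of a between-subjects design like the paper's:
# subjects facing an ethical dilemma receive an algorithm's advice under
# different information conditions; "weight on advice" proxies trust.
# Every number below is an assumption for demonstration, not the paper's data.

CONDITIONS = ("no_info", "info_warrants_trust", "info_warrants_distrust")

def simulate_subject(condition: str) -> float:
    """Return one subject's weight on the algorithm's advice, in [0, 1]."""
    # Overtrust pattern: advice still carries heavy weight even when the
    # disclosed information about the algorithm warrants distrust.
    base = {
        "no_info": 0.70,
        "info_warrants_trust": 0.75,
        "info_warrants_distrust": 0.65,
    }
    return min(1.0, max(0.0, random.gauss(base[condition], 0.15)))

random.seed(42)
for condition in CONDITIONS:
    scores = [simulate_subject(condition) for _ in range(200)]
    print(f"{condition:>24}: mean weight on advice = {statistics.mean(scores):.2f}")
```

Under these assumed numbers, the mean weight on advice stays high even in the distrust condition, which is the overtrust pattern the abstract describes.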
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
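As a loose illustration of what an adversarial-tampering check on an RL policy can look like, here is a minimal FGSM-style probe in Python. The toy policy, epsilon, and attack choice are assumptions; the paper's actual adversarial-explanation method is not reproduced here.

```python
import torch
import torch.nn as nn

# A generic gradient-based robustness probe for an RL policy's observations.
# This is one common way to test robustness against adversarial tampering;
# the toy policy and epsilon are assumptions, not the paper's setup.

policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

def fgsm_probe(policy: nn.Module, obs: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Perturb an observation within an epsilon box to push the policy
    away from its currently preferred action (FGSM-style)."""
    obs = obs.clone().requires_grad_(True)
    logits = policy(obs)
    chosen = logits.argmax(dim=-1, keepdim=True)
    # Ascend a loss that penalizes the chosen action's logit.
    loss = -logits.gather(-1, chosen).sum()
    loss.backward()
    return (obs + epsilon * obs.grad.sign()).detach()

obs = torch.randn(1, 4)
adv = fgsm_probe(policy, obs)
print("action before:", policy(obs).argmax().item(), "after:", policy(adv).argmax().item())
```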
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application [22.00514030715286]
We provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application.
We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors.
We discuss the implications of our findings and provide recommendations for future research on trust in AI.
arXiv Detail & Related papers (2023-05-15T12:27:02Z)
- Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI [0.0]
We argue that the way data is labeled plays an essential role in the way AI behaves.
We propose an alternative path that allows for the plurality of values and the freedom of individual expression.
arXiv Detail & Related papers (2023-02-23T16:33:40Z)
- AI Ethics Issues in Real World: Evidence from AI Incident Database [0.6091702876917279]
We identify 13 application areas that often see unethical use of AI, with intelligent service robots, language/vision models, and autonomous driving taking the lead.
Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety risks and unfair algorithms.
arXiv Detail & Related papers (2022-06-15T16:25:57Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The corruptive force of AI-generated advice [0.0]
We test whether AI-generated advice can corrupt people.
We also test whether transparency about AI presence mitigates potential harm.
Results reveal that AI's corrupting force is as strong as humans'.
arXiv Detail & Related papers (2021-02-15T13:15:12Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
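As a toy rendering of the contractual-trust idea, the Python sketch below treats trust as warranted only while a stated or assumed contract has never been observed to break. The class, rule, and example clause are illustrative assumptions, not the paper's formalism.

```python
from dataclasses import dataclass

# A toy rendering of 'contractual trust': trust that a specific implicit
# or explicit contract between user and AI will hold. Illustrative only.

@dataclass(frozen=True)
class Contract:
    clause: str      # e.g., "advice follows published medical guidelines"
    explicit: bool   # stated by the provider vs. merely assumed by the user

def warranted(contract: Contract, violations_observed: int) -> bool:
    """Under this toy rule, trust in the AI with respect to a contract is
    warranted only while the contract has never been observed to break."""
    return violations_observed == 0

c = Contract(clause="cites its training-data provenance truthfully", explicit=False)
print(warranted(c, violations_observed=0))  # True: no breach observed yet
```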
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Could regulating the creators deliver trustworthy AI? [2.588973722689844]
AI is becoming all-pervasive and is often deployed in everyday technologies, devices, and services without our knowledge.
Fear of AI is compounded by the inability to point to a trustworthy source of it.
Some consider trustworthy AI to be that which complies with relevant laws.
Others point to the requirement to comply with ethics and standards.
arXiv Detail & Related papers (2020-06-26T01:32:53Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
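For background on confidence scores, here is a minimal Python sketch of Expected Calibration Error (ECE), a standard measure of whether a model's stated confidence matches its empirical accuracy. It is generic background, not the cited paper's analysis; the simulated overconfident model is an assumption.

```python
import numpy as np

# Expected Calibration Error (ECE): bin predictions by confidence and take
# the weighted mean gap between average confidence and accuracy per bin.
# Generic illustration of confidence calibration, not the paper's analysis.

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)
correct = rng.random(1000) < (conf - 0.1)  # an overconfident model, by construction
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")
```

A well-calibrated model would score near zero; a large ECE is one concrete signal that displayed confidence should not be taken at face value.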
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.