Adopting AI: How Familiarity Breeds Both Trust and Contempt
- URL: http://arxiv.org/abs/2305.01405v1
- Date: Tue, 2 May 2023 13:26:54 GMT
- Title: Adopting AI: How Familiarity Breeds Both Trust and Contempt
- Authors: Michael C. Horowitz, Lauren Kahn, Julia Macdonald, Jacquelyn Schneider
- Abstract summary: We look at the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense.
Those with familiarity and expertise with AI were more likely to support all of the autonomous applications we tested, except weapons.
Individuals are also less likely to support AI-enabled technologies when they are applied directly to their own lives.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite pronouncements about the inevitable diffusion of artificial
intelligence and autonomous technologies, in practice it is human behavior, not
technology in a vacuum, that dictates how technology seeps into -- and changes
-- societies. In order to better understand how human preferences shape
technological adoption and the spread of AI-enabled autonomous technologies, we
look at representative adult samples of US public opinion in 2018 and 2020 on
the use of four types of autonomous technologies: vehicles, surgery, weapons,
and cyber defense. By focusing on these four diverse uses of AI-enabled
autonomy that span transportation, medicine, and national security, we exploit
the inherent variation between these AI-enabled autonomous use cases. We find
that those with familiarity and expertise with AI and similar technologies were
more likely to support all of the autonomous applications we tested (except
weapons) than those with a limited understanding of the technology. Individuals
who had already delegated the act of driving by using ride-share apps were
also more positive about autonomous vehicles. However, familiarity cut both
ways; individuals are also less likely to support AI-enabled technologies when
they are applied directly to their own lives, especially if the technology
automates tasks they are already familiar with performing. Finally, opposition to AI-enabled military
applications has slightly increased over time.
Related papers
- A roadmap for AI in robotics [55.87087746398059]
We are witnessing growing excitement in robotics at the prospect of leveraging the potential of AI to tackle some of the outstanding barriers to the full deployment of robots in our daily lives. This article offers an assessment of what AI for robotics has achieved since the 1990s and proposes a short- and medium-term research roadmap listing challenges and promises.
arXiv Detail & Related papers (2025-07-26T15:18:28Z)
- Alignment, Agency and Autonomy in Frontier AI: A Systems Engineering Perspective [0.0]
Concepts of alignment, agency, and autonomy have become central to AI safety, governance, and control.
This paper traces the historical, philosophical, and technical evolution of these concepts, emphasizing how their definitions influence AI development, deployment, and oversight.
arXiv Detail & Related papers (2025-02-20T21:37:20Z)
- AI Generations: From AI 1.0 to AI 4.0 [3.4440023363051266]
This paper proposes that Artificial Intelligence (AI) progresses through several overlapping generations.
Each of these AI generations is driven by shifting priorities among algorithms, computing power, and data.
It explores the profound ethical, regulatory, and philosophical challenges that arise when artificial systems approach (or aspire to) human-like autonomy.
arXiv Detail & Related papers (2025-02-16T23:19:44Z)
- Shaping AI's Impact on Billions of Lives [27.78474296888659]
We argue for the community of AI practitioners to consciously and proactively work for the common good.
This paper offers a blueprint for a new type of innovation infrastructure.
arXiv Detail & Related papers (2024-12-03T16:29:37Z)
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- A Technological Perspective on Misuse of Available AI [41.94295877935867]
Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level.
We show how already existing and openly available AI technology could be misused.
We develop three exemplary use cases of potentially misused AI that threaten political, digital and physical security.
arXiv Detail & Related papers (2024-03-22T16:30:58Z)
- Brief for the Canada House of Commons Study on the Implications of Artificial Intelligence Technologies for the Canadian Labor Force: Generative Artificial Intelligence Shatters Models of AI and Labor [1.0878040851638]
As with past technologies, generative AI may not lead to mass unemployment.
Generative AI is creative, cognitive, and potentially ubiquitous.
As AI's full set of capabilities and applications emerge, policy makers should promote workers' career adaptability.
arXiv Detail & Related papers (2023-11-06T22:58:24Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment with regard to hotel review classifications and discuss first results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z)
- Needs and Artificial Intelligence [0.0]
We reflect on the relationship between needs and AI, and call for the realisation of needs-aware AI systems.
We discuss some of the most critical gaps, barriers, enablers, and drivers of co-creating future AI-based socio-technical systems.
arXiv Detail & Related papers (2022-02-18T15:16:22Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Toward a Rational and Ethical Sociotechnical System of Autonomous Vehicles: A Novel Application of Multi-Criteria Decision Analysis [0.0]
The expansion of artificial intelligence (AI) and autonomous systems has shown the potential to generate enormous social good.
There is a pressing need to address relevant social concerns to allow for the development of systems of intelligent agents.
arXiv Detail & Related papers (2021-02-04T23:52:31Z)