AI loyalty: A New Paradigm for Aligning Stakeholder Interests
- URL: http://arxiv.org/abs/2003.11157v1
- Date: Tue, 24 Mar 2020 23:55:59 GMT
- Title: AI loyalty: A New Paradigm for Aligning Stakeholder Interests
- Authors: Anthony Aguirre, Gaia Dempsey, Harry Surden, and Peter B. Reiner
- Abstract summary: We argue that AI loyalty should be considered during the technological design process alongside other important values in AI ethics.
We discuss a range of mechanisms that could support incorporation of AI loyalty into a variety of future AI systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When we consult with a doctor, lawyer, or financial advisor, we generally
assume that they are acting in our best interests. But what should we assume
when it is an artificial intelligence (AI) system that is acting on our behalf?
Early examples of AI assistants like Alexa, Siri, Google, and Cortana already
serve as a key interface between consumers and information on the web, and
users routinely rely upon AI-driven systems like these to take automated
actions or provide information. Superficially, such systems may appear to be
acting according to user interests. However, many AI systems are designed with
embedded conflicts of interests, acting in ways that subtly benefit their
creators (or funders) at the expense of users. To address this problem, in this
paper we introduce the concept of AI loyalty. AI systems are loyal to the
degree that they are designed to minimize, and make transparent, conflicts of
interest, and to act in ways that prioritize the interests of users. Properly
designed, such systems could have considerable functional and competitive - not
to mention ethical - advantages relative to those that do not. Loyal AI
products hold an obvious appeal for the end-user and could serve to promote the
alignment of the long-term interests of AI developers and customers. To this
end, we suggest criteria for assessing whether an AI system is sufficiently
transparent about conflicts of interest, and acting in a manner that is loyal
to the user, and argue that AI loyalty should be considered during the
technological design process alongside other important values in AI ethics such
as fairness, accountability, privacy, and equity. We discuss a range of
mechanisms, from pure market forces to strong regulatory frameworks, that could
support incorporation of AI loyalty into a variety of future AI systems.
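As a rough illustration (not code from the paper itself), the loyalty criteria sketched in the abstract could be operationalized as a simple assessment checklist. The sketch below encodes three conditions paraphrased from the abstract: transparency about conflicts of interest, minimization of such conflicts, and prioritization of user interests. The class and field names are hypothetical, chosen here for illustration only.

```python
from dataclasses import dataclass


@dataclass
class LoyaltyAssessment:
    """Hypothetical checklist for the paper's notion of AI loyalty.

    Field names paraphrase the abstract's criteria; they are not an
    API or rubric defined by the authors.
    """
    discloses_conflicts: bool   # conflicts of interest are made transparent
    minimizes_conflicts: bool   # design avoids creator-favoring behavior
    prioritizes_user: bool      # actions default to the user's interests

    def is_loyal(self) -> bool:
        # Under this sketch, a system counts as loyal only if every
        # criterion is met; a graded score would be equally plausible.
        return all((self.discloses_conflicts,
                    self.minimizes_conflicts,
                    self.prioritizes_user))


# Example: an assistant that quietly promotes its funder's products
# fails the transparency and user-priority criteria.
assistant = LoyaltyAssessment(discloses_conflicts=False,
                              minimizes_conflicts=False,
                              prioritizes_user=True)
print(assistant.is_loyal())  # False
```

A binary all-or-nothing check is the simplest reading of "loyal to the degree that"; the paper's framing of loyalty as a matter of degree suggests a weighted or scalar score could fit better in practice.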
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Taking AI Welfare Seriously [0.5617572524191751]
We argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future.
AI welfare is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously.
arXiv Detail & Related papers (2024-11-04T17:57:57Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management [0.0]
Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place.
We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate feminist, non-exploitative design principles.
arXiv Detail & Related papers (2021-09-21T21:22:58Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges [31.382000425295885]
Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
arXiv Detail & Related papers (2021-01-01T17:34:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.