Aligning Artificial Intelligence with Humans through Public Policy
- URL: http://arxiv.org/abs/2207.01497v1
- Date: Sat, 25 Jun 2022 21:31:14 GMT
- Title: Aligning Artificial Intelligence with Humans through Public Policy
- Authors: John Nay, James Daily
- Abstract summary: This essay outlines research on AI that learns structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given that Artificial Intelligence (AI) increasingly permeates our lives, it
is critical that we systematically align AI objectives with the goals and
values of humans. The human-AI alignment problem stems from the impracticality
of explicitly specifying the rewards that AI models should receive for all the
actions they could take in all relevant states of the world. One possible
solution, then, is to leverage the capabilities of AI models to learn those
rewards implicitly from a rich source of data describing human values in a wide
range of contexts. The democratic policy-making process produces just such data
by developing specific rules, flexible standards, interpretable guidelines, and
generalizable precedents that synthesize citizens' preferences over potential
actions taken in many states of the world. Therefore, computationally encoding
public policies to make them legible to AI systems should be an important part
of a socio-technical approach to the broader human-AI alignment puzzle. This
Essay outlines research on AI that learns structures in policy data that can be
leveraged for downstream tasks. As a demonstration of the ability of AI to
comprehend policy, we provide a case study of an AI system that predicts the
relevance of proposed legislation to any given publicly traded company and its
likely effect on that company. We believe this represents the "comprehension"
phase of AI and policy, but leveraging policy as a key source of human values
to align AI requires "understanding" policy. Solving the alignment problem is
crucial to ensuring that AI is beneficial both individually (to the person or
group deploying the AI) and socially. As AI systems are given increasing
responsibility in high-stakes contexts, integrating democratically-determined
policy into those systems could align their behavior with human goals in a way
that is responsive to a constantly evolving society.
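The case study of predicting a bill's relevance to a publicly traded company is described only at a high level in the abstract. As a loose illustration of the "comprehension" idea, and emphatically not the authors' actual system, a minimal relevance ranker could compare a bag-of-words vector of proposed legislation against company descriptions. All inputs (the bill snippet and company profiles) are hypothetical.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term-frequency vector over lowercase word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical inputs: a snippet of proposed legislation and two company profiles.
bill = "A bill to regulate emissions standards for electric vehicle manufacturers."
companies = {
    "EV maker": "Designs and manufactures electric vehicles and battery systems.",
    "Grocer": "Operates retail grocery stores across several regions.",
}

bill_vec = vectorize(bill)
scores = {name: cosine_similarity(bill_vec, vectorize(desc))
          for name, desc in companies.items()}
# Rank companies by predicted relevance of the bill to each one.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

A production system of the kind the essay describes would replace the bag-of-words vectors with learned representations of statutory text, but the ranking-by-similarity structure is the same.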
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z) - AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI [0.0]
Coalitions in all sectors are acting worldwide to resist harmful applications of AI.
There are biased, wrongful, and disturbing assumptions embedded in AI algorithms.
Perhaps one of the greatest contributions of AI will be to make us understand how important human wisdom truly is in life on earth.
arXiv Detail & Related papers (2021-07-11T18:51:29Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - AI Ethics Needs Good Data [0.8701566919381224]
We argue that discourse on AI must transcend the language of 'ethics' and engage with power and political economy.
We offer four 'economies' on which Good Data AI can be built: community, rights, usability and politics.
arXiv Detail & Related papers (2021-02-15T04:16:27Z) - Socially Responsible AI Algorithms: Issues, Purposes, and Challenges [31.382000425295885]
Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
arXiv Detail & Related papers (2021-01-01T17:34:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content above (including all information) and is not responsible for any consequences arising from its use.