Prioritizing Policies for Furthering Responsible Artificial Intelligence in the United States
- URL: http://arxiv.org/abs/2212.00740v1
- Date: Wed, 30 Nov 2022 18:45:55 GMT
- Title: Prioritizing Policies for Furthering Responsible Artificial Intelligence in the United States
- Authors: Emily Hadley
- Abstract summary: Given limited resources, not all policies can or should be equally prioritized.
We recommend that U.S. government agencies and companies highly prioritize development of pre-deployment audits and assessments.
We suggest that U.S. government agencies and professional societies should highly prioritize policies that support responsible AI research.
- Score: 0.456877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several policy options exist, or have been proposed, to further responsible
artificial intelligence (AI) development and deployment. Institutions,
including U.S. government agencies, states, professional societies, and private
and public sector businesses, are well positioned to implement these policies.
However, given limited resources, not all policies can or should be equally
prioritized. We define and review nine suggested policies for furthering
responsible AI, rank each policy on potential use and impact, and recommend
prioritization relative to each institution type. We find that pre-deployment
audits and assessments and post-deployment accountability are likely to have
the highest impact but also the highest barriers to adoption. We recommend that
U.S. government agencies and companies highly prioritize development of
pre-deployment audits and assessments, while the U.S. national legislature
should highly prioritize post-deployment accountability. We suggest that U.S.
government agencies and professional societies should highly prioritize
policies that support responsible AI research and that states should highly
prioritize support of responsible AI education. We propose that companies can
highly prioritize involving community stakeholders in development efforts and
supporting diversity in AI development. We advise lower levels of
prioritization across institutions for AI ethics statements and databases of AI
technologies or incidents. We recognize that no one policy will lead to
responsible AI and instead advocate for strategic policy implementation across
institutions.
Related papers
- How Do AI Companies "Fine-Tune" Policy? Examining Regulatory Capture in AI Governance [0.7252636622264104]
Industry actors in the United States have gained extensive influence over the regulation of general-purpose artificial intelligence (AI) systems.
Capture of AI policy by AI developers and deployers could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems.
Experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that over-emphasizes certain policy goals over others.
arXiv Detail & Related papers (2024-10-16T21:06:54Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Strategic AI Governance: Insights from Leading Nations [0.0]
Artificial Intelligence (AI) has the potential to revolutionize various sectors, yet its adoption is often hindered by concerns about data privacy, security, and the understanding of AI capabilities.
This paper synthesizes AI governance approaches, strategic themes, and enablers and challenges for AI adoption by reviewing national AI strategies from leading nations.
arXiv Detail & Related papers (2024-09-16T06:00:42Z) - Assessing the State of AI Policy [0.5156484100374057]
This work provides an overview of AI legislation and directives at the international, U.S. state, city and federal levels.
It also reviews relevant business standards, and technical society initiatives.
arXiv Detail & Related papers (2024-07-31T16:09:25Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z) - Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.