Assessing the State of AI Policy
- URL: http://arxiv.org/abs/2407.21717v1
- Date: Wed, 31 Jul 2024 16:09:25 GMT
- Title: Assessing the State of AI Policy
- Authors: Joanna F. DeFranco, Luke Biersmith
- Abstract summary: This work provides an overview of AI legislation and directives at the international, U.S. state, city and federal levels.
It also reviews relevant business standards and technical society initiatives.
- Score: 0.5156484100374057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The deployment of artificial intelligence (AI) applications has accelerated rapidly. AI-enabled technologies face the public in many forms, including infrastructure, consumer products, and home applications. Because many of these technologies present risks, whether of physical injury or of bias that can yield unfair outcomes, policymakers must consider the need for oversight. Most policymakers, however, lack the technical knowledge to judge whether an emerging AI technology is safe and effective, or whether it requires oversight, and must therefore depend on expert opinion. But policymakers are better served when, in addition to expert opinion, they have some general understanding of existing guidelines and regulations. This work provides an overview of the landscape of AI legislation and directives at the international, U.S. state, city, and federal levels. It also reviews relevant business standards and technical society initiatives. An overlap and gap analysis is then performed, resulting in a reference guide that includes recommendations and guidance for future policymaking.
Related papers
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize the beliefs expressed by these stakeholders into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Position Paper: Technical Research and Talent is Needed for Effective AI Governance [0.0]
We survey policy documents published by public-sector institutions in the EU, US, and China.
We highlight specific areas of disconnect between the technical requirements necessary for enacting proposed policy actions, and the current technical state of the art.
Our analysis motivates a call for tighter integration of the AI/ML research community within AI governance.
arXiv Detail & Related papers (2024-06-11T06:32:28Z)
- False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- How VADER is your AI? Towards a definition of artificial intelligence systems appropriate for regulation [41.94295877935867]
Recent AI regulation proposals adopt AI definitions affecting ICT techniques, approaches, and systems that are not AI.
We propose a framework to score how validated as appropriately-defined for regulation (VADER) an AI definition is.
arXiv Detail & Related papers (2024-02-07T17:41:15Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK [1.5039745292757671]
We perform the first thematic and gap analysis of policies and standards on explainability in the EU, US, and UK.
We find that policies are often informed by coarse notions and requirements for explanations.
We propose recommendations on how to address explainability in regulations for AI systems.
arXiv Detail & Related papers (2023-04-20T07:53:07Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)