Voluntary safety commitments provide an escape from over-regulation in
AI development
- URL: http://arxiv.org/abs/2104.03741v1
- Date: Thu, 8 Apr 2021 12:54:56 GMT
- Title: Voluntary safety commitments provide an escape from over-regulation in
AI development
- Authors: The Anh Han, Tom Lenaerts, Francisco C. Santos, and Luis Moniz Pereira
- Abstract summary: This work reveals for the first time how voluntary commitments, with sanctions either by peers or an institution, lead to socially beneficial outcomes.
Results are directly relevant for the design of governance and regulatory policies that aim to ensure an ethical and responsible AI technology development process.
- Score: 8.131948859165432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the introduction of Artificial Intelligence (AI) and related
technologies into our daily lives, fear and anxiety about their misuse, as well
as the hidden biases in their creation, have led to a demand for regulation to
address such issues. Yet blindly regulating an innovation process that is not
well understood may stifle that process and reduce the benefits that society
stands to gain from the resulting technology, even under the best intentions.
In this paper, starting from a baseline model that captures the fundamental
dynamics of a race for domain supremacy using AI technology, we demonstrate how
socially unwanted outcomes may be produced when sanctioning is applied
unconditionally to risk-taking, i.e. potentially unsafe, behaviours. As an
alternative that resolves the detrimental effect of over-regulation, we propose
a voluntary commitment approach wherein technologists are free to choose
between independently pursuing their own course of action or establishing
binding agreements to act safely, with sanctioning of those who do not abide by
what they pledged. Overall, this work reveals for the first time how voluntary
commitments, with sanctions imposed either by peers or by an institution, lead
to socially beneficial outcomes in all scenarios envisageable in a short-term
race towards domain supremacy through AI technology. These results are directly
relevant for the design of governance and regulatory policies that aim to
ensure an ethical and responsible AI technology development process.
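The incentive mechanism at the heart of the abstract can be illustrated with a deliberately simplified sketch. This is not the authors' evolutionary game model: the payoff function, the parameter names (`b`, `c`, `s`, `p_risk`, `sanction`), and all numeric values below are illustrative assumptions chosen only to show how a sanction on breaking a safety pledge can flip a developer's best reply from unsafe to safe development.

```python
# Illustrative sketch (not the paper's exact model): expected payoffs in a
# two-firm AI development race where each firm plays SAFE or UNSAFE, with an
# optional sanction applied to firms that break a safety pledge.

def race_payoff(me, other, *, b=4.0, c=1.0, s=1.5, p_risk=0.3, sanction=0.0):
    """Expected payoff of `me` ('SAFE' or 'UNSAFE') against `other`.

    b        -- prize for winning the race (assumed value)
    c        -- cost of following safety precautions
    s        -- speed advantage of skipping precautions
    p_risk   -- probability that UNSAFE development causes a disaster (payoff 0)
    sanction -- penalty applied to UNSAFE players (by peers or an institution)
    """
    my_speed = s if me == "UNSAFE" else 1.0
    other_speed = s if other == "UNSAFE" else 1.0
    win_prob = my_speed / (my_speed + other_speed)  # faster firm wins more often
    payoff = win_prob * b - (c if me == "SAFE" else 0.0)
    if me == "UNSAFE":
        payoff *= (1.0 - p_risk)  # a disaster wipes out the gains
        payoff -= sanction
    return payoff

for sanction in (0.0, 1.0):
    unsafe = race_payoff("UNSAFE", "SAFE", sanction=sanction)
    safe = race_payoff("SAFE", "SAFE", sanction=sanction)
    best = "UNSAFE" if unsafe > safe else "SAFE"
    print(f"sanction={sanction}: deviate to UNSAFE -> {unsafe:.2f}, "
          f"stay SAFE -> {safe:.2f}, best reply: {best}")
```

Under these assumed parameters, deviating to UNSAFE pays when there is no sanction, while a sufficiently large penalty on pledge-breakers makes mutual safe development the best reply; the paper studies when such sanctions help or, if applied unconditionally, backfire.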
Related papers
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence [0.0]
We explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI.
There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks.
We conclude by highlighting the role of policy learning and experimentation in regulatory development.
arXiv Detail & Related papers (2024-08-01T17:54:57Z)
- Crossing the principle-practice gap in AI ethics with ethical problem-solving [0.0]
How to bridge the principle-practice gap separating ethical discourse from the technical side of AI development remains an open problem.
EPS is a methodology promoting responsible, human-centric, and value-oriented AI development.
We utilize EPS as a blueprint to propose the implementation of Ethics as a Service Platform.
arXiv Detail & Related papers (2024-04-16T14:35:13Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Regulation in Europe: From the AI Act to Future Regulatory Challenges [3.0821115746307663]
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI.
It argues for a hybrid regulatory strategy that combines elements from both philosophies.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z)
- Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI [1.2691047660244335]
Generative Artificial Intelligence (AI) has seen mainstream adoption lately, especially in the form of consumer-facing, open-ended, text and image generating models.
The potential for generative AI to displace human creativity and livelihoods has also been under intense scrutiny.
Existing and proposed centralized regulations by governments to rein in AI face criticisms such as not having sufficient clarity or uniformity.
Decentralized protections via crowdsourced safety tools and mechanisms are a potential alternative.
arXiv Detail & Related papers (2023-08-02T23:25:21Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Hacia los Comités de Ética en Inteligencia Artificial [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at the universities, of Ethical Committees or Commissions specialized on Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.