Negative Human Rights as a Basis for Long-term AI Safety and Regulation
- URL: http://arxiv.org/abs/2208.14788v2
- Date: Thu, 20 Apr 2023 09:27:07 GMT
- Title: Negative Human Rights as a Basis for Long-term AI Safety and Regulation
- Authors: Ondrej Bajgar and Jan Horenovsky
- Abstract summary: General principles guiding autonomous AI systems to recognize and avoid harmful behaviours may need to be supported by a binding system of regulation.
They should also be specific enough for technical implementation.
This article draws inspiration from law to explain how negative human rights could fulfil the role of such principles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: If autonomous AI systems are to be reliably safe in novel situations, they
will need to incorporate general principles guiding them to recognize and avoid
harmful behaviours. Such principles may need to be supported by a binding
system of regulation, which would need the underlying principles to be widely
accepted. They should also be specific enough for technical implementation.
Drawing inspiration from law, this article explains how negative human rights
could fulfil the role of such principles and serve as a foundation both for an
international regulatory system and for building technical safety constraints
for future AI systems.
Related papers
- From Principles to Rules: A Regulatory Approach for Frontier AI [2.1764247401772705]
Regulators may require frontier AI developers to adopt safety measures.
The requirements could be formulated as high-level principles or specific rules.
These regulatory approaches, known as 'principle-based' and 'rule-based' regulation, have complementary strengths and weaknesses.
arXiv Detail & Related papers (2024-07-10T01:45:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Considering Fundamental Rights in the European Standardisation of Artificial Intelligence: Nonsense or Strategic Alliance? [0.0]
This chapter aims to clarify the relationship between AI standards and fundamental rights.
The main issue tackled is whether the adoption of AI harmonised standards, based on the future AI Act, should take into account fundamental rights.
arXiv Detail & Related papers (2024-01-23T10:17:42Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes that addressing bias is critical to developing a responsible corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions [2.202803272456695]
We survey computer science literature and develop a taxonomy of 23 normative ethical principles which can be operationalised in AI.
We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in responsible AI systems.
arXiv Detail & Related papers (2022-08-12T08:48:16Z)
- Regulating Safety and Security in Autonomous Robotic Systems [0.0]
Rules for autonomous systems are often difficult to formalise.
In the space and nuclear sectors, applications are more likely to differ from one another, so a set of general safety principles has been developed.
This allows novel applications to be assessed for their safety, but such principles are themselves difficult to formalise.
We are collaborating with regulators and the community in the space and nuclear sectors to develop guidelines for autonomous and robotic systems.
arXiv Detail & Related papers (2020-07-09T16:33:14Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create such rules, along with specialized organizations that can oversee compliance with them.
This work proposes the creation, at universities, of Ethics Committees or Commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.