Negative Human Rights as a Basis for Long-term AI Safety and Regulation
- URL: http://arxiv.org/abs/2208.14788v2
- Date: Thu, 20 Apr 2023 09:27:07 GMT
- Title: Negative Human Rights as a Basis for Long-term AI Safety and Regulation
- Authors: Ondrej Bajgar and Jan Horenovsky
- Abstract summary: General principles guiding autonomous AI systems to recognize and avoid harmful behaviours may need to be supported by a binding system of regulation.
They should also be specific enough for technical implementation.
This article draws inspiration from law to explain how negative human rights could fulfil the role of such principles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: If autonomous AI systems are to be reliably safe in novel situations, they
will need to incorporate general principles guiding them to recognize and avoid
harmful behaviours. Such principles may need to be supported by a binding
system of regulation, which would need the underlying principles to be widely
accepted. They should also be specific enough for technical implementation.
Drawing inspiration from law, this article explains how negative human rights
could fulfil the role of such principles and serve as a foundation both for an
international regulatory system and for building technical safety constraints
for future AI systems.
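As a rough illustration of how such rights might be turned into technical safety constraints, the sketch below encodes each negative right as a veto predicate over a model's predicted outcomes, so that rights act as hard filters on candidate actions rather than terms traded off against reward. This is a minimal sketch under assumptions of our own, not the paper's method: the `PredictedOutcome` fields, the predicates, and `choose_action` are hypothetical placeholders.

```python
# Minimal sketch (not from the paper): negative rights as veto predicates
# over predicted action outcomes. All names and fields are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class PredictedOutcome:
    """A model's prediction of what a candidate action would cause."""
    physical_harm: bool = False      # would the action injure someone?
    restricts_liberty: bool = False  # would it confine or coerce someone?
    expected_reward: float = 0.0

# Each negative right becomes a constraint: True means the right is violated.
NegativeRightCheck = Callable[[PredictedOutcome], bool]

NEGATIVE_RIGHT_CHECKS: List[NegativeRightCheck] = [
    lambda o: o.physical_harm,       # right to life / bodily integrity
    lambda o: o.restricts_liberty,   # right to liberty
]

def choose_action(candidates: List[PredictedOutcome]) -> Optional[PredictedOutcome]:
    """Pick the highest-reward action whose predicted outcome violates no right."""
    permitted = [o for o in candidates
                 if not any(check(o) for check in NEGATIVE_RIGHT_CHECKS)]
    if not permitted:
        return None  # no permissible action: defer to a human or do nothing
    return max(permitted, key=lambda o: o.expected_reward)

# Example: the higher-reward action predicts harm and is vetoed.
safe = choose_action([
    PredictedOutcome(expected_reward=1.0),
    PredictedOutcome(physical_harm=True, expected_reward=5.0),
])
```

A real constraint layer would of course need to handle uncertainty in the outcome predictions and much richer notions of harm; the point of the sketch is only the structure, in which rights are hard constraints rather than weighted objectives.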
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The Artificial Intelligence Act: critical overview
This article provides a critical overview of the recently approved Artificial Intelligence Act.
It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689.
The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
arXiv Detail & Related papers (2024-08-30T21:38:02Z)
- From Principles to Rules: A Regulatory Approach for Frontier AI
Regulators may require frontier AI developers to adopt safety measures.
The requirements could be formulated as high-level principles or specific rules.
These regulatory approaches, known as 'principle-based' and 'rule-based' regulation, have complementary strengths and weaknesses.
arXiv Detail & Related papers (2024-07-10T01:45:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
We introduce and define a family of approaches to AI safety, referred to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems equipped with high-assurance quantitative safety guarantees, built on three core components: a world model, a safety specification, and a verifier.
We outline a number of approaches for creating each of these components, describe the main technical challenges, and suggest potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Considering Fundamental Rights in the European Standardisation of Artificial Intelligence: Nonsense or Strategic Alliance?
This chapter aims to clarify the relationship between AI standards and fundamental rights.
The main issue tackled is whether the adoption of AI harmonised standards, based on the future AI Act, should take into account fundamental rights.
arXiv Detail & Related papers (2024-01-23T10:17:42Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Regulating Safety and Security in Autonomous Robotic Systems
Rules for autonomous systems are often difficult to formalise.
In the space and nuclear sectors, applications are more likely to differ from one another, so a set of general safety principles has developed instead.
This allows novel applications to be assessed for their safety, but such principles remain difficult to formalise.
We are collaborating with regulators and the community in the space and nuclear sectors to develop guidelines for autonomous and robotic systems.
arXiv Detail & Related papers (2020-07-09T16:33:14Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence)
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at universities, of Ethics Committees or Commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)