Liability regimes in the age of AI: a use-case driven analysis of the
burden of proof
- URL: http://arxiv.org/abs/2211.01817v1
- Date: Thu, 3 Nov 2022 13:55:36 GMT
- Title: Liability regimes in the age of AI: a use-case driven analysis of the
burden of proof
- Authors: David Fernández Llorca, Vicky Charisi, Ronan Hamon, Ignacio Sánchez, Emilia Gómez
- Abstract summary: New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better.
But there are growing concerns about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights.
This paper presents three case studies, together with the methodology used to develop them, that illustrate these difficulties.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: New emerging technologies powered by Artificial Intelligence (AI) have the
potential to disruptively transform our societies for the better. In
particular, data-driven learning approaches (i.e., Machine Learning (ML)) have
been a true revolution in the advancement of multiple technologies in various
application domains. But at the same time, there are growing concerns about
certain intrinsic characteristics of these methodologies that carry potential
risks to both safety and fundamental rights. Although there are mechanisms in
the adoption process to minimize these risks (e.g., safety regulations), these
do not exclude the possibility of harm occurring, and if this happens, victims
should be able to seek compensation. Liability regimes will therefore play a
key role in ensuring basic protection for victims using or interacting with
these systems. However, the same characteristics that make AI systems
inherently risky, such as lack of causality, opacity, unpredictability, or their
self- and continuous-learning capabilities, lead to considerable difficulties
when it comes to proving causation. This paper presents three case studies,
together with the methodology used to develop them, that illustrate these
difficulties.
Specifically, we address the cases of cleaning robots, delivery drones and
robots in education. The outcome of the proposed analysis suggests the need to
revise liability regimes to alleviate the burden of proof on victims in cases
involving AI technologies.
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - Considerations Influencing Offense-Defense Dynamics From Artificial Intelligence [0.0]
AI can enhance defensive capabilities but also presents avenues for malicious exploitation and large-scale societal harm.
This paper proposes a taxonomy to map and examine the key factors that influence whether AI systems predominantly pose threats or offer protective benefits to society.
arXiv Detail & Related papers (2024-12-05T10:05:53Z) - Artificial intelligence and cybersecurity in banking sector: opportunities and risks [0.0]
Machine learning (ML) enables systems to adapt and learn from vast datasets.
This study highlights the dual-use nature of AI tools, which can also be exploited by malicious actors.
The paper emphasizes the importance of developing machine learning models with key characteristics such as security, trust, resilience and robustness.
arXiv Detail & Related papers (2024-11-28T22:09:55Z) - From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted STPA on three case studies.
We find that key concepts and steps of conducting an STPA readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z) - Threats, Attacks, and Defenses in Machine Unlearning: A Survey [14.03428437751312]
Machine Unlearning (MU) has recently gained considerable attention due to its potential to achieve Safe AI.
This survey aims to fill the gap left by the extensive but fragmented body of studies on threats, attacks, and defenses in machine unlearning.
arXiv Detail & Related papers (2024-03-20T15:40:18Z) - Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)