A risk-based approach to assessing liability risk for AI-driven harms considering EU liability directive
- URL: http://arxiv.org/abs/2401.11697v1
- Date: Mon, 18 Dec 2023 15:52:43 GMT
- Title: A risk-based approach to assessing liability risk for AI-driven harms considering EU liability directive
- Authors: Sundaraparipurnan Narayanan, Mark Potkewitz
- Abstract summary: Historical instances of harm caused by AI have led the European Union to establish an AI Liability Directive.
The future ability of a provider to contest a product liability claim will depend on the good practices adopted in designing, developing, and maintaining AI systems.
This paper provides a risk-based approach to examining liability for AI-driven injuries.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence can cause inconvenience, harm, or other unintended
consequences in various ways, including those that arise from defects or
malfunctions in the AI system itself or those caused by its use or misuse.
Responsibility for AI harms or unintended consequences must be addressed to
hold accountable the people who caused such harms and ensure that victims
receive compensation for any damages or losses they may have sustained.
Historical instances of harm caused by AI have led the European Union to
establish an AI Liability Directive. The directive aims to lay down a
uniform set of rules for access to information, delineate the duty and level of
care required for AI development and use, and clarify the burden of proof for
damages or harms caused by AI systems, establishing broader protection for
victims. The future ability of a provider to contest a product liability claim
will depend on good practices adopted in designing, developing, and maintaining
AI systems in the market. This paper provides a risk-based approach to
examining liability for AI-driven injuries. It also provides an overview of
existing liability approaches, insights into limitations and complexities in
these approaches, and a detailed self-assessment questionnaire to assess the
risk associated with liability for a specific AI system from a provider's
perspective.
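The abstract's provider-side self-assessment can be pictured as a weighted question bank whose unanswered good practices accumulate into a liability-risk score. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual questionnaire: the questions, categories, and weights are invented for illustration, and only the categories (design, development, maintenance) echo the practices the abstract mentions.

```python
# Minimal sketch of a provider-side liability self-assessment score.
# All questions and weights are hypothetical; the paper's questionnaire differs.
from dataclasses import dataclass

@dataclass
class Question:
    text: str          # question posed to the AI provider
    category: str      # e.g. design, development, maintenance
    weight: float      # contribution to risk if the practice is absent

# Hypothetical items, loosely themed on design, development, and maintenance practices.
QUESTIONS = [
    Question("Is the intended purpose and foreseeable misuse documented?", "design", 3.0),
    Question("Are training-data provenance and quality checks recorded?", "development", 2.5),
    Question("Is there post-market monitoring for drift and incidents?", "maintenance", 2.0),
    Question("Can logs reconstruct the system state at the time of a harm?", "maintenance", 2.5),
]

def liability_risk_score(answers: dict[str, bool]) -> dict[str, float]:
    """Aggregate weighted risk per category; True means the practice is in place."""
    scores: dict[str, float] = {}
    for q in QUESTIONS:
        if not answers.get(q.text, False):   # a missing practice adds risk
            scores[q.category] = scores.get(q.category, 0.0) + q.weight
    return scores

if __name__ == "__main__":
    example_answers = {QUESTIONS[0].text: True, QUESTIONS[2].text: False}
    print(liability_risk_score(example_answers))
    # -> {'development': 2.5, 'maintenance': 4.5}
```

The scoring scheme here (simple additive weights per category) is purely illustrative of how such a questionnaire could feed a risk-based assessment; the paper itself presents the questionnaire qualitatively.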
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - Liability and Insurance for Catastrophic Losses: the Nuclear Power Precedent and Lessons for AI [0.0]
This paper argues that developers of frontier AI models should be assigned limited, strict, and exclusive third-party liability for harms resulting from Critical AI Occurrences (CAIOs).
Mandatory insurance for CAIO liability is recommended to overcome developers' judgment-proofness and winner's-curse dynamics, and to leverage insurers' quasi-regulatory abilities.
arXiv Detail & Related papers (2024-09-10T17:41:31Z) - Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment [17.026921603767722]
The study introduces our Responsible AI (RAI) Question Bank, a comprehensive framework and tool designed to support diverse AI initiatives.
By integrating AI ethics principles such as fairness, transparency, and accountability into a structured question format, the RAI Question Bank aids in identifying potential risks.
arXiv Detail & Related papers (2024-08-02T22:40:20Z) - What's my role? Modelling responsibility for AI-based safety-critical systems [1.0549609328807565]
It is difficult for developers and manufacturers to be held responsible for harmful behaviour of an AI-SCS.
A human operator can become a "liability sink" absorbing blame for the consequences of AI-SCS outputs they weren't responsible for creating.
This paper considers different senses of responsibility (role, moral, legal and causal), and how they apply in the context of AI-SCS safety.
arXiv Detail & Related papers (2023-12-30T13:45:36Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - AI Hazard Management: A framework for the systematic management of root causes for AI risks [0.0]
This paper introduces the AI Hazard Management (AIHM) framework.
It provides a structured process to systematically identify, assess, and treat AI hazards.
It builds upon an AI hazard list from a comprehensive state-of-the-art analysis.
arXiv Detail & Related papers (2023-10-25T15:55:50Z) - AI Liability Insurance With an Example in AI-Powered E-diagnosis System [22.102728605081534]
We use an AI-powered E-diagnosis system as an example to study AI liability insurance.
We show that AI liability insurance can act as a regulatory mechanism to incentivize compliant behaviors and serve as a certificate of high-quality AI systems.
arXiv Detail & Related papers (2023-06-01T21:03:47Z) - Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)