Normative Challenges of Risk Regulation of Artificial Intelligence and
Automated Decision-Making
- URL: http://arxiv.org/abs/2211.06203v1
- Date: Fri, 11 Nov 2022 13:57:38 GMT
- Authors: Carsten Orwat (1), Jascha Bareis (1), Anja Folberth (1 and 2), Jutta
Jahnel (1) and Christian Wadephul (1) ((1) Karlsruhe Institute of Technology,
Institute for Technology Assessment and Systems Analysis, (2) University of
Heidelberg, Institute of Political Science)
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent proposals aiming at regulating artificial intelligence (AI) and
automated decision-making (ADM) suggest a particular form of risk regulation,
i.e. a risk-based approach. The most salient example is the Artificial
Intelligence Act (AIA) proposed by the European Commission. The article
addresses challenges for adequate risk regulation that arise primarily from the
specific type of risks involved, i.e. risks to the protection of fundamental
rights and fundamental societal values. They result mainly from the normative
ambiguity of the fundamental rights and societal values in interpreting,
specifying or operationalising them for risk assessments. This is exemplified
for (1) human dignity, (2) informational self-determination, data protection
and privacy, (3) justice and fairness, and (4) the common good. Normative
ambiguities require normative choices, which are distributed among different
actors in the proposed AIA. Particularly critical normative choices are those
of selecting normative conceptions for specifying risks, aggregating and
quantifying risks including the use of metrics, balancing of value conflicts,
setting levels of acceptable risks, and standardisation. To avoid a lack of
democratic legitimacy and legal uncertainty, the article calls for scientific
and political debate on these normative choices.
Related papers
- Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
We compile an extensive catalog of risk sources and risk management measures for general-purpose AI systems.
This work involves identifying technical, operational, and societal risks across model development, training, and deployment stages.
The catalog is released under a public domain license for ease of direct use by stakeholders in AI governance and standards.
arXiv Detail & Related papers (2024-10-30T21:32:56Z)
- The Artificial Intelligence Act: critical overview
This article provides a critical overview of the recently approved Artificial Intelligence Act.
It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689.
The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
arXiv Detail & Related papers (2024-08-30T21:38:02Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z)
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act
This work proposes a taxonomy focusing on (geo)political risks associated with AI.
It identifies 12 risks in total divided into four categories: (1) Geopolitical Pressures, (2) Malicious Usage, (3) Environmental, Social, and Ethical Risks, and (4) Privacy and Trust Violations.
arXiv Detail & Related papers (2024-04-17T15:32:56Z)
- The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- RiskQ: Risk-sensitive Multi-Agent Reinforcement Learning Value Factorization
We introduce the Risk-sensitive Individual-Global-Max (RIGM) principle as a generalization of the Individual-Global-Max (IGM) and Distributional IGM (DIGM) principles.
We show that RiskQ can obtain promising performance through extensive experiments.
arXiv Detail & Related papers (2023-11-03T07:18:36Z)
- AI Risk Profiles: A Standards Proposal for Pre-Deployment AI Risk Disclosures
We propose a risk profiling standard which can guide downstream decision-making.
The standard is built on our proposed taxonomy of AI risks, which reflects a high-level categorization of the wide variety of risks proposed in the literature.
We apply this methodology to a number of prominent AI systems using publicly available information.
arXiv Detail & Related papers (2023-09-22T20:45:15Z)
- Acceptable risks in Europe's proposed AI Act: Reasonableness and other principles for deciding how much risk management is enough
The Act aims to promote "trustworthy" AI with a proportionate regulatory burden.
Its provisions on risk acceptability require residual risks from high-risk systems to be reduced or eliminated "as far as possible".
This paper argues that the Parliament's approach is more workable, and better balances the goals of proportionality and trustworthiness.
arXiv Detail & Related papers (2023-07-26T09:21:58Z)
- Adaptive Risk-Aware Bidding with Budget Constraint in Display Advertising
We propose a novel adaptive risk-aware bidding algorithm with budget constraint via reinforcement learning.
We theoretically unveil the intrinsic relation between the uncertainty and the risk tendency based on value at risk (VaR).
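As a quick illustration of the value-at-risk measure this summary refers to (a standard definition, not the paper's algorithm): VaR at level alpha is simply the alpha-quantile of the loss distribution. A minimal empirical estimator in Python:

```python
import numpy as np

def value_at_risk(losses, alpha=0.95):
    """Empirical VaR: the alpha-quantile of observed losses.

    Illustrative sketch only; the paper's bidding algorithm uses VaR
    inside a reinforcement-learning objective, which is not shown here.
    """
    return float(np.quantile(losses, alpha))

# Sanity check on synthetic standard-normal losses.
rng = np.random.default_rng(0)
losses = rng.normal(size=100_000)
var95 = value_at_risk(losses, 0.95)
# For a standard normal loss this is close to the 95% quantile, ~1.645
```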
arXiv Detail & Related papers (2022-12-06T18:50:09Z)
- Learning Bounds for Risk-sensitive Learning
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
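The optimized certainty equivalent (OCE) mentioned here is a standard risk-measure family: OCE(X) = inf over lam of lam + E[phi(X - lam)] for a convex disutility phi. As a hedged sketch (not the paper's learning scheme), choosing phi(t) = max(t, 0) / (1 - alpha) recovers CVaR_alpha:

```python
import numpy as np

def oce(losses, phi, n_grid=401):
    """Optimized certainty equivalent: inf_lam lam + E[phi(X - lam)].

    Illustrative grid-search minimization over lam; assumes the
    minimizer lies within the observed range of losses.
    """
    lams = np.linspace(losses.min(), losses.max(), n_grid)
    vals = lams + np.array([phi(losses - lam).mean() for lam in lams])
    return float(vals.min())

rng = np.random.default_rng(0)
losses = rng.normal(size=20_000)
alpha = 0.95
# phi(t) = max(t, 0) / (1 - alpha) turns the OCE into CVaR_alpha.
cvar = oce(losses, lambda t: np.maximum(t, 0.0) / (1 - alpha))
# For a standard normal loss, CVaR_0.95 ~ pdf(1.645)/0.05 ~ 2.06
```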
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.