Foundations for Risk Assessment of AI in Protecting Fundamental Rights
- URL: http://arxiv.org/abs/2507.18290v1
- Date: Thu, 24 Jul 2025 10:52:22 GMT
- Title: Foundations for Risk Assessment of AI in Protecting Fundamental Rights
- Authors: Antonino Rotolo, Beatrice Ferrigno, Jose Miguel Angel Garcia Godinez, Claudio Novelli, Giovanni Sartor
- Abstract summary: This chapter introduces a conceptual framework for qualitative risk assessment of AI. It addresses the complexities of legal compliance and fundamental rights protection by integrating definitional balancing and defeasible reasoning.
- Score: 0.5093073566064981
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This chapter introduces a conceptual framework for qualitative risk assessment of AI, particularly in the context of the EU AI Act. The framework addresses the complexities of legal compliance and fundamental rights protection by integrating definitional balancing and defeasible reasoning. Definitional balancing employs proportionality analysis to resolve conflicts between competing rights, while defeasible reasoning accommodates the dynamic nature of legal decision-making. Our approach stresses the need to analyze AI deployment scenarios and to identify potential legal violations and multi-layered impacts on fundamental rights. On the basis of this analysis, we provide philosophical foundations for a logical account of AI risk analysis. In particular, we consider the basic building blocks for conceptually grasping the interaction between AI deployment scenarios and fundamental rights, incorporating into defeasible reasoning both definitional balancing and arguments about the contextual promotion or demotion of rights. This layered approach allows for more operative models of assessment of both high-risk AI systems and General Purpose AI (GPAI) systems, emphasizing the broader applicability of the latter. Future work aims to develop a formal model and effective algorithms to enhance AI risk assessment, bridging theoretical insights with practical applications to support responsible AI governance.
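The interaction between defeasible reasoning and definitional balancing described in the abstract can be made concrete with a small sketch. The following Python fragment is a minimal illustration under assumed names (`Rule`, `resolve`, a numeric proportionality `weight`, and an invented hiring scenario); it is not the formal model the chapter defers to future work, only one way conflicting defeasible rules could be compared by a proportionality score so that the stronger rule defeats the weaker one.

```python
# A minimal sketch, assuming invented names (Rule, resolve, weight) and an
# invented proportionality score; NOT the authors' formal model, which is
# left to future work. It only illustrates how priority-based defeat among
# conflicting defeasible rules could capture definitional balancing.
from dataclasses import dataclass


@dataclass
class Rule:
    name: str
    conclusion: str    # e.g. "deployment_permitted" / "deployment_prohibited"
    premises: set      # scenario facts required for the rule to apply
    weight: float      # hypothetical proportionality score in [0, 1]


def applicable(rules, facts):
    """Return the rules whose premises all hold in the deployment scenario."""
    return [r for r in rules if r.premises <= facts]


def resolve(rules, facts):
    """Definitional balancing as priority-based defeat: among conflicting
    applicable rules, the highest proportionality weight prevails; a tie
    leaves the conflict unresolved (no conclusion is drawn)."""
    fired = applicable(rules, facts)
    if not fired:
        return None
    best = max(fired, key=lambda r: r.weight)
    rivals = [r for r in fired if r.conclusion != best.conclusion]
    if rivals and max(r.weight for r in rivals) >= best.weight:
        return None  # competing rights balance out; further analysis needed
    return best.conclusion


# Hypothetical scenario: an AI system screening job applicants.
rules = [
    Rule("privacy", "deployment_prohibited",
         {"processes_sensitive_data"}, weight=0.7),
    Rule("business_freedom", "deployment_permitted",
         {"legitimate_business_purpose"}, weight=0.4),
    Rule("privacy_with_safeguards", "deployment_permitted",
         {"processes_sensitive_data", "adequate_safeguards"}, weight=0.8),
]
facts = {"processes_sensitive_data", "legitimate_business_purpose",
         "adequate_safeguards"}
print(resolve(rules, facts))  # -> deployment_permitted
```

The tie-breaking choice here (returning no conclusion when weights are equal) mirrors the idea that unresolved balancing calls for further, context-specific analysis rather than a default verdict.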
Related papers
- Alignment and Safety in Large Language Models: Safety Mechanisms, Training Paradigms, and Emerging Challenges [47.14342587731284]
This survey provides a comprehensive overview of alignment techniques, training protocols, and empirical findings in large language model (LLM) alignment. We analyze the development of alignment methods across diverse paradigms, characterizing the fundamental trade-offs between core alignment objectives. We discuss state-of-the-art techniques, including Direct Preference Optimization (DPO), Constitutional AI, brain-inspired methods, and alignment uncertainty quantification (AUQ).
arXiv Detail & Related papers (2025-07-25T20:52:58Z) - The Value of Disagreement in AI Design, Evaluation, and Alignment [0.0]
Disagreements are widespread across the design, evaluation, and alignment pipelines of AI systems. Standard practices in AI development often obscure or eliminate disagreement, resulting in an engineered homogenization. We develop a normative framework to guide practical reasoning about disagreement in the AI lifecycle.
arXiv Detail & Related papers (2025-05-12T17:22:30Z) - Towards Developing Ethical Reasoners: Integrating Probabilistic Reasoning and Decision-Making for Complex AI Systems [4.854297874710511]
A computational ethics framework is essential for AI and autonomous systems operating in complex, real-world environments. Existing approaches often lack the adaptability needed to integrate ethical principles into dynamic and ambiguous contexts. We outline the necessary ingredients for building a holistic, meta-level framework that combines intermediate representations, probabilistic reasoning, and knowledge representation.
arXiv Detail & Related papers (2025-02-28T17:25:11Z) - Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles [24.448749292993234]
The Helpful, Honest, and Harmless (HHH) principle is a framework for aligning AI systems with human values. We argue for an adaptive interpretation of the HHH principle and propose a reference framework for its adaptation to diverse scenarios. This work offers practical insights for improving AI alignment, ensuring that HHH principles remain both grounded and operationally effective in real-world AI deployment.
arXiv Detail & Related papers (2025-02-09T22:41:24Z) - The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA). It outlines the main building blocks of a model template for the FRIA, which can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z) - Probabilistic Analysis of Copyright Disputes and Generative AI Safety [0.0]
This paper presents a probabilistic approach to analyzing copyright infringement disputes. The usefulness of this approach is showcased through its application to the 'inverse ratio rule'.
arXiv Detail & Related papers (2024-10-01T08:05:19Z) - An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights considerations already underpin decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested on concrete case studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - 'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI [0.0]
This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development Framework (IAD) can be developed as a context analysis approach for AI.
arXiv Detail & Related papers (2023-03-24T14:01:00Z) - On the Need and Applicability of Causality for Fairness: A Unified Framework for AI Auditing and Legal Analysis [0.0]
This article explores the significance of causal reasoning in addressing algorithmic discrimination. By reviewing landmark cases and regulatory frameworks, we illustrate the challenges inherent in proving causal claims.
arXiv Detail & Related papers (2022-07-08T10:37:22Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI [0.0]
We propose the practical application of an enhanced well-being impact assessment framework for Autonomous and Intelligent Systems.
This process could enable a human-centered, algorithmically supported approach to understanding the impacts of AI systems.
arXiv Detail & Related papers (2020-07-29T13:26:05Z)