Assessing High-Risk AI Systems under the EU AI Act: From Legal Requirements to Technical Verification
- URL: http://arxiv.org/abs/2512.13907v2
- Date: Mon, 22 Dec 2025 13:38:32 GMT
- Title: Assessing High-Risk AI Systems under the EU AI Act: From Legal Requirements to Technical Verification
- Authors: Alessio Buscemi, Tom Deckenbrunnen, Fahria Kabir, Kateryna Mishchenko, Nishat Mowla,
- Abstract summary: This paper presents a structured mapping that translates high-level AI Act requirements into concrete, implementable verification activities applicable across the AI lifecycle. The mapping is derived through a systematic process in which legal requirements are decomposed into operational sub-requirements and grounded in authoritative standards and recognised practices.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The implementation of the AI Act requires practical mechanisms to verify compliance with legal obligations, yet concrete and operational mappings from high-level requirements to verifiable assessment activities remain limited, contributing to uneven readiness across Member States. This paper presents a structured mapping that translates high-level AI Act requirements into concrete, implementable verification activities applicable across the AI lifecycle. The mapping is derived through a systematic process in which legal requirements are decomposed into operational sub-requirements and grounded in authoritative standards and recognised practices. From this basis, verification activities are identified and characterised along two dimensions: the type of verification performed and the lifecycle target to which it applies. By making explicit the link between regulatory intent and technical and organisational assurance practices, the proposed mapping reduces interpretive uncertainty and provides a reusable reference for consistent, technology-agnostic compliance verification under the AI Act.
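The abstract characterises each verification activity along two dimensions: the type of verification performed and the lifecycle target it applies to. A minimal sketch of how such a mapping could be represented in code is shown below; the enum values, field names, and the example entry are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a two-dimensional requirements-to-verification mapping.
# All enum values and the example entry are assumptions for illustration only.
from dataclasses import dataclass
from enum import Enum


class VerificationType(Enum):
    DOCUMENT_REVIEW = "document review"
    TECHNICAL_TEST = "technical test"
    PROCESS_AUDIT = "process audit"


class LifecycleTarget(Enum):
    DATA = "data"
    MODEL = "model"
    DEPLOYED_SYSTEM = "deployed system"


@dataclass
class VerificationActivity:
    sub_requirement: str        # operational sub-requirement decomposed from the Act
    activity: str               # concrete, implementable verification activity
    vtype: VerificationType     # dimension 1: type of verification performed
    target: LifecycleTarget     # dimension 2: lifecycle target it applies to


mapping = [
    VerificationActivity(
        sub_requirement="Training data is relevant and representative",
        activity="Statistical review of dataset coverage against intended use",
        vtype=VerificationType.TECHNICAL_TEST,
        target=LifecycleTarget.DATA,
    ),
]

# Group activities by lifecycle target to assemble an assessment plan.
by_target = {}
for act in mapping:
    by_target.setdefault(act.target, []).append(act)
```

Grouping by either dimension yields reusable, technology-agnostic assessment plans, which is the kind of consistency the paper argues such a mapping enables.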
Related papers
- LawThinker: A Deep Research Legal Agent in Dynamic Environments [51.782293183431676]
LawThinker is an autonomous legal research agent. It enforces verification as an atomic operation after every knowledge exploration step. LawThinker achieves a 24% improvement over direct reasoning.
arXiv Detail & Related papers (2026-02-12T15:19:11Z)
- Standards for trustworthy AI in the European Union: technical rationale, structural challenges, and an implementation path [0.0]
This white paper examines the technical foundations of European AI standardization under the AI Act. It explains how harmonized standards enable the presumption of conformity mechanism, describes the CEN/CENELEC standardization process, and analyzes why AI poses unique challenges.
arXiv Detail & Related papers (2026-01-21T11:58:47Z)
- AI Deployment Authorisation: A Global Standard for Machine-Readable Governance of High-Risk Artificial Intelligence [0.0]
This paper introduces the AI Deployment Authorisation Score (ADAS), a machine-readable regulatory framework that evaluates AI systems. ADAS produces a cryptographically verifiable deployment certificate that regulators, insurers, and infrastructure operators can consume as a license to operate.
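The abstract does not specify the certificate format, but the idea of a cryptographically verifiable, machine-readable deployment certificate can be sketched as follows; the field names and the HMAC-based signing scheme are assumptions for illustration, not the paper's actual design.

```python
# Hypothetical sketch of a machine-readable deployment certificate with a
# verifiable integrity tag. Fields and signing scheme are illustrative only.
import hashlib
import hmac
import json


def issue_certificate(system_id: str, adas_score: float, secret: bytes) -> dict:
    """Issue a certificate whose payload is bound to an HMAC-SHA256 tag."""
    payload = {"system_id": system_id, "adas_score": adas_score}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}


def verify_certificate(cert: dict, secret: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    body = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])
```

A real deployment would use asymmetric signatures so that verifiers need no shared secret; the symmetric HMAC here only keeps the sketch self-contained.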
arXiv Detail & Related papers (2026-01-11T18:14:20Z)
- Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives [0.9668407688201359]
Artificial Intelligence (AI) systems are increasingly deployed in legal contexts. The so-called "black box problem" undermines the legitimacy of automated decision-making. XAI has proposed a variety of methods to enhance transparency.
arXiv Detail & Related papers (2025-10-13T07:19:15Z)
- A five-layer framework for AI governance: integrating regulation, standards, and certification [0.6875312133832078]
The governance of artificial intelligence (AI) systems requires a structured approach that connects high-level regulatory principles with practical implementation. Existing frameworks lack clarity on how regulations translate into conformity mechanisms, leading to gaps in compliance and enforcement. A five-layer AI governance framework is proposed, spanning from broad regulatory mandates to specific standards, assessment methodologies, and certification processes.
arXiv Detail & Related papers (2025-09-14T16:19:08Z)
- Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework. It is an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z)
- GLARE: Agentic Reasoning for Legal Judgment Prediction [60.13483016810707]
Legal judgment prediction (LJP) has become increasingly important in the legal field. Existing large language models (LLMs) have significant problems of insufficient reasoning due to a lack of legal knowledge. We introduce GLARE, an agentic legal reasoning framework that dynamically acquires key legal knowledge by invoking different modules.
arXiv Detail & Related papers (2025-08-22T13:38:12Z)
- Explainable AI Systems Must Be Contestable: Here's How to Make It Happen [2.5875936082584623]
This paper presents the first rigorous formal definition of contestability in explainable AI. We introduce a modular framework of by-design and post-hoc mechanisms spanning human-centered interfaces, technical processes, and organizational architectures. Our work equips practitioners with the tools to embed genuine recourse and accountability into AI systems.
arXiv Detail & Related papers (2025-06-02T13:32:05Z)
- Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z)
- Mapping the Regulatory Learning Space for the EU AI Act [0.8987776881291145]
The EU AI Act represents the world's first transnational AI regulation with concrete enforcement measures. It builds on existing EU mechanisms for regulating health and safety of products but extends them to protect fundamental rights. We argue that this will lead to multiple uncertainties in the enforcement of the AI Act.
arXiv Detail & Related papers (2025-02-27T12:46:30Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA). It outlines the main building blocks of a model template for the FRIA, which can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act [40.233017376716305]
The EU's Artificial Intelligence Act (AI Act) is a significant step towards responsible AI development. It lacks clear technical interpretation, making it difficult to assess models' compliance. This work presents COMPL-AI, a comprehensive framework consisting of the first technical interpretation of the Act.
arXiv Detail & Related papers (2024-10-10T14:23:51Z)
- RIRAG: Regulatory Information Retrieval and Answer Generation [51.998738311700095]
We introduce a task of generating question-passage pairs, where questions are automatically created and paired with relevant regulatory passages. We create the ObliQA dataset, containing 27,869 questions derived from the collection of Abu Dhabi Global Markets (ADGM) financial regulation documents. We design a baseline Regulatory Information Retrieval and Answer Generation (RIRAG) system and evaluate it with RePASs, a novel evaluation metric.
arXiv Detail & Related papers (2024-09-09T14:44:19Z) - Beyond One-Time Validation: A Framework for Adaptive Validation of Prognostic and Diagnostic AI-based Medical Devices [55.319842359034546]
Existing approaches often fall short in addressing the complexity of practically deploying these devices.
The presented framework emphasizes the importance of repeating validation and fine-tuning during deployment.
It is positioned within the current US and EU regulatory landscapes.
arXiv Detail & Related papers (2024-09-07T11:13:52Z)
- An Open Knowledge Graph-Based Approach for Mapping Concepts and Requirements between the EU AI Act and International Standards [1.9142148274342772]
The EU's AI Act will shift the focus of such organizations toward conformance with the technical requirements for regulatory compliance.
This paper offers a simple and repeatable mechanism for mapping the terms and requirements relevant to normative statements in regulations and standards.
arXiv Detail & Related papers (2024-08-21T18:21:09Z)
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- Towards an Enforceable GDPR Specification [49.1574468325115]
Privacy by Design (PbD) is prescribed by modern privacy regulations such as the EU's GDPR.
One emerging technique to realize PbD is runtime enforcement (RE).
We present a set of requirements and an iterative methodology for creating formal specifications of legal provisions.
arXiv Detail & Related papers (2024-02-27T09:38:51Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.