Standards for trustworthy AI in the European Union: technical rationale, structural challenges, and an implementation path
- URL: http://arxiv.org/abs/2602.00078v1
- Date: Wed, 21 Jan 2026 11:58:47 GMT
- Title: Standards for trustworthy AI in the European Union: technical rationale, structural challenges, and an implementation path
- Authors: Piercosma Bisconti, Marcello Galisai
- Abstract summary: This white paper examines the technical foundations of European AI standardization under the AI Act. It explains how harmonized standards enable the presumption of conformity mechanism, describes the CEN/CENELEC standardization process, and analyzes why AI poses unique challenges.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This white paper examines the technical foundations of European AI standardization under the AI Act. It explains how harmonized standards enable the presumption of conformity mechanism, describes the CEN/CENELEC standardization process, and analyzes why AI poses unique standardization challenges including stochastic behavior, data dependencies, immature evaluation practices, and lifecycle dynamics. The paper argues that AI systems are typically components within larger sociotechnical systems, requiring a layered approach where horizontal standards define process obligations and evidence structures while sectoral profiles specify domain-specific thresholds and acceptance criteria. It proposes a workable scheme based on risk management, reproducible technical checks redefined as stability of measured properties, structured documentation, comprehensive logging, and assurance cases that evolve over the system lifecycle. The paper demonstrates that despite methodological difficulties, technical standards remain essential for translating legal obligations into auditable engineering practice and enabling scalable conformity assessment across providers, assessors, and enforcement authorities.
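The abstract's idea of reproducible technical checks "redefined as stability of measured properties" can be sketched as a repeated-measurement test. This is a minimal illustration, not the paper's method: the function name, number of runs, and tolerance below are all illustrative assumptions.

```python
import statistics

def stable_within_tolerance(measure, runs=5, rel_tol=0.02):
    """Repeat a (possibly stochastic) measurement and check that its
    spread stays within a relative tolerance of the mean -- one minimal
    reading of 'reproducibility as stability of measured properties'."""
    values = [measure() for _ in range(runs)]
    mean = statistics.mean(values)
    spread = max(values) - min(values)
    return spread <= rel_tol * abs(mean), mean, spread

# Example with a deterministic stand-in for a model-accuracy measurement;
# a real check would rerun an evaluation pipeline with fixed inputs.
ok, mean, spread = stable_within_tolerance(lambda: 0.91, runs=3)
```

The point of such a check is that "passing" is defined over the distribution of repeated measurements rather than a single run, which matches the abstract's framing of stochastic system behavior.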
Related papers
- Lost in Vagueness: Towards Context-Sensitive Standards for Robustness Assessment under the EU AI Act [2.740981829798319]
Robustness is a key requirement for high-risk AI systems under the EU Artificial Intelligence Act (AI Act). This paper investigates what it means for AI systems to be robust and illustrates the need for context-sensitive standardisation.
arXiv Detail & Related papers (2025-11-19T17:06:36Z) - Understanding AI Trustworthiness: A Scoping Review of AIES & FAccT Articles [41.419459280691605]
Trustworthy AI serves as a foundational pillar for two major AI ethics conferences: AIES and FAccT. This scoping review aims to examine how the AIES and FAccT communities conceptualize, measure, and validate AI trustworthiness.
arXiv Detail & Related papers (2025-10-24T09:40:38Z) - Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework. It is an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Explainable AI Systems Must Be Contestable: Here's How to Make It Happen [2.5875936082584623]
This paper presents the first rigorous formal definition of contestability in explainable AI. We introduce a modular framework of by-design and post-hoc mechanisms spanning human-centered interfaces, technical processes, and organizational architectures. Our work equips practitioners with the tools to embed genuine recourse and accountability into AI systems.
arXiv Detail & Related papers (2025-06-02T13:32:05Z) - Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act [40.233017376716305]
The EU's Artificial Intelligence Act (AI Act) is a significant step towards responsible AI development. It lacks clear technical interpretation, making it difficult to assess models' compliance. This work presents COMPL-AI, a comprehensive framework consisting of the first technical interpretation of the Act.
arXiv Detail & Related papers (2024-10-10T14:23:51Z) - An Open Knowledge Graph-Based Approach for Mapping Concepts and Requirements between the EU AI Act and International Standards [1.9142148274342772]
The EU's AI Act will shift the focus of organizations toward conformance with the technical requirements for regulatory compliance.
This paper offers a simple and repeatable mechanism for mapping the terms and requirements relevant to normative statements in regulations and standards.
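The mapping mechanism described above can be pictured as a small knowledge graph of subject-predicate-object triples linking regulatory terms to standard clauses. This sketch is hypothetical and does not reproduce the paper's ontology; the specific term labels and clause names are illustrative only.

```python
# Hypothetical triples linking AI Act terms to standard clauses.
# The labels below are illustrative, not an authoritative mapping.
triples = [
    ("risk management system (AI Act Art. 9)", "maps_to",
     "ISO/IEC 23894 risk management clause"),
    ("record-keeping (AI Act Art. 12)", "maps_to",
     "logging and traceability clause"),
]

def requirements_for(term, graph):
    """Return every standard clause linked to a regulatory term
    by a 'maps_to' relation."""
    return [obj for subj, pred, obj in graph
            if subj == term and pred == "maps_to"]

clauses = requirements_for("record-keeping (AI Act Art. 12)", triples)
```

In a full implementation these triples would live in an RDF store and be queried with SPARQL, but the repeatable part of the mechanism is exactly this: a declared vocabulary of terms plus explicit, inspectable links between normative statements.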
arXiv Detail & Related papers (2024-08-21T18:21:09Z) - No Trust without regulation! [0.0]
The explosion in performance of Machine Learning (ML) and the potential of its applications are encouraging us to consider its use in industrial systems.
However, it still leaves the issue of safety, and its corollary of regulation and standards, too much to one side.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
arXiv Detail & Related papers (2023-09-27T09:08:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.