A five-layer framework for AI governance: integrating regulation, standards, and certification
- URL: http://arxiv.org/abs/2509.11332v1
- Date: Sun, 14 Sep 2025 16:19:08 GMT
- Title: A five-layer framework for AI governance: integrating regulation, standards, and certification
- Authors: Avinash Agarwal, Manisha J. Nene
- Abstract summary: The governance of artificial intelligence (AI) systems requires a structured approach that connects high-level regulatory principles with practical implementation. Existing frameworks lack clarity on how regulations translate into conformity mechanisms, leading to gaps in compliance and enforcement. A five-layer AI governance framework is proposed, spanning from broad regulatory mandates to specific standards, assessment methodologies, and certification processes.
- Score: 0.6875312133832078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: The governance of artificial intelligence (AI) systems requires a structured approach that connects high-level regulatory principles with practical implementation. Existing frameworks lack clarity on how regulations translate into conformity mechanisms, leading to gaps in compliance and enforcement. This paper addresses this critical gap in AI governance. Methodology/Approach: A five-layer AI governance framework is proposed, spanning from broad regulatory mandates to specific standards, assessment methodologies, and certification processes. By narrowing its scope through progressively focused layers, the framework provides a structured pathway to meet technical, regulatory, and ethical requirements. Its applicability is validated through two case studies on AI fairness and AI incident reporting. Findings: The case studies demonstrate the framework's ability to identify gaps in legal mandates, standardization, and implementation. It adapts to both global and region-specific AI governance needs, mapping regulatory mandates with practical applications to improve compliance and risk management. Practical Implications: By offering a clear and actionable roadmap, this work contributes to global AI governance by equipping policymakers, regulators, and industry stakeholders with a model to enhance compliance and risk management. Social Implications: The framework supports the development of policies that build public trust and promote the ethical use of AI for the benefit of society. Originality/Value: This study proposes a five-layer AI governance framework that bridges high-level regulatory mandates and implementation guidelines. Validated through case studies on AI fairness and incident reporting, it identifies gaps such as missing standardized assessment procedures and reporting mechanisms, providing a structured foundation for targeted governance measures.
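The abstract describes a progressively narrowing pathway from broad mandates down to certification. A minimal sketch of that structure as an ordered mapping is shown below; note that the layer names here are paraphrased guesses assembled from the abstract's wording (regulatory mandates, implementation guidelines, standards, assessment methodologies, certification processes), not the authors' actual taxonomy.

```python
# Illustrative sketch of a five-layer governance structure as an ordered list.
# Layer names are hypothetical paraphrases of the abstract, not the paper's taxonomy.
FIVE_LAYER_FRAMEWORK = [
    ("Layer 1", "High-level regulatory mandates (e.g., statutory AI acts)"),
    ("Layer 2", "Implementation guidelines interpreting those mandates"),
    ("Layer 3", "Specific technical and process standards"),
    ("Layer 4", "Assessment methodologies for testing conformance"),
    ("Layer 5", "Certification processes attesting to compliance"),
]

def trace_compliance_path(framework):
    """Return the progressively narrowing pathway from mandate to certificate."""
    return " -> ".join(name for name, _ in framework)
```

Representing the layers as an ordered sequence makes the paper's central claim explicit: each layer narrows the scope of the one above it, so a gap analysis can walk the path layer by layer.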
Related papers
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z) - Towards a Framework for Supporting the Ethical and Regulatory Certification of AI Systems [8.633165810707315]
The CERTAIN project aims to integrate regulatory compliance, ethical standards, and transparency into AI systems. We outline the methodological steps for building the core components of this framework. CERTAIN aims to advance regulatory compliance and to promote responsible AI innovation aligned with European standards.
arXiv Detail & Related papers (2025-09-30T08:54:02Z) - Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework. It is an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products [0.0]
This study adopts a bottom-up approach to explore how governance-relevant themes are expressed in user discourse. Drawing on over 100,000 user reviews of AI products from G2.com, we apply BERTopic to extract latent themes and identify those most semantically related to AI governance.
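The pipeline this abstract describes - extract latent topics, then keep those most semantically related to AI governance - can be sketched with a pure-Python stand-in. BERTopic itself clusters transformer embeddings, so the bag-of-words cosine similarity below is a deliberately simplified illustration, and the topic labels and seed query are invented, not taken from the study.

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def governance_related(topics,
                       seed="ai governance regulation compliance oversight policy",
                       threshold=0.2):
    """Keep topic labels whose similarity to a governance seed query clears a threshold."""
    seed_vec = Counter(seed.split())
    return [t for t in topics
            if cosine_sim(Counter(t.lower().split()), seed_vec) >= threshold]

# Hypothetical topic labels standing in for BERTopic output:
topics = [
    "data privacy compliance policy",
    "pricing plans subscription billing",
    "model oversight regulation audit",
]
```

In the real pipeline, the similarity would be computed in embedding space between topic representations and a governance concept, but the filter-by-semantic-relatedness step works the same way.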
arXiv Detail & Related papers (2025-05-30T01:33:21Z) - Toward Effective AI Governance: A Review of Principles [2.5411385112104448]
The aim of this study is to identify which frameworks, principles, mechanisms, and stakeholder roles are emphasized in secondary literature on AI governance. The most cited frameworks include the EU AI Act and NIST RMF; transparency and accountability are the most common principles.
arXiv Detail & Related papers (2025-05-29T13:07:45Z) - Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z) - The Role of Legal Frameworks in Shaping Ethical Artificial Intelligence Use in Corporate Governance [0.0]
This article examines the evolving role of legal frameworks in shaping ethical artificial intelligence (AI) use in corporate governance. It explores key legal and regulatory approaches aimed at promoting transparency, accountability, and fairness in corporate AI applications.
arXiv Detail & Related papers (2025-03-17T14:21:58Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - An Open Knowledge Graph-Based Approach for Mapping Concepts and Requirements between the EU AI Act and International Standards [1.9142148274342772]
The EU's AI Act will shift the focus of such organizations toward conformance with the technical requirements for regulatory compliance.
This paper offers a simple and repeatable mechanism for mapping the terms and requirements relevant to normative statements in regulations and standards.
arXiv Detail & Related papers (2024-08-21T18:21:09Z) - Resolving Ethics Trade-offs in Implementing Responsible AI [18.894725256708128]
We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex. None of the approaches is likely to be appropriate for all organisations, systems, or applications. We propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, and (iii) justification and documentation of trade-off decisions.
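The three-step framework above (identify a tension, weight the competing ethics aspects, document the trade-off decision) can be illustrated with a minimal weighted-scoring sketch. The aspect names, weights, and options here are hypothetical examples, not taken from the paper, which surveys several approaches of which simple weighting is only one.

```python
def resolve_tradeoff(options, weights):
    """Pick the option with the highest weighted ethics score and record why.

    options: {option_name: {aspect: score in 0..1}}  -- step (i), an identified tension
    weights: {aspect: priority weight}               -- step (ii), prioritisation
    Returns (best_option, per_option_scores)         -- step (iii), documented decision
    """
    def score(aspects):
        return sum(weights.get(a, 0.0) * v for a, v in aspects.items())

    best = max(options, key=lambda name: score(options[name]))
    justification = {name: round(score(aspects), 3)
                     for name, aspects in options.items()}
    return best, justification

# Hypothetical accuracy-vs-fairness tension between two candidate models:
options = {
    "model_a": {"accuracy": 0.9, "fairness": 0.6},
    "model_b": {"accuracy": 0.8, "fairness": 0.9},
}
weights = {"accuracy": 0.4, "fairness": 0.6}
```

Returning the per-option scores alongside the chosen option is what makes step (iii) auditable: the recorded weights and scores document why the trade-off was resolved the way it was.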
arXiv Detail & Related papers (2024-01-16T04:14:23Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.