The unsuitability of existing regulations to reach sustainable AI
- URL: http://arxiv.org/abs/2601.04958v1
- Date: Thu, 08 Jan 2026 14:02:51 GMT
- Title: The unsuitability of existing regulations to reach sustainable AI
- Authors: Thomas Le Goff
- Abstract summary: We argue that, despite incremental progress, current approaches remain ill-suited to correcting the market failures underpinning AI-related energy use, water consumption, and material demand. The analysis situates these regulatory gaps within a wider ecosystem of academic research, civil society advocacy, standard-setting, and industry initiatives.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper examines the European Union's emerging regulatory landscape - focusing on the AI Act, corporate sustainability reporting and due diligence regimes (CSRD and CSDDD), and data center regulation - to assess whether it can effectively govern AI's environmental footprint. We argue that, despite incremental progress, current approaches remain ill-suited to correcting the market failures underpinning AI-related energy use, water consumption, and material demand. Key shortcomings include narrow disclosure requirements, excessive reliance on voluntary standards, weak enforcement mechanisms, and a structural disconnect between AI-specific impacts and broader sustainability laws. The analysis situates these regulatory gaps within a wider ecosystem of academic research, civil society advocacy, standard-setting, and industry initiatives, highlighting risks of regulatory capture and greenwashing. Building on this diagnosis, the paper advances strategic recommendations for the COP30 Action Agenda, calling for binding transparency obligations, harmonized international standards for lifecycle assessment, stricter governance of data center expansion, and meaningful public participation in AI infrastructure decisions.
Related papers
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z)
- Benchmarking is Broken -- Don't Let AI be its Own Judge
We argue that the current laissez-faire approach to evaluating AI is unsustainable. We introduce PeerBench, a community-governed, proctored evaluation blueprint. Our goal is to pave the way for evaluations that can restore integrity and deliver genuinely trustworthy measures of AI progress.
arXiv Detail & Related papers (2025-10-08T21:41:37Z)
- Towards a Framework for Supporting the Ethical and Regulatory Certification of AI Systems
The CERTAIN project aims to integrate regulatory compliance, ethical standards, and transparency into AI systems. We outline the methodological steps for building the core components of this framework. CERTAIN aims to advance regulatory compliance and to promote responsible AI innovation aligned with European standards.
arXiv Detail & Related papers (2025-09-30T08:54:02Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Watermarking Without Standards Is Not AI Governance
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z)
- AIJIM: A Scalable Model for Real-Time AI in Environmental Journalism
AIJIM is a framework for integrating real-time AI into environmental journalism. It was validated in a 2024 pilot on the island of Mallorca, where it achieved 85.4% detection accuracy and 89.7% agreement with expert annotations.
arXiv Detail & Related papers (2025-03-19T19:00:24Z)
- Regulating AI in Financial Services: Legal Frameworks and Compliance Challenges
This article examines the evolving landscape of artificial intelligence (AI) regulation in financial services. It highlights how AI-driven processes, from fraud detection to algorithmic trading, offer efficiency gains yet introduce significant risks. The study compares regulatory approaches across major jurisdictions such as the European Union, United States, and United Kingdom.
arXiv Detail & Related papers (2025-03-17T14:29:09Z)
- The Role of Legal Frameworks in Shaping Ethical Artificial Intelligence Use in Corporate Governance
This article examines the evolving role of legal frameworks in shaping the ethical use of artificial intelligence (AI) in corporate governance. It explores key legal and regulatory approaches aimed at promoting transparency, accountability, and fairness in corporate AI applications.
arXiv Detail & Related papers (2025-03-17T14:21:58Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Responsible AI Governance: A Response to UN Interim Report on Governing AI for Humanity
The report emphasizes the transformative potential of AI in achieving the Sustainable Development Goals. It acknowledges the need for robust governance to mitigate associated risks. The report concludes with actionable principles for fostering responsible AI governance.
arXiv Detail & Related papers (2024-11-29T18:57:24Z)
- Developing and Deploying Industry Standards for Artificial Intelligence in Education (AIED): Challenges, Strategies, and Future Directions
The adoption of Artificial Intelligence in Education (AIED) holds the promise of revolutionizing educational practices, but the lack of standardized practices in the development and deployment of AIED solutions has led to fragmented ecosystems. This article aims to address the critical need to develop and implement industry standards in AIED.
arXiv Detail & Related papers (2024-03-13T22:38:08Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical importance of addressing bias in the development of a corporate culture. This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias. In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seem to have reached a critical stage, with some experts calling for a moratorium on the training of AI systems more powerful than GPT-4. This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.