Sustainable AI Regulation
- URL: http://arxiv.org/abs/2306.00292v4
- Date: Wed, 6 Mar 2024 16:57:25 GMT
- Title: Sustainable AI Regulation
- Authors: Philipp Hacker
- Abstract summary: The ICT sector contributes up to 3.9 percent of global greenhouse gas emissions.
The carbon footprint and water consumption of AI, especially large-scale generative models like GPT-4, raise significant sustainability concerns.
The paper suggests a multi-faceted approach to achieve sustainable AI regulation.
- Score: 3.0821115746307663
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current proposals for AI regulation, in the EU and beyond, aim to spur AI
that is trustworthy (e.g., AI Act) and accountable (e.g., AI Liability). What is
missing, however, is a robust regulatory discourse and roadmap to make AI, and
technology more broadly, environmentally sustainable. This paper aims to take
first steps to fill this gap. The ICT sector contributes up to 3.9 percent of
global greenhouse gas (GHG) emissions, more than global air travel at 2.5
percent. The carbon footprint and water consumption of AI, especially
large-scale generative models like GPT-4, raise significant sustainability
concerns. The paper is the first to assess how current and proposed technology
regulations, including EU environmental law, the General Data Protection
Regulation (GDPR), and the AI Act, could be adjusted to better account for
environmental sustainability. The GDPR, for instance, could be interpreted to
limit certain individual rights like the right to erasure if these rights
significantly conflict with broader sustainability goals. In a second step, the
paper suggests a multi-faceted approach to achieve sustainable AI regulation.
It advocates for transparency mechanisms, such as disclosing the GHG footprint
of AI systems, as laid out in the proposed EU AI Act. However, sustainable AI
regulation must go beyond mere transparency. The paper proposes a regulatory
toolkit comprising co-regulation, sustainability-by-design principles,
restrictions on training data, and consumption caps, including integration into
the EU Emissions Trading Scheme. Finally, the paper argues that this regulatory
toolkit could serve as a blueprint for regulating other high-emission
technologies and infrastructures like blockchain, Metaverse applications, and
data centers. The framework aims to cohesively address the crucial dual
challenges of our era: digital transformation and climate change mitigation.
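As an illustration of the transparency mechanism discussed above (GHG-footprint disclosure), the following is a minimal sketch of the commonly used energy-based estimate. It is not taken from the paper; all figures and the function itself are hypothetical placeholders, shown only to make concrete what a disclosure obligation might standardize.
```python
# Minimal sketch of an operational GHG-footprint estimate for an AI training run.
# All numbers below are made-up placeholders, not values from the paper.

def training_co2e_kg(gpu_count: int,
                     avg_power_kw_per_gpu: float,
                     hours: float,
                     pue: float,
                     grid_co2e_kg_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions (kg) as
    energy consumed (kWh) x data-center overhead (PUE) x grid carbon intensity."""
    energy_kwh = gpu_count * avg_power_kw_per_gpu * hours
    return energy_kwh * pue * grid_co2e_kg_per_kwh

# Hypothetical example: 1,000 GPUs drawing 0.4 kW each for 30 days,
# a PUE of 1.2, and a grid intensity of 0.4 kg CO2e per kWh.
print(f"{training_co2e_kg(1000, 0.4, 30 * 24, 1.2, 0.4):,.0f} kg CO2e")
```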
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
Uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - AI, Climate, and Regulation: From Data Centers to the AI Act [2.874893537471256]
We aim to provide guidance on climate-related regulation for data centers and AI specifically.
We propose a specific interpretation of the AI Act that brings reporting on the previously unaddressed energy consumption of AI inference back into scope.
We argue for an interpretation of the AI Act that includes environmental concerns in the mandatory risk assessment.
arXiv Detail & Related papers (2024-10-09T08:43:53Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile, but also comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Integrating AI's Carbon Footprint into Risk Management Frameworks: Strategies and Tools for Sustainable Compliance in Banking Sector [0.0]
This paper examines the integration of AI's carbon footprint into the risk management frameworks (RMFs) of the banking sector.
Recent advancements in AI research, like the Open Mixture-of-Experts (OLMoE) framework, offer more efficient and dynamic AI models.
Using these technological examples, the paper outlines a structured approach for banks to identify, assess, and mitigate AI's carbon footprint.
arXiv Detail & Related papers (2024-09-15T23:09:27Z) - Securing the Future of GenAI: Policy and Technology [50.586585729683776]
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge the gap between GenAI policy and technology.
This paper summarizes the workshop discussions, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Recommendations for public action towards sustainable generative AI systems [0.0]
This paper presents the components of the environmental footprint of generative AI.
It highlights the massive CO2 emissions and water consumption associated with training large language models.
The paper also explores the factors and characteristics of models that have an influence on their environmental footprint.
arXiv Detail & Related papers (2024-01-04T08:55:53Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - AI Regulation in Europe: From the AI Act to Future Regulatory Challenges [3.0821115746307663]
It argues for a hybrid regulatory strategy that combines elements from both philosophies.
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z) - The European AI Liability Directives -- Critique of a Half-Hearted Approach and Lessons for the Future [0.0]
The European Commission advanced two proposals outlining the European approach to AI liability in September 2022.
The latter does not contain any individual rights of affected persons, and the former lacks specific, substantive rules on AI development and deployment.
Taken together, these acts may well trigger a Brussels Effect in AI regulation, with significant consequences for the US and beyond.
I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime.
arXiv Detail & Related papers (2022-11-25T09:08:11Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)