Misinformation by Omission: The Need for More Environmental Transparency in AI
- URL: http://arxiv.org/abs/2506.15572v1
- Date: Wed, 18 Jun 2025 15:49:22 GMT
- Title: Misinformation by Omission: The Need for More Environmental Transparency in AI
- Authors: Sasha Luccioni, Boris Gamazaychikov, Theo Alves da Costa, Emma Strubell,
- Abstract summary: We explore myths and misconceptions shaping public understanding of AI's environmental impacts. We discuss the importance of data transparency in clarifying misconceptions and mitigating these harms.
- Score: 9.456892974946884
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In recent years, Artificial Intelligence (AI) models have grown in size and complexity, driving greater demand for computational power and natural resources. In parallel to this trend, transparency around the costs and impacts of these models has decreased, meaning that the users of these technologies have little to no information about their resource demands and subsequent impacts on the environment. Despite this dearth of adequate data, escalating demand for figures quantifying AI's environmental impacts has led to numerous instances of misinformation evolving from inaccurate or de-contextualized best-effort estimates of greenhouse gas emissions. In this article, we explore pervasive myths and misconceptions shaping public understanding of AI's environmental impacts, tracing their origins and their spread in both the media and scientific publications. We discuss the importance of data transparency in clarifying misconceptions and mitigating these harms, and conclude with a set of recommendations for how AI developers and policymakers can leverage this information to mitigate negative impacts in the future.
Related papers
- Responsible Data Stewardship: Generative AI and the Digital Waste Problem [0.0]
Generative AI systems enable the creation of synthetic data at unprecedented levels across text, image, audio, and video modalities. Digital waste refers to stored data that consumes resources without serving a specific (and/or immediate) purpose. This paper introduces digital waste as an ethical imperative within (generative) AI development, positioning environmental sustainability as core to responsible innovation.
arXiv Detail & Related papers (2025-05-27T20:07:22Z) - Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools. We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics. Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z) - Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that the efforts aiming to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment. We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z) - From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses. We contend that a narrow focus on direct emissions misrepresents AI's true climate footprint, limiting the scope for meaningful interventions.
arXiv Detail & Related papers (2025-01-27T22:45:06Z) - Data and System Perspectives of Sustainable Artificial Intelligence [43.21672481390316]
Sustainable AI is a subfield of AI that aims to reduce environmental impact and achieve sustainability. In this article, we discuss current issues and opportunities, along with example solutions for addressing these issues.
arXiv Detail & Related papers (2025-01-13T17:04:23Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Towards A Comprehensive Assessment of AI's Environmental Impact [0.5982922468400899]
A recent surge of interest in machine learning has sparked a trend towards large-scale adoption of AI/ML.
There is a need for a framework that monitors the environmental impact and degradation from AI/ML throughout its lifecycle.
This study proposes a methodology to track environmental variables relating to the multifaceted impact of AI around datacenters using openly available energy data and globally acquired satellite observations.
arXiv Detail & Related papers (2024-05-22T21:19:35Z) - When AI Eats Itself: On the Caveats of AI Autophagy [18.641925577551557]
The AI autophagy phenomenon suggests a future where generative AI systems may increasingly consume their own outputs without discernment.
This study examines the existing literature, delving into the consequences of AI autophagy, analyzing the associated risks, and exploring strategies to mitigate its impact.
arXiv Detail & Related papers (2024-05-15T13:50:23Z) - Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including this list) and is not responsible for any consequences of its use.