Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
- URL: http://arxiv.org/abs/2409.14160v1
- Date: Sat, 21 Sep 2024 14:43:54 GMT
- Title: Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
- Authors: Gaël Varoquaux, Alexandra Sasha Luccioni, Meredith Whittaker
- Abstract summary: We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the growing attention and investment in recent AI approaches such as large language models, the narrative that the larger the AI system the more valuable, powerful, and interesting it is, is increasingly seen as common sense. But what is this assumption based on, and how are we measuring value, power, and performance? And what are the collateral consequences of this race to ever-increasing scale? Here, we scrutinize the current scaling trends and trade-offs across multiple axes and refute two common assumptions underlying the 'bigger-is-better' AI paradigm: 1) that improved performance is a product of increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power, which centralizes decision-making in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its applications throughout society.
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Divide-and-Conquer Dynamics in AI-Driven Disempowerment [9.204894568267013]
We construct a game-theoretic model of conflict to study the causes and consequences of infighting between those who prioritize current harms and future harms.
Our model also helps explain why throughout history, stakeholders sharing a common threat have found it advantageous to unite against it.
arXiv Detail & Related papers (2023-10-09T17:59:26Z) - Evaluating and Improving Value Judgments in AI: A Scenario-Based Study on Large Language Models' Depiction of Social Conventions [5.457150493905063]
We evaluate how contemporary AI services competitively meet user needs, then examine society's depiction as mirrored by Large Language Models.
We suggest a model of decision-making in value-conflicting scenarios which could be adopted for future machine value judgments.
arXiv Detail & Related papers (2023-10-04T08:42:02Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control [0.0]
The extent and scope of future AI capabilities remain a key uncertainty.
There are concerns over the extent of integration and oversight of opaque AI decision processes.
This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis.
arXiv Detail & Related papers (2022-11-06T15:46:02Z) - Compute and Energy Consumption Trends in Deep Learning Inference [67.32875669386488]
We study relevant models in the areas of computer vision and natural language processing.
For a sustained increase in performance, we see a much softer growth in energy consumption than previously anticipated.
arXiv Detail & Related papers (2021-09-12T09:40:18Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Towards a framework for understanding societal and ethical implications of Artificial Intelligence [2.28438857884398]
The objective of this paper is to identify the main societal and ethical challenges implied by a massive uptake of AI.
We have surveyed the literature for the most common challenges and classified them in seven groups: 1) Non-desired effects, 2) Liability, 3) Unknown consequences, 4) Relation people-robots, 5) Concentration of power and wealth, 6) Intentional bad uses, and 7) AI for weapons and warfare.
arXiv Detail & Related papers (2020-01-03T17:55:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.