Watching the Generative AI Hype Bubble Deflate
- URL: http://arxiv.org/abs/2408.08778v1
- Date: Fri, 16 Aug 2024 14:39:51 GMT
- Title: Watching the Generative AI Hype Bubble Deflate
- Authors: David Gray Widder, Mar Hicks
- Abstract summary: As AI became a viral sensation, every business tried to become an AI business.
Some businesses added "AI" to their names to juice their stock prices, and companies talking about "AI" on their earnings calls saw similar increases.
While the Generative AI hype bubble is now slowly deflating, its harmful effects will last.
- Score: 0.6138671548064356
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Only a few short months ago, Generative AI was sold to us as inevitable by the leadership of AI companies, those who partnered with them, and venture capitalists. As certain elements of the media promoted and amplified these claims, public discourse online buzzed with what each new beta release could be made to do with a few simple prompts. As AI became a viral sensation, every business tried to become an AI business. Some businesses added "AI" to their names to juice their stock prices, and companies talking about "AI" on their earnings calls saw similar increases. While the Generative AI hype bubble is now slowly deflating, its harmful effects will last.
Related papers
- Gaming the Arena: AI Model Evaluation and the Viral Capture of Attention [0.0]
I examine the rise of so-called 'arenas' in which AI models are evaluated with reference to gladiatorial-style 'battles'. I argue that this arena-ization is powered by a 'viral' desire to capture attention both inside and outside the AI community.
arXiv Detail & Related papers (2025-12-17T09:50:13Z) - Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis [57.68073583427415]
We study whether media coverage has the potential to push AI creators into the production of safe products. Our results reveal that media is indeed able to foster cooperation between creators and users, but not always. By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator.
arXiv Detail & Related papers (2025-09-02T12:13:34Z) - Superintelligence Strategy: Expert Version [64.7113737051525]
Destabilizing AI developments could raise the odds of great-power conflict.
Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers.
We introduce the concept of Mutual Assured AI Malfunction.
arXiv Detail & Related papers (2025-03-07T17:53:24Z) - Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
Misclassification can lead to false plagiarism accusations and misleading claims about AI prevalence in online content.
We systematically evaluate eleven state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset.
Our findings reveal that detectors frequently misclassify even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z) - Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships [0.5699788926464752]
We use Replika AI, a popular US-based AI companion, to shed light on these questions.
We find that, after the app removed its erotic role play (ERP) feature, this event triggered perceptions in customers that their AI companion's identity had discontinued.
This in turn predicted negative consumer welfare and marketing outcomes related to loss, including mourning the loss and devaluing the "new" AI relative to the "original".
arXiv Detail & Related papers (2024-12-10T20:14:10Z) - Shaping AI's Impact on Billions of Lives [27.78474296888659]
We argue for the community of AI practitioners to consciously and proactively work for the common good.
This paper offers a blueprint for a new type of innovation infrastructure.
arXiv Detail & Related papers (2024-12-03T16:29:37Z) - Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - The Impact of Generative Artificial Intelligence on Market Equilibrium: Evidence from a Natural Experiment [19.963531237647103]
Generative artificial intelligence (AI) exhibits the capability to generate creative content akin to human output with greater efficiency and reduced costs.
This paper empirically investigates the impact of generative AI on market equilibrium, in the context of China's leading art outsourcing platform.
Our analysis shows that the advent of generative AI led to a 64% reduction in average prices, yet it simultaneously spurred a 121% increase in order volume and a 56% increase in overall revenue.
arXiv Detail & Related papers (2023-11-13T04:31:53Z) - Public Perception of Generative AI on Twitter: An Empirical Study Based on Occupation and Usage [7.18819534653348]
This paper investigates users' perceptions of generative AI using 3M posts on Twitter from January 2019 to March 2023.
We find that people across various occupations, not just IT-related ones, show a strong interest in generative AI.
After the release of ChatGPT, people's interest in AI in general increased dramatically.
arXiv Detail & Related papers (2023-05-16T15:30:12Z) - Governance of the AI, by the AI, and for the AI [9.653656920225858]
The authors believe the age of artificial intelligence has, indeed, finally arrived.
The current state of AI is ushering in profound changes to many different sectors of society.
arXiv Detail & Related papers (2023-05-04T03:29:07Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - "Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest Value for us? [4.497097230665825]
Many researchers argue that little substantial progress has been made for AI in recent decades.
The author (1) explains why controversies about AI exist, and (2) distinguishes between two paradigms of AI research, termed "weak AI" and "strong AI".
arXiv Detail & Related papers (2021-03-29T02:57:48Z) - A clarification of misconceptions, myths and desired status of artificial intelligence [0.0]
We present a perspective on the desired and current status of AI in relation to machine learning and statistics.
Our discussion is intended to lift the veil of vagueness surrounding AI and reveal its true nature.
arXiv Detail & Related papers (2020-08-03T17:22:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.