The (Short-Term) Effects of Large Language Models on Unemployment and Earnings
- URL: http://arxiv.org/abs/2509.15510v1
- Date: Fri, 19 Sep 2025 01:20:28 GMT
- Title: The (Short-Term) Effects of Large Language Models on Unemployment and Earnings
- Authors: Danqing Chen, Carina Kane, Austin Kozlowski, Nadav Kunievsky, James A. Evans
- Abstract summary: Large Language Models have spread rapidly since the release of ChatGPT in late 2022, accompanied by claims of major productivity gains but also concerns about job displacement. This paper examines the short-run labor market effects of LLM adoption by comparing earnings and unemployment across occupations with differing levels of exposure to these technologies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models have spread rapidly since the release of ChatGPT in late 2022, accompanied by claims of major productivity gains but also concerns about job displacement. This paper examines the short-run labor market effects of LLM adoption by comparing earnings and unemployment across occupations with differing levels of exposure to these technologies. Using a Synthetic Difference in Differences approach, we estimate the impact of LLM exposure on earnings and unemployment. Our findings show that workers in highly exposed occupations experienced earnings increases following ChatGPT's introduction, while unemployment rates remained unchanged. These results suggest that initial labor market adjustments to LLMs operate primarily through earnings rather than worker reallocation.
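The estimation strategy can be illustrated with a minimal plain difference-in-differences sketch; the synthetic variant used in the paper additionally learns unit and time weights so that the weighted control occupations track the exposed group's pre-period trend. All numbers and occupation groupings below are hypothetical:

```python
def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Hypothetical mean hourly earnings by occupation group and period.
# "Exposed" = occupations with high LLM exposure; "post" = after ChatGPT's release.
pre_exposed  = [52.0, 48.0, 55.0]
post_exposed = [55.0, 51.0, 58.5]
pre_control  = [40.0, 44.0, 42.0]
post_control = [41.0, 45.0, 42.5]

# Plain difference-in-differences: earnings change among exposed occupations
# minus earnings change among control occupations.
did = (mean(post_exposed) - mean(pre_exposed)) - \
      (mean(post_control) - mean(pre_control))
print(f"DiD estimate of the earnings effect: {did:.2f}")  # prints 2.33
```

Synthetic DiD replaces the unweighted control average with a weighted combination of control units chosen to match the exposed group's pre-period trajectory, which strengthens the comparison when no single control occupation moves in parallel with the treated ones.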
Related papers
- Can Online GenAI Discussion Serve as Bellwether for Labor Market Shifts?
This paper examines whether online discussions about Large Language Models can function as early indicators of labor market shifts. We employ four distinct analytical approaches to identify the domains and timeframes in which public discourse serves as a leading signal for employment changes. Our findings reveal that discussion intensity predicts employment changes 1-7 months in advance across multiple indicators, including job postings, net hiring rates, tenure patterns, and unemployment duration.
arXiv Detail & Related papers (2025-11-20T04:18:25Z)
- How AI Forecasts AI Jobs: Benchmarking LLM Predictions of Labor Market Changes
This paper introduces a benchmark for evaluating how well large language models (LLMs) can anticipate changes in job demand. Our benchmark combines two datasets: a high-frequency index of sector-level job postings in the United States, and a global dataset of projected occupational changes due to AI adoption. Results show that structured task prompts consistently improve forecast stability, while persona prompts offer advantages on short-term trends.
arXiv Detail & Related papers (2025-10-27T14:08:27Z)
- The Other Side of the Coin: Exploring Fairness in Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) by retrieving relevant documents from external knowledge sources. We propose two approaches, FairFT and FairFilter, to mitigate the fairness issues introduced by RAG for small-scale LLMs.
arXiv Detail & Related papers (2025-04-11T10:17:10Z)
- Why Does the Effective Context Length of LLMs Fall Short?
In this work, we introduce ShifTed Rotray position embeddING (STRING).
STRING shifts well-trained positions to overwrite the original ineffective positions during inference, enhancing performance within their existing training lengths.
Experimental results show that STRING dramatically improves the performance of the latest large-scale models.
arXiv Detail & Related papers (2024-10-24T13:51:50Z)
- Transforming Scholarly Landscapes: Influence of Large Language Models on Academic Fields beyond Computer Science
Large Language Models (LLMs) have ushered in a transformative era in Natural Language Processing (NLP).
This work empirically examines the influence and use of LLMs in fields beyond NLP.
arXiv Detail & Related papers (2024-09-29T01:32:35Z)
- From Pre-training Corpora to Large Language Models: What Factors Influence LLM Performance in Causal Discovery Tasks?
This study explores the factors influencing the performance of Large Language Models (LLMs) in causal discovery tasks.
A higher frequency of causal mentions correlates with better model performance, suggesting that extensive exposure to causal information during training enhances the models' causal discovery capabilities.
arXiv Detail & Related papers (2024-07-29T01:45:05Z)
- Evaluating Interventional Reasoning Capabilities of Large Language Models
Large language models (LLMs) are used to automate decision-making tasks. In this paper, we evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention. We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types. These benchmarks allow us to isolate the ability of LLMs to accurately predict changes resulting from an intervention, as distinct from their ability to memorize facts or find other shortcuts.
arXiv Detail & Related papers (2024-04-08T14:15:56Z)
- The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition
In-context Learning (ICL) has emerged as a powerful paradigm for performing natural language tasks with Large Language Models (LLMs).
We show that LLMs have strong yet inconsistent priors in emotion recognition that ossify their predictions.
Our results suggest that caution is needed when using ICL with larger LLMs for affect-centered tasks outside their pre-training domain.
arXiv Detail & Related papers (2024-03-25T19:07:32Z)
- Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue
We evaluate the side effects of model editing on large language models (LLMs).
Our analysis reveals that the side effects are caused by model editing altering the original model weights excessively.
To mitigate this, a method named RECT is proposed to regularize the weight updates applied during editing.
arXiv Detail & Related papers (2024-01-09T18:03:15Z)
- AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform
We investigate how AI influences freelancers across different online labor markets (OLMs).
To shed light on the underlying mechanisms, we developed a Cournot-type competition model.
We find that U.S. web developers tend to benefit more from the release of ChatGPT compared to their counterparts in other regions.
arXiv Detail & Related papers (2023-12-07T10:06:34Z)
- Large Language Models at Work in China's Labor Market
This paper explores the potential impacts of large language models (LLMs) on the Chinese labor market. The results indicate a positive correlation between occupational exposure and both wage levels and experience premiums at the occupation level. We then aggregate occupational exposure at the industry level to obtain industrial exposure scores.
arXiv Detail & Related papers (2023-08-17T04:20:36Z)
- "Generate" the Future of Work through AI: Empirical Evidence from Online Labor Markets
Large Language Model (LLM)-based generative AI systems, such as ChatGPT, demonstrate zero-shot learning capabilities across a wide range of downstream tasks. These systems are poised to reshape labor market dynamics. However, predicting their precise impact is challenging, given AI's simultaneous effects on both demand and supply.
arXiv Detail & Related papers (2023-08-09T19:45:00Z)
- GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
We investigate the potential implications of large language models (LLMs) on the U.S. labor market.
Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected.
arXiv Detail & Related papers (2023-03-17T17:15:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.