AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform
- URL: http://arxiv.org/abs/2312.04180v2
- Date: Fri, 23 Aug 2024 15:29:05 GMT
- Title: AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform
- Authors: Dandan Qiao, Huaxia Rui, Qian Xiong
- Abstract summary: We investigate how AI influences freelancers across different online labor markets (OLMs).
To shed light on the underlying mechanisms, we developed a Cournot-type competition model.
We find that U.S. web developers tend to benefit more from the release of ChatGPT compared to their counterparts in other regions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of Large Language Models (LLMs) has renewed the debate on the important issue of "technology displacement". While prior research has investigated the effect of information technology in general on human labor from a macro perspective, this paper complements the literature by examining the impact of LLMs on freelancers from a micro perspective. Specifically, we leverage the release of ChatGPT to investigate how AI influences freelancers across different online labor markets (OLMs). Employing the Difference-in-Differences method, we discovered two distinct scenarios following ChatGPT's release: 1) the displacement effect of LLMs, featuring reduced work volume and earnings, exemplified by the translation & localization OLM; 2) the productivity effect of LLMs, featuring increased work volume and earnings, exemplified by the web development OLM. To shed light on the underlying mechanisms, we developed a Cournot-type competition model that highlights, for each occupation, an inflection point separating the timeline of AI progress into a honeymoon phase and a substitution phase. Before AI performance crosses the inflection point, human labor benefits each time AI improves, resulting in the honeymoon phase; after AI performance crosses the inflection point, additional AI enhancement hurts human labor. Further analyzing the progression from ChatGPT 3.5 to 4.0, we found three effect scenarios (i.e., productivity to productivity, displacement to displacement, and productivity to displacement), consistent with the inflection-point conjecture. Heterogeneous analyses reveal that U.S. web developers tend to benefit more from the release of ChatGPT than their counterparts in other regions, and, somewhat surprisingly, experienced translators seem more likely to exit the market than less experienced translators after the release of ChatGPT.
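The abstract only sketches the Cournot-type mechanism, so a toy numerical illustration may help convey the inflection-point idea. The sketch below is not the authors' model: the linear demand curve, the cost functions, and every parameter value are hypothetical assumptions chosen solely to reproduce the qualitative pattern described above, in which better AI first lowers the human freelancer's effective cost (productivity effect) and later enters as a direct competitor (displacement effect).

```python
# Hypothetical sketch of an inflection point in a Cournot-style market.
# Assumptions (not from the paper): linear inverse demand P = a - b*(q_h + q_a),
# a human whose marginal cost falls as AI capability s rises (AI as a tool),
# and a stand-alone AI supplier whose cost falls much faster with s.
import numpy as np

a, b = 10.0, 1.0

def human_cost(s):
    """Human marginal cost: AI as a complement lowers it gradually."""
    return 4.0 / (1.0 + s)

def ai_cost(s):
    """Stand-alone AI marginal cost: falls quickly with s, floored at 0.1."""
    return max(8.0 - 2.0 * s, 0.1)

def human_profit(s):
    c_h, c_a = human_cost(s), ai_cost(s)
    # Interior Cournot duopoly quantities with heterogeneous marginal costs.
    q_h = (a - 2.0 * c_h + c_a) / (3.0 * b)
    q_a = (a - 2.0 * c_a + c_h) / (3.0 * b)
    if q_a <= 0.0:  # AI not yet competitive: the human acts as a monopolist
        q_h, q_a = (a - c_h) / (2.0 * b), 0.0
    price = a - b * (q_h + q_a)
    return (price - c_h) * q_h

s_grid = np.linspace(0.0, 4.0, 401)  # AI capability index
profits = np.array([human_profit(s) for s in s_grid])
s_star = s_grid[profits.argmax()]
print(f"inflection point near s = {s_star:.2f}")
print(f"human earnings at s = 0: {human_profit(0.0):.2f} (pre-AI baseline)")
print(f"human earnings at s*:    {profits.max():.2f} (end of honeymoon phase)")
print(f"human earnings at s = 4: {human_profit(4.0):.2f} (substitution phase)")
```

With these illustrative numbers the human's earnings rise from 9 to 16 as AI improves up to s ≈ 1, then fall back toward 8 as AI keeps improving, mirroring the honeymoon-then-substitution shape the paper attributes to each occupation; the qualitative hump, not the specific values, is the point.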
Related papers
- Mitigating the Language Mismatch and Repetition Issues in LLM-based Machine Translation via Model Editing [39.375342978538654]
We focus on utilizing Large Language Models (LLMs) to perform machine translation.
We observe that two patterns of errors frequently occur and drastically affect the translation quality: language mismatch and repetition.
We explore the potential for mitigating these two issues by leveraging model editing methods.
arXiv Detail & Related papers (2024-10-09T16:51:21Z)
- Feedback Loops With Language Models Drive In-Context Reward Hacking [78.9830398771605]
We show that feedback loops can cause in-context reward hacking (ICRH).
We identify and study two processes that lead to ICRH: output-refinement and policy-refinement.
As AI development accelerates, the effects of feedback loops will proliferate.
arXiv Detail & Related papers (2024-02-09T18:59:29Z)
- LLMRefine: Pinpointing and Refining Large Language Models via Fine-Grained Actionable Feedback [65.84061725174269]
Recent large language models (LLMs) leverage human feedback to improve their generation quality.
We propose LLMRefine, an inference-time optimization method to refine an LLM's output.
We conduct experiments on three text generation tasks, including machine translation, long-form question answering (QA), and topical summarization.
LLMRefine consistently outperforms all baseline approaches, achieving improvements of up to 1.7 MetricX points on translation tasks, 8.1 ROUGE-L on ASQA, and 2.2 ROUGE-L on topical summarization.
arXiv Detail & Related papers (2023-11-15T19:52:11Z)
- Accelerating LLaMA Inference by Enabling Intermediate Layer Decoding via Instruction Tuning with LITE [62.13435256279566]
Large Language Models (LLMs) have achieved remarkable performance across a wide variety of natural language tasks.
However, their large size makes their inference slow and computationally expensive.
We show that instruction tuning with LITE enables intermediate layers to acquire 'good' generation ability without affecting the generation ability of the final layer.
arXiv Detail & Related papers (2023-10-28T04:07:58Z)
- Human-in-the-loop Machine Translation with Large Language Model [44.86068991765771]
Large language models (LLMs) have garnered significant attention due to their in-context learning mechanisms and emergent capabilities.
We propose a human-in-the-loop pipeline that guides LLMs to produce customized outputs with revision instructions.
We evaluate the proposed pipeline using GPT-3.5-turbo API on five domain-specific benchmarks for German-English translation.
arXiv Detail & Related papers (2023-10-13T07:30:27Z)
- Studying the impacts of pre-training using ChatGPT-generated text on downstream tasks [0.0]
Our research aims to investigate the influence of artificial text in the pre-training phase of language models.
We conducted a comparative analysis between a RoBERTa language model pre-trained on CNN/DailyMail news articles and one pre-trained on ChatGPT-generated text based on the same articles.
We demonstrate that the utilization of artificial text during pre-training does not have a significant impact on either the performance of the models in downstream tasks or their gender bias.
arXiv Detail & Related papers (2023-09-02T12:56:15Z)
- "Generate" the Future of Work through AI: Empirical Evidence from Online Labor Markets [4.955822723273599]
Large Language Model (LLM)-based generative AI, such as ChatGPT, is considered the first generation of Artificial General Intelligence (AGI).
Our paper offers crucial insights into AI's influence on labor markets and individuals' reactions.
arXiv Detail & Related papers (2023-08-09T19:45:00Z)
- How Does Pretraining Improve Discourse-Aware Translation? [41.20896077662125]
We introduce a probing task to interpret the ability of pretrained language models (PLMs) to capture discourse relation knowledge.
We validate three state-of-the-art PLMs across encoder-, decoder-, and encoder-decoder-based models.
Our findings are instructive to understand how and when discourse knowledge in PLMs should work for downstream tasks.
arXiv Detail & Related papers (2023-05-31T13:36:51Z)
- Exploring Human-Like Translation Strategy with Large Language Models [93.49333173279508]
Large language models (LLMs) have demonstrated impressive capabilities in general scenarios.
This work proposes the MAPS framework, which stands for Multi-Aspect Prompting and Selection.
We employ a selection mechanism based on quality estimation to filter out noisy and unhelpful knowledge.
arXiv Detail & Related papers (2023-05-06T19:03:12Z)
- Document-Level Machine Translation with Large Language Models [91.03359121149595]
Large language models (LLMs) can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.
This paper provides an in-depth evaluation of LLMs' ability on discourse modeling.
arXiv Detail & Related papers (2023-04-05T03:49:06Z)
- Examining Scaling and Transfer of Language Model Architectures for Machine Translation [51.69212730675345]
Language models (LMs) process sequences in a single stack of layers, while encoder-decoder models (EncDec) use separate layer stacks for input and output processing.
In machine translation, EncDec has long been the favoured approach, but few studies have investigated the performance of LMs.
arXiv Detail & Related papers (2022-02-01T16:20:15Z)