AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online
Labor Platform
- URL: http://arxiv.org/abs/2312.04180v1
- Date: Thu, 7 Dec 2023 10:06:34 GMT
- Title: AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online
Labor Platform
- Authors: Dandan Qiao, Huaxia Rui, and Qian Xiong
- Abstract summary: We examine the performance of a statistical AI in a human task through the lens of four factors.
We develop a simple economic model of competition to show the existence of an inflection point for each occupation.
To offer empirical evidence, we first argue that AI performance has passed the inflection point for the occupation of translation but not for the occupation of web development.
- Score: 0.13124513975412255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) refers to the ability of machines or software to
mimic or even surpass human intelligence in a given cognitive task. While
humans learn by both induction and deduction, the success of current AI is
rooted in induction, relying on its ability to detect statistical regularities
in task input -- an ability learnt from a vast amount of training data using
enormous computation resources. We examine the performance of such a
statistical AI in a human task through the lens of four factors, including task
learnability, statistical resource, computation resource, and learning
techniques, and then propose a three-phase visual framework to understand the
evolving relation between AI and jobs. Based on this conceptual framework, we
develop a simple economic model of competition to show the existence of an
inflection point for each occupation. Before AI performance crosses the
inflection point, human workers always benefit from an improvement in AI
performance, but after the inflection point, human workers become worse off
whenever such an improvement occurs. To offer empirical evidence, we first
argue that AI performance has passed the inflection point for the occupation of
translation but not for the occupation of web development. We then study how
the launch of ChatGPT, which led to a significant improvement in AI performance
on many tasks, has affected workers in these two occupations on a large online
labor platform. Consistent with the inflection point conjecture, we find that
translators are negatively affected by the shock both in terms of the number of
accepted jobs and the earnings from those jobs, while web developers are
positively affected by the very same shock. Given AI's potentially large
disruption of employment, more studies of more occupations, using data from
different platforms, are urgently needed.
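
The abstract states the competition model only in words. The sketch below illustrates the single-peaked logic numerically; the functional forms, the parameters alpha and beta, and the 0-to-1 performance scale are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def human_earnings(q, alpha=6.0, beta=2.0):
    """Toy single-peaked earnings curve for one occupation.

    Assumptions (not from the paper):
      q     : AI performance, scaled to [0, 1]
      alpha : demand expansion (better AI lowers cost, so the market grows)
      beta  : substitution (better AI takes a growing share of the work)
    """
    demand = 1.0 + alpha * q           # total demand for the task
    human_share = np.exp(-beta * q)    # fraction of work still done by humans
    return demand * human_share        # human earnings = demand * share

q = np.linspace(0.0, 1.0, 1001)
earnings = human_earnings(q)
q_star = q[np.argmax(earnings)]        # the occupation's inflection point
print(f"earnings rise until q* = {q_star:.3f}, then fall")
# In this toy model, dE/dq = 0 at q* = (alpha - beta) / (alpha * beta) = 1/3:
# before q*, demand expansion dominates and AI gains help workers; after q*,
# substitution dominates and the same gains hurt them.
```

In the paper's terms, translation would be an occupation whose AI performance has already crossed its q*, while web development would still sit to the left of its own.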
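The abstract reports that translators lost accepted jobs and earnings after the ChatGPT launch while web developers gained, but it does not spell out the estimator. The sketch below shows one plausible shape of such a comparison, a simple difference-in-differences on invented numbers; the column names and every value are hypothetical.

```python
import pandas as pd

# Hypothetical worker-period panel; every number is invented for
# illustration and is NOT the paper's data.
df = pd.DataFrame({
    "occupation":   ["translation"] * 4 + ["web_development"] * 4,
    "post_chatgpt": [0, 0, 1, 1] * 2,      # 0 = before launch, 1 = after
    "earnings":     [100, 110, 80, 85,     # translators: lower after the shock
                     100, 105, 120, 125],  # web developers: higher after
})

# Mean earnings per occupation, before vs. after the shock
means = df.groupby(["occupation", "post_chatgpt"])["earnings"].mean().unstack()
means["change"] = means[1] - means[0]
print(means)

# Difference-in-differences: translators' change relative to web developers'
did = means.loc["translation", "change"] - means.loc["web_development", "change"]
print(f"translation vs. web development around the shock: {did:+.1f}")
```

A real analysis would add worker fixed effects, time controls, and the accepted-jobs outcome; this only illustrates the structure of the comparison.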
Related papers
- An Empirical Investigation of Gender Stereotype Representation in Large Language Models: The Italian Case [0.41942958779358674]
This study examines how Large Language Models shape responses to ungendered prompts, contributing to biased outputs. The results highlight how content generated by LLMs can perpetuate stereotypes. The presence of bias in AI-generated text can have significant implications in many fields, such as workplaces or job selection.
arXiv Detail & Related papers (2025-07-25T10:57:29Z)
- Mind the Gap! Choice Independence in Using Multilingual LLMs for Persuasive Co-Writing Tasks in Different Languages [51.96666324242191]
We analyze whether user utilization of novel writing assistants in a charity advertisement writing task is affected by the AI's performance in a second language.
We quantify the extent to which these patterns translate into the persuasiveness of generated charity advertisements.
arXiv Detail & Related papers (2025-02-13T17:49:30Z)
- LLMs are Imperfect, Then What? An Empirical Study on LLM Failures in Software Engineering [38.20696656193963]
We conducted an observational study with 22 participants using ChatGPT as a coding assistant in a non-trivial software engineering task.
We identified the cases where ChatGPT failed, their root causes, and the corresponding mitigation solutions used by users.
arXiv Detail & Related papers (2024-11-15T03:29:41Z)
- Mitigating the Language Mismatch and Repetition Issues in LLM-based Machine Translation via Model Editing [39.375342978538654]
We focus on utilizing Large Language Models (LLMs) to perform machine translation.
We observe that two patterns of errors frequently occur and drastically affect the translation quality: language mismatch and repetition.
We explore the potential for mitigating these two issues by leveraging model editing methods.
arXiv Detail & Related papers (2024-10-09T16:51:21Z)
- Feedback Loops With Language Models Drive In-Context Reward Hacking [78.9830398771605]
We show that feedback loops can cause in-context reward hacking (ICRH).
We identify and study two processes that lead to ICRH: output-refinement and policy-refinement.
As AI development accelerates, the effects of feedback loops will proliferate.
arXiv Detail & Related papers (2024-02-09T18:59:29Z)
- LLMRefine: Pinpointing and Refining Large Language Models via Fine-Grained Actionable Feedback [65.84061725174269]
Recent large language models (LLM) are leveraging human feedback to improve their generation quality.
We propose LLMRefine, an inference time optimization method to refine LLM's output.
We conduct experiments on three text generation tasks, including machine translation, long-form question answering (QA), and topical summarization.
LLMRefine consistently outperforms all baseline approaches, achieving improvements of up to 1.7 MetricX points on translation tasks, 8.1 ROUGE-L on ASQA, and 2.2 ROUGE-L on topical summarization.
arXiv Detail & Related papers (2023-11-15T19:52:11Z)
- Accelerating LLaMA Inference by Enabling Intermediate Layer Decoding via Instruction Tuning with LITE [62.13435256279566]
Large Language Models (LLMs) have achieved remarkable performance across a wide variety of natural language tasks.
However, their large size makes their inference slow and computationally expensive.
We show that instruction tuning with LITE enables intermediate layers to acquire 'good' generation ability without affecting the generation ability of the final layer.
arXiv Detail & Related papers (2023-10-28T04:07:58Z)
- Studying the impacts of pre-training using ChatGPT-generated text on downstream tasks [0.0]
Our research aims to investigate the influence of artificial text in the pre-training phase of language models.
We conducted a comparative analysis between a language model, RoBERTa, pre-trained on CNN/DailyMail news articles and one pre-trained on ChatGPT-generated versions of the same articles.
We demonstrate that the utilization of artificial text during pre-training does not have a significant impact on either the performance of the models in downstream tasks or their gender bias.
arXiv Detail & Related papers (2023-09-02T12:56:15Z)
- "Generate" the Future of Work through AI: Empirical Evidence from Online Labor Markets [4.955822723273599]
Large Language Model (LLM) based generative AI, such as ChatGPT, is considered the first generation of Artificial General Intelligence (AGI).
Our paper offers crucial insights into AI's influence on labor markets and individuals' reactions.
arXiv Detail & Related papers (2023-08-09T19:45:00Z)
- How Does Pretraining Improve Discourse-Aware Translation? [41.20896077662125]
We introduce a probing task to interpret the ability of pretrained language models to capture discourse relation knowledge.
We validate three state-of-the-art PLMs across encoder-, decoder-, and encoder-decoder-based models.
Our findings are instructive to understand how and when discourse knowledge in PLMs should work for downstream tasks.
arXiv Detail & Related papers (2023-05-31T13:36:51Z)
- Exploring Human-Like Translation Strategy with Large Language Models [93.49333173279508]
Large language models (LLMs) have demonstrated impressive capabilities in general scenarios.
This work proposes the MAPS framework, which stands for Multi-Aspect Prompting and Selection.
We employ a selection mechanism based on quality estimation to filter out noisy and unhelpful knowledge.
arXiv Detail & Related papers (2023-05-06T19:03:12Z)
- Document-Level Machine Translation with Large Language Models [91.03359121149595]
Large language models (LLMs) can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.
This paper provides an in-depth evaluation of LLMs' ability on discourse modeling.
arXiv Detail & Related papers (2023-04-05T03:49:06Z)
- Examining Scaling and Transfer of Language Model Architectures for Machine Translation [51.69212730675345]
Language models (LMs) process sequences in a single stack of layers, and encoder-decoder models (EncDec) utilize separate layer stacks for input and output processing.
In machine translation, EncDec has long been the favoured approach, but few studies have investigated the performance of LMs.
arXiv Detail & Related papers (2022-02-01T16:20:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.