Large language models can enhance persuasion through linguistic feature alignment
- URL: http://arxiv.org/abs/2311.16466v2
- Date: Mon, 12 Feb 2024 16:20:45 GMT
- Title: Large language models can enhance persuasion through linguistic feature alignment
- Authors: Minkyu Shin and Jin Kim
- Abstract summary: We investigate the impact of large language models (LLMs) on human communication using data on consumer complaints in the financial industry.
We find a sharp increase in the likely use of LLMs shortly after the release of ChatGPT.
Computational linguistic analyses suggest that the positive correlation may be explained by LLMs' enhancement of various linguistic features.
- Score: 3.054681017071983
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Although large language models (LLMs) are reshaping various aspects of human
life, our current understanding of their impacts remains somewhat constrained.
Here we investigate the impact of LLMs on human communication, using data on
consumer complaints in the financial industry. By employing an AI detection
tool on more than 820K complaints gathered by the Consumer Financial Protection
Bureau (CFPB), we find a sharp increase in the likely use of LLMs shortly after
the release of ChatGPT. Moreover, the likely LLM usage was positively
correlated with message persuasiveness (i.e., increased likelihood of obtaining
relief from financial firms). Computational linguistic analyses suggest that
the positive correlation may be explained by LLMs' enhancement of various
linguistic features. Based on the results of these observational studies, we
hypothesize that LLM usage may enhance a comprehensive set of linguistic
features, increasing message persuasiveness to receivers with heterogeneous
linguistic preferences (i.e., linguistic feature alignment). We test this
hypothesis in preregistered experiments and find support for it. As one of the
early empirical demonstrations of LLM usage for enhancing persuasion, our
research highlights the transformative potential of LLMs in human
communication.
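
The observational pipeline the abstract describes (score each complaint with an AI-detection tool, then relate likely LLM use to relief outcomes) can be sketched in a few lines. The sketch below is ours, not the authors' code: the file name, column names, the 0.5 threshold, and the stand-in detector are all illustrative assumptions.

```python
# Minimal sketch of the observational analysis described above.
import pandas as pd
import statsmodels.formula.api as smf

def likely_llm_score(text: str) -> float:
    # Stand-in for a real AI-detection tool that returns P(text is LLM-written).
    # A trivial keyword heuristic, purely so the sketch runs end to end.
    return 1.0 if "I am writing to formally" in text else 0.0

complaints = pd.read_csv("cfpb_complaints.csv")  # hypothetical CFPB export

# Flag complaints whose detector score exceeds a chosen threshold.
complaints["likely_llm"] = (
    complaints["narrative"].fillna("").map(likely_llm_score) > 0.5
).astype(int)

# Binary outcome: did the consumer obtain relief from the firm?
complaints["relief"] = (
    complaints["company_response"]
    .str.contains("relief", case=False, na=False)
    .astype(int)
)

# Logistic regression of relief on likely LLM use (persuasiveness proxy).
model = smf.logit("relief ~ likely_llm", data=complaints).fit()
print(model.summary())
```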
Related papers
- Semantic Change Characterization with LLMs using Rhetorics [0.1474723404975345]
We investigate the potential of LLMs in characterizing three types of semantic change: thought, relation, and orientation.
Our results highlight the effectiveness of LLMs in capturing and analyzing semantic changes, providing valuable insights to improve computational linguistic applications.
arXiv Detail & Related papers (2024-07-23T16:32:49Z)
- Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short-term and damaging human capabilities in the long-term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z)
- Advancing Annotation of Stance in Social Media Posts: A Comparative Analysis of Large Language Models and Crowd Sourcing [2.936331223824117]
The use of Large Language Models (LLMs) for automated text annotation of social media posts has garnered significant interest.
We analyze the performance of eight open-source and proprietary LLMs for annotating the stance expressed in social media posts.
A significant finding of our study is that the explicitness of the text expressing a stance plays a critical role in how faithfully LLMs' stance judgments match humans'.
arXiv Detail & Related papers (2024-06-11T17:26:07Z)
- "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust [51.542856739181474]
We show how different natural language expressions of uncertainty impact participants' reliance, trust, and overall task performance.
We find that first-person expressions decrease participants' confidence in the system and tendency to agree with the system's answers, while increasing participants' accuracy.
Our findings suggest that using natural language expressions of uncertainty may be an effective approach for reducing overreliance on LLMs, but that the precise language used matters.
arXiv Detail & Related papers (2024-05-01T16:43:55Z)
- Can Language Models Recognize Convincing Arguments? [12.458437450959416]
Large Language Models (LLMs) have raised concerns about their potential misuse for creating personalized, convincing misinformation and propaganda.
We study their performance on the related task of detecting convincing arguments.
We show that LLMs perform on par with humans in these tasks and that combining predictions from different LLMs yields significant performance gains.
arXiv Detail & Related papers (2024-03-31T17:38:33Z)
- The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition [74.04775677110179]
In-context Learning (ICL) has emerged as a powerful paradigm for performing natural language tasks with Large Language Models (LLMs).
We show that LLMs have strong yet inconsistent priors in emotion recognition that ossify their predictions.
Our results suggest that caution is needed when using ICL with larger LLMs for affect-centered tasks outside their pre-training domain.
arXiv Detail & Related papers (2024-03-25T19:07:32Z)
- Beware of Words: Evaluating the Lexical Richness of Conversational Large Language Models [3.0059120458540383]
We consider the evaluation of the lexical richness of the text generated by conversational Large Language Models (LLMs) and how it depends on the model parameters.
The results show how lexical richness depends on the version of ChatGPT and on some of its parameters, such as the presence penalty, as well as on the role assigned to the model.
arXiv Detail & Related papers (2024-02-11T13:41:17Z)
- Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be explained to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation), which lets models communicate through embedding representations rather than sampled natural-language tokens.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs. (A toy sketch of this embedding-space message passing appears after this list.)
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- Are Large Language Models Really Robust to Word-Level Perturbations? [68.60618778027694]
We propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools.
Longer conversations reveal more comprehensively how well language models understand the questions they are asked.
Our results demonstrate that LLMs frequently exhibit vulnerability to word-level perturbations that are commonplace in daily language usage.
arXiv Detail & Related papers (2023-09-20T09:23:46Z)
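
As a footnote to the CIPHER entry above: its core idea, as the summary presents it, is that models exchange information in embedding space rather than committing to sampled tokens. Below is a toy sketch of one message step under that reading; the function name, shapes, and usage are our illustration, not the paper's implementation.

```python
import torch

def cipher_message(logits: torch.Tensor, embeddings: torch.Tensor) -> torch.Tensor:
    # Encode a "message" as the probability-weighted average of token
    # embeddings, retaining the whole output distribution's information
    # instead of collapsing it to a single sampled token.
    probs = torch.softmax(logits, dim=-1)   # (vocab_size,)
    return probs @ embeddings               # (hidden_dim,)

# Toy usage: a 5-token vocabulary with 3-dimensional embeddings.
logits = torch.tensor([2.0, 0.5, -1.0, 0.0, 1.0])
embeddings = torch.randn(5, 3)
message = cipher_message(logits, embeddings)  # passed to the receiver model
print(message.shape)  # torch.Size([3])
```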
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences arising from its use.