Co-Writing with Opinionated Language Models Affects Users' Views
- URL: http://arxiv.org/abs/2302.00560v1
- Date: Wed, 1 Feb 2023 16:26:32 GMT
- Title: Co-Writing with Opinionated Language Models Affects Users' Views
- Authors: Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, Mor Naaman
- Abstract summary: This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write.
Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society.
Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey.
- Score: 27.456483236562434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: If large language models like GPT-3 preferentially produce a particular
point of view, they may influence people's opinions on an unknown scale. This study
investigates whether a language-model-powered writing assistant that generates
some opinions more often than others impacts what users write - and what they
think. In an online experiment, we asked participants (N=1,506) to write a post
discussing whether social media is good for society. Treatment group
participants used a language-model-powered writing assistant configured to
argue that social media is good or bad for society. Participants then completed
a social media attitude survey, and independent judges (N=500) evaluated the
opinions expressed in their writing. Using the opinionated language model
affected the opinions expressed in participants' writing and shifted their
opinions in the subsequent attitude survey. We discuss the wider implications
of our results and argue that the opinions built into AI language technologies
need to be monitored and engineered more carefully.
Related papers
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define as the sensitivity of dialogue models' harmful behaviors to the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
- Simulating Social Media Using Large Language Models to Evaluate Alternative News Feed Algorithms [8.602553195689513]
Social media is often criticized for amplifying toxic discourse and discouraging constructive conversations.
This paper asks whether simulating social media can help researchers study how different news feed algorithms shape the quality of online conversations.
arXiv Detail & Related papers (2023-10-05T18:26:06Z)
- Fostering User Engagement in the Critical Reflection of Arguments [3.26297440422721]
We propose a system that engages in a deliberative dialogue with a human.
We enable the system to intervene if the user is too focused on their pre-existing opinion.
We report on a user study with 58 participants to test our model and the effect of the intervention mechanism.
arXiv Detail & Related papers (2023-08-17T15:48:23Z)
- AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- Persua: A Visual Interactive System to Enhance the Persuasiveness of Arguments in Online Discussion [52.49981085431061]
Enhancing people's ability to write persuasive arguments could contribute to the effectiveness and civility in online communication.
We derived four design goals for a tool that helps users improve the persuasiveness of arguments in online discussions.
Persua is an interactive visual system that provides example-based guidance on persuasive strategies to enhance the persuasiveness of arguments.
arXiv Detail & Related papers (2022-04-16T08:07:53Z)
- Whose Opinions Matter? Perspective-aware Models to Identify Opinions of Hate Speech Victims in Abusive Language Detection [6.167830237917662]
We present an in-depth study to model polarized opinions coming from different communities.
We believe that by relying on this information, we can divide the annotators into groups sharing similar perspectives.
We propose a novel resource, a multi-perspective English language dataset annotated according to different sub-categories relevant for characterising online abuse.
arXiv Detail & Related papers (2021-06-30T08:35:49Z)
- Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z)
- The Impact of Multiple Parallel Phrase Suggestions on Email Input and Composition Behaviour of Native and Non-Native English Writers [15.621144215664767]
We build a text editor prototype with a neural language model (GPT-2), refined in a prestudy with 30 people.
In an online study (N=156), people composed emails in four conditions (0/1/3/6 parallel suggestions).
Our results reveal (1) benefits for ideation, and costs for efficiency, when suggesting multiple phrases; (2) that non-native speakers benefit more from more suggestions; and (3) further insights into behaviour patterns.
arXiv Detail & Related papers (2021-01-22T15:32:32Z)
- Towards Debiasing Sentence Representations [109.70181221796469]
We show that Sent-Debias is effective in removing biases, and at the same time, preserves performance on sentence-level downstream tasks.
We hope that our work will inspire future research on characterizing and removing social biases from widely adopted sentence representations for fairer NLP.
arXiv Detail & Related papers (2020-07-16T04:22:30Z)
- Don't Let Me Be Misunderstood: Comparing Intentions and Perceptions in Online Discussions [17.430757860728733]
We present a computational framework for exploring and comparing perspectives in online public discussions.
We combine logged data about public comments on Facebook with a survey of over 16,000 people about their intentions in writing these comments.
Our analysis focuses on judgments of whether a comment is stating a fact or an opinion, since these concepts were shown to be often confused.
arXiv Detail & Related papers (2020-04-28T15:43:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.