Don't Let Me Be Misunderstood: Comparing Intentions and Perceptions in Online Discussions
- URL: http://arxiv.org/abs/2004.13609v1
- Date: Tue, 28 Apr 2020 15:43:46 GMT
- Title: Don't Let Me Be Misunderstood: Comparing Intentions and Perceptions in Online Discussions
- Authors: Jonathan P. Chang, Justin Cheng, Cristian Danescu-Niculescu-Mizil
- Abstract summary: We present a computational framework for exploring and comparing perspectives in online public discussions.
We combine logged data about public comments on Facebook with a survey of over 16,000 people about their intentions in writing these comments.
Our analysis focuses on judgments of whether a comment is stating a fact or an opinion, since prior work has shown that these concepts are often confused.
- Score: 17.430757860728733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discourse involves two perspectives: a person's intention in making an
utterance and others' perception of that utterance. The misalignment between
these perspectives can lead to undesirable outcomes, such as misunderstandings,
low productivity and even overt strife. In this work, we present a
computational framework for exploring and comparing both perspectives in online
public discussions.
We combine logged data about public comments on Facebook with a survey of
over 16,000 people about their intentions in writing these comments or about
their perceptions of comments that others had written. Unlike previous studies
of online discussions that have largely relied on third-party labels to
quantify properties such as sentiment and subjectivity, our approach also
directly captures what the speakers actually intended when writing their
comments. In particular, our analysis focuses on judgments of whether a comment
is stating a fact or an opinion, since prior work has shown that these concepts
are often confused.
We show that intentions and perceptions diverge in consequential ways. People
are more likely to perceive opinions than to intend them, and linguistic cues
that signal how an utterance is intended can differ from those that signal how
it will be perceived. Further, this misalignment between intentions and
perceptions can be linked to the future health of a conversation: when a
comment whose author intended to share a fact is misperceived as sharing an
opinion, the subsequent conversation is more likely to derail into uncivil
behavior than when the comment is perceived as intended. Altogether, these
findings may inform the design of discussion platforms that better promote
positive interactions.
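
The core comparison the abstract describes can be illustrated with a short sketch. The paper's Facebook survey data is not public, so everything below is a hypothetical stand-in rather than the authors' pipeline: the toy records, the field names `intended` and `perceived`, and the simple smoothed log-odds cue analysis are all illustrative assumptions.

```python
# Minimal sketch, assuming each comment carries the author's stated intention
# and a reader's perception, both labeled "fact" or "opinion". The records and
# field names are hypothetical; the survey data itself is not public.
from collections import Counter
import math

comments = [
    {"text": "vaccines reduce mortality", "intended": "fact",    "perceived": "fact"},
    {"text": "i think the policy is bad", "intended": "opinion", "perceived": "opinion"},
    {"text": "the study was flawed",      "intended": "fact",    "perceived": "opinion"},
    {"text": "crime rose last year",      "intended": "fact",    "perceived": "opinion"},
]

# 1. Perception skew: are opinions perceived more often than they are intended?
intended_opinion = sum(c["intended"] == "opinion" for c in comments) / len(comments)
perceived_opinion = sum(c["perceived"] == "opinion" for c in comments) / len(comments)
print(f"opinion rate: intended={intended_opinion:.2f} perceived={perceived_opinion:.2f}")

# 2. Misalignment: fact-intended comments misperceived as sharing an opinion.
misperceived = [c for c in comments
                if c["intended"] == "fact" and c["perceived"] == "opinion"]
print(f"fact-intended comments misperceived as opinion: {len(misperceived)}")

# 3. Linguistic cues: add-one-smoothed log-odds of each word appearing in
# opinion-labeled vs fact-labeled comments, computed once for intentions and
# once for perceptions, so the two cue lists can be compared.
def log_odds(key):
    pos = Counter(w for c in comments if c[key] == "opinion" for w in c["text"].split())
    neg = Counter(w for c in comments if c[key] == "fact" for w in c["text"].split())
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    return {w: math.log((pos[w] + 1) / (n_pos + len(vocab)))
             - math.log((neg[w] + 1) / (n_neg + len(vocab))) for w in vocab}

intent_cues, percept_cues = log_odds("intended"), log_odds("perceived")
for w in sorted(intent_cues, key=intent_cues.get, reverse=True)[:3]:
    print(f"{w}: intent log-odds={intent_cues[w]:+.2f}, "
          f"perception log-odds={percept_cues[w]:+.2f}")
```

On real data, step 3 would run over a much larger vocabulary, and words whose intention and perception log-odds disagree would be candidates for the kind of cue divergence the abstract reports; the paper's actual cue analysis may use a different method.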
Related papers
- Inside the echo chamber: Linguistic underpinnings of misinformation on Twitter [4.62503518282081]
Social media users drive the spread of misinformation online by sharing posts that include erroneous information or by commenting on controversial topics.
This work explores how conversations around misinformation are mediated through language use.
arXiv Detail & Related papers (2024-04-24T15:37:12Z) - Co-Writing with Opinionated Language Models Affects Users' Views [27.456483236562434]
This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write.
Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society.
Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey.
arXiv Detail & Related papers (2023-02-01T16:26:32Z) - Hate Speech and Counter Speech Detection: Conversational Context Does
Matter [7.333666276087548]
This paper investigates the role of conversational context in the annotation and detection of online hate and counter speech.
We created a context-aware dataset for a 3-way classification task on Reddit comments: hate speech, counter speech, or neutral.
arXiv Detail & Related papers (2022-06-13T19:05:44Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z) - Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable
Topics for the Russian Language [76.58220021791955]
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topics.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z) - Did they answer? Subjective acts and intents in conversational discourse [48.63528550837949]
We present the first discourse dataset with multiple and subjective interpretations of English conversation.
We show disagreements are nuanced and require a deeper understanding of the different contextual factors.
arXiv Detail & Related papers (2021-04-09T16:34:19Z) - Information Consumption and Social Response in a Segregated Environment:
the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights into coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis of 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z) - How Furiously Can Colourless Green Ideas Sleep? Sentence Acceptability
in Context [17.4919556893898]
We compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context.
Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings.
In relevant contexts we observe a discourse coherence effect which uniformly raises acceptability.
arXiv Detail & Related papers (2020-04-02T08:58:44Z) - Towards Quantifying the Distance between Opinions [66.29568619199074]
We find that measures based solely on text similarity or on overall sentiment often fail to effectively capture the distance between opinions.
We propose a new distance measure for capturing the similarity between opinions that leverages this nuanced observation.
In an unsupervised setting, our distance measure achieves significantly better Adjusted Rand Index scores (up to 56x) and Silhouette coefficients (up to 21x) than existing approaches; a toy version of this clustering evaluation is sketched after this list.
arXiv Detail & Related papers (2020-01-27T16:01:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.