Relationship Between Online Harmful Behaviors and Social Network Message
Writing Style
- URL: http://arxiv.org/abs/2212.07526v1
- Date: Wed, 14 Dec 2022 22:13:55 GMT
- Title: Relationship Between Online Harmful Behaviors and Social Network Message
Writing Style
- Authors: Talia Sanchez Viera, Richard Khoury
- Abstract summary: We consider whether measurable differences in writing style relate to different personality types.
We study messages from nearly 2,500 users from two online communities (Twitter and Reddit).
We find that we can measure significant personality differences between regular and harmful users from the writing style of as few as 100 tweets or 40 Reddit posts.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we explore the relationship between an individual's writing
style and the risk that they will engage in online harmful behaviors (such as
cyberbullying). In particular, we consider whether measurable differences in
writing style relate to different personality types, as modeled by the Big-Five
personality traits and the Dark Triad traits, and can differentiate between
users who do or do not engage in harmful behaviors. We study messages from
nearly 2,500 users from two online communities (Twitter and Reddit) and find
that we can measure significant personality differences between regular and
harmful users from the writing style of as few as 100 tweets or 40 Reddit
posts, aggregate these values to distinguish between healthy and harmful
communities, and also use style attributes to predict which users will engage
in harmful behaviors.
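The abstract describes aggregating writing-style attributes over a user's messages (as few as 100 tweets or 40 Reddit posts) into per-user values. The paper's actual style attributes are not listed here; as a rough illustration only, a minimal sketch of per-user stylometric aggregation, assuming hypothetical features such as function-word rates and average message length, might look like:

```python
import re
from collections import Counter

# Hypothetical feature set: the abstract does not specify the paper's
# style attributes. Function-word rates and message length are common
# stylometric choices, used here purely for illustration.
FUNCTION_WORDS = ["i", "me", "my", "you", "the", "a", "and", "but", "not"]

def style_features(messages):
    """Aggregate simple writing-style features over one user's messages."""
    text = " ".join(messages).lower()
    tokens = re.findall(r"[a-z']+", text)   # crude word tokenizer
    total = len(tokens) or 1                # avoid division by zero
    counts = Counter(tokens)
    # Relative frequency of each function word across all messages.
    feats = {f"rate_{w}": counts[w] / total for w in FUNCTION_WORDS}
    # Mean number of tokens per message.
    feats["avg_msg_len"] = total / max(len(messages), 1)
    return feats

# Usage: feed one user's message history, get a fixed-length feature dict
# that could then be compared between regular and harmful users.
user_msgs = ["I did not say that!", "You never listen to me."]
features = style_features(user_msgs)
```

Vectors like these could in principle be averaged across a community's users to compare communities, in the spirit of the aggregation the abstract describes; the paper's own feature set and models may differ substantially.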
Related papers
- The Company You Keep: How LLMs Respond to Dark Triad Traits [7.65192155348112]
Large Language Models (LLMs) often exhibit highly agreeable and reinforcing conversational styles, also known as AI sycophancy.
This study examines how LLMs respond to user prompts expressing varying degrees of Dark Triad traits (Machiavellianism, Narcissism, and Psychopathy) using a curated dataset.
Our findings raise implications for designing safer conversational systems that can detect and respond appropriately when users escalate from benign to harmful requests.
arXiv Detail & Related papers (2026-03-04T17:19:22Z) - Dark Personality Traits and Online Toxicity: Linking Self-Reports to Reddit Activity [0.2336460276005258]
Dark personality traits have been linked to online misbehavior such as trolling, incivility, and toxic speech.
Sadistic and psychopathic tendencies are most strongly associated with overtly toxic language.
Bright and dark traits interact in nuanced ways, with extraversion reducing trolling tendencies and conscientiousness showing modest associations with entitlement and callousness.
arXiv Detail & Related papers (2025-12-10T22:06:30Z) - The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs [60.15472325639723]
Personality traits have long been studied as predictors of human behavior.
Recent advances in Large Language Models (LLMs) suggest similar patterns may emerge in artificial systems.
arXiv Detail & Related papers (2025-09-03T21:27:10Z) - Online posting effects: Unveiling the non-linear journeys of users in depression communities on Reddit [0.12564343689544843]
We introduce a data-informed framework reconstructing online dynamics from 303k users interacting over two years.
Our analysis unveils online posting effects: a user can transition to another psychological state after online exposure to peers' emotional/semantic content.
Interpreted in light of psychological literature, our findings can provide evidence that the type and layout of online social interactions have an impact on users' "journeys" when posting about depression.
arXiv Detail & Related papers (2023-11-29T14:45:11Z) - Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona
Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z) - A Tale of Two Cultures: Comparing Interpersonal Information Disclosure
Norms on Twitter [11.306726655546067]
We present an exploration of cultural norms surrounding online disclosure of information about one's interpersonal relationships on Twitter.
We collected more than 2 million tweets posted in the U.S. and India over a 3 month period which contain interpersonal relationship keywords.
We found differences in emotion, topic, and content disclosed between tweets from the U.S. versus India.
arXiv Detail & Related papers (2023-09-26T18:55:48Z) - User Identity Linkage in Social Media Using Linguistic and Social
Interaction Features [11.781485566149994]
User identity linkage aims to reveal social media accounts likely to belong to the same natural person.
This work proposes a machine learning-based detection model, which uses multiple attributes of users' online activity.
The model's efficacy is demonstrated on two cases involving abusive and terrorism-related Twitter content.
arXiv Detail & Related papers (2023-08-22T15:10:38Z) - Anticipated versus Actual Effects of Platform Design Change: A Case
Study of Twitter's Character Limit [17.925651625409678]
We study Twitter's decision to double the character limit from 140 to 280 characters to soothe users' need to ''cram'' or ''squeeze'' their tweets.
We find that even though users do not ''cram'' as much under 280 characters as they used to under 140 characters, emergent ''cramming'' at the new limit seems not to have been taken into account when designing the platform change.
arXiv Detail & Related papers (2022-08-30T16:59:19Z) - Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable
Topics for the Russian Language [76.58220021791955]
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topic.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z) - A deep dive into the consistently toxic 1% of Twitter [9.669275987983447]
This study spans 14 years of tweets from 122K Twitter profiles and more than 293M tweets.
We selected the most extreme profiles in terms of consistency of toxic content and examined their tweet texts, and the domains, hashtags, and URLs they shared.
We found that these selected profiles keep to a narrow theme with lower diversity in hashtags, URLs, and domains, they are thematically similar to each other, and have a high likelihood of bot-like behavior.
arXiv Detail & Related papers (2022-02-16T04:21:48Z) - Analyzing Behavioral Changes of Twitter Users After Exposure to
Misinformation [1.8251012479962594]
We aim to understand whether general Twitter users changed their behavior after being exposed to misinformation.
We compare the before and after behavior of exposed users to determine whether the frequency of the tweets they posted underwent any significant change.
We also study the characteristics of two specific user groups, multi-exposure and extreme change groups, which were potentially highly impacted.
arXiv Detail & Related papers (2021-11-01T04:48:07Z) - Learning Language and Multimodal Privacy-Preserving Markers of Mood from
Mobile Data [74.60507696087966]
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
arXiv Detail & Related papers (2021-06-24T17:46:03Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z) - Information Consumption and Social Response in a Segregated Environment:
the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward the understanding of coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.