Social Media Bot Policies: Evaluating Passive and Active Enforcement
- URL: http://arxiv.org/abs/2409.18931v1
- Date: Fri, 27 Sep 2024 17:28:25 GMT
- Title: Social Media Bot Policies: Evaluating Passive and Active Enforcement
- Authors: Kristina Radivojevic, Christopher McAleer, Catrell Conley, Cormac Kennedy, Paul Brenner
- Abstract summary: Multimodal Foundation Models (MFMs) may enable malicious actors to exploit online users.
We examined the bot and content policies of eight popular social media platforms: X (formerly Twitter), Instagram, Facebook, Threads, TikTok, Mastodon, Reddit, and LinkedIn.
Our findings indicate significant vulnerabilities within the current enforcement mechanisms of these platforms.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of Multimodal Foundation Models (MFMs) holds significant promise for transforming social media platforms. However, this advancement also introduces substantial security and ethical concerns, as it may enable malicious actors to exploit online users. We aim to evaluate the strength of security protocols on prominent social media platforms in mitigating the deployment of MFM bots. We examined the bot and content policies of eight popular social media platforms: X (formerly Twitter), Instagram, Facebook, Threads, TikTok, Mastodon, Reddit, and LinkedIn. Using Selenium, we developed a web bot to test bot deployment and AI-generated content policies and their enforcement mechanisms. Our findings indicate significant vulnerabilities within the current enforcement mechanisms of these platforms. Despite having explicit policies against bot activity, all platforms failed to detect and prevent the operation of our MFM bots. This finding reveals a critical gap in the security measures employed by these social media platforms, underscoring the potential for malicious actors to exploit these weaknesses to disseminate misinformation, commit fraud, or manipulate users.
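The paper does not publish its bot code, but the methodology is easy to picture. Below is a minimal, hypothetical sketch of a Selenium-driven posting bot of the kind described; every URL, selector, and element name is a placeholder, since real platforms use different (and frequently changing) markup:

```python
# Hypothetical sketch of a Selenium posting bot, in the spirit of the
# paper's methodology. All selectors, URLs, and credentials are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def post_generated_content(login_url: str, username: str, password: str, text: str) -> None:
    driver = webdriver.Chrome()  # assumes chromedriver is on PATH
    try:
        driver.get(login_url)
        wait = WebDriverWait(driver, 15)
        # Hypothetical element names; adapt to the target platform's DOM.
        wait.until(EC.presence_of_element_located((By.NAME, "username"))).send_keys(username)
        driver.find_element(By.NAME, "password").send_keys(password)
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        # Open the composer and submit the (e.g., MFM-generated) text.
        wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-testid=compose]"))).click()
        driver.find_element(By.CSS_SELECTOR, "[role=textbox]").send_keys(text)
        driver.find_element(By.CSS_SELECTOR, "[data-testid=post]").click()
    finally:
        driver.quit()
```

Per the abstract, bots along these lines posted content on all eight platforms without being detected or blocked.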
Related papers
- Entendre, a Social Bot Detection Tool for Niche, Fringe, and Extreme Social Media [1.4913052010438639]
We introduce Entendre, an open-access, scalable, and platform-agnostic bot detection framework.
We exploit the idea that most social platforms share a generic template, where users can post content, approve content, and provide a bio.
To demonstrate Entendre's effectiveness, we used it to explore the presence of bots among accounts posting racist content on the now-defunct right-wing platform Parler.
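Entendre's implementation is not reproduced here; the sketch below only illustrates its "generic template" premise, i.e., that bot signals can be computed from the post/approve/bio primitives shared by most platforms. The record schema and feature names are hypothetical:

```python
# Hypothetical platform-agnostic bot features in the spirit of Entendre's
# "generic template" idea: any platform exposes posts, approvals
# (likes/upvotes), and a bio. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Account:
    posts: list[str]       # raw post texts
    approvals_given: int   # likes/upvotes the account handed out
    bio: str

def generic_features(acct: Account) -> dict[str, float]:
    n_posts = len(acct.posts)
    avg_len = sum(len(p) for p in acct.posts) / n_posts if n_posts else 0.0
    # Near-duplicate posting is a common bot signal across platforms.
    dup_ratio = 1.0 - len(set(acct.posts)) / n_posts if n_posts else 0.0
    return {
        "n_posts": float(n_posts),
        "avg_post_len": avg_len,
        "duplicate_ratio": dup_ratio,
        "approvals_given": float(acct.approvals_given),
        "bio_len": float(len(acct.bio)),
    }
```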
arXiv Detail & Related papers (2024-08-13T13:50:49Z)
- Adversarial Botometer: Adversarial Analysis for Social Bot Detection [1.9280536006736573]
Social bots produce content that mimics human creativity.
Malicious social bots emerge to deceive people with inauthentic content.
We evaluate the behavior of a text-based bot detector in a competitive environment.
arXiv Detail & Related papers (2024-05-03T11:28:21Z)
- My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Because of the graph structure of social networks, most detection methods are based on graph neural networks (GNNs), which are susceptible to adversarial attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
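As a toy illustration of why node injection works (this is not the paper's attack), consider a one-layer mean aggregator standing in for a GNN: injecting benign-looking neighbors dilutes the target bot's aggregated features. All numbers below are made up:

```python
# Toy node-injection illustration: a one-layer mean aggregator stands in
# for a GNN. Features and graph are invented for demonstration.
import numpy as np

def mean_aggregate(features: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """One GNN-like layer: each node's embedding = mean of its neighbors' features."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return adj @ features / deg

# 3 accounts x 1 feature ("bot-likeness"); node 0 is the target bot.
feats = np.array([[0.9], [0.1], [0.2]])
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(mean_aggregate(feats, adj)[0])   # neighbors average 0.15

# Inject two benign-looking "brother" nodes wired to node 0.
feats2 = np.vstack([feats, [[0.0], [0.0]]])
adj2 = np.zeros((5, 5))
adj2[:3, :3] = adj
adj2[0, 3] = adj2[3, 0] = adj2[0, 4] = adj2[4, 0] = 1
print(mean_aggregate(feats2, adj2)[0])  # drops to 0.075: the bot now "looks" benign
```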
arXiv Detail & Related papers (2023-10-11T03:09:48Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulated posts and to identify the manipulated or inserted information within them.
To study this task, we propose a data collection schema and curate a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
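As a rough sketch of such a pipeline (synthetic data standing in for the paper's labeled Twitter features), XGBoost fits the classifier and SHAP attributes each prediction to individual features:

```python
# Minimal XGBoost + SHAP sketch. Data is synthetic; real work would use
# labeled Twitter accounts with features such as follower counts,
# posting rates, account age, etc.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # 4 made-up account features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic bot labels

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# SHAP values attribute each prediction to individual features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # (5, 4): one attribution per feature per account
```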
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning [31.33996447671789]
We show that it is possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while evading detection.
Our proposed policy networks are trained on a vast number of synthetic graphs and generalize better than baselines to unseen real-life graphs.
This makes our approach a practical adversarial attack when deployed in a real-life setting.
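The paper's hierarchical multi-agent setup is far richer than can be shown here; the toy REINFORCE bandit below only illustrates the core incentive, a policy that learns to trade influence against detection risk. All rewards and probabilities are invented:

```python
# Toy REINFORCE-style bandit (much simpler than the paper's method):
# a softmax policy over two actions learns to prefer stealthy engagement.
import math
import random

random.seed(0)
PENALTY = 5.0                    # made-up cost of being detected
INFLUENCE = {0: 1.0, 1: 0.6}     # action 0 = spam, action 1 = engage
DETECT_P = {0: 0.30, 1: 0.02}    # made-up per-action detection probability
theta = [0.0, 0.0]               # softmax policy logits
for _ in range(2000):
    exps = [math.exp(t) for t in theta]
    probs = [e / sum(exps) for e in exps]
    a = 0 if random.random() < probs[0] else 1
    r = INFLUENCE[a] - PENALTY * DETECT_P[a] + random.gauss(0, 0.1)
    # REINFORCE update for a softmax bandit: grad log pi(a) = 1[a=i] - p_i
    for i in range(2):
        theta[i] += 0.1 * ((1.0 if i == a else 0.0) - probs[i]) * r
print(probs)  # converges toward the stealthier "engage" action
```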
arXiv Detail & Related papers (2021-10-20T16:49:26Z)
- BotNet Detection On Social Media [0.0]
Social media has become an open playground for automated (bot) accounts that try to manipulate other users of these platforms.
There is evidence of bots manipulating election results, a serious threat at both the national and the global level.
Our goal is to leverage semantic web mining techniques to identify fake or bot accounts involved in these activities.
arXiv Detail & Related papers (2021-10-12T00:38:51Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
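The paper's meta-learning procedure is more involved than this, but a toy sketch of the underlying idea, reweighting weak instances by their agreement with a model fit on the small clean set, might look as follows (all data synthetic):

```python
# Toy instance-weighting sketch (not the paper's meta-learning algorithm):
# score each weak label by its agreement with a probe model fit on the
# small clean set, then retrain on the weighted weak data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_clean = rng.normal(size=(50, 5))
y_clean = (X_clean[:, 0] > 0).astype(int)
X_weak = rng.normal(size=(1000, 5))
y_weak = (X_weak[:, 0] > 0).astype(int)
flip = rng.random(1000) < 0.3          # 30% of weak labels are noisy
y_weak[flip] = 1 - y_weak[flip]

probe = LogisticRegression().fit(X_clean, y_clean)
# Weight = probe's probability of each instance's weak label being right.
agree = probe.predict_proba(X_weak)[np.arange(1000), y_weak]
model = LogisticRegression().fit(X_weak, y_weak, sample_weight=agree)
```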
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
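A toy simulation (not the paper's model) of this effect: a small fraction of manipulators posting low-quality content, amplified by influence, measurably drags down the average quality users see:

```python
# Toy simulation of influence-amplified manipulation. All parameters are
# invented; honest posts have quality ~U(0,1), mean 0.5.
import random

random.seed(0)
N_HONEST, N_MANIP, ROUNDS = 95, 5, 1000
feed = []
for _ in range(ROUNDS):
    if random.random() < N_MANIP / (N_HONEST + N_MANIP):
        quality, influence = 0.1, 10.0  # manipulators: low quality, high amplification
    else:
        quality, influence = random.random(), 1.0
    feed.extend([quality] * int(influence))  # influence = exposure multiplier
print(sum(feed) / len(feed))  # average quality seen falls well below 0.5
```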
arXiv Detail & Related papers (2019-07-13T21:12:08Z)