Strategic Analysis of Dissent and Self-Censorship
- URL: http://arxiv.org/abs/2509.03731v1
- Date: Wed, 03 Sep 2025 21:33:01 GMT
- Title: Strategic Analysis of Dissent and Self-Censorship
- Authors: Joshua J. Daymude, Robert Axelrod, Stephanie Forrest
- Abstract summary: We study the tradeoff between expressing dissent and avoiding punishment through self-censorship. We find that for any population, there exists an authority policy that leads to total self-censorship.
- Score: 0.6882042556551612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Expressions of dissent against authority are an important feature of most societies, and efforts to suppress such expressions are common. Modern digital communications, social media, and Internet surveillance and censorship technologies are changing the landscape of public speech and dissent. Especially in authoritarian settings, individuals must assess the risk of voicing their true opinions or choose self-censorship, voluntarily moderating their behavior to comply with authority. We present a model in which individuals strategically manage the tradeoff between expressing dissent and avoiding punishment through self-censorship while an authority adapts its policies to minimize both total expressed dissent and punishment costs. We study the model analytically and in simulation to derive conditions separating defiant individuals who express their desired dissent in spite of punishment from self-censoring individuals who fully or partially limit their expression. We find that for any population, there exists an authority policy that leads to total self-censorship. However, the probability and time for an initially moderate, locally-adaptive authority to suppress dissent depend critically on the population's willingness to withstand punishment early on, which can deter the authority from adopting more extreme policies.
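The abstract's distinction between defiant, partially self-censoring, and fully self-censoring individuals can be illustrated with a minimal sketch. The quadratic preference and linear punishment below are assumptions for illustration only (the paper's exact utility functions are not given in this summary); `tau` and `s` stand in for a hypothetical authority policy with a tolerance threshold and a punishment severity.

```python
import numpy as np

def expressed_dissent(d, tau, s):
    """Optimal expression for an individual with desired dissent d under an
    authority policy (tolerance tau, punishment severity s).

    Assumed utility (hypothetical, not the paper's exact form):
        U(a) = -(a - d)**2 - s * max(0.0, a - tau)
    U is concave in a, so the maximizer is:
        a* = d                        if d <= tau  (no conflict with authority)
        a* = clip(d - s/2, tau, d)    if d >  tau  (defiance shaded toward tau)
    """
    if d <= tau:
        return d
    return float(np.clip(d - s / 2.0, tau, d))

# A population with uniformly spread desired dissent levels
rng = np.random.default_rng(0)
desires = rng.uniform(0.0, 1.0, size=1000)
tau, s = 0.2, 0.6

expressed = np.array([expressed_dissent(d, tau, s) for d in desires])
# Individuals expressing exactly their desired dissent (includes those below tau)
undeterred = np.sum(expressed == desires)
# Individuals above the tolerance who fully self-censor down to tau
silenced = np.sum((desires > tau) & (expressed == tau))

print(f"mean expressed dissent: {expressed.mean():.3f}")
print(f"undeterred: {undeterred}, fully self-censoring: {silenced}")
```

Raising `s` moves more of the population from partial self-censorship (`tau < a* < d`) to full self-censorship (`a* = tau`), which is consistent with the abstract's claim that some authority policy induces total self-censorship in any population.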
Related papers
- The Disintegration of Free Speech [2.28438857884398]
This Article examines the constitutional status of AI-mediated communication under the First Amendment. It argues that under existing jurisprudence, AI-generated content is protected speech. The Article concludes that this doctrinal trajectory risks severing the First Amendment from its democratic foundations.
arXiv Detail & Related papers (2026-02-28T17:57:58Z) - Are LLMs Good Safety Agents or a Propaganda Engine? [74.88607730071483]
PSP is a dataset built specifically to probe the refusal behaviors of Large Language Models in an explicitly political context. PSP is built by formatting existing censored content from two openly available data sources: sensitive prompts in China generalized to multiple countries, and tweets that have been censored in various countries. We study: 1) the impact of political sensitivity in seven LLMs through data-driven (making PSP implicit) and representation-level approaches (erasing the concept of politics); and 2) the vulnerability of models on PSP through prompt injection attacks (PIAs).
arXiv Detail & Related papers (2025-11-28T13:36:00Z) - HatePRISM: Policies, Platforms, and Research Integration. Advancing NLP for Hate Speech Proactive Mitigation [67.69631485036665]
We conduct a comprehensive examination of hate speech regulations and strategies from three perspectives. Our findings reveal significant inconsistencies in hate speech definitions and moderation practices across jurisdictions. We suggest ideas and research directions for further exploration of a unified framework for automated hate speech moderation.
arXiv Detail & Related papers (2025-07-06T11:25:23Z) - Must Read: A Systematic Survey of Computational Persuasion [60.83151988635103]
AI-driven persuasion can be leveraged for beneficial applications, but also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
arXiv Detail & Related papers (2025-05-12T17:26:31Z) - Political Neutrality in AI Is Impossible- But Here Is How to Approximate It [97.59456676216115]
We argue that true political neutrality is neither feasible nor universally desirable due to its subjective nature and the biases inherent in AI training data, algorithms, and user interactions. We use the term "approximation" of political neutrality to shift the focus from unattainable absolutes to achievable, practical proxies.
arXiv Detail & Related papers (2025-02-18T16:48:04Z) - CensorLab: A Testbed for Censorship Experimentation [15.411134921415567]
We design and implement CensorLab, a generic platform for emulating Internet censorship scenarios. CensorLab aims to support all censorship mechanisms previously or currently deployed by real-world censors. It provides an easy-to-use platform that enables researchers and practitioners to perform extensive experimentation.
arXiv Detail & Related papers (2024-12-20T21:17:24Z) - Toxic behavior silences online political conversations [0.0]
We investigate the hypothesis that individuals may refrain from expressing minority opinions publicly due to being exposed to toxic behavior. Using hidden Markov models, we identify a latent state consistent with toxicity-driven silence. Our findings offer insights into the intricacies of online political deliberation and emphasize the importance of considering self-censorship dynamics.
arXiv Detail & Related papers (2024-12-07T20:39:20Z) - Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis [48.14390493099495]
How AI models should handle political topics has been widely discussed, but their governance remains challenging. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach called Demarcation, which scores abusive speech on four aspects: (i) a severity scale; (ii) the presence of a target; (iii) a context scale; and (iv) a legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
arXiv Detail & Related papers (2024-06-27T21:45:33Z) - Rational Silence and False Polarization: How Viewpoint Organizations and Recommender Systems Distort the Expression of Public Opinion [4.419843514606336]
We show how platforms impact what observers of online discourse come to believe about community views. We show that signals from ideological organizations encourage an increase in rhetorical intensity, leading to the 'rational silence' of moderate users. We identify practical strategies platforms can implement, such as reducing exposure to signals from ideological organizations.
arXiv Detail & Related papers (2024-03-10T17:02:19Z) - JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models [53.83273575102087]
We propose an unsupervised inference-time approach to authorship obfuscation.
We introduce JAMDEC, a user-controlled, inference-time algorithm for authorship obfuscation.
Our approach builds on small language models such as GPT2-XL to avoid disclosing the original content to proprietary LLMs' APIs.
arXiv Detail & Related papers (2024-02-13T19:54:29Z) - How We Express Ourselves Freely: Censorship, Self-censorship, and Anti-censorship on a Chinese Social Media [4.408128846525362]
We identify the metrics of censorship and self-censorship, find the influence factors, and construct a mediation model to measure their relationship.
Based on these findings, we discuss implications for democratic social media design and future censorship research.
arXiv Detail & Related papers (2022-11-24T18:28:16Z) - Is radicalization reinforced by social media censorship? [0.0]
Radicalized beliefs, such as those tied to QAnon, Russiagate, and other political conspiracy theories, can lead some individuals and groups to engage in violent behavior.
This article presents an agent-based model of a social media network that enables investigation of the effects of censorship on the amount of dissenting information.
arXiv Detail & Related papers (2021-03-23T21:07:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.