CensorLab: A Testbed for Censorship Experimentation
- URL: http://arxiv.org/abs/2412.16349v2
- Date: Thu, 09 Jan 2025 17:55:17 GMT
- Title: CensorLab: A Testbed for Censorship Experimentation
- Authors: Jade Sheffey, Amir Houmansadr
- Abstract summary: We design and implement CensorLab, a generic platform for emulating Internet censorship scenarios.
CensorLab aims to support all censorship mechanisms previously or currently deployed by real-world censors.
It provides an easy-to-use platform for researchers and practitioners enabling them to perform extensive experimentation.
- Abstract: Censorship and censorship circumvention are closely connected, and each is constantly making decisions in reaction to the other. When censors deploy a new Internet censorship technique, the anti-censorship community scrambles to find and develop circumvention strategies against the censor's new strategy, i.e., by targeting and exploiting specific vulnerabilities in the new censorship mechanism. We believe that over-reliance on such a reactive approach to circumvention has given the censors the upper hand in the censorship arms race, becoming a key reason for the inefficacy of in-the-wild circumvention systems. Therefore, we argue for a proactive approach to censorship research: the anti-censorship community should be able to proactively develop circumvention mechanisms against hypothetical or futuristic censorship strategies. To facilitate proactive censorship research, we design and implement CensorLab, a generic platform for emulating Internet censorship scenarios. CensorLab aims to complement currently reactive circumvention research by efficiently emulating past, present, and hypothetical censorship strategies in realistic network environments. Specifically, CensorLab aims to (1) support all censorship mechanisms previously or currently deployed by real-world censors; (2) support the emulation of hypothetical (not-yet-deployed) censorship strategies including advanced data-driven censorship mechanisms (e.g., ML-based traffic classifiers); (3) provide an easy-to-use platform for researchers and practitioners enabling them to perform extensive experimentation; and (4) operate efficiently with minimal overhead. We have implemented CensorLab as a fully functional, flexible, and high-performance platform, and showcase how it can be used to emulate a wide range of censorship scenarios, from traditional IP blocking and keyword filtering to hypothetical ML-based censorship mechanisms.
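The abstract names concrete mechanisms CensorLab can emulate, such as IP blocking and keyword filtering. As a rough illustration of what such a censorship rule looks like in practice, here is a minimal Python sketch of a per-packet censor decision; the function and rule names are invented for illustration and are not CensorLab's actual API, and the addresses and keywords are placeholders.

```python
# Minimal sketch of rule-based censorship (IP blocking + keyword filtering).
# All names, addresses, and keywords are illustrative, not CensorLab's API.

BLOCKED_IPS = {"203.0.113.7"}            # IP blocklist (example address)
BLOCKED_KEYWORDS = [b"forbidden-term"]   # keyword filter (example term)

def censor_decision(dst_ip: str, payload: bytes) -> str:
    """Return 'drop' if the packet matches a censorship rule, else 'allow'."""
    if dst_ip in BLOCKED_IPS:            # IP blocking
        return "drop"
    if any(kw in payload for kw in BLOCKED_KEYWORDS):  # keyword filtering
        return "drop"
    return "allow"
```

A data-driven censor would replace the static rule checks with a trained traffic classifier, which is the kind of hypothetical strategy CensorLab is designed to emulate.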
Related papers
- Pathfinder: Exploring Path Diversity for Assessing Internet Censorship Inconsistency [8.615061541238589]
We investigate Internet censorship from a different perspective by scrutinizing the diverse censorship deployment inside a country.
We reveal that the diversity of Internet censorship caused by different routing paths inside a country is prevalent.
We identify that different hosting platforms also result in inconsistent censorship activities due to different peering relationships with the ISPs in a country.
arXiv Detail & Related papers (2024-07-05T01:48:31Z)
- Private Online Community Detection for Censored Block Models [60.039026645807326]
We study the private online change detection problem for dynamic communities, using a censored block model (CBM).
We propose an algorithm capable of identifying changes in the community structure, while maintaining user privacy.
arXiv Detail & Related papers (2024-05-09T12:35:57Z)
- User Attitudes to Content Moderation in Web Search [49.1574468325115]
We examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search.
We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results.
More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
arXiv Detail & Related papers (2023-10-05T10:57:15Z)
- LLM Censorship: A Machine Learning Challenge or a Computer Security Problem? [52.71988102039535]
We show that semantic censorship can be perceived as an undecidable problem.
We argue that the challenges extend beyond semantic censorship, as knowledgeable attackers can reconstruct impermissible outputs.
arXiv Detail & Related papers (2023-07-20T09:25:02Z)
- Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning [38.00013408742201]
Censorship of the domain name system (DNS) is a key mechanism used across different countries.
In this paper, we explore how machine learning (ML) models can help streamline the detection process.
We find that unsupervised models, trained solely on uncensored instances, can identify new instances and variations of censorship missed by existing probes.
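The core idea above is one-class anomaly detection: a model fit only on uncensored DNS responses flags responses that deviate from that profile. The following sketch reduces this to a single made-up feature (response size) with a mean/standard-deviation threshold; the paper's actual models and features are richer, so this is only a simplified illustration.

```python
# Simplified one-class anomaly detection: fit a profile on uncensored DNS
# responses only, then flag responses that deviate from it. The single
# "response size" feature and the 3-sigma threshold are illustrative choices.
import statistics

def fit_profile(uncensored_sizes):
    """Learn mean and standard deviation from uncensored response sizes."""
    mu = statistics.mean(uncensored_sizes)
    sigma = statistics.stdev(uncensored_sizes)
    return mu, sigma

def is_anomalous(size, profile, k=3.0):
    """Flag a response whose size deviates more than k standard deviations."""
    mu, sigma = profile
    return abs(size - mu) > k * sigma
```

Because the profile never sees censored traffic, the detector can flag previously unseen censorship behavior, which is the property the paper highlights.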
arXiv Detail & Related papers (2023-02-03T23:36:30Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes to countering malicious information by developing multilingual tools to simulate and detect new methods of content-moderation evasion.
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- How We Express Ourselves Freely: Censorship, Self-censorship, and Anti-censorship on a Chinese Social Media [4.408128846525362]
We identify the metrics of censorship and self-censorship, find the influence factors, and construct a mediation model to measure their relationship.
Based on these findings, we discuss implications for democratic social media design and future censorship research.
arXiv Detail & Related papers (2022-11-24T18:28:16Z)
- Initiative Defense against Facial Manipulation [82.96864888025797]
We propose a novel framework of initiative defense to degrade the performance of facial manipulation models controlled by malicious users.
We first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom.
arXiv Detail & Related papers (2021-12-19T09:42:28Z)
- Is radicalization reinforced by social media censorship? [0.0]
Radicalized beliefs, such as those tied to QAnon, Russiagate, and other political conspiracy theories, can lead some individuals and groups to engage in violent behavior.
This article presents an agent-based model of a social media network that enables investigation of the effects of censorship on the amount of dissenting information.
arXiv Detail & Related papers (2021-03-23T21:07:34Z)
- Linguistic Fingerprints of Internet Censorship: the Case of SinaWeibo [4.544151613454639]
This paper studies how the linguistic components of blogposts might affect the blogposts' likelihood of being censored.
We build a classifier that significantly outperforms non-expert humans in predicting whether a blogpost will be censored.
Our work suggests that it is possible to use linguistic properties of social media posts to automatically predict if they are going to be censored.
arXiv Detail & Related papers (2020-01-23T23:08:24Z)
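The preceding entry describes predicting censorship from linguistic properties of a post. A toy version of that idea scores a post by how often its words appeared in censored versus uncensored training posts; the training examples, word-count features, and threshold below are all invented for illustration and are far simpler than the paper's classifier.

```python
# Toy censorship predictor: score a post by word counts learned from
# censored vs. uncensored training posts. All data here is made up.
from collections import Counter

def train(censored_posts, allowed_posts):
    """Count word occurrences separately in censored and allowed posts."""
    censored = Counter(w for p in censored_posts for w in p.split())
    allowed = Counter(w for p in allowed_posts for w in p.split())
    return censored, allowed

def predict_censored(post, model):
    """Predict censorship when censored-word evidence outweighs allowed."""
    censored, allowed = model
    score = sum(censored[w] - allowed[w] for w in post.split())
    return score > 0
```

Even this crude scorer shows how purely linguistic features, with no knowledge of the censor's rules, can separate posts likely to be censored from those likely to be allowed.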
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.