MTTM: Metamorphic Testing for Textual Content Moderation Software
- URL: http://arxiv.org/abs/2302.05706v1
- Date: Sat, 11 Feb 2023 14:44:39 GMT
- Title: MTTM: Metamorphic Testing for Textual Content Moderation Software
- Authors: Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang,
Shuqing Li, Pinjia He, Michael Lyu
- Abstract summary: Social media platforms have been increasingly exploited to propagate toxic content.
However, malicious users can evade moderation by changing only a few words in the toxic content.
We propose MTTM, a Metamorphic Testing framework for Textual content Moderation software.
- Score: 11.759353169546646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The exponential growth of social media platforms such as Twitter and Facebook
has revolutionized textual communication and textual content publication in
human society. However, they have been increasingly exploited to propagate
toxic content, such as hate speech, malicious advertisement, and pornography,
which can lead to highly negative impacts (e.g., harmful effects on teen mental
health). Researchers and practitioners have been enthusiastically developing
and extensively deploying textual content moderation software to address this
problem. However, we find that malicious users can evade moderation by changing
only a few words in the toxic content. Moreover, the performance of modern
content moderation software against malicious inputs remains underexplored. To this
end, we propose MTTM, a Metamorphic Testing framework for Textual content
Moderation software. Specifically, we conduct a pilot study on 2,000 text
messages collected from real users and summarize eleven metamorphic relations
across three perturbation levels: character, word, and sentence. MTTM employs
these metamorphic relations on toxic textual contents to generate test cases,
which are still toxic yet likely to evade moderation. In our evaluation, we
employ MTTM to test three commercial textual content moderation software products and
two state-of-the-art moderation algorithms against three kinds of toxic
content. The results show that MTTM achieves up to 83.9%, 51%, and 82.5% error
finding rates (EFR) when testing commercial moderation software provided by
Google, Baidu, and Huawei, respectively, and it obtains up to 91.2% EFR when
testing the state-of-the-art algorithms from academia. In addition, we
leverage the test cases generated by MTTM to retrain the model we explored,
which largely improves model robustness (EFR reduced to 0%-5.9%) while
maintaining the accuracy on the original test set.
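To make the testing idea concrete, here is a minimal sketch (not the authors' implementation) of the metamorphic-testing loop: each relation perturbs a toxic text at the character, word, or sentence level while preserving its toxicity, and any perturbed input that evades a moderator which flagged the original counts toward the error finding rate (EFR). The substitution table and sentence wrapper are illustrative placeholders.

```python
import random

# Illustrative look-alike substitutions; the paper's eleven metamorphic
# relations, distilled from a pilot study on 2,000 real messages, are
# more varied than this sketch.
LEET_MAP = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def perturb_character(text: str) -> str:
    """Character-level relation: swap one letter for a look-alike."""
    spots = [i for i, c in enumerate(text) if c.lower() in LEET_MAP]
    if not spots:
        return text
    i = random.choice(spots)
    return text[:i] + LEET_MAP[text[i].lower()] + text[i + 1:]

def perturb_word(text: str) -> str:
    """Word-level relation: split one word with innocuous separators."""
    words = text.split()
    i = random.randrange(len(words))
    words[i] = "-".join(words[i])  # e.g. "idiot" -> "i-d-i-o-t"
    return " ".join(words)

def perturb_sentence(text: str) -> str:
    """Sentence-level relation: embed the toxic text in benign context."""
    return "My friend actually said: '" + text + "'"

def error_finding_rate(moderator, toxic_texts, perturb) -> float:
    """Among perturbed inputs whose originals the moderator correctly
    flags, the fraction that now evade it."""
    flagged = [t for t in toxic_texts if moderator(t)]
    if not flagged:
        return 0.0
    evaded = sum(1 for t in flagged if not moderator(perturb(t)))
    return evaded / len(flagged)
```

Run against a naive keyword-blocklist moderator, the character-level relation alone pushes EFR toward 100%, which is the failure mode the evaluation above quantifies for real systems.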
Related papers
- Automated Testing for Text-to-Image Software [0.0]
ACTesting is an automated cross-modal testing method for text-to-image (T2I) software.
We show that ACTesting can generate error-revealing tests, reducing the text-image consistency by up to 20% compared with the baseline.
The results demonstrate that ACTesting can identify abnormal behaviors of T2I software effectively.
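The text-image consistency that ACTesting reports can be pictured as an embedding-similarity score. A minimal sketch using CLIP as the scorer (an assumption for illustration; the paper's actual metric may differ):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def consistency(text: str, image: Image.Image) -> float:
    """Cosine similarity between CLIP text and image embeddings; a drop
    after an error-revealing test indicates the T2I output diverged."""
    inputs = processor(text=[text], images=image, return_tensors="pt",
                       padding=True)
    with torch.no_grad():
        out = model(**inputs)
    t = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    v = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    return float((t * v).sum())
```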
arXiv Detail & Related papers (2023-12-20T11:19:23Z)
- Understanding writing style in social media with a supervised contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus of 4.5 × 10^6 authored texts derived from public sources.
Using a support base of 8 documents of 512 tokens, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
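Operationally, that discernment task reduces to nearest-neighbor search over style embeddings. A minimal sketch, with a hypothetical embed() standing in for the STAR encoder (not its released API):

```python
import numpy as np

def embed(doc: str) -> np.ndarray:
    """Hypothetical stand-in for a style encoder such as STAR."""
    rng = np.random.default_rng(abs(hash(doc)) % 2**32)
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)

def author_profile(support_docs: list[str]) -> np.ndarray:
    """Average the support base (e.g., 8 documents of 512 tokens)."""
    m = np.mean([embed(d) for d in support_docs], axis=0)
    return m / np.linalg.norm(m)

def attribute(query_docs: list[str], profiles: dict[str, np.ndarray]) -> str:
    """Pick the candidate author (out of up to 1616) whose profile is
    most cosine-similar to the query's profile."""
    q = author_profile(query_docs)
    return max(profiles, key=lambda a: float(q @ profiles[a]))
```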
arXiv Detail & Related papers (2023-10-17T09:01:17Z)
- An Image is Worth a Thousand Toxic Words: A Metamorphic Testing Framework for Content Moderation Software [64.367830425115]
Social media platforms are being increasingly misused to spread toxic content, including hate speech, malicious advertising, and pornography.
Despite tremendous efforts in developing and deploying content moderation methods, malicious users can evade moderation by embedding texts into images.
We propose a metamorphic testing framework for content moderation software.
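The evasion channel itself is trivial to reproduce, which is what makes it a natural metamorphic relation: the rendered image carries the same toxic content and should still be flagged. A minimal Pillow sketch (layout values are arbitrary):

```python
from PIL import Image, ImageDraw

def text_to_image(text: str, size=(512, 128)) -> Image.Image:
    """Render text into an image; a text-only moderator never sees the
    words, so a multimodal moderator must flag the image instead."""
    img = Image.new("RGB", size, "white")
    ImageDraw.Draw(img).text((10, 50), text, fill="black")
    return img
```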
arXiv Detail & Related papers (2023-08-18T20:33:06Z)
- Validating Multimedia Content Moderation Software via Semantic Fusion [16.322773343799575]
We introduce Semantic Fusion, a general, effective methodology for validating multimedia content moderation software.
We employ DUO, the testing tool built on Semantic Fusion, to test five commercial content moderation software products and two state-of-the-art models against three kinds of toxic content.
The results show that DUO achieves up to 100% error finding rate (EFR) when testing moderation software.
arXiv Detail & Related papers (2023-05-23T02:44:15Z)
- NoisyHate: Benchmarking Content Moderation Machine Learning Models with Human-Written Perturbations Online [14.95221806760152]
This paper introduces a benchmark test set of human-written perturbations collected online for toxic speech detection models.
We also test this data on state-of-the-art language models, such as BERT and RoBERTa, to demonstrate that adversarial attacks built from real human-written perturbations are still effective.
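Evaluating against such a benchmark amounts to scoring clean/perturbed pairs and counting the flips. A minimal sketch with the Hugging Face pipeline API (the checkpoint and pair format are illustrative choices, not the benchmark's actual schema):

```python
from transformers import pipeline

# Any fine-tuned toxicity classifier can stand in for the BERT/RoBERTa
# models tested in the paper; this public checkpoint is one example.
clf = pipeline("text-classification", model="unitary/toxic-bert")

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    out = clf(text)[0]
    return out["label"] == "toxic" and out["score"] >= threshold

def attack_success_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (clean, perturbed) pairs where the clean text is
    flagged but its human-written perturbation slips through."""
    flagged = [(c, p) for c, p in pairs if is_toxic(c)]
    if not flagged:
        return 0.0
    return sum(1 for _, p in flagged if not is_toxic(p)) / len(flagged)
```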
arXiv Detail & Related papers (2023-03-18T14:54:57Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases insignificant changes in input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of evading content moderation.
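Both sides of that arms race can be sketched as character mappings: a forward map that camouflages a keyword and an inverse map that normalizes text before moderation. The substitution table below is illustrative; the article's multilingual tools are more sophisticated.

```python
# Leetspeak-style word camouflage and a naive normalizer for it.
CAMO = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"}
UNCAMO = {v: k for k, v in CAMO.items()}

def camouflage(word: str) -> str:
    """Simulate evasion: rewrite characters to dodge keyword filters."""
    return "".join(CAMO.get(c, c) for c in word.lower())

def normalize(text: str) -> str:
    """Counter it: map camouflaged characters back before moderating."""
    return "".join(UNCAMO.get(c, c) for c in text)

assert camouflage("toxic") == "70x1c"
assert normalize("70x1c") == "toxic"
```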
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts [57.38912708076231]
We introduce MaRCo, a detoxification algorithm that combines controllable generation and text rewriting methods.
MaRCo uses likelihoods under a non-toxic LM and a toxic LM to find candidate words to mask and potentially replace.
We evaluate our method on several subtle toxicity and microaggressions datasets, and show that it not only outperforms baselines on automatic metrics, but MaRCo's rewrites are preferred 2.1 times more in human evaluation.
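The mask-selection step can be pictured as a per-token likelihood ratio between the two experts. The sketch below assumes hypothetical scoring functions toxic_logp and nontoxic_logp standing in for the paper's expert and anti-expert LMs:

```python
def mask_candidates(tokens, toxic_logp, nontoxic_logp, threshold=1.0):
    """Mark tokens that are much more likely under the toxic LM than
    under the non-toxic LM as candidates to mask and rewrite."""
    out = []
    for i, tok in enumerate(tokens):
        # Contextual log-likelihood of token i under each expert
        # (hypothetical signatures: f(tokens, position) -> float).
        ratio = toxic_logp(tokens, i) - nontoxic_logp(tokens, i)
        out.append("[MASK]" if ratio > threshold else tok)
    return out
```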
arXiv Detail & Related papers (2022-12-20T18:50:00Z)
- Toxicity Detection for Indic Multilingual Social Media Content [0.0]
This paper describes the system proposed by team 'Moj Masti' using the data provided by ShareChat/Moj in the IIIT-D Abusive Comment Identification challenge.
We focus on how multilingual transformer-based pre-trained and fine-tuned models can be leveraged for code-mixed/code-switched classification tasks, as in the sketch below.
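A minimal sketch of that approach, with an illustrative multilingual checkpoint (the summary does not name the team's exact models):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "xlm-roberta-base"  # illustrative multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Code-mixed text is tokenized like any other input; the shared
# multilingual subword vocabulary is what enables transfer.
batch = tokenizer("yeh comment bilkul abusive hai", return_tensors="pt")
logits = model(**batch).logits  # fine-tuning on labeled data follows
```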
arXiv Detail & Related papers (2022-01-03T12:01:47Z)
- RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models [93.151822563361]
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment.
We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.
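Measuring toxic degeneration reduces to sampling continuations and scoring them. A minimal sketch with Hugging Face pipelines (the checkpoints are illustrative; the benchmark itself scores toxicity with Perspective API):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def max_toxicity(prompt: str, n: int = 5) -> float:
    """Per-prompt maximum toxicity over n sampled continuations;
    averaging this across prompts yields the benchmark's
    'expected maximum toxicity' statistic."""
    outs = generator(prompt, max_new_tokens=20, do_sample=True,
                     num_return_sequences=n)
    return max(toxicity(o["generated_text"])[0]["score"] for o in outs)
```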
arXiv Detail & Related papers (2020-09-24T03:17:19Z)