REDDIX-NET: A Novel Dataset and Benchmark for Moderating Online Explicit Services
- URL: http://arxiv.org/abs/2505.23231v1
- Date: Thu, 29 May 2025 08:34:13 GMT
- Title: REDDIX-NET: A Novel Dataset and Benchmark for Moderating Online Explicit Services
- Authors: MSVPJ Sathvik, Manan Roy Choudhury, Rishita Agarwal, Sathwik Narkedimilli, Vivek Gupta
- Abstract summary: REDDIX-NET is a novel benchmark dataset specifically designed for moderating online sexual services. The dataset is derived from thousands of web-scraped NSFW posts on Reddit. We evaluate the classification performance of state-of-the-art large language models.
- Score: 5.212078389585781
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rise of online platforms has enabled covert illicit activities, including online prostitution, that pose challenges for detection and regulation. In this study, we introduce REDDIX-NET, a novel benchmark dataset specifically designed for moderating online sexual services that goes beyond traditional NSFW filters. The dataset is derived from thousands of web-scraped NSFW posts on Reddit and categorizes users into six behavioral classes reflecting different service offerings and user intentions. We evaluate the classification performance of state-of-the-art large language models (GPT-4, LlaMA 3.3-70B-Instruct, Gemini 1.5 Flash, Mistral 8x7B, Qwen 2.5 Turbo, Claude 3.5 Haiku) using advanced quantitative metrics, finding promising results with models like GPT-4 and Gemini 1.5 Flash. Beyond classification, we conduct sentiment and comment analysis, leveraging LLM- and PLM-based approaches and metadata extraction to uncover behavioral and temporal patterns. These analyses reveal peak engagement times and distinct user interaction styles across categories. Our findings provide critical insights into AI-driven moderation and enforcement, offering a scalable framework for platforms to combat online prostitution and associated harms.
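The abstract's evaluation step (scoring LLM predictions across six behavioral classes with quantitative metrics) can be sketched as a macro-averaged F1 computation. This is a minimal illustration, not the paper's actual evaluation code; the class labels here are hypothetical placeholders, since the paper's six category names are not given in this summary.

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight,
    a common choice when class sizes are imbalanced."""
    f1_scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical labels standing in for the paper's six behavioral classes.
LABELS = ["class_1", "class_2", "class_3", "class_4", "class_5", "class_6"]
```

A model's predictions over the test split would then be scored with `macro_f1(gold_labels, model_labels, LABELS)`, and the same loop yields per-class precision/recall if finer-grained comparison across models is needed.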
Related papers
- Can LLMs Talk 'Sex'? Exploring How AI Models Handle Intimate Conversations [0.0]
This study examines how four prominent large language models handle sexually oriented requests through qualitative content analysis. Claude 3.7 Sonnet employs strict and consistent prohibitions, while GPT-4o navigates user interactions through nuanced contextual redirection. Gemini 2.5 Flash exhibits permissiveness with threshold-based limits, and Deepseek-V3 demonstrates troublingly inconsistent boundary enforcement and performative refusals.
arXiv Detail & Related papers (2025-06-05T18:55:37Z) - Benchmarking Unified Face Attack Detection via Hierarchical Prompt Tuning [58.16354555208417]
PAD and FFD are proposed to protect face data from physical media-based Presentation Attacks and digital editing-based DeepFakes, respectively. The lack of a Unified Face Attack Detection model to simultaneously handle attacks in these two categories is mainly attributed to two factors. We present a novel Visual-Language Model-based Hierarchical Prompt Tuning Framework that adaptively explores multiple classification criteria from different semantic spaces.
arXiv Detail & Related papers (2025-05-19T16:35:45Z) - Towards Realistic Evaluation of Commit Message Generation by Matching Online and Offline Settings [77.20838441870151]
We use an online metric - the number of edits users introduce before committing the generated messages to the VCS - to select metrics for offline experiments. We collect a dataset with 57 pairs consisting of commit messages generated by GPT-4 and their counterparts edited by human experts. Our results indicate that edit distance exhibits the highest correlation with the online metric, whereas commonly used similarity metrics such as BLEU and METEOR demonstrate low correlation.
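The edit-distance metric this abstract favors can be sketched with a standard Levenshtein distance between a generated message and its human-edited counterpart. This is a generic illustration of the metric family, not the paper's implementation; the example strings are invented.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b (dynamic programming,
    one row of the DP table kept at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(
                prev[j] + 1,               # delete ca
                cur[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = cur
    return prev[-1]

# Hypothetical generated/edited commit-message pair for illustration.
generated = "fix bug in parser"
edited = "fix off-by-one bug in parser"
distance = levenshtein(generated, edited)
```

A lower distance means the user changed less of the generated message before committing, which is why it tracks the online editing signal more closely than n-gram overlap scores such as BLEU.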
arXiv Detail & Related papers (2024-10-15T20:32:07Z) - Re-examining Sexism and Misogyny Classification with Annotator Attitudes [9.544313152137262]
Gender-Based Violence (GBV) is an increasing problem online, but existing datasets fail to capture the plurality of possible annotator perspectives.
We revisit two important stages in the moderation pipeline for GBV: (1) manual data labelling; and (2) automated classification.
arXiv Detail & Related papers (2024-10-04T15:57:58Z) - Generative Pre-trained Ranking Model with Over-parameterization at Web-Scale (Extended Abstract) [73.57710917145212]
Learning to rank is widely employed in web searches to prioritize pertinent webpages based on input queries.
We propose a Generative Semi-Supervised Pre-trained (GS2P) model to address these challenges.
We conduct extensive offline experiments on both a publicly available dataset and a real-world dataset collected from a large-scale search engine.
arXiv Detail & Related papers (2024-09-25T03:39:14Z) - Understanding the performance gap between online and offline alignment algorithms [63.137832242488926]
We show that offline algorithms train policy to become good at pairwise classification, while online algorithms are good at generations.
This hints at a unique interplay between discriminative and generative capabilities, which is greatly impacted by the sampling process.
Our study sheds light on the pivotal role of on-policy sampling in AI alignment, and hints at certain fundamental challenges of offline alignment algorithms.
arXiv Detail & Related papers (2024-05-14T09:12:30Z) - Phishing Website Detection through Multi-Model Analysis of HTML Content [0.0]
This study addresses the pressing issue of phishing by introducing an advanced detection model that meticulously focuses on HTML content.
Our proposed approach integrates a specialized Multi-Layer Perceptron (MLP) model for structured tabular data and two pretrained Natural Language Processing (NLP) models for analyzing textual features.
The fusion of the two NLP models and the MLP model, termed MultiText-LP, achieves impressive results, yielding a 96.80 F1 score and a 97.18 accuracy score on our research dataset.
arXiv Detail & Related papers (2024-01-09T21:08:13Z) - Understanding writing style in social media with a supervised
contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus derived from public sources of 4.5 x 10^6 authored texts.
Using a support base of 8 documents of 512 tokens, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
arXiv Detail & Related papers (2023-10-17T09:01:17Z) - Temporal Interest Network for User Response Prediction [33.95334897033641]
We propose a Temporal Interest Network (TIN) to capture the semantic-temporal correlation simultaneously between behaviors and the target.
TIN has been successfully deployed in production since October 2023, serving the WeChat Moments traffic.
arXiv Detail & Related papers (2023-08-15T05:48:44Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Detecting Online Hate Speech: Approaches Using Weak Supervision and
Network Embedding Models [2.3322477552758234]
We (i) propose a weak supervision deep learning model that quantitatively uncovers hateful users, and (ii) present a novel qualitative analysis to uncover indirect hateful conversations.
We evaluate our model on 19.2M posts and show that our weak supervision model outperforms the baseline models in identifying indirect hateful interactions.
We also analyze a multilayer network, constructed from two types of user interactions in Gab (quote and reply) and interaction scores from the weak supervision model as edge weights, to predict hateful users.
arXiv Detail & Related papers (2020-07-24T18:13:52Z) - Learning Diverse Fashion Collocation by Neural Graph Filtering [78.9188246136867]
We propose a novel fashion collocation framework, Neural Graph Filtering, that models a flexible set of fashion items via a graph neural network.
By applying symmetric operations on the edge vectors, this framework allows varying numbers of inputs/outputs and is invariant to their ordering.
We evaluate the proposed approach on three popular benchmarks, the Polyvore dataset, the Polyvore-D dataset, and our reorganized Amazon Fashion dataset.
arXiv Detail & Related papers (2020-03-11T16:17:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.