TikGuard: A Deep Learning Transformer-Based Solution for Detecting Unsuitable TikTok Content for Kids
- URL: http://arxiv.org/abs/2410.00403v1
- Date: Tue, 1 Oct 2024 05:00:05 GMT
- Title: TikGuard: A Deep Learning Transformer-Based Solution for Detecting Unsuitable TikTok Content for Kids
- Authors: Mazen Balat, Mahmoud Essam Gabr, Hend Bakr, Ahmed B. Zaky
- Abstract summary: This paper introduces TikGuard, a transformer-based deep learning approach aimed at detecting and flagging content unsuitable for children on TikTok.
By using a specially curated dataset, TikHarm, and leveraging advanced video classification techniques, TikGuard achieves an accuracy of 86.7%.
While direct comparisons are limited by the uniqueness of the TikHarm dataset, TikGuard's performance highlights its potential in enhancing content moderation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of short-form videos on platforms like TikTok has brought new challenges in safeguarding young viewers from inappropriate content. Traditional moderation methods often fall short in handling the vast and rapidly changing landscape of user-generated videos, increasing the risk of children encountering harmful material. This paper introduces TikGuard, a transformer-based deep learning approach aimed at detecting and flagging content unsuitable for children on TikTok. By using a specially curated dataset, TikHarm, and leveraging advanced video classification techniques, TikGuard achieves an accuracy of 86.7%, showing a notable improvement over existing methods in similar contexts. While direct comparisons are limited by the uniqueness of the TikHarm dataset, TikGuard's performance highlights its potential in enhancing content moderation, contributing to a safer online experience for minors. This study underscores the effectiveness of transformer models in video classification and sets a foundation for future research in this area.
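The abstract describes the approach only at a high level (a transformer applied to video classification). As a minimal sketch of that idea, the following illustrates self-attention over per-frame embeddings pooled into a clip-level prediction; the layer sizes, the four-category head, and all parameter names are illustrative assumptions, not TikGuard's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(frames, Wq, Wk, Wv):
    # frames: (T, d) per-frame embeddings from some visual backbone
    Q, K, V = frames @ Wq, frames @ Wk, frames @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # temporal attention
    return scores @ V

def classify_video(frames, params):
    attended = self_attention(frames, params["Wq"], params["Wk"], params["Wv"])
    pooled = attended.mean(axis=0)        # temporal average pooling
    logits = pooled @ params["Wo"]        # linear head over harm categories
    return softmax(logits)

rng = np.random.default_rng(0)
d, n_classes = 16, 4  # hypothetical: 4 harm categories, 16-dim embeddings
params = {k: rng.standard_normal((d, d)) * 0.1 for k in ("Wq", "Wk", "Wv")}
params["Wo"] = rng.standard_normal((d, n_classes)) * 0.1
probs = classify_video(rng.standard_normal((8, d)), params)  # 8 sampled frames
```

A real system would of course learn these weights end to end and use a pretrained video transformer backbone; the point here is only the shape of the computation.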
Related papers
- Catching Dark Signals in Algorithms: Unveiling Audiovisual and Thematic Markers of Unsafe Content Recommended for Children and Teenagers [13.39320891153433]
The prevalence of short-form video platforms, combined with the ineffectiveness of age verification mechanisms, raises concerns about the potential harms facing children and teenagers in an algorithm-moderated online environment. We conducted multimodal feature analysis and thematic topic modeling of 4,492 short videos recommended to children and teenagers on Instagram Reels, TikTok, and YouTube Shorts. This feature-level and content-level analysis revealed that unsafe (i.e., problematic, mentally distressing) short videos possess darker visual features and contain explicitly harmful content and implicit harm from anxiety-inducing ordinary content.
arXiv Detail & Related papers (2025-07-16T18:41:42Z) - When Kids Mode Isn't For Kids: Investigating TikTok's "Under 13 Experience" [3.7436113672723534]
TikTok, the social media platform, offers a more restrictive "Under 13 Experience" exclusively for young users in the US, also known as TikTok's "Kids Mode". While prior research has studied various aspects of TikTok's regular mode, TikTok's Kids Mode remains understudied. We find that 83% of videos observed on the "For You" page in Kids Mode are actually not child-directed, and even inappropriate content was found.
arXiv Detail & Related papers (2025-06-30T22:31:31Z) - SNIFR : Boosting Fine-Grained Child Harmful Content Detection Through Audio-Visual Alignment with Cascaded Cross-Transformer [6.590879020134438]
Malicious users exploit moderation systems by embedding unsafe content in minimal frames to evade detection. In this study, we combine audio cues with visual ones for fine-grained child harmful content detection and introduce SNIFR, a novel framework for effective alignment.
arXiv Detail & Related papers (2025-06-03T20:37:23Z) - Protecting Young Users on Social Media: Evaluating the Effectiveness of Content Moderation and Legal Safeguards on Video Sharing Platforms [0.8198234257428011]
We evaluated the effectiveness of video moderation for different age groups on TikTok, YouTube, and Instagram. For passive scrolling, accounts assigned to the age 13 group encountered videos that were deemed harmful more frequently and quickly than those assigned to the age 18 group. Exposure occurred without user-initiated searches, indicating weaknesses in the algorithmic filtering systems.
arXiv Detail & Related papers (2025-05-16T12:06:42Z) - EdgeAIGuard: Agentic LLMs for Minor Protection in Digital Spaces [13.180252900900854]
We propose the EdgeAIGuard content moderation approach to protect minors from online grooming and various forms of digital exploitation.
The proposed method comprises a multi-agent architecture deployed strategically at the network edge to enable rapid detection with low latency and prevent harmful content targeting minors.
arXiv Detail & Related papers (2025-02-28T16:29:34Z) - Enhance-A-Video: Better Generated Video for Free [57.620595159855064]
We introduce a training-free approach to enhance the coherence and quality of DiT-based generated videos.
Our approach can be easily applied to most DiT-based video generation frameworks without any retraining or fine-tuning.
arXiv Detail & Related papers (2025-02-11T12:22:35Z) - The Value of Nothing: Multimodal Extraction of Human Values Expressed by TikTok Influencers [2.3592914313389253]
In this paper, we extract implicit values from TikTok movies uploaded by online influencers targeting children and adolescents.
We curated a dataset of hundreds of TikTok movies and annotated them according to the Schwartz Theory of Personal Values.
Our results pave the way to further research on influence and value transmission in video-based social platforms.
arXiv Detail & Related papers (2025-01-20T22:21:18Z) - Conspiracy theories and where to find them on TikTok [3.424635462664968]
Concerns have been raised about the potential of TikTok to promote and amplify online harmful and dangerous content.
Our study analyzes the presence of videos promoting conspiracy theories, providing a lower-bound estimate of their prevalence.
We evaluate the capabilities of state-of-the-art open Large Language Models to identify conspiracy theories after extracting audio transcriptions of videos.
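The pipeline this summary describes (transcribe the video's audio, then ask an open LLM whether the transcript promotes a conspiracy theory) can be sketched as below. The prompt wording, the YES/NO protocol, and the `fake_llm` stand-in are all assumptions for illustration; the paper's actual prompts and models are not reproduced here.

```python
def build_prompt(transcript):
    # hypothetical moderation prompt for a conspiracy-theory screen
    return (
        "You are a content moderator. Does the following video transcript "
        "promote a conspiracy theory? Answer YES or NO.\n\n"
        f"Transcript: {transcript}"
    )

def parse_verdict(reply):
    # map a free-form model reply onto a boolean flag
    return reply.strip().upper().startswith("YES")

def fake_llm(prompt):
    # stand-in for a real open-LLM call (e.g., a local inference server);
    # always answers NO so the sketch is self-contained and deterministic
    return "NO"

flagged = parse_verdict(fake_llm(build_prompt("a recipe for pasta at home")))
```

In practice the transcription step would come from a speech-to-text model run on the video's audio track, and the verdict parsing would need to be more robust than a prefix check.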
arXiv Detail & Related papers (2024-07-17T13:28:11Z) - VITON-DiT: Learning In-the-Wild Video Try-On from Human Dance Videos via Diffusion Transformers [53.45587477621942]
We propose the first DiT-based video try-on framework for practical in-the-wild applications, named VITON-DiT.
Specifically, VITON-DiT consists of a garment extractor, a Spatial-Temporal denoising DiT, and an identity preservation ControlNet.
We also introduce random selection strategies during training and an Interpolated Auto-Regressive (IAR) technique at inference to facilitate long video generation.
arXiv Detail & Related papers (2024-05-28T16:21:03Z) - Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class
Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z) - Teacher Agent: A Knowledge Distillation-Free Framework for
Rehearsal-based Video Incremental Learning [29.52218286906986]
Rehearsal-based video incremental learning often employs knowledge distillation to mitigate catastrophic forgetting of previously learned data.
We propose a knowledge distillation-free framework for rehearsal-based video incremental learning called Teacher Agent.
arXiv Detail & Related papers (2023-06-01T06:54:56Z) - Supervised Masked Knowledge Distillation for Few-Shot Transformers [36.46755346410219]
We propose a novel Supervised Masked Knowledge Distillation model (SMKD) for few-shot Transformers.
Compared with previous self-supervised methods, we allow intra-class knowledge distillation on both class and patch tokens.
Our method, despite its simple design, outperforms previous methods by a large margin and achieves a new state-of-the-art.
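SMKD's full objective involves masking and intra-class pairing across images, which is not spelled out in this summary; as a minimal sketch, the token-level distillation term (align student class/patch token embeddings with a teacher's) can be written as a cosine loss. The shapes and variable names are illustrative assumptions.

```python
import numpy as np

def cosine_distill_loss(student_tokens, teacher_tokens):
    # distillation target: make student token embeddings point in the
    # same direction as the (frozen) teacher's, averaged over tokens
    s = student_tokens / np.linalg.norm(student_tokens, axis=-1, keepdims=True)
    t = teacher_tokens / np.linalg.norm(teacher_tokens, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=-1)))

rng = np.random.default_rng(1)
tokens = rng.standard_normal((5, 32))  # e.g. one class token + 4 patch tokens
zero_loss = cosine_distill_loss(tokens, tokens)    # identical views -> 0
max_loss = cosine_distill_loss(tokens, -tokens)    # opposite views -> 2
```

Intra-class distillation would apply this same loss between tokens of different images that share a label, rather than between two views of the same image.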
arXiv Detail & Related papers (2023-03-25T03:31:46Z) - Weakly Supervised Two-Stage Training Scheme for Deep Video Fight Detection Model [0.0]
Fight detection in videos is an emerging deep learning application with today's prevalence of surveillance systems and streaming media.
Previous work has largely relied on action recognition techniques to tackle this problem.
We design the fight detection model as a composition of an action-aware feature extractor and an anomaly score generator.
arXiv Detail & Related papers (2022-09-23T08:29:16Z) - Unsupervised Domain Adaptation for Video Transformers in Action Recognition [76.31442702219461]
We propose a simple and novel UDA approach for video action recognition.
Our approach builds a robust source model that better generalises to target domain.
We report results on two video action recognition benchmarks for UDA.
arXiv Detail & Related papers (2022-07-26T12:17:39Z) - Anomaly detection in surveillance videos using transformer based attention model [3.2968779106235586]
This research suggests using a weakly supervised strategy to avoid annotating anomalous segments in training videos.
The proposed framework is validated on a real-world dataset, the ShanghaiTech Campus dataset.
arXiv Detail & Related papers (2022-06-03T12:19:39Z) - Weakly Supervised Video Salient Object Detection [79.51227350937721]
We present the first weakly supervised video salient object detection model based on relabeled "fixation guided scribble annotations".
An "Appearance-motion fusion module" and bidirectional ConvLSTM based framework are proposed to achieve effective multi-modal learning and long-term temporal context modeling.
arXiv Detail & Related papers (2021-04-06T09:48:38Z) - Semi-Supervised Action Recognition with Temporal Contrastive Learning [50.08957096801457]
We learn a two-pathway temporal contrastive model using unlabeled videos at two different speeds.
We considerably outperform video extensions of sophisticated state-of-the-art semi-supervised image recognition methods.
arXiv Detail & Related papers (2021-02-04T17:28:35Z) - Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed Videos [82.02074241700728]
In this paper, we present an action recognition model that is trained with only video-frame labels.
Our method uses per-person detectors trained on large image datasets within a Multiple Instance Learning framework.
We show how we can apply our method in cases where the standard Multiple Instance Learning assumption, that each bag contains at least one instance with the specified label, is invalid.
arXiv Detail & Related papers (2020-07-21T10:45:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.