Malicious or Benign? Towards Effective Content Moderation for Children's
Videos
- URL: http://arxiv.org/abs/2305.15551v1
- Date: Wed, 24 May 2023 20:33:38 GMT
- Title: Malicious or Benign? Towards Effective Content Moderation for Children's
Videos
- Authors: Syed Hammad Ahmed, Muhammad Junaid Khan, H. M. Umer Qaisar and Gita
Sukthankar
- Abstract summary: This paper introduces our toolkit Malicious or Benign for promoting research on automated content moderation of children's videos.
We present 1) a customizable annotation tool for videos, 2) a new dataset with difficult-to-detect test cases of malicious content, and 3) a benchmark suite of state-of-the-art video classification models.
- Score: 1.0323063834827415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online video platforms receive hundreds of hours of uploads every minute,
making manual content moderation impossible. Unfortunately, the most vulnerable
consumers of malicious video content are children from ages 1-5 whose attention
is easily captured by bursts of color and sound. Scammers attempting to
monetize their content may craft malicious children's videos that are
superficially similar to educational videos, but include scary and disgusting
characters, violent motions, loud music, and disturbing noises. Prominent video
hosting platforms like YouTube have taken measures to mitigate malicious
content on their platform, but these videos often go undetected by current
content moderation tools that are focused on removing pornographic or
copyrighted content. This paper introduces our toolkit Malicious or Benign for
promoting research on automated content moderation of children's videos. We
present 1) a customizable annotation tool for videos, 2) a new dataset with
difficult-to-detect test cases of malicious content, and 3) a benchmark suite of
state-of-the-art video classification models.
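The benchmark suite evaluates standard video classification models on the dataset. As a rough, hedged illustration of that kind of evaluation (not the toolkit's released code), the sketch below runs torchvision's pretrained R3D-18 backbone with an assumed binary malicious/benign head over a dummy clip:

import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

# Standard video backbone pretrained on Kinetics-400; the binary head is an
# illustrative assumption and is untrained here (it would be fine-tuned on the
# annotated training split in a real benchmark).
weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights)
model.fc = nn.Linear(model.fc.in_features, 2)  # malicious vs. benign
model.eval()
preprocess = weights.transforms()

def classify_clip(frames: torch.Tensor) -> str:
    """frames: (T, C, H, W) uint8 tensor of decoded video frames."""
    batch = preprocess(frames).unsqueeze(0)  # -> (1, C, T, H, W)
    with torch.no_grad():
        logits = model(batch)
    return ["benign", "malicious"][logits.argmax(dim=1).item()]

# Dummy 16-frame clip stands in for a decoded video segment.
dummy = torch.randint(0, 256, (16, 3, 240, 320), dtype=torch.uint8)
print(classify_clip(dummy))

In a real run the head would be trained on the annotated clips and evaluated on the difficult-to-detect test cases the dataset is built around.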
Related papers
- Towards Understanding Unsafe Video Generation [10.269782780518428]
Video generation models (VGMs) have demonstrated the capability to synthesize high-quality output.
We identify 5 unsafe video categories: Distorted/Weird, Terrifying, Pornographic, Violent/Bloody, and Political.
We then study possible defense mechanisms to prevent the generation of unsafe videos.
arXiv Detail & Related papers (2024-07-17T14:07:22Z)
- Enhanced Multimodal Content Moderation of Children's Videos using Audiovisual Fusion [0.6963971634605796]
We present an efficient adaptation of CLIP that can leverage contextual audio cues for enhanced content moderation.
We conduct experiments on a multimodal version of the MOB (Malicious or Benign) dataset in supervised and few-shot settings; a minimal fusion sketch appears after this list.
arXiv Detail & Related papers (2024-05-09T22:19:40Z)
- An Image is Worth a Thousand Toxic Words: A Metamorphic Testing Framework for Content Moderation Software [64.367830425115]
Social media platforms are being increasingly misused to spread toxic content, including hate speech, malicious advertising, and pornography.
Despite tremendous efforts in developing and deploying content moderation methods, malicious users can evade moderation by embedding texts into images.
We propose a metamorphic testing framework for content moderation software.
arXiv Detail & Related papers (2023-08-18T20:33:06Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- YouTubers Not madeForKids: Detecting Channels Sharing Inappropriate Videos Targeting Children [3.936965297430477]
We study YouTube channels found to post suitable or disturbing videos targeting kids in the past.
We identify a clear discrepancy between what YouTube assumes and flags as inappropriate content and channels, versus what is found to be disturbing content that is still available on the platform.
arXiv Detail & Related papers (2022-05-27T10:34:15Z)
- Subjective and Objective Analysis of Streamed Gaming Videos [60.32100758447269]
We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, called the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
arXiv Detail & Related papers (2022-03-24T03:02:57Z)
- 'Beach' to 'Bitch': Inadvertent Unsafe Transcription of Kids' Content on YouTube [13.116806430326513]
Well-known automatic speech recognition (ASR) systems may produce text content highly inappropriate for kids while transcribing YouTube Kids' videos.
We release a first-of-its-kind dataset of audio clips for which existing state-of-the-art ASR systems hallucinate content inappropriate for kids.
arXiv Detail & Related papers (2022-02-17T19:19:09Z)
- VPN: Video Provenance Network for Robust Content Attribution [72.12494245048504]
We present VPN - a content attribution method for recovering provenance information from videos shared online.
We learn a robust search embedding for matching such videos, using full-length or truncated video queries.
Once matched against a trusted database of video clips, associated information on the provenance of the clip is presented to the user.
arXiv Detail & Related papers (2021-09-21T09:07:05Z)
- Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube [1.9138099871648453]
We study the degree of problematic content on YouTube and the role of the recommendation algorithm in the dissemination of such content.
Our analysis reveals that young children are likely to encounter disturbing content when they randomly browse the platform.
We find that Incel activity is increasing over time and that platforms may play an active role in steering users towards extreme content.
arXiv Detail & Related papers (2021-05-20T15:10:48Z)
- Efficient video integrity analysis through container characterization [77.45740041478743]
We introduce a container-based method to identify the software used to perform a video manipulation.
The proposed method is both efficient and effective and can also provide a simple explanation for its decisions.
It achieves an accuracy of 97.6% in distinguishing pristine from tampered videos and classifying the editing software.
arXiv Detail & Related papers (2021-01-26T14:13:39Z)
- Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos [98.87558262467257]
This study explores cultural preferences among countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)
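As noted in the audiovisual-fusion entry above, contextual audio cues can complement CLIP's visual features. The sketch below shows one plausible late-fusion setup, assuming frozen CLIP frame embeddings concatenated with a pooled mel-spectrogram feature and a small trainable head; the model names, feature sizes, and fusion head are illustrative assumptions, not the cited paper's method:

import numpy as np
import torch
import torch.nn as nn
import torchaudio
from transformers import CLIPModel, CLIPImageProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

# The fusion head is untrained here; in practice it would be fit on labeled
# multimodal clips (e.g., supervised or few-shot splits of such a dataset).
fusion_head = nn.Sequential(nn.Linear(512 + 64, 128), nn.ReLU(), nn.Linear(128, 2))

def fuse_and_classify(frames, waveform):
    """frames: list of HxWx3 uint8 arrays; waveform: (1, num_samples) float tensor at 16 kHz."""
    pixels = processor(images=frames, return_tensors="pt").pixel_values
    with torch.no_grad():
        visual = clip.get_image_features(pixel_values=pixels).mean(dim=0)  # (512,)
    audio = mel(waveform).mean(dim=(0, 2))                                  # (64,)
    logits = fusion_head(torch.cat([visual, audio]).unsqueeze(0))
    return ["benign", "malicious"][logits.argmax(dim=1).item()]

# Dummy inputs: eight blank frames and one second of silence.
dummy_frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(8)]
dummy_audio = torch.zeros(1, 16000)
print(fuse_and_classify(dummy_frames, dummy_audio))

Under this design only the small head needs training; the pretrained encoders stay frozen and supply per-modality features.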
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.