Security Advice for Parents and Children About Content Filtering and
Circumvention as Found on YouTube and TikTok
- URL: http://arxiv.org/abs/2402.03255v1
- Date: Mon, 5 Feb 2024 18:12:33 GMT
- Title: Security Advice for Parents and Children About Content Filtering and
Circumvention as Found on YouTube and TikTok
- Authors: Ran Elgedawy, John Sadik, Anuj Gautam, Trinity Bissahoyo, Christopher
Childress, Jacob Leonard, Clay Shubert, Scott Ruoti
- Abstract summary: We examine the advice available to parents and children regarding content filtering and circumvention as found on YouTube and TikTok.
Our results show that of these videos, roughly three-quarters are accurate, with the remaining one-fourth containing factually incorrect advice.
We find that videos targeting children are more likely to be both incorrect and actionable than videos targeting parents, leaving children at increased risk of taking harmful action.
- Score: 2.743215038883957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In today's digital age, concerns about online security and privacy have
become paramount. However, addressing these issues can be difficult, especially
within the context of family relationships, wherein parents and children may
have conflicting interests. In this environment, parents and children may turn
to online security advice to determine how to proceed. In this paper, we
examine the advice available to parents and children regarding content
filtering and circumvention as found on YouTube and TikTok. In an analysis of
839 videos returned from queries on these topics, we found that half (n=399)
provide relevant advice. Our results show that of these videos, roughly
three-quarters are accurate, with the remaining one-fourth containing factually
incorrect advice. We find that videos targeting children are more likely to be
both incorrect and actionable than videos targeting parents, leaving children
at increased risk of taking harmful action. Moreover, we find that while advice
videos targeting parents will occasionally discuss the ethics of content
filtering and device monitoring (including recommendations to respect
children's autonomy), no such discussion of the ethics or risks of circumventing
content filtering is given to children, leaving them unaware of any risks that
may be involved with doing so. Ultimately, our research indicates that
video-based social media sites are already effective sources of security advice
propagation and that the public would benefit from security researchers and
practitioners engaging more with these platforms, both for the creation of
content and of tools designed to help with more effective filtering.
Related papers
- More Skin, More Likes! Measuring Child Exposure and User Engagement on TikTok [0.0]
The study investigates children's exposure on TikTok, analyzing 432,178 comments across 5,896 videos from 115 user accounts featuring children.
arXiv Detail & Related papers (2024-08-10T19:44:12Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, there is little attention on privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- User Attitudes to Content Moderation in Web Search [49.1574468325115]
We examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search.
We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results.
More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
arXiv Detail & Related papers (2023-10-05T10:57:15Z)
- How to Train Your YouTube Recommender to Avoid Unwanted Videos [51.6864681332515]
"Not interested" and "Don't recommend channel" buttons allow users to indicate disinterest when presented with unwanted recommendations.
We simulated YouTube users with sock puppet agents.
We found that the "Not interested" button worked best, significantly reducing such recommendations in all topics tested.
arXiv Detail & Related papers (2023-07-27T00:21:29Z)
- Malicious or Benign? Towards Effective Content Moderation for Children's Videos [1.0323063834827415]
This paper introduces our toolkit Malicious or Benign for promoting research on automated content moderation of children's videos.
We present 1) a customizable annotation tool for videos, 2) a new dataset with difficult-to-detect test cases of malicious content, and 3) a benchmark suite of state-of-the-art video classification models.
arXiv Detail & Related papers (2023-05-24T20:33:38Z)
- Multi-step Jailbreaking Privacy Attacks on ChatGPT [47.10284364632862]
We study the privacy threats from OpenAI's ChatGPT and the New Bing enhanced by ChatGPT.
We conduct extensive experiments to support our claims and discuss LLMs' privacy implications.
arXiv Detail & Related papers (2023-04-11T13:05:04Z)
- 'Beach' to 'Bitch': Inadvertent Unsafe Transcription of Kids' Content on YouTube [13.116806430326513]
Well-known automatic speech recognition (ASR) systems may produce text content highly inappropriate for kids while transcribing YouTube Kids' videos.
We release a first-of-its-kind data set of audios for which the existing state-of-the-art ASR systems hallucinate inappropriate content for kids.
arXiv Detail & Related papers (2022-02-17T19:19:09Z)
- Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube [1.9138099871648453]
We study the degree of problematic content on YouTube and the role of the recommendation algorithm in the dissemination of such content.
Our analysis reveals that young children are likely to encounter disturbing content when they randomly browse the platform.
We find that Incel activity is increasing over time and that platforms may play an active role in steering users towards extreme content.
arXiv Detail & Related papers (2021-05-20T15:10:48Z)
- Hate, Obscenity, and Insults: Measuring the Exposure of Children to Inappropriate Comments in YouTube [8.688428251722911]
In this paper, we investigate the exposure of young users to inappropriate comments posted on YouTube videos targeting this demographic.
We collected a large-scale dataset of approximately four million records and studied the presence of five age-inappropriate categories and the amount of exposure to each category.
Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments.
arXiv Detail & Related papers (2021-03-03T20:15:22Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
- Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos [98.87558262467257]
This study explores culture preferences among countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)
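Several of the papers above rely on classifiers to flag inappropriate content, including the ensemble classifiers used to detect inappropriate YouTube comments. As a rough illustration of the ensemble idea only, here is a minimal majority-vote sketch in Python; the keyword lists and classifiers are hypothetical stand-ins, not the papers' actual models or categories.

```python
# Minimal majority-vote ensemble sketch for flagging comments.
# The keyword lists below are illustrative placeholders, not any
# paper's real feature set or inappropriate-content categories.

def keyword_classifier(keywords):
    """Build a toy classifier that flags a comment containing any keyword."""
    def classify(comment):
        text = comment.lower()
        return any(k in text for k in keywords)
    return classify

def majority_vote(classifiers, comment):
    """Flag the comment if more than half of the classifiers flag it."""
    votes = sum(1 for classify in classifiers if classify(comment))
    return votes > len(classifiers) / 2

# Three hypothetical "category" classifiers with overlapping keyword lists.
ensemble = [
    keyword_classifier(["hate", "stupid"]),
    keyword_classifier(["stupid", "idiot"]),
    keyword_classifier(["hate", "idiot"]),
]

print(majority_vote(ensemble, "you are stupid"))  # two of three flag it: True
print(majority_vote(ensemble, "great video!"))    # no classifier flags it: False
```

Real systems would replace the keyword rules with trained text classifiers (e.g. over NLP features) and vote across them the same way; the ensemble structure is what the sketch is meant to show.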
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.