Age Appropriate Design: Assessment of TikTok, Twitch, and YouTube Kids
- URL: http://arxiv.org/abs/2208.02638v1
- Date: Thu, 4 Aug 2022 13:02:21 GMT
- Title: Age Appropriate Design: Assessment of TikTok, Twitch, and YouTube Kids
- Authors: Virginia N. L. Franqueira, Jessica A. Annor and Ozgur Kafali
- Abstract summary: We present an analysis of 15 ICO criteria for age appropriate design.
Our findings suggest that some criteria such as age verification and transparency provide adequate guidance for assessment.
Our findings regarding the platforms themselves suggest that they choose to implement the simplest form of self-declared age verification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The presence of children in the online world is increasing at a rapid pace.
As children interact with services such as video sharing, live streaming, and
gaming, a number of concerns arise regarding their security and privacy as well
as their safety. To address such concerns, the UK's Information Commissioner's
Office (ICO) sets out 15 criteria alongside a risk management process for
developers of online services for children. We present an analysis of the 15
ICO criteria for age appropriate design. More precisely, we investigate whether
those criteria provide actionable requirements for developers and whether video
sharing and live streaming platforms that are used by children of different age
ranges (i.e., TikTok, Twitch and YouTube Kids) comply with them. Our findings
regarding the ICO criteria suggest that some criteria such as age verification
and transparency provide adequate guidance for assessment whereas other
criteria such as parental controls, reporting of inappropriate content, and
handling of sensitive data need further clarification. Our findings regarding
the platforms themselves suggest that they choose to implement the simplest
form of self-declared age verification, with limited parental controls and
plenty of room for improvement.
Related papers
- Catching Dark Signals in Algorithms: Unveiling Audiovisual and Thematic Markers of Unsafe Content Recommended for Children and Teenagers [13.39320891153433]
The prevalence of short form video platforms, combined with the ineffectiveness of age verification mechanisms, raises concerns about the potential harms facing children and teenagers in an algorithm-moderated online environment. We conducted multimodal feature analysis and thematic topic modeling of 4,492 short videos recommended to children and teenagers on Instagram Reels, TikTok, and YouTube Shorts. This feature-level and content-level analysis revealed that unsafe (i.e., problematic, mentally distressing) short videos possess darker visual features and contain explicitly harmful content and implicit harm from anxiety-inducing ordinary content.
arXiv Detail & Related papers (2025-07-16T18:41:42Z) - When Kids Mode Isn't For Kids: Investigating TikTok's "Under 13 Experience" [3.7436113672723534]
TikTok, the social media platform, offers a more restrictive "Under 13 Experience" exclusively for young users in the US, also known as TikTok's "Kids Mode". While prior research has studied various aspects of TikTok's regular mode, TikTok's Kids Mode remains understudied. We find that 83% of videos observed on the "For You" page in Kids Mode are actually not child-directed, and even inappropriate content was found.
arXiv Detail & Related papers (2025-06-30T22:31:31Z) - Protecting Young Users on Social Media: Evaluating the Effectiveness of Content Moderation and Legal Safeguards on Video Sharing Platforms [0.8198234257428011]
We evaluated the effectiveness of video moderation for different age groups on TikTok, YouTube, and Instagram. For passive scrolling, accounts assigned to the age 13 group encountered videos deemed harmful more frequently and quickly than those assigned to the age 18 group. Exposure occurred without user-initiated searches, indicating weaknesses in the algorithmic filtering systems.
arXiv Detail & Related papers (2025-05-16T12:06:42Z) - Multimodal Chain-of-Thought Reasoning via ChatGPT to Protect Children from Age-Inappropriate Apps [11.48782824226389]
Maturity rating offers a quick and effective method for guardians to assess the maturity levels of apps.
Few text-mining-based approaches to maturity rating exist.
We present a framework for determining app maturity levels that utilizes multimodal large language models.
arXiv Detail & Related papers (2024-07-08T18:20:10Z) - Ah, that's the great puzzle: On the Quest of a Holistic Understanding of the Harms of Recommender Systems on Children [2.0718016474717196]
Children come across various media items online, many of which are selected by recommender systems (RS) primarily designed for adults.
This raises questions about whether such content is appropriate given children's vulnerable stages of development and the potential risks to their well-being.
We advocate for researchers, practitioners, and policymakers to undertake a more comprehensive examination of the impact of RS on children.
arXiv Detail & Related papers (2024-05-03T12:30:27Z) - Security Advice for Parents and Children About Content Filtering and
Circumvention as Found on YouTube and TikTok [2.743215038883957]
We examine the advice available to parents and children regarding content filtering and circumvention as found on YouTube and TikTok.
Our results show that of these videos, roughly three-quarters are accurate, with the remaining one-fourth containing factually incorrect advice.
We find that videos targeting children are both more likely to be incorrect and actionable than videos targeting parents, leaving children at increased risk of taking harmful action.
arXiv Detail & Related papers (2024-02-05T18:12:33Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content as their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
arXiv Detail & Related papers (2023-04-20T17:53:34Z) - GOAL: A Challenging Knowledge-grounded Video Captioning Benchmark for
Real-time Soccer Commentary Generation [75.60413443783953]
We present GOAL, a benchmark of over 8.9k soccer video clips, 22k sentences, and 42k knowledge triples, proposing a challenging new task setting: Knowledge-grounded Video Captioning (KGVC).
Our data and code are available at https://github.com/THU-KEG/goal.
arXiv Detail & Related papers (2023-03-26T08:43:36Z) - Explainable Abuse Detection as Intent Classification and Slot Filling [66.80201541759409]
We introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone.
We show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.
arXiv Detail & Related papers (2022-10-06T03:33:30Z) - Having your Privacy Cake and Eating it Too: Platform-supported Auditing
of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z) - "Money makes the world go around": Identifying Barriers to Better
Privacy in Children's Apps From Developers' Perspectives [28.40988446675355]
The industry for children's apps is thriving at the cost of children's privacy.
These apps routinely disclose children's data to multiple data trackers and ad networks.
We used a mixed-methods approach to investigate why this is happening and how developers might change their practices.
arXiv Detail & Related papers (2021-11-29T15:27:55Z) - Hate, Obscenity, and Insults: Measuring the Exposure of Children to
Inappropriate Comments in YouTube [8.688428251722911]
In this paper, we investigate the exposure of young users to inappropriate comments posted on YouTube videos targeting this demographic.
We collected a large-scale dataset of approximately four million records and studied the presence of five age-inappropriate categories and the amount of exposure to each category.
Using natural language processing and machine learning techniques, we constructed ensemble classifiers that achieved high accuracy in detecting inappropriate comments.
arXiv Detail & Related papers (2021-03-03T20:15:22Z) - Automatic Recommendation of Strategies for Minimizing Discomfort in
Virtual Environments [58.720142291102135]
In this work, we first present a detailed review of possible causes of Cybersickness (CS).
Our system is able to suggest if the user may be entering in the next moments of the application into an illness situation.
The CSPQ (Cybersickness Profile Questionnaire) is also proposed, which is used to identify the player's susceptibility to CS.
arXiv Detail & Related papers (2020-06-27T19:28:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.