When Kids Mode Isn't For Kids: Investigating TikTok's "Under 13 Experience"
- URL: http://arxiv.org/abs/2507.00299v1
- Date: Mon, 30 Jun 2025 22:31:31 GMT
- Title: When Kids Mode Isn't For Kids: Investigating TikTok's "Under 13 Experience"
- Authors: Olivia Figueira, Pranathi Chamarthi, Tu Le, Athina Markopoulou, et al.
- Abstract summary: TikTok, the social media platform, offers a more restrictive "Under 13 Experience" exclusively for young users in the US, also known as TikTok's "Kids Mode". While prior research has studied various aspects of TikTok's regular mode, TikTok's Kids Mode remains understudied. We find that 83% of videos observed on the "For You" page in Kids Mode are actually not child-directed, and even inappropriate content was found.
- Score: 3.7436113672723534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: TikTok, the social media platform that is popular among children and adolescents, offers a more restrictive "Under 13 Experience" exclusively for young users in the US, also known as TikTok's "Kids Mode". While prior research has studied various aspects of TikTok's regular mode, including privacy and personalization, TikTok's Kids Mode remains understudied, and there is a lack of transparency regarding its content curation and its safety and privacy protections for children. In this paper, (i) we propose an auditing methodology to comprehensively investigate TikTok's Kids Mode and (ii) we apply it to characterize the platform's content curation and determine the prevalence of child-directed content, based on regulations in the Children's Online Privacy Protection Act (COPPA). We find that 83% of videos observed on the "For You" page in Kids Mode are actually not child-directed, and even inappropriate content was found. The platform also lacks critical features, namely parental controls and accessibility settings. Our findings have important design and regulatory implications, as children may be incentivized to use TikTok's regular mode instead of Kids Mode, where they are known to be exposed to further safety and privacy risks.
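As a rough illustration of the prevalence measurement described in the abstract, the sketch below computes the share of sampled "For You" videos that are not child-directed. The `Video` schema and the COPPA-style labels are hypothetical stand-ins for the paper's review process, not its actual pipeline.

```python
# A minimal sketch, assuming hand-labeled videos; the Video schema and the
# COPPA-style labels are hypothetical, not the paper's actual pipeline.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    child_directed: bool   # outcome of a COPPA-factor review (assumed label)
    inappropriate: bool    # flagged as unsuitable for children (assumed label)

def prevalence_report(videos: list[Video]) -> dict[str, float]:
    """Share of sampled "For You" videos that are not child-directed."""
    n = len(videos)
    not_child = sum(1 for v in videos if not v.child_directed)
    flagged = sum(1 for v in videos if v.inappropriate)
    return {
        "not_child_directed_pct": 100 * not_child / n,
        "inappropriate_pct": 100 * flagged / n,
    }

sample = [
    Video("a", child_directed=False, inappropriate=False),
    Video("b", child_directed=True, inappropriate=False),
    Video("c", child_directed=False, inappropriate=True),
]
print(prevalence_report(sample))
```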
Related papers
- Adultification Bias in LLMs and Text-to-Image Models [55.02903075972816]
We study bias along the axes of race and gender in depictions of young girls. We focus on "adultification bias," a phenomenon in which Black girls are presumed to be more defiant, sexually intimate, and culpable than their White peers.
arXiv Detail & Related papers (2025-06-08T21:02:33Z) - Protecting Young Users on Social Media: Evaluating the Effectiveness of Content Moderation and Legal Safeguards on Video Sharing Platforms [0.8198234257428011]
We evaluated the effectiveness of video moderation for different age groups on TikTok, YouTube, and Instagram. For passive scrolling, accounts assigned to the age 13 group encountered videos that were deemed harmful more frequently and quickly than those assigned to the age 18 group. Exposure occurred without user-initiated searches, indicating weaknesses in the algorithmic filtering systems.
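A minimal sketch of the exposure comparison above, assuming a hypothetical per-account scroll log where each video carries a boolean `harmful` label; a lower median means faster exposure.

```python
# Hedged sketch: per-account scroll logs with a boolean "harmful" label are an
# assumed format, not the paper's instrumentation.
from statistics import median

def scrolls_to_first_harmful(feed: list[dict]) -> int | None:
    """Videos scrolled past before the first harmful one (None if never)."""
    for i, video in enumerate(feed, start=1):
        if video["harmful"]:
            return i
    return None

def median_exposure(logs_by_group: dict[str, list[list[dict]]]) -> dict[str, float]:
    out = {}
    for group, feeds in logs_by_group.items():
        counts = [c for feed in feeds
                  if (c := scrolls_to_first_harmful(feed)) is not None]
        out[group] = median(counts)
    return out

logs = {
    "age_13": [[{"harmful": False}, {"harmful": True}], [{"harmful": True}]],
    "age_18": [[{"harmful": False}, {"harmful": False}, {"harmful": True}],
               [{"harmful": False}, {"harmful": True}]],
}
print(median_exposure(logs))  # lower median => faster exposure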
arXiv Detail & Related papers (2025-05-16T12:06:42Z) - TikGuard: A Deep Learning Transformer-Based Solution for Detecting Unsuitable TikTok Content for Kids [0.0]
This paper introduces TikGuard, a transformer-based deep learning approach aimed at detecting and flagging content unsuitable for children on TikTok.
By using a specially curated dataset, TikHarm, and leveraging advanced video classification techniques, TikGuard achieves an accuracy of 86.7%.
While direct comparisons are limited by the uniqueness of the TikHarm dataset, TikGuard's performance highlights its potential in enhancing content moderation.
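A minimal PyTorch sketch of the kind of transformer-based video classifier TikGuard describes: per-frame embeddings are pooled through a transformer encoder and a CLS position is classified. The architecture, dimensions, and two-class setup are assumptions, not the authors' implementation or the TikHarm data format.

```python
# Illustrative architecture only; hyperparameters and I/O format are assumed.
import torch
import torch.nn as nn

class VideoTransformerClassifier(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_layers=4, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        self.head = nn.Linear(feat_dim, n_classes)  # e.g. suitable vs. unsuitable

    def forward(self, frame_feats):            # (batch, frames, feat_dim)
        batch = frame_feats.size(0)
        cls = self.cls_token.expand(batch, -1, -1)
        x = torch.cat([cls, frame_feats], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])              # classify from the CLS position

model = VideoTransformerClassifier()
dummy = torch.randn(2, 16, 512)                # 2 clips, 16 frame embeddings each
print(model(dummy).shape)                      # torch.Size([2, 2])
```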
arXiv Detail & Related papers (2024-10-01T05:00:05Z) - More Skin, More Likes! Measuring Child Exposure and User Engagement on TikTok [0.0]
The study investigates children's exposure on TikTok by analyzing 432,178 comments across 5,896 videos from 115 user accounts featuring children.
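A toy sketch of the exposure/engagement relationship the title hints at, correlating an invented per-video skin-exposure score with comment counts; the scores, schema, and data are illustrative only.

```python
# Toy illustration; the skin-exposure scores and comment counts are invented.
from statistics import correlation  # Python 3.10+

videos = [
    {"skin_exposure_score": 0.1, "comments": 12},
    {"skin_exposure_score": 0.5, "comments": 40},
    {"skin_exposure_score": 0.8, "comments": 95},
]
r = correlation([v["skin_exposure_score"] for v in videos],
                [v["comments"] for v in videos])
print(f"Pearson r = {r:.2f}")  # positive r would echo the title's claim
```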
arXiv Detail & Related papers (2024-08-10T19:44:12Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
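A sketch of the context-augmentation idea behind the reported gain: the moderated message is classified together with the chat turns that preceded it. The separator convention and window size below are illustrative choices, not the paper's exact setup.

```python
# Sketch: prepend preceding chat turns to the message under review before
# feeding it to a moderation classifier. Separator and window are assumed.
def build_classifier_input(target: str, history: list[str], window: int = 2) -> str:
    """Prepend the last `window` chat turns to the message under review."""
    context = history[-window:] if window else []
    return " [SEP] ".join(context + [target])

history = ["streamer: welcome everyone",
           "viewer1: nice stream",
           "viewer2: follow me pls"]
print(build_classifier_input("get out of this chat", history))
# viewer1: nice stream [SEP] viewer2: follow me pls [SEP] get out of this chat
```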
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
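One way to picture such an audit check: compare the topic distribution of the filtered feed against the baseline feed and flag the platform when the two diverge too far. The total variation distance and the threshold below are illustrative choices, not the paper's actual procedure.

```python
# Hedged sketch of a similarity audit; metric and tolerance are assumptions.
from collections import Counter

def topic_distribution(feed: list[str]) -> dict[str, float]:
    counts = Counter(feed)
    total = sum(counts.values())
    return {topic: c / total for topic, c in counts.items()}

def passes_audit(filtered: list[str], baseline: list[str], tol: float = 0.2) -> bool:
    p, q = topic_distribution(filtered), topic_distribution(baseline)
    topics = set(p) | set(q)
    tv_distance = 0.5 * sum(abs(p.get(t, 0) - q.get(t, 0)) for t in topics)
    return tv_distance <= tol

baseline = ["news", "news", "sports", "music"]
filtered = ["news", "sports", "sports", "sports"]
print(passes_audit(filtered, baseline))  # False: feeds diverge beyond tolerance
```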
arXiv Detail & Related papers (2023-04-20T17:53:34Z) - Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
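A hedged scikit-learn sketch of the supervised approach described above: train a classifier on features of outgoing web requests and block those predicted to infringe on privacy. The features, labels, and toy data are invented for illustration, not the paper's feature set.

```python
# Toy features and labels are invented; this is a sketch of the idea, not the
# paper's model.
from sklearn.linear_model import LogisticRegression

# toy features: [third_party, sends_identifier, request_size_kb]
X = [[1, 1, 4.0], [1, 0, 1.2], [0, 0, 0.8], [1, 1, 6.5], [0, 1, 2.0]]
y = [1, 0, 0, 1, 0]  # 1 = privacy-infringing, block it

model = LogisticRegression().fit(X, y)

def should_block(request_features: list[float]) -> bool:
    return bool(model.predict([request_features])[0])

print(should_block([1, 1, 5.0]))   # likely True
print(should_block([0, 0, 0.5]))   # likely False
```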
arXiv Detail & Related papers (2023-04-06T05:20:16Z) - Towards Usable Parental Control for Voice Assistants [6.827452316943251]
We conduct a survey of parents to find out what they like and dislike about current parental control features.
We find that parents need more visuals about their children's activity, easier access to security features for their children, and a better user interface.
arXiv Detail & Related papers (2023-03-09T00:26:42Z) - Age Appropriate Design: Assessment of TikTok, Twitch, and YouTube Kids [0.0]
We present an analysis of 15 ICO criteria for age appropriate design.
Our findings suggest that some criteria, such as age verification and transparency, provide adequate guidance for assessment.
Regarding the platforms themselves, we find that they choose to implement the simplest form of self-declared age verification.
arXiv Detail & Related papers (2022-08-04T13:02:21Z) - Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of proposed legislation in this space.
arXiv Detail & Related papers (2022-07-18T17:32:35Z) - An Empirical Investigation of Personalization Factors on TikTok [77.34726150561087]
Despite the importance of TikTok's algorithm to the platform's success and content distribution, little work has been done on the empirical analysis of the algorithm.
Using a sock-puppet audit methodology with a custom algorithm developed by us, we tested and analysed the effect of the language and location used to access TikTok.
We identify that the follow-feature has the strongest influence, followed by the like-feature and video view rate.
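A sketch of the kind of comparison behind this finding: after a sock puppet performs one interaction type, measure how far its feed diverges from a control puppet's feed. The feed contents and the Jaccard-based divergence metric are illustrative, not the authors' custom algorithm.

```python
# Illustrative divergence comparison; feeds and metric are assumptions.
def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b)

control_feed = {"v1", "v2", "v3", "v4", "v5"}
feeds_after = {
    "follow":    {"v1", "v9", "v10", "v11", "v12"},
    "like":      {"v1", "v2", "v9", "v10", "v11"},
    "view_rate": {"v1", "v2", "v3", "v9", "v10"},
}

influence = {action: 1 - jaccard(feed, control_feed)
             for action, feed in feeds_after.items()}
for action, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{action}: divergence {score:.2f}")
# prints follow > like > view_rate, mirroring the reported ordering
```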
arXiv Detail & Related papers (2022-01-28T17:40:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.