User Negotiations of Authenticity, Ownership, and Governance on AI-Generated Video Platforms: Evidence from Sora
- URL: http://arxiv.org/abs/2512.05519v1
- Date: Fri, 05 Dec 2025 08:23:27 GMT
- Title: User Negotiations of Authenticity, Ownership, and Governance on AI-Generated Video Platforms: Evidence from Sora
- Authors: Bohui Shen, Shrikar Bhatta, Alex Ireebanije, Zexuan Liu, Abhinav Choudhry, Ece Gumusel, Kyrie Zhixuan Zhou
- Abstract summary: This study examines how users make sense of AI-generated videos on OpenAI's Sora. We identify four dynamics that characterize how users negotiate authenticity, authorship, and platform governance.
- Score: 3.6795902817860693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI-generated video platforms rapidly advance, ethical challenges such as copyright infringement emerge. This study examines how users make sense of AI-generated videos on OpenAI's Sora by conducting a qualitative content analysis of user comments. Through a thematic analysis, we identified four dynamics that characterize how users negotiate authenticity, authorship, and platform governance on Sora. First, users acted as critical evaluators of realism, assessing micro-details such as lighting, shadows, fluid motion, and physics to judge whether AI-generated scenes could plausibly exist. Second, users increasingly shifted from passive viewers to active creators, expressing curiosity about prompts, techniques, and creative processes. Text prompts were perceived as intellectual property, generating concerns about plagiarism and remixing norms. Third, users reported blurred boundaries between real and synthetic media, worried about misinformation, and even questioned the authenticity of other commenters, suspecting bot-generated engagement. Fourth, users contested platform governance: some perceived moderation as inconsistent or opaque, while others shared tactics for evading prompt censorship through misspellings, alternative phrasing, emojis, or other languages. Despite this, many users also enforced ethical norms by discouraging the misuse of real people's images or disrespectful content. Together, these patterns highlighted how AI-mediated platforms complicate notions of reality, creativity, and rule-making in emerging digital ecosystems. Based on the findings, we discuss governance challenges in Sora and how user negotiations inform future platform governance.
Related papers
- Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning [66.51617619673587]
We present Skyra, a specialized multimodal large language model (MLLM) that identifies human-perceivable visual artifacts in AI-generated videos. To support this objective, we construct ViF-CoT-4K for Supervised Fine-Tuning (SFT), which represents the first large-scale AI-generated video dataset with fine-grained human annotations. We then develop a two-stage training strategy that systematically enhances our model's spatio-temporal artifact perception, explanation capability, and detection accuracy.
arXiv Detail & Related papers (2025-12-17T18:48:26Z) - A New Digital Divide? Coder Worldviews, the Slop Economy, and Democracy in the Age of AI [0.0]
We present an original survey of software developers in Silicon Valley. Results indicate that most developers recognize the power of their products to influence civil liberties and political discourse. We investigate these findings in the context of an emerging new digital divide, not of internet access but of information quality.
arXiv Detail & Related papers (2025-10-06T12:32:37Z) - To Explain Or Not To Explain: An Empirical Investigation Of AI-Based Recommendations On Social Media Platforms [0.1274452325287335]
This paper investigates social media recommendations from an end-user perspective. We asked participants about social media content suggestions, their comprehensibility, and their explainability. Our analysis shows that users mostly require explanations when they encounter unfamiliar content.
arXiv Detail & Related papers (2025-08-13T01:05:49Z) - DAVID-XR1: Detecting AI-Generated Videos with Explainable Reasoning [58.70446237944036]
DAVID-X is the first dataset to pair AI-generated videos with detailed defect-level, temporal-spatial annotations and written rationales. We present DAVID-XR1, a video-language model designed to deliver an interpretable chain of visual reasoning. Our results highlight the promise of explainable detection methods for trustworthy identification of AI-generated video content.
arXiv Detail & Related papers (2025-06-13T13:39:53Z) - Analyzing User Perceptions of Large Language Models (LLMs) on Reddit: Sentiment and Topic Modeling of ChatGPT and DeepSeek Discussions [0.0]
This study analyzes Reddit discussions about ChatGPT and DeepSeek using sentiment analysis and topic modeling. The report discusses whether users have faith in the technology and what they see as its future.
arXiv Detail & Related papers (2025-02-22T17:00:42Z) - A Comprehensive Content Verification System for ensuring Digital Integrity in the Age of Deep Fakes [0.0]
This paper discusses a solution, a Content Verification System, designed to authenticate images and videos shared as posts or stories across the digital landscape. Going beyond the limitations of blue ticks, this system empowers individuals and influencers to validate the authenticity of their digital footprint, safeguarding their reputation in an interconnected world.
arXiv Detail & Related papers (2024-11-29T14:47:47Z) - "Sora is Incredible and Scary": Emerging Governance Challenges of Text-to-Video Generative AI Models [1.4999444543328293]
We report a qualitative social media analysis aiming to uncover people's perceived impact of and concerns about Sora's integration.
We found that people were most concerned about Sora's impact on content creation-related industries.
Potential regulatory solutions included law-enforced labeling of AI content and AI literacy education for the public.
arXiv Detail & Related papers (2024-04-10T02:03:59Z) - The Face of Populism: Examining Differences in Facial Emotional Expressions of Political Leaders Using Machine Learning [50.24983453990065]
We use a deep-learning approach to process a sample of 220 YouTube videos of political leaders from 15 different countries. We observe statistically significant differences in the average score of negative emotions between groups of leaders with varying degrees of populist rhetoric.
arXiv Detail & Related papers (2023-04-19T18:32:49Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for
Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI)
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.