"There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit
- URL: http://arxiv.org/abs/2311.12702v6
- Date: Wed, 18 Dec 2024 16:58:56 GMT
- Title: "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit
- Authors: Travis Lloyd, Joseph Reagle, Mor Naaman
- Abstract summary: Generative AI has begun to alter how we work, learn, communicate, and participate in online communities.
We focus on online community moderators' experiences with AI-generated content (AIGC)
Our study finds that rules about AIGC are motivated by concerns about content quality, social dynamics, and governance challenges.
We find that, despite the absence of foolproof tools for detecting AIGC, moderators were able to somewhat limit the disruption caused by this new phenomenon by working with their communities to clarify norms.
- Score: 5.202496456440801
- Abstract: Generative AI has begun to alter how we work, learn, communicate, and participate in online communities. How might our online communities be changed by generative AI? To start addressing this question, we focused on online community moderators' experiences with AI-generated content (AIGC). We performed fifteen in-depth, semi-structured interviews with moderators of Reddit communities that restrict the use of AIGC. Our study finds that rules about AIGC are motivated by concerns about content quality, social dynamics, and governance challenges. Moderators fear that, without such rules, AIGC threatens to reduce their communities' utility and social value. We find that, despite the absence of foolproof tools for detecting AIGC, moderators were able to somewhat limit the disruption caused by this new phenomenon by working with their communities to clarify norms. However, moderators found enforcing AIGC restrictions challenging, and had to rely on time-intensive and inaccurate detection heuristics in their efforts. Our results highlight the importance of supporting community autonomy and self-determination in the face of this sudden technological change, and suggest potential design solutions that may help.
Related papers
- AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content [7.978143550070664]
We collected the metadata and community rules for over 300,000 public subreddits.
We labeled subreddits and AI rules according to existing literature and a new taxonomy specific to AI rules.
Our findings illustrate the emergence of varied concerns about AI, in different community contexts.
arXiv Detail & Related papers (2024-10-15T15:31:41Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
Uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Examining the Prevalence and Dynamics of AI-Generated Media in Art Subreddits [13.343255875002459]
We take steps towards examining the potential impact of AI-generated content on art-related communities on Reddit.
We look at image-based posts made to these communities that are transparently created by AI, or comments in these communities that suspect authors of using generative AI.
We show that AI content is more readily used by newcomers and may help increase participation if it aligns with community rules.
arXiv Detail & Related papers (2024-10-09T17:41:13Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Challenges and Remedies to Privacy and Security in AIGC: Exploring the Potential of Privacy Computing, Blockchain, and Beyond [17.904983070032884]
We review the concept, classification and underlying technologies of AIGC.
We discuss the privacy and security challenges faced by AIGC from multiple perspectives.
arXiv Detail & Related papers (2023-06-01T07:49:22Z)
- A Survey on ChatGPT: AI-Generated Contents, Challenges, and Solutions [19.50785795365068]
AIGC uses generative large AI algorithms to assist humans in creating massive, high-quality, and human-like content at a faster pace and lower cost.
This paper presents an in-depth survey of working principles, security and privacy threats, state-of-the-art solutions, and future challenges of the AIGC paradigm.
arXiv Detail & Related papers (2023-05-25T15:09:11Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- A Complete Survey on Generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 All You Need? [112.12974778019304]
Generative AI (AIGC, a.k.a. AI-generated content) has made headlines everywhere because of its ability to analyze and create text, images, and beyond.
In this era of AI transitioning from pure analysis to creation, it is worth noting that ChatGPT, built on the most recent language model GPT-4, addresses just one of numerous AIGC tasks.
This work focuses on the technological development of various AIGC tasks based on their output type, including text, images, videos, 3D content, etc.
arXiv Detail & Related papers (2023-03-21T10:09:47Z)
- A Pathway Towards Responsible AI Generated Content [68.13835802977125]
We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
arXiv Detail & Related papers (2023-03-02T14:58:40Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.