"There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit
- URL: http://arxiv.org/abs/2311.12702v4
- Date: Tue, 2 Jul 2024 22:13:08 GMT
- Title: "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit
- Authors: Travis Lloyd, Joseph Reagle, Mor Naaman
- Abstract summary: We focus on online community moderators' experiences with AI-generated content (AIGC).
Our study finds communities are choosing to enact rules restricting use of AIGC for both ideological and practical reasons.
Our results highlight the importance of supporting community autonomy and self-determination in the face of this sudden technological change.
- Score: 5.202496456440801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI has begun to alter how we work, learn, communicate, and participate in online communities. How might our online communities be changed by generative AI? To start addressing this question, we focused on online community moderators' experiences with AI-generated content (AIGC). We performed fifteen in-depth, semi-structured interviews with community moderators on the social sharing site Reddit to understand their attitudes towards AIGC and how their communities are responding. Our study finds communities are choosing to enact rules restricting use of AIGC for both ideological and practical reasons. We find that, despite the absence of foolproof tools for detecting AIGC, moderators were able to somewhat limit the disruption caused by this new phenomenon by working with their communities to clarify norms about AIGC use. However, moderators found enforcing AIGC restrictions challenging, and had to rely on time-intensive and inaccurate detection heuristics in their efforts. Our results highlight the importance of supporting community autonomy and self-determination in the face of this sudden technological change, and suggest potential design solutions that may help.
Related papers
- AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content [7.978143550070664]
We collected the metadata and community rules for over 300,000 public subreddits.
We labeled subreddits and AI rules according to existing literature and a new taxonomy specific to AI rules.
Our findings illustrate the emergence of varied concerns about AI, in different community contexts.
arXiv Detail & Related papers (2024-10-15T15:31:41Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Examining the Prevalence and Dynamics of AI-Generated Media in Art Subreddits [13.343255875002459]
We take steps towards examining the potential impact of AI-generated content on art-related communities on Reddit.
We look at image-based posts made to these communities that are transparently created by AI, or comments in these communities that suspect authors of using generative AI.
We show that AI content is more readily used by newcomers and may help increase participation if it aligns with community rules.
arXiv Detail & Related papers (2024-10-09T17:41:13Z)
- Challenges and Remedies to Privacy and Security in AIGC: Exploring the Potential of Privacy Computing, Blockchain, and Beyond [17.904983070032884]
We review the concept, classification and underlying technologies of AIGC.
We discuss the privacy and security challenges faced by AIGC from multiple perspectives.
arXiv Detail & Related papers (2023-06-01T07:49:22Z)
- A Survey on ChatGPT: AI-Generated Contents, Challenges, and Solutions [19.50785795365068]
AIGC uses generative large AI algorithms to assist humans in creating massive, high-quality, and human-like content at a faster pace and lower cost.
This paper presents an in-depth survey of working principles, security and privacy threats, state-of-the-art solutions, and future challenges of the AIGC paradigm.
arXiv Detail & Related papers (2023-05-25T15:09:11Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- A Complete Survey on Generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 All You Need? [112.12974778019304]
Generative AI (AIGC, a.k.a. AI-generated content) has made headlines everywhere because of its ability to analyze and create text, images, and beyond.
In the era of AI transitioning from pure analysis to creation, it is worth noting that ChatGPT, with its most recent language model GPT-4, is just a tool out of numerous AIGC tasks.
This work focuses on the technological development of various AIGC tasks based on their output type, including text, images, videos, 3D content, etc.
arXiv Detail & Related papers (2023-03-21T10:09:47Z)
- A Pathway Towards Responsible AI Generated Content [68.13835802977125]
We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
arXiv Detail & Related papers (2023-03-02T14:58:40Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Mediating Community-AI Interaction through Situated Explanation: The Case of AI-Led Moderation [32.50902508512016]
We theorize how explanation is situated in a community's shared values, norms, knowledge, and practices.
We then present a case study of AI-led moderation, where community members collectively develop explanations of AI-led decisions.
arXiv Detail & Related papers (2020-08-19T00:13:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.