AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content
- URL: http://arxiv.org/abs/2410.11698v1
- Date: Tue, 15 Oct 2024 15:31:41 GMT
- Title: AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content
- Authors: Travis Lloyd, Jennah Gosciak, Tung Nguyen, Mor Naaman
- Abstract summary: We collected the metadata and community rules for over 300,000 public subreddits.
We labeled subreddits and AI rules according to existing literature and a new taxonomy specific to AI rules.
Our findings illustrate the emergence of varied concerns about AI in different community contexts.
- Score: 7.978143550070664
- Abstract: How are Reddit communities responding to AI-generated content? We explored this question through a large-scale analysis of subreddit community rules and their change over time. We collected the metadata and community rules for over 300,000 public subreddits and measured the prevalence of rules governing AI. We labeled subreddits and AI rules according to existing taxonomies from the HCI literature and a new taxonomy we developed specific to AI rules. While rules about AI are still relatively uncommon, the number of subreddits with these rules almost doubled over the course of a year. AI rules are also more common in larger subreddits and communities focused on art or celebrity topics, and less common in those focused on social support. These rules often focus on AI images and evoke, as justification, concerns about quality and authenticity. Overall, our findings illustrate the emergence of varied concerns about AI in different community contexts.
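The abstract describes measuring the prevalence of AI-related rules across subreddit rule sets. The paper does not specify its detection method; the following is a minimal illustrative sketch, assuming a simple keyword-matching approach over rule text, with a hypothetical keyword list:

```python
import re

# Hypothetical keyword pattern (not the authors' actual method): flag a
# community rule as AI-related if it mentions common AI-related terms.
AI_PATTERN = re.compile(
    r"\b(ai|artificial intelligence|chatgpt|gpt|midjourney|"
    r"stable diffusion|dall-?e|ai-generated|machine[- ]generated)\b",
    re.IGNORECASE,
)

def is_ai_rule(rule_text: str) -> bool:
    """Return True if the rule text mentions an AI-related term."""
    return bool(AI_PATTERN.search(rule_text))

def ai_rule_prevalence(subreddit_rules: dict[str, list[str]]) -> float:
    """Fraction of subreddits with at least one AI-related rule."""
    if not subreddit_rules:
        return 0.0
    flagged = sum(
        any(is_ai_rule(rule) for rule in rules)
        for rules in subreddit_rules.values()
    )
    return flagged / len(subreddit_rules)
```

For example, `ai_rule_prevalence({"r/art": ["No AI-generated images"], "r/help": ["Be kind"]})` yields 0.5. A production analysis would need a validated keyword list and manual labeling to handle false positives (e.g., "air", which the word boundaries here only partially guard against).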
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Examining the Prevalence and Dynamics of AI-Generated Media in Art Subreddits [13.343255875002459]
We take steps towards examining the potential impact of AI-generated content on art-related communities on Reddit.
We look at image-based posts made to these communities that are transparently created by AI, or comments in these communities that suspect authors of using generative AI.
We show that AI content is more readily used by newcomers and may help increase participation if it aligns with community rules.
arXiv Detail & Related papers (2024-10-09T17:41:13Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit [5.202496456440801]
We focus on online community moderators' experiences with AI-generated content (AIGC)
Our study finds that communities are choosing to enact rules restricting the use of AIGC for both ideological and practical reasons.
Our results highlight the importance of supporting community autonomy and self-determination in the face of this sudden technological change.
arXiv Detail & Related papers (2023-11-21T16:15:21Z) - Governance of the AI, by the AI, and for the AI [9.653656920225858]
The authors believe the age of artificial intelligence has, indeed, finally arrived.
The current state of AI is ushering in profound changes to many different sectors of society.
arXiv Detail & Related papers (2023-05-04T03:29:07Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - AI Ethics Issues in Real World: Evidence from AI Incident Database [0.6091702876917279]
We identify 13 application areas which often see unethical use of AI, with intelligent service robots, language/vision models and autonomous driving taking the lead.
Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety and unfair algorithms.
arXiv Detail & Related papers (2022-06-15T16:25:57Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Could regulating the creators deliver trustworthy AI? [2.588973722689844]
AI is becoming all-pervasive and is often deployed in everyday technologies, devices and services without our knowledge.
This fear is compounded by the inability to point to a trustworthy source of AI.
Some consider trustworthy AI to be that which complies with relevant laws.
Others point to the requirement to comply with ethics and standards.
arXiv Detail & Related papers (2020-06-26T01:32:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.