AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content
- URL: http://arxiv.org/abs/2410.11698v1
- Date: Tue, 15 Oct 2024 15:31:41 GMT
- Title: AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content
- Authors: Travis Lloyd, Jennah Gosciak, Tung Nguyen, Mor Naaman
- Abstract summary: We collected the metadata and community rules for over 300,000 public subreddits.
We labeled subreddits and AI rules according to existing literature and a new taxonomy specific to AI rules.
Our findings illustrate the emergence of varied concerns about AI in different community contexts.
- Score: 7.978143550070664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How are Reddit communities responding to AI-generated content? We explored this question through a large-scale analysis of subreddit community rules and their change over time. We collected the metadata and community rules for over 300,000 public subreddits and measured the prevalence of rules governing AI. We labeled subreddits and AI rules according to existing taxonomies from the HCI literature and a new taxonomy we developed specific to AI rules. While rules about AI are still relatively uncommon, the number of subreddits with these rules almost doubled over the course of a year. AI rules are also more common in larger subreddits and communities focused on art or celebrity topics, and less common in those focused on social support. These rules often focus on AI images and evoke, as justification, concerns about quality and authenticity. Overall, our findings illustrate the emergence of varied concerns about AI in different community contexts.
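The paper does not include its collection pipeline, but the core measurement it describes can be approximated. Below is a minimal sketch, assuming Reddit's public per-subreddit rules endpoint (/about/rules.json); the keyword pattern is a hypothetical stand-in for the paper's AI-rule taxonomy, and the subreddit names are illustrative rather than the study's sample.

```python
# Minimal sketch, not the authors' pipeline: flag subreddits whose public
# rules mention AI. The keyword pattern is a hypothetical stand-in for the
# paper's AI-rule taxonomy.
import re
import requests

# Word-boundary matching avoids false hits such as "said" or "train".
AI_PATTERN = re.compile(
    r"\b(ai|artificial intelligence|gpt|chatgpt|midjourney|"
    r"stable diffusion|dall[- ]?e)\b",
    re.IGNORECASE,
)

def fetch_rules(subreddit: str) -> list[dict]:
    """Fetch a subreddit's public rules; no authentication is required."""
    url = f"https://www.reddit.com/r/{subreddit}/about/rules.json"
    resp = requests.get(
        url, headers={"User-Agent": "ai-rules-sketch/0.1"}, timeout=10
    )
    resp.raise_for_status()
    return resp.json().get("rules", [])

def has_ai_rule(rules: list[dict]) -> bool:
    """True if any rule's name or description mentions an AI keyword."""
    return any(
        AI_PATTERN.search(f"{r.get('short_name', '')} {r.get('description', '')}")
        for r in rules
    )

if __name__ == "__main__":
    # Illustrative communities only; the study covered 300,000+ subreddits.
    for sub in ["Art", "AskReddit"]:
        print(sub, has_ai_rule(fetch_rules(sub)))
```

Measuring prevalence at the paper's scale would require the authenticated Reddit API and rate limiting; this sketch only shows the per-community rule check.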
Related papers
- Could AI Trace and Explain the Origins of AI-Generated Images and Text? [53.11173194293537]
AI-generated content is increasingly prevalent in the real world.
Adversaries might exploit large multimodal models to create images that violate ethical or legal standards, and paper reviewers may misuse large language models to generate reviews without genuine intellectual effort.
arXiv Detail & Related papers (2025-04-05T20:51:54Z) - Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents [29.070947259551478]
We analyzed 202 real-world AI privacy and ethical incidents.
This produced a taxonomy that classifies incident types across AI lifecycle stages.
It accounts for contextual factors such as causes, responsible entities, disclosure sources, and impacts.
arXiv Detail & Related papers (2025-03-28T21:57:38Z) - Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
Misclassification can lead to false plagiarism accusations and misleading claims about AI prevalence in online content.
We systematically evaluate eleven state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset.
Our findings reveal that detectors frequently misclassify even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z) - Training AI to be Loyal [48.87020976513128]
An AI is loyal to a community if the community has ownership, alignment, and control.
Community-owned models can be used only with the approval of the community and share the economic rewards communally.
Since we would like permissionless access to the loyal AI's community, the AI needs to be open source.
arXiv Detail & Related papers (2025-01-27T19:11:19Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), drawing on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Examining the Prevalence and Dynamics of AI-Generated Media in Art Subreddits [13.343255875002459]
We take steps towards examining the potential impact of AI-generated content on art-related communities on Reddit.
We look at image-based posts in these communities that are transparently created with AI, and at comments in these communities that suspect authors of using generative AI.
We show that AI content is more readily used by newcomers and may help increase participation if it aligns with community rules.
arXiv Detail & Related papers (2024-10-09T17:41:13Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit [5.202496456440801]
We focus on online community moderators' experiences with AI-generated content (AIGC).
Our study finds that communities are choosing to enact rules restricting the use of AIGC for both ideological and practical reasons.
Our results highlight the importance of supporting community autonomy and self-determination in the face of this sudden technological change.
arXiv Detail & Related papers (2023-11-21T16:15:21Z) - Governance of the AI, by the AI, and for the AI [9.653656920225858]
The authors believe the age of artificial intelligence has, indeed, finally arrived.
The current state of AI is ushering in profound changes across many different sectors of society.
arXiv Detail & Related papers (2023-05-04T03:29:07Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z) - Ever heard of ethical AI? Investigating the salience of ethical AI issues among the German population [0.0]
General interest in AI and a higher educational level are predictive of some engagement with AI.
Ethical issues are voiced only by a small subset of citizens, with fairness, accountability, and transparency the least mentioned.
Once ethical AI is top of mind, there is some potential for activism.
arXiv Detail & Related papers (2022-07-28T13:46:13Z) - AI Challenges for Society and Ethics [0.0]
Artificial intelligence is already being applied in and impacting many important sectors of society, including healthcare, finance, and policing.
The role of AI governance is ultimately to take practical steps to mitigate this risk of harm while enabling the benefits of innovation in AI.
It also requires thinking through the normative question of what beneficial use of AI in society looks like, which is equally challenging.
arXiv Detail & Related papers (2022-06-22T13:33:11Z) - AI Ethics Issues in Real World: Evidence from AI Incident Database [0.6091702876917279]
We identify 13 application areas that often see unethical use of AI, with intelligent service robots, language/vision models, and autonomous driving taking the lead.
Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety and unfair algorithms.
arXiv Detail & Related papers (2022-06-15T16:25:57Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Could regulating the creators deliver trustworthy AI? [2.588973722689844]
AI is becoming all-pervasive and is often deployed in everyday technologies, devices, and services without our knowledge.
Fear is compounded by the inability to point to a trustworthy source of AI.
Some consider trustworthy AI to be that which complies with relevant laws.
Others point to the requirement to comply with ethics and standards.
arXiv Detail & Related papers (2020-06-26T01:32:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.