AURA: Amplifying Understanding, Resilience, and Awareness for Responsible AI Content Work
- URL: http://arxiv.org/abs/2411.01426v1
- Date: Sun, 03 Nov 2024 03:27:02 GMT
- Title: AURA: Amplifying Understanding, Resilience, and Awareness for Responsible AI Content Work
- Authors: Alice Qian Zhang, Judith Amores, Mary L. Gray, Mary Czerwinski, Jina Suh
- Abstract summary: This study investigates the nature and challenges of content work that supports responsible AI (RAI) efforts.
We develop a conceptualization of RAI content work and a framework of recommendations for providing holistic support for content workers.
We discuss how our framework may guide future innovation to support the well-being and professional development of the RAI content workforce.
- Score: 9.15754890995565
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Behind the scenes of maintaining the safety of technology products from harmful and illegal digital content lies unrecognized human labor. The recent rise in the use of generative AI technologies and the accelerating demands to meet responsible AI (RAI) aims necessitate an increased focus on the labor behind such efforts in the age of AI. This study investigates the nature and challenges of content work that supports RAI efforts, or "RAI content work," spanning content moderation, data labeling, and red teaming, through the lived experiences of content workers. We conduct formative survey and semi-structured interview studies to develop a conceptualization of RAI content work and a subsequent framework of recommendations for providing holistic support for content workers. We validate our recommendations through a series of workshops with content workers and derive considerations for and examples of implementing such recommendations. We discuss how our framework may guide future innovation to support the well-being and professional development of the RAI content workforce.
Related papers
- Worker Discretion Advised: Co-designing Risk Disclosure in Crowdsourced Responsible AI (RAI) Content Work [12.492380198885295]
Responsible AI (RAI) content work often exposes crowd workers to potentially harmful content.
We conduct co-design sessions with 29 task designers, workers, and platform representatives.
We identify design tensions and map the sociotechnical tradeoffs that shape disclosure practices.
arXiv Detail & Related papers (2025-09-15T17:05:34Z)
- Guardians and Offenders: A Survey on Harmful Content Generation and Safety Mitigation of LLM [13.066526969147501]
Large Language Models (LLMs) have revolutionized content creation across digital platforms.
LLMs enable beneficial applications such as content generation, question answering (Q&A), programming, and code reasoning.
They also pose serious risks by inadvertently or intentionally producing toxic, offensive, or biased content.
arXiv Detail & Related papers (2025-08-07T18:42:16Z) - Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work [7.749785705236486]
Crowd workers are often tasked with responsible AI (RAI) content work.
While prior efforts have highlighted the risks to worker well-being associated with RAI content work, far less attention has been paid to how these risks are communicated to workers.
This study investigates how task designers approach risk disclosure in crowdsourced RAI tasks.
arXiv Detail & Related papers (2025-05-30T06:08:50Z)
- A Practical Synthesis of Detecting AI-Generated Textual, Visual, and Audio Content [2.3543188414616534]
Advances in AI-generated content have led to wide adoption of large language models, diffusion-based visual generators, and synthetic audio tools.
These developments raise concerns about misinformation, copyright infringement, security threats, and the erosion of public trust.
This paper explores an extensive range of methods designed to detect and mitigate AI-generated textual, visual, and audio content.
arXiv Detail & Related papers (2025-04-02T23:27:55Z)
- Towards Responsible AI Music: an Investigation of Trustworthy Features for Creative Systems [1.976667849039851]
Generative AI is radically changing the creative arts, by fundamentally transforming the way we create and interact with cultural artefacts.
This technology also raises ethical, societal, and legal concerns.
Key among these are the potential displacement of human creativity, copyright infringement stemming from vast training datasets, and the lack of transparency, explainability, and fairness mechanisms.
arXiv Detail & Related papers (2025-03-24T15:54:47Z)
- Retrieval Augmented Generation and Understanding in Vision: A Survey and New Outlook [85.43403500874889]
Retrieval-augmented generation (RAG) has emerged as a pivotal technique in artificial intelligence (AI).
The survey covers recent advancements in RAG for embodied AI, with a particular focus on applications in planning, task execution, multimodal perception, interaction, and specialized domains.
arXiv Detail & Related papers (2025-03-23T10:33:28Z)
- Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey [92.36487127683053]
Retrieval-Augmented Generation (RAG) is an advanced technique designed to address the challenges of Artificial Intelligence-Generated Content (AIGC).
RAG provides reliable and up-to-date external knowledge, reduces hallucinations, and ensures relevant context across a wide range of tasks.
Despite RAG's success and potential, recent studies have shown that the RAG paradigm also introduces new risks, including privacy concerns, adversarial attacks, and accountability issues.
arXiv Detail & Related papers (2025-02-08T06:50:47Z)
- Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future [7.976680307696195]
Responsible Artificial Intelligence (RAI) has emerged as a crucial framework for addressing ethical concerns in the development and deployment of AI systems.
This article examines the challenges and opportunities in implementing ethical, transparent, and accountable AI systems in the post-ChatGPT era.
arXiv Detail & Related papers (2025-01-15T20:59:42Z)
- Using Case Studies to Teach Responsible AI to Industry Practitioners [8.152080071643685]
We propose a novel stakeholder-first educational approach that uses interactive case studies to achieve organizational- and practitioner-level engagement and advance learning of Responsible AI (RAI).
Our assessment results indicate that participants found the workshops engaging and reported a positive shift in understanding and motivation to apply RAI to their work.
arXiv Detail & Related papers (2024-07-19T22:06:06Z)
- Teaching Design Science as a Method for Effective Research Development [0.24578723416255752]
The Design Science Research (DSR) methodology is becoming a popular resource for Information Systems (IS) and software engineering studies.
This chapter includes examples of DSR, a teaching methodology, learning objectives, and recommendations.
We have created a survey artifact intended to gather data on the experiences of design science users.
arXiv Detail & Related papers (2024-07-13T10:43:06Z)
- Networking Systems for Video Anomaly Detection: A Tutorial and Survey [55.28514053969056]
Video Anomaly Detection (VAD) is a fundamental research task within the Artificial Intelligence (AI) community.
In this article, we delineate the foundational assumptions, learning frameworks, and applicable scenarios of various deep learning-driven VAD routes.
We showcase our latest NSVAD research in industrial IoT and smart cities, along with an end-cloud collaborative architecture for deployable NSVAD.
arXiv Detail & Related papers (2024-05-16T02:00:44Z)
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation [0.26107298043931204]
Generative AI has ushered in the ability to generate content that closely mimics human contributions.
These models can be used to manipulate public opinion and distort perceptions, resulting in a decline in trust towards digital platforms.
This study contributes to marketing literature and practice in three ways.
arXiv Detail & Related papers (2024-03-17T13:08:28Z)
- Service Level Agreements and Security SLA: A Comprehensive Survey [51.000851088730684]
This survey paper identifies the state of the art covering concepts, approaches, and open problems of SLA management.
It contributes by carrying out a comprehensive review and covering the gap between the analyses proposed in existing surveys and the most recent literature on this topic.
It proposes a novel classification criterion to organize the analysis based on SLA life cycle phases.
arXiv Detail & Related papers (2024-01-31T12:33:41Z)
- Semantic Communications for Artificial Intelligence Generated Content (AIGC) Toward Effective Content Creation [75.73229320559996]
This paper develops a conceptual model for the integration of AIGC and SemCom.
A novel framework that employs AIGC technology is proposed as an encoder and decoder for semantic information.
The framework can adapt to different types of content generated, the required quality, and the semantic information utilized.
arXiv Detail & Related papers (2023-08-09T13:17:21Z)
- A Rapid Review of Responsible AI frameworks: How to guide the development of ethical AI [1.3734044451150018]
We conduct a rapid review of several frameworks providing principles, guidelines, and/or tools to help practitioners in the development and deployment of Responsible AI (RAI) applications.
Our results reveal that there is no "catch-all" framework supporting both technical and non-technical stakeholders in the implementation of real-world projects.
arXiv Detail & Related papers (2023-06-08T07:47:18Z)
- Applying Standards to Advance Upstream & Downstream Ethics in Large Language Models [0.0]
This paper explores how AI-owners can develop safeguards for AI-generated content.
It draws from established codes of conduct and ethical standards in other content-creation industries.
arXiv Detail & Related papers (2023-06-06T08:47:42Z)
- Trustworthy, responsible, ethical AI in manufacturing and supply chains: synthesis and emerging research questions [59.34177693293227]
We explore the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing.
We then use a broadened adaptation of a machine learning lifecycle to discuss, through the use of illustrative examples, how each step may result in a given AI trustworthiness concern.
arXiv Detail & Related papers (2023-05-19T10:43:06Z)
- A Unified Framework for Integrating Semantic Communication and AI-Generated Content in Metaverse [57.317580645602895]
Integrated Semantic Communication and AI-Generated Content (ISGC) has attracted considerable attention recently.
ISGC transfers semantic information from user inputs, generates digital content, and renders graphics for the Metaverse.
We introduce a unified framework that captures ISGC's two primary benefits, including integration gain for optimized resource allocation.
arXiv Detail & Related papers (2023-05-18T02:02:36Z) - Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Since the production process imposes the user's posture as a constraint on the AIGC model, it makes the generated content more aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.