"It is currently hodgepodge": Examining AI/ML Practitioners' Challenges
during Co-production of Responsible AI Values
- URL: http://arxiv.org/abs/2307.10221v1
- Date: Fri, 14 Jul 2023 21:57:46 GMT
- Title: "It is currently hodgepodge": Examining AI/ML Practitioners' Challenges
during Co-production of Responsible AI Values
- Authors: Rama Adithya Varanasi, Nitesh Goyal
- Abstract summary: We interviewed 23 individuals across 10 organizations who were tasked with shipping AI/ML-based products while upholding RAI norms.
Both top-down and bottom-up institutional structures create burdens for different roles, preventing them from upholding RAI values.
We offer recommendations for inclusive and equitable RAI value-practices.
- Score: 4.091593765662773
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recently, the AI/ML research community has indicated an urgent need to
establish Responsible AI (RAI) values and practices as part of the AI/ML
lifecycle. Several organizations and communities are responding to this call by
sharing RAI guidelines. However, there are gaps in awareness, deliberation, and
execution of such practices for multi-disciplinary ML practitioners. This work
contributes to the discussion by unpacking co-production challenges faced by
practitioners as they align their RAI values. We interviewed 23 individuals
across 10 organizations who were tasked with shipping AI/ML-based products
while upholding RAI norms, and found that both top-down and bottom-up
institutional structures create burdens for different roles, preventing them
from upholding RAI values, a challenge that is further exacerbated when
executing conflicting values. We
share multiple value levers used as strategies by the practitioners to resolve
their challenges. We end our paper with recommendations for inclusive and
equitable RAI value-practices, creating supportive organizational structures
and opportunities to further aid practitioners.
Related papers
- Responsible AI in the Global Context: Maturity Model and Survey [0.3613661942047476]
Responsible AI (RAI) has emerged as a major focus across industry, policymaking, and academia.
This study explores the global state of RAI through one of the most extensive surveys to date on the topic.
We define a conceptual RAI maturity model for organizations to map how well they implement organizational and operational RAI measures.
arXiv Detail & Related papers (2024-10-13T20:04:32Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications,
new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition [61.91764883512776]
We introduce an innovative PEFT method, TeamLoRA, consisting of a collaboration and competition module for experts.
By doing so, TeamLoRA connects the experts as a "Team" with internal collaboration and competition, enabling a faster and more accurate PEFT paradigm for multi-task learning.
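The summary gives only the high-level idea. As a point of reference, below is a minimal PyTorch sketch of a generic multi-expert LoRA layer: a shared down-projection A (a loose stand-in for "collaboration") and router-gated per-expert up-projections B (a stand-in for "competition"). The class name, the shared-A/per-expert-B split, and the softmax router are illustrative assumptions, not TeamLoRA's published architecture.
```python
# Hypothetical sketch of a multi-expert LoRA layer; names and structure are
# assumptions for illustration, not the TeamLoRA paper's actual design.
import torch
import torch.nn as nn


class MultiExpertLoRALinear(nn.Module):
    """Frozen linear layer plus E low-rank experts mixed by a router."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, n_experts: int = 4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        # "Collaboration": one down-projection A shared across all experts.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        # "Competition": expert-specific up-projections B, gated per input
        # by a learned softmax router. B starts at zero so the initial
        # update is zero, as in standard LoRA.
        self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
        self.router = nn.Linear(d_in, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x @ self.A.T                                  # (batch, rank)
        gates = torch.softmax(self.router(x), dim=-1)     # (batch, E)
        updates = torch.einsum("br,eor->beo", h, self.B)  # (batch, E, d_out)
        delta = (gates.unsqueeze(-1) * updates).sum(1)    # (batch, d_out)
        return self.base(x) + delta


layer = MultiExpertLoRALinear(768, 768)
y = layer(torch.randn(2, 768))  # -> shape (2, 768)
```
Only A, B, and the router are trainable here, which is what makes the adaptation parameter-efficient; the router is what lets experts specialize across tasks.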
arXiv Detail & Related papers (2024-08-19T09:58:53Z)
- Using Case Studies to Teach Responsible AI to Industry Practitioners [8.152080071643685]
We propose a novel stakeholder-first educational approach that uses interactive case studies to achieve organizational- and practitioner-level engagement and advance learning of Responsible AI (RAI).
Our assessment results indicate that participants found the workshops engaging and reported a positive shift in understanding and motivation to apply RAI to their work.
arXiv Detail & Related papers (2024-07-19T22:06:06Z)
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law [65.87885628115946]
Large language models (LLMs) are revolutionizing the landscapes of finance, healthcare, and law.
We highlight the instrumental role of LLMs in enhancing diagnostic and treatment methodologies in healthcare, innovating financial analytics, and refining legal interpretation and compliance strategies.
We critically examine the ethics of LLM applications in these fields, pointing out existing ethical concerns and the need for transparent, fair, and robust AI systems.
arXiv Detail & Related papers (2024-05-02T22:43:02Z)
- Large Multimodal Agents: A Survey [78.81459893884737]
Large language models (LLMs) have achieved superior performance in powering text-based AI agents.
There is an emerging research trend focused on extending these LLM-powered AI agents into the multimodal domain.
This review aims to provide valuable insights and guidelines for future research in this rapidly evolving field.
arXiv Detail & Related papers (2024-02-23T06:04:23Z)
- Towards Equitable Agile Research and Development of AI and Robotics [0.0]
We propose a framework for adapting widely practiced Research and Development (R&D) project management methodologies to build organizational equity capabilities.
We describe how project teams can organize and operationalize the most promising practices, skill sets, organizational cultures, and methods to detect and address rights-based fairness, equity, accountability, and ethical problems.
arXiv Detail & Related papers (2024-02-13T06:13:17Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI, designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in scientific institutions underscores the increasing emphasis on integrating ethical considerations into AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)