Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices
- URL: http://arxiv.org/abs/2501.16531v1
- Date: Mon, 27 Jan 2025 22:10:27 GMT
- Title: Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices
- Authors: Genevieve Smith, Natalia Luka, Merrick Osborne, Brian Lattimore, Jessica Newman, Brandie Nonnecke, Brent Mittelstadt
- Abstract summary: Since 2022, generative AI (genAI) has rapidly become integrated into workplaces. In this paper, we examine how product managers implement responsible practices in their day-to-day work when using genAI.
- Score: 0.657029444008632
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since 2022, generative AI (genAI) has rapidly become integrated into workplaces. Though organizations have made commitments to use this technology "responsibly", how organizations and their employees prioritize responsibility in their decision-making remains absent from extant management theorizing. In this paper, we examine how product managers - who often serve as gatekeepers in decision-making processes - implement responsible practices in their day-to-day work when using genAI. Using Institutional Theory, we illuminate the factors that constrain or support proactive responsible development and usage of genAI technologies. We employ a mixed methods research design, drawing on 25 interviews with product managers and a global survey of 300 respondents in product management-related roles. The majority of our respondents report (1) widespread uncertainty regarding what "responsibility" means or looks like, (2) diffused responsibility given assumed ethical actions by other teams, (3) lack of clear incentives and guidance within organizations, and (4) the importance of leadership buy-in and principles for navigating tensions between ethical commitments and profit motives. However, our study finds that even in highly uncertain environments, absent guidance from leadership, product managers can "recouple" ethical commitments and practices by finding responsibility "micro-moments". Product managers seek out low-risk, small-scale actions they can take without explicit buy-in from higher-level managers, such as individual or team-wide checks and reviews and safeguarding standards for data. Our research highlights how genAI poses unique challenges to organizations trying to couple ethical principles and daily practices and the role that middle-level management can play in recoupling the two.
Related papers
- Responsible AI: The Good, The Bad, The AI [1.932555230783329]
This paper presents a comprehensive examination of AI's dual nature through the lens of strategic information systems. We develop the Paradox-based Responsible AI Governance (PRAIG) framework that articulates: (1) the strategic benefits of AI adoption, (2) the inherent risks and unintended consequences, and (3) governance mechanisms that enable organizations to navigate these tensions. The paper concludes with a research agenda for advancing responsible AI governance scholarship.
arXiv Detail & Related papers (2026-01-28T22:33:27Z)
- From Values to Frameworks: A Qualitative Study of Ethical Reasoning in Agentic AI Practitioners [0.0]
Agentic artificial intelligence systems are autonomous technologies capable of pursuing complex goals with minimal human oversight. While these systems promise major gains in productivity, they also raise new ethical challenges. This paper investigates the ethical reasoning of AI practitioners through qualitative interviews centered on structured dilemmas in agentic AI deployment.
arXiv Detail & Related papers (2025-12-24T00:58:41Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Strategic Motivators for Ethical AI System Development: An Empirical and Holistic Model [2.5348859611493353]
This study aims to identify and prioritize the motivators that drive the ethical development of AI systems. Twenty key motivators were identified and grouped into eight categories. Fuzzy TOPSIS ranked motivators such as promoting team diversity, establishing AI governance bodies, appointing oversight leaders, and ensuring data privacy as most critical.
arXiv Detail & Related papers (2025-07-27T10:49:05Z)
- Generative AI for Autonomous Driving: Frontiers and Opportunities [145.6465312554513]
This survey delivers a comprehensive synthesis of the emerging role of GenAI across the autonomous driving stack. We begin by distilling the principles and trade-offs of modern generative modeling, encompassing VAEs, GANs, Diffusion Models, and Large Language Models. We categorize practical applications, such as synthetic data generation, end-to-end driving strategies, high-fidelity digital twin systems, smart transportation networks, and cross-domain transfer to embodied AI.
arXiv Detail & Related papers (2025-05-13T17:59:20Z)
- Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios.
Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
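The evolutionary game-theoretic modelling described above can be illustrated with a minimal replicator-dynamics sketch. The two-strategy "safe vs. risky development" payoff matrix below is a hypothetical example for illustration, not the paper's actual model.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i * (f_i - f_avg).

    x      : current frequency of each strategy (sums to 1)
    payoff : payoff[i][j] = payoff to strategy i against strategy j
    """
    x = np.asarray(x, dtype=float)
    A = np.asarray(payoff, dtype=float)
    fitness = A @ x        # expected payoff of each strategy
    avg = x @ fitness      # population-average payoff
    return x + dt * x * (fitness - avg)

# Hypothetical developer game: "safe" development pays off when others
# are also safe (mutual trust), "risky" development yields a flat payoff.
A = [[3, 0],   # safe  vs (safe, risky)
     [1, 1]]   # risky vs (safe, risky)
x = np.array([0.5, 0.5])
for _ in range(2000):
    x = replicator_step(x, A)
# Under these payoffs the population converges toward all-safe.
```

With this payoff structure the safe strategy earns above-average payoff from the start, so its frequency grows monotonically; changing the payoffs (e.g. rewarding risky development against safe opponents) would shift the equilibrium, which is the kind of regulatory lever such models explore.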
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- CRMArena: Understanding the Capacity of LLM Agents to Perform Professional CRM Tasks in Realistic Environments [90.29937153770835]
We introduce CRMArena, a benchmark designed to evaluate AI agents on realistic tasks grounded in professional work environments.
We show that state-of-the-art LLM agents succeed in less than 40% of the tasks with ReAct prompting, and less than 55% even with function-calling abilities.
Our findings highlight the need for enhanced agent capabilities in function-calling and rule-following to be deployed in real-world work environments.
arXiv Detail & Related papers (2024-11-04T17:30:51Z)
- Minimum Viable Ethics: From Institutionalizing Industry AI Governance to Product Impact [0.0]
We find that AI ethics professionals are highly agile and opportunistic, as they attempt to create standardized and reusable processes and tools.
In negotiations with product teams, they face challenges rooted in their lack of authority and ownership over product, but can push forward ethics work by leveraging narratives of regulatory response and ethics as product quality assurance.
This strategy leaves us with a minimum viable ethics, a narrowly scoped industry AI ethics that is limited in its capacity to address normative issues separate from compliance or product quality.
arXiv Detail & Related papers (2024-09-11T00:52:22Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Challenges and Best Practices in Corporate AI Governance: Lessons from the Biopharmaceutical Industry [0.0]
We discuss challenges that any organization attempting to operationalize AI governance will have to face.
These include questions concerning how to define the material scope of AI governance.
We hope to provide project managers, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks with general best practices.
arXiv Detail & Related papers (2024-07-07T12:01:42Z)
- Understanding the Building Blocks of Accountability in Software Engineering [3.521765725717803]
We investigate the factors that foster software engineers' individual accountability within their teams.
Our findings recognize two primary forms of accountability shaping software engineers' individual sense of accountability: institutionalized and grassroots.
arXiv Detail & Related papers (2024-02-02T21:53:35Z)
- Red-Teaming for Generative AI: Silver Bullet or Security Theater? [42.35800543892003]
We argue that while red-teaming may be a valuable big-tent idea for characterizing GenAI harm mitigations, industry may effectively apply red-teaming and other strategies behind closed doors to safeguard AI.
To move toward a more robust toolbox of evaluations for generative AI, we synthesize our recommendations into a question bank meant to guide and scaffold future AI red-teaming practices.
arXiv Detail & Related papers (2024-01-29T05:46:14Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward [56.16884466478886]
This paper reviews emerging issues with opaque and uncontrollable AI systems.
It proposes an integrative framework called violet teaming to develop reliable and responsible AI.
It emerged from AI safety research to manage risks proactively by design.
arXiv Detail & Related papers (2023-08-28T02:10:38Z)
- Trustworthy, responsible, ethical AI in manufacturing and supply chains: synthesis and emerging research questions [59.34177693293227]
We explore the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing.
We then use a broadened adaptation of a machine learning lifecycle to discuss, through the use of illustrative examples, how each step may result in a given AI trustworthiness concern.
arXiv Detail & Related papers (2023-05-19T10:43:06Z)
- Three lines of defense against risks from AI [0.0]
It is not always clear who is responsible for AI risk management.
The Three Lines of Defense (3LoD) model is considered best practice in many industries.
I suggest ways in which AI companies could implement the model.
arXiv Detail & Related papers (2022-12-16T09:33:00Z)
- Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers' Notions of Responsibility [1.2691047660244335]
We use Suchman's "located accountability" to show how responsible artificial intelligence labor is currently organized.
We argue that current responsible artificial intelligence interventions, like ethics checklists, could be improved by taking a located accountability approach.
arXiv Detail & Related papers (2022-09-20T15:05:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.