Responsible AI in the Global Context: Maturity Model and Survey
- URL: http://arxiv.org/abs/2410.09985v1
- Date: Sun, 13 Oct 2024 20:04:32 GMT
- Title: Responsible AI in the Global Context: Maturity Model and Survey
- Authors: Anka Reuel, Patrick Connolly, Kiana Jafari Meimandi, Shekhar Tewari, Jakub Wiatrak, Dikshita Venkatesh, Mykel Kochenderfer
- Abstract summary: Responsible AI (RAI) has emerged as a major focus across industry, policymaking, and academia.
This study explores the global state of RAI through one of the most extensive surveys to date on the topic.
We define a conceptual RAI maturity model for organizations to map how well they implement organizational and operational RAI measures.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Responsible AI (RAI) has emerged as a major focus across industry, policymaking, and academia, aiming to mitigate the risks and maximize the benefits of AI, both on an organizational and societal level. This study explores the global state of RAI through one of the most extensive surveys to date on the topic, surveying 1000 organizations across 20 industries and 19 geographical regions. We define a conceptual RAI maturity model for organizations to map how well they implement organizational and operational RAI measures. Based on this model, the survey assesses the adoption of system-level measures to mitigate identified risks related to, for example, discrimination, reliability, or privacy, and also covers key organizational processes pertaining to governance, risk management, and monitoring and control. The study highlights the expanding AI risk landscape, emphasizing the need for comprehensive risk mitigation strategies. The findings also reveal significant strides towards RAI maturity, but we also identify gaps in RAI implementation that could lead to increased (public) risks from AI systems. This research offers a structured approach to assess and improve RAI practices globally and underscores the critical need for bridging the gap between RAI planning and execution to ensure AI advancement aligns with human welfare and societal benefits.
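The abstract describes a maturity model that maps how well organizations implement organizational and operational RAI measures. As a purely hypothetical illustration of how such a mapping could be operationalized (the paper's actual dimensions, rubric, and thresholds are not specified here; the dimension names, level names, and cut-offs below are assumptions), a minimal scoring sketch:

```python
from statistics import mean

# Hypothetical maturity levels; the paper's actual scale may differ.
MATURITY_LEVELS = ["initial", "developing", "defined", "managed", "leading"]

def maturity_level(scores: dict[str, float]) -> str:
    """Map per-dimension scores (0.0-1.0) to a coarse maturity level
    by averaging across organizational and operational dimensions."""
    avg = mean(scores.values())
    # Evenly spaced cut-offs; a real model would calibrate these empirically.
    index = min(int(avg * len(MATURITY_LEVELS)), len(MATURITY_LEVELS) - 1)
    return MATURITY_LEVELS[index]

# Example: dimensions loosely echo the processes named in the abstract
# (governance, risk management, monitoring and control).
org = {"governance": 0.8, "risk_management": 0.6, "monitoring": 0.7}
print(maturity_level(org))  # "managed"
```

A real assessment would weight dimensions and validate cut-offs against survey data rather than using an unweighted mean with uniform bins.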
Related papers
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z) - Trustworthiness in Retrieval-Augmented Generation Systems: A Survey [59.26328612791924]
Retrieval-Augmented Generation (RAG) has quickly grown into a pivotal paradigm in the development of Large Language Models (LLMs).
We propose a unified framework that assesses the trustworthiness of RAG systems across six key dimensions: factuality, robustness, fairness, transparency, accountability, and privacy.
arXiv Detail & Related papers (2024-09-16T09:06:44Z) - Achieving Responsible AI through ESG: Insights and Recommendations from Industry Engagement [15.544366555353262]
This study examines how leading companies align Responsible AI (RAI) with their Environmental, Social, and Governance (ESG) goals.
We identify a strong link between RAI and ESG practices, but a significant gap exists between internal RAI policies and public disclosures.
We provide recommendations to strengthen RAI strategies, focusing on transparency, cross-functional collaboration, and seamless integration into existing ESG frameworks.
arXiv Detail & Related papers (2024-08-30T05:48:03Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment [17.026921603767722]
The study introduces our Responsible AI (RAI) Question Bank, a comprehensive framework and tool designed to support diverse AI initiatives.
By integrating AI ethics principles such as fairness, transparency, and accountability into a structured question format, the RAI Question Bank aids in identifying potential risks.
arXiv Detail & Related papers (2024-08-02T22:40:20Z) - AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
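The AIR 2024 entry describes 314 risk categories organized into a four-tiered taxonomy whose top level the abstract names explicitly. A minimal sketch of how such a taxonomy could be represented and queried, assuming a nested-mapping layout; the top-level categories come from the abstract, while the lower tiers and the lookup helper are illustrative placeholders, not the paper's actual categories:

```python
# Top-level tiers taken from the abstract; deeper tiers are invented examples.
taxonomy = {
    "System & Operational Risks": {
        "Security": {"Access Control": ["credential leakage"]},
    },
    "Content Safety Risks": {},
    "Societal Risks": {},
    "Legal & Rights Risks": {},
}

def path_of(risk: str, tree=taxonomy, trail=()):
    """Depth-first search for a leaf risk, returning its tier path
    from the top-level category down to the risk itself."""
    for name, child in tree.items():
        if isinstance(child, dict):
            found = path_of(risk, child, trail + (name,))
            if found:
                return found
        elif risk in child:  # leaf tier: a list of concrete risks
            return trail + (name, risk)
    return None

print(path_of("credential leakage"))
# ("System & Operational Risks", "Security", "Access Control", "credential leakage")
```

A path-based lookup like this is one way a four-tiered taxonomy supports the cross-sector information sharing the entry describes, since each risk resolves to a stable category chain.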
arXiv Detail & Related papers (2024-06-25T18:13:05Z) - Approaching Emergent Risks: An Exploratory Study into Artificial Intelligence Risk Management within Financial Organisations [0.0]
This study aims to contribute to the understanding of AI risk management in organisations through an exploratory empirical investigation into these practices.
In-depth insights are gained through interviews with nine practitioners from different organisations within the UK financial sector.
The findings of this study unearth levels of risk management framework readiness and prevailing approaches to risk management at both a processual and organisational level.
arXiv Detail & Related papers (2024-04-08T20:28:22Z) - Application of the NIST AI Risk Management Framework to Surveillance Technology [1.5442389863546546]
This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF).
Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector.
arXiv Detail & Related papers (2024-03-22T23:07:11Z) - Investigating Algorithm Review Boards for Organizational Responsible Artificial Intelligence Governance [0.16385815610837165]
We interviewed 17 technical contributors across organization types about their experiences with internal RAI governance.
We summarized the first detailed findings on algorithm review boards (ARBs) and similar review committees in practice.
Our results suggest that integration with existing internal regulatory approaches and leadership buy-in are among the most important attributes for success.
arXiv Detail & Related papers (2024-01-23T20:53:53Z) - On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to knowledge graph reasoning (KGR) according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.