Achieving Responsible AI through ESG: Insights and Recommendations from Industry Engagement
- URL: http://arxiv.org/abs/2409.10520v1
- Date: Fri, 30 Aug 2024 05:48:03 GMT
- Title: Achieving Responsible AI through ESG: Insights and Recommendations from Industry Engagement
- Authors: Harsha Perera, Sung Une Lee, Yue Liu, Boming Xia, Qinghua Lu, Liming Zhu, Jessica Cairns, Moana Nottage
- Abstract summary: This study examines how leading companies align Responsible AI (RAI) with their Environmental, Social, and Governance (ESG) goals.
We identify a strong link between RAI and ESG practices, but a significant gap exists between internal RAI policies and public disclosures.
We provide recommendations to strengthen RAI strategies, focusing on transparency, cross-functional collaboration, and seamless integration into existing ESG frameworks.
- Score: 15.544366555353262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Artificial Intelligence (AI) becomes integral to business operations, integrating Responsible AI (RAI) within Environmental, Social, and Governance (ESG) frameworks is essential for ethical and sustainable AI deployment. This study examines how leading companies align RAI with their ESG goals. Through interviews with 28 industry leaders, we identified a strong link between RAI and ESG practices. However, a significant gap exists between internal RAI policies and public disclosures, highlighting the need for greater board-level expertise, robust governance, and employee engagement. We provide key recommendations to strengthen RAI strategies, focusing on transparency, cross-functional collaboration, and seamless integration into existing ESG frameworks.
Related papers
- Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey [92.36487127683053]
Retrieval-Augmented Generation (RAG) is an advanced technique designed to address the challenges of Artificial Intelligence-Generated Content (AIGC).
RAG provides reliable and up-to-date external knowledge, reduces hallucinations, and ensures relevant context across a wide range of tasks.
Despite RAG's success and potential, recent studies have shown that the RAG paradigm also introduces new risks, including privacy concerns, adversarial attacks, and accountability issues.
arXiv Detail & Related papers (2025-02-08T06:50:47Z)
- Responsible AI in the Global Context: Maturity Model and Survey [0.3613661942047476]
Responsible AI (RAI) has emerged as a major focus across industry, policymaking, and academia.
This study explores the global state of RAI through one of the most extensive surveys to date on the topic.
We define a conceptual RAI maturity model for organizations to map how well they implement organizational and operational RAI measures.
arXiv Detail & Related papers (2024-10-13T20:04:32Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework [15.544366555353262]
The ESG-AI framework was developed based on insights from engagements with 28 companies.
It provides an overview of the environmental and social impacts of AI applications, helping users such as investors assess the materiality of AI use.
It enables investors to evaluate a company's commitment to responsible AI through structured engagements and thorough assessment of specific risk areas.
arXiv Detail & Related papers (2024-08-02T00:58:01Z)
- Using Case Studies to Teach Responsible AI to Industry Practitioners [8.152080071643685]
We present a stakeholder-first educational approach using interactive case studies to foster organizational and practitioner-level engagement.
We detail our partnership with Meta, a global technology company, to co-develop and deliver RAI workshops to a diverse company audience.
arXiv Detail & Related papers (2024-07-19T22:06:06Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- AI in ESG for Financial Institutions: An Industrial Survey [4.893954917947095]
The paper surveys the industrial landscape to delineate the necessity and impact of AI in bolstering ESG frameworks.
Our survey categorizes AI applications across the three main pillars of ESG, illustrating how AI enhances analytical capabilities, risk assessment, customer engagement, and reporting accuracy, among other areas.
The paper also addresses the imperative of responsible and sustainable AI, emphasizing the ethical dimensions of AI deployment in ESG-related banking processes.
arXiv Detail & Related papers (2024-02-03T02:14:47Z)
- Investigating Algorithm Review Boards for Organizational Responsible Artificial Intelligence Governance [0.16385815610837165]
We interviewed 17 technical contributors across organization types about their experiences with internal RAI governance.
We summarized the first detailed findings on algorithm review boards (ARBs) and similar review committees in practice.
Our results suggest that integration with existing internal regulatory approaches and leadership buy-in are among the most important attributes for success.
arXiv Detail & Related papers (2024-01-23T20:53:53Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Levels of AGI for Operationalizing Progress on the Path to AGI [64.59151650272477]
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors.
This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI.
arXiv Detail & Related papers (2023-11-04T17:44:58Z)