Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework
- URL: http://arxiv.org/abs/2408.00965v2
- Date: Tue, 6 Aug 2024 00:12:50 GMT
- Title: Integrating ESG and AI: A Comprehensive Responsible AI Assessment Framework
- Authors: Sung Une Lee, Harsha Perera, Yue Liu, Boming Xia, Qinghua Lu, Liming Zhu, Jessica Cairns, Moana Nottage
- Abstract summary: ESG-AI framework was developed based on insights from engagements with 28 companies.
It provides an overview of the environmental and social impacts of AI applications, helping users such as investors assess the materiality of AI use.
It enables investors to evaluate a company's commitment to responsible AI through structured engagements and thorough assessment of specific risk areas.
- Score: 15.544366555353262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is widely developed and adopted across industry sectors. Integrating environmental, social, and governance (ESG) considerations with AI investments is crucial for ensuring ethical and sustainable technological advancement. Particularly from an investor perspective, this integration not only mitigates risks but also enhances long-term value creation by aligning AI initiatives with broader societal goals. Yet, this area has been less explored in both academia and industry. To bridge the gap, we introduce a novel ESG-AI framework, developed in collaboration with industry practitioners and based on insights from engagements with 28 companies, which comprises three key components and provides a structured approach to this integration. The ESG-AI framework gives an overview of the environmental and social impacts of AI applications, helping users such as investors assess the materiality of AI use. Moreover, it enables investors to evaluate a company's commitment to responsible AI through structured engagements and thorough assessment of specific risk areas. We publicly released the framework and toolkit in April 2024, and they have received significant attention and positive feedback from the investment community. This paper details each component of the framework, demonstrating its applicability in real-world contexts and its potential to guide ethical AI investments.
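The framework itself is an assessment process rather than software, but the kind of structured, risk-area-by-risk-area record it asks investors to build can be sketched in code. The sketch below is purely illustrative: the risk-area names, the 0-3 rating scale, the materiality flag, and the follow-up threshold are assumptions made for this example and are not taken from the publicly released ESG-AI toolkit.

```python
# Hypothetical sketch only: a minimal way an investor-side analyst might record a
# structured, risk-area-by-risk-area assessment of the kind the ESG-AI framework
# describes. All names, risk areas, and thresholds are illustrative assumptions,
# not the authors' released toolkit.
from dataclasses import dataclass, field


@dataclass
class RiskAreaAssessment:
    """One responsible-AI risk area rated during a company engagement."""
    name: str            # e.g. "transparency", "human oversight" (assumed labels)
    rating: int          # 0 = no evidence .. 3 = strong, publicly disclosed practice
    evidence: str = ""   # free-text notes from the engagement


@dataclass
class CompanyEngagement:
    """A single company's assessment across the chosen risk areas."""
    company: str
    ai_use_is_material: bool   # outcome of an upstream materiality screen
    risk_areas: list[RiskAreaAssessment] = field(default_factory=list)

    def overall_score(self) -> float:
        """Average rating across risk areas, normalised to the range 0..1."""
        if not self.risk_areas:
            return 0.0
        return sum(r.rating for r in self.risk_areas) / (3 * len(self.risk_areas))

    def needs_follow_up(self, threshold: float = 0.5) -> bool:
        """Flag engagements where AI use is material but evidence of commitment is weak."""
        return self.ai_use_is_material and self.overall_score() < threshold


# Example usage with made-up data.
engagement = CompanyEngagement(
    company="ExampleCo",
    ai_use_is_material=True,
    risk_areas=[
        RiskAreaAssessment("transparency", 1, "commitment stated, limited public evidence"),
        RiskAreaAssessment("human oversight", 1, "policy exists, no disclosed process"),
    ],
)
print(engagement.overall_score(), engagement.needs_follow_up())  # 0.333... True
```

Normalising the aggregate to a 0..1 score keeps the follow-up threshold comparable across companies assessed on different numbers of risk areas; any real use of the framework would of course rely on its published components rather than this toy structure.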
Related papers
- Achieving Responsible AI through ESG: Insights and Recommendations from Industry Engagement [15.544366555353262]
This study examines how leading companies align Responsible AI (RAI) with their Environmental, Social, and Governance (ESG) goals.
We identify a strong link between RAI and ESG practices, but a significant gap exists between internal RAI policies and public disclosures.
We provide recommendations to strengthen RAI strategies, focusing on transparency, cross-functional collaboration, and seamless integration into existing ESG frameworks.
arXiv Detail & Related papers (2024-08-30T05:48:03Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize the beliefs these stakeholders expressed into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate about and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI, designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- AI in ESG for Financial Institutions: An Industrial Survey [4.893954917947095]
The paper surveys the industrial landscape to delineate the necessity and impact of AI in bolstering ESG frameworks.
Our survey categorizes AI applications across three main pillars of ESG, illustrating how AI enhances analytical capabilities, risk assessment, customer engagement, reporting accuracy and more.
The paper also addresses the imperative of responsible and sustainable AI, emphasizing the ethical dimensions of AI deployment in ESG-related banking processes.
arXiv Detail & Related papers (2024-02-03T02:14:47Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in scientific research institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results reveal knowledge gaps concerning ethical, responsible, and inclusive AI, as well as limited awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges [55.82853124625841]
Artificial General Intelligence (AGI) possesses the capacity to comprehend, learn, and execute tasks with human cognitive abilities.
This research embarks on an exploration of the opportunities and challenges towards achieving AGI in the context of the Internet of Things.
The application spectrum for AGI-infused IoT is broad, encompassing domains ranging from smart grids, residential environments, manufacturing, and transportation to environmental monitoring, agriculture, healthcare, and education.
arXiv Detail & Related papers (2023-09-14T05:43:36Z)
- The Equitable AI Research Roundtable (EARR): Towards Community-Based Decision Making in Responsible AI Development [4.1986677342209004]
The paper reports on our initial evaluation of The Equitable AI Research Roundtable.
EARR was created through a collaboration among a large tech firm, nonprofits, NGO research institutions, and universities.
We outline three principles in practice of how EARR has operated thus far that are especially relevant to the concerns of the FAccT community.
arXiv Detail & Related papers (2023-03-14T18:57:20Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms; a minimal illustrative sketch of one software mechanism, the audit trail, follows this list.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
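As a concrete illustration of the audit-trail mechanism referenced above (one of the software mechanisms discussed in the verifiable-claims report), the sketch below shows a minimal append-only log in which each record is chained to its predecessor by a SHA-256 hash, so after-the-fact edits become detectable. The class name, record fields, and API are assumptions made for this example, not an implementation described in the report.

```python
# Hypothetical sketch only: a minimal, illustrative append-only audit log with
# hash chaining for tamper evidence. Field names and the API are assumptions.
import hashlib
import json
import time


class AuditTrail:
    """Append-only event log; each record is hash-chained to the previous one."""

    def __init__(self) -> None:
        self.records: list[dict] = []

    def append(self, event: dict) -> dict:
        """Record an event, binding it to the hash of the previous record."""
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        record = {**body, "hash": digest}
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash and check that the chain is intact."""
        prev_hash = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True


# Example usage with made-up events.
trail = AuditTrail()
trail.append({"action": "model_training_started", "dataset": "v1.2"})
trail.append({"action": "evaluation_logged", "metric": "accuracy", "value": 0.91})
print(trail.verify())  # True; altering any stored record afterwards breaks the chain
```

Because every record embeds the hash of its predecessor, modifying or deleting an earlier entry invalidates all later hashes, which is the property that makes such logs useful as evidence for claims about how an AI system was developed or evaluated.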
This list is automatically generated from the titles and abstracts of the papers on this site.