AI Sustainability in Practice Part Two: Sustainability Throughout the AI Workflow
- URL: http://arxiv.org/abs/2403.15404v1
- Date: Mon, 19 Feb 2024 22:58:05 GMT
- Title: AI Sustainability in Practice Part Two: Sustainability Throughout the AI Workflow
- Authors: David Leslie, Cami Rincon, Morgan Briggs, Antonella Perini, Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell, Claudia Fischer, Janis Wong, Ismael Kherroubi Garcia
- Abstract summary: This workbook is part two of two workbooks on AI Sustainability.
It provides a template of the SIA and activities that allow a deeper dive into crucial parts of it.
It discusses methods for weighing values and considering trade-offs during the SIA.
- Score: 0.46671368497079174
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The sustainability of AI systems depends on the capacity of project teams to proceed with a continuous sensitivity to their potential real-world impacts and transformative effects. Stakeholder Impact Assessments (SIAs) are governance mechanisms that enable this kind of responsiveness. They are tools that create a procedure for, and a means of documenting, the collaborative evaluation and reflective anticipation of the possible harms and benefits of AI innovation projects. SIAs are not one-off governance actions. They require project teams to pay continuous attention to the dynamic and changing character of AI production and use and to the shifting conditions of the real-world environments in which AI technologies are embedded. This workbook is part two of two workbooks on AI Sustainability. It provides a template of the SIA and activities that allow a deeper dive into crucial parts of it. It discusses methods for weighing values and considering trade-offs during the SIA. And, it highlights the need to treat the SIA as an end-to-end process of responsive evaluation and re-assessment.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - AI Sustainability in Practice Part One: Foundations for Sustainable AI Projects [0.46671368497079174]
Sustainable AI projects are responsive to the transformative effects as well as the short-, medium-, and long-term impacts that AI technologies may have on individuals and society.
This workbook is the first part of a pair that provides the concepts and tools needed to put AI Sustainability into practice.
arXiv Detail & Related papers (2024-02-19T22:52:14Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Artificial Intelligence in Sustainable Vertical Farming [0.0]
The paper provides a comprehensive exploration of the role of AI in sustainable vertical farming.
The review synthesizes the current state of AI applications, encompassing machine learning, computer vision, the Internet of Things (IoT), and robotics.
The implications extend beyond efficiency gains, considering economic viability, reduced environmental impact, and increased food security.
arXiv Detail & Related papers (2023-11-17T22:15:41Z) - Applications and Societal Implications of Artificial Intelligence in Manufacturing: A Systematic Review [0.3867363075280544]
The study finds that there is a predominantly optimistic outlook in prior literature regarding AI's impact on firms.
The paper draws analogies to historical cases and other examples to provide a contextual perspective on potential societal effects of industrial AI.
arXiv Detail & Related papers (2023-07-25T07:17:37Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges across the AI lifecycle and motivate AI maintenance by drawing analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Let it RAIN for Social Good [7.315761817405695]
The Responsible AI Norms (RAIN) framework is presented to bridge the abstraction gap between high-level values and responsible action.
With effective and operationalized AI Ethics, AI technologies can be directed towards global sustainable development.
arXiv Detail & Related papers (2022-07-26T13:37:13Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for shifting Organizational Practices [3.119859292303396]
This paper examines how organizational culture and structure affect the effectiveness of responsible AI initiatives in practice and offers a framework for analyzing this impact.
We present the results of semi-structured qualitative interviews with practitioners working in industry, investigating common challenges, ethical tensions, and effective enablers for responsible AI initiatives.
arXiv Detail & Related papers (2020-06-22T15:57:30Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.