Towards Implementing Responsible AI
- URL: http://arxiv.org/abs/2205.04358v5
- Date: Wed, 26 Apr 2023 02:42:44 GMT
- Title: Towards Implementing Responsible AI
- Authors: Conrad Sanderson, Qinghua Lu, David Douglas, Xiwei Xu, Liming Zhu, Jon
Whittle
- Abstract summary: Based on semi-structured interviews with AI practitioners, the salient findings cover four aspects of AI system design and development, adapting processes used in software engineering: high-level view, requirements engineering, design and implementation, and deployment and operation.
- Score: 22.514717870367623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the deployment of artificial intelligence (AI) is changing many fields and
industries, there are concerns about AI systems making decisions and
recommendations without adequately considering various ethical aspects, such as
accountability, reliability, transparency, explainability, contestability,
privacy, and fairness. While many sets of AI ethics principles have been
recently proposed that acknowledge these concerns, such principles are
high-level and do not provide tangible advice on how to develop ethical and
responsible AI systems. To gain insight into the possible implementation of the
principles, we conducted an empirical investigation involving semi-structured
interviews with a cohort of AI practitioners. The salient findings cover four
aspects of AI system design and development, adapting processes used in
software engineering: (i) high-level view, (ii) requirements engineering, (iii)
design and implementation, and (iv) deployment and operation.
Related papers
- An Empirical Study on Decision-Making Aspects in Responsible Software Engineering for AI [5.564793925574796]
This study investigates the ethical challenges and complexities inherent in responsible software engineering (RSE) for AI.
Personal values, emerging roles, and awareness of AI's societal impact influence responsible decision-making in RSE for AI.
arXiv Detail & Related papers (2025-01-26T22:38:04Z)
- Responsible AI in the Software Industry: A Practitioner-Centered Perspective [0.0]
This study explores the practices and challenges faced by software practitioners in aligning with Responsible AI principles.
Our findings reveal that while practitioners frequently address fairness, inclusiveness, and reliability, principles such as transparency and accountability receive comparatively less attention in their practices.
arXiv Detail & Related papers (2024-12-10T15:57:13Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- AI Ethics Principles in Practice: Perspectives of Designers and Developers [19.16435145144916]
We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO).
Interviews were used to examine how the practices of the participants relate to and align with a set of high-level AI ethics principles proposed by the Australian Government.
arXiv Detail & Related papers (2021-12-14T15:28:45Z)
- Software Engineering for Responsible AI: An Empirical Study and Operationalised Patterns [20.747681252352464]
We propose a template that enables AI ethics principles to be operationalised in the form of concrete patterns.
These patterns provide concrete, operationalised guidance that facilitate the development of responsible AI systems.
arXiv Detail & Related papers (2021-11-18T02:18:27Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.