POLARIS: A framework to guide the development of Trustworthy AI systems
- URL: http://arxiv.org/abs/2402.05340v1
- Date: Thu, 8 Feb 2024 01:05:16 GMT
- Title: POLARIS: A framework to guide the development of Trustworthy AI systems
- Authors: Maria Teresa Baldassarre, Domenico Gigante, Marcos Kalinowski, Azzurra Ragone
- Abstract summary: There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the ever-expanding landscape of Artificial Intelligence (AI), where
innovation thrives and new products and services are continuously being
delivered, ensuring that AI systems are designed and developed responsibly
throughout their entire lifecycle is crucial. To this end, several AI ethics
principles and guidelines have been issued to which AI systems should conform.
Nevertheless, relying solely on high-level AI ethics principles is far from
sufficient to ensure the responsible engineering of AI systems. In this field,
AI professionals often navigate by sight. Indeed, while recommendations
promoting Trustworthy AI (TAI) exist, these are often high-level statements
that are difficult to translate into concrete implementation strategies. There
is a significant gap between high-level AI ethics principles and low-level
concrete practices for AI professionals. To address this challenge, our work
presents an experience report where we develop a novel holistic framework for
Trustworthy AI - designed to bridge the gap between theory and practice - and
report insights from its application in an industrial case study. The framework
is built on the result of a systematic review of the state of the practice, a
survey, and think-aloud interviews with 34 AI practitioners. The framework,
unlike most of those already in the literature, is designed to provide
actionable guidelines and tools to support different types of stakeholders
throughout the entire Software Development Life Cycle (SDLC). Our goal is to
empower AI professionals to confidently navigate the ethical dimensions of TAI
through practical insights, ensuring that the vast potential of AI is exploited
responsibly for the benefit of society as a whole.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing biases in AI development and in corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems [4.268504966623082]
Complying with the EU AI Act (AIA) guidelines while developing and implementing AI systems will soon be mandatory within the EU.
Practitioners lack actionable instructions to operationalise ethics during AI system development.
A literature review of different ethical guidelines revealed inconsistencies in the principles addressed and the terminology used to describe them.
arXiv Detail & Related papers (2023-05-29T11:57:07Z)
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering [20.644494592443245]
We present a Responsible AI Pattern Catalogue based on the results of a Multivocal Literature Review (MLR).
Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle.
arXiv Detail & Related papers (2022-09-12T00:09:08Z)
- Software Engineering for Responsible AI: An Empirical Study and Operationalised Patterns [20.747681252352464]
We propose a template that enables AI ethics principles to be operationalised in the form of concrete patterns.
These patterns provide concrete, operationalised guidance that facilitate the development of responsible AI systems.
arXiv Detail & Related papers (2021-11-18T02:18:27Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.