A Rapid Review of Responsible AI frameworks: How to guide the
development of ethical AI
- URL: http://arxiv.org/abs/2306.05003v1
- Date: Thu, 8 Jun 2023 07:47:18 GMT
- Title: A Rapid Review of Responsible AI frameworks: How to guide the
development of ethical AI
- Authors: Vita Santa Barletta, Danilo Caivano, Domenico Gigante and Azzurra
Ragone
- Abstract summary: We conduct a rapid review of several frameworks providing principles, guidelines, and/or tools to help practitioners in the development and deployment of Responsible AI (RAI) applications.
Our results reveal that there is no "catch-all" framework supporting both technical and non-technical stakeholders in the implementation of real-world projects.
- Score: 1.3734044451150018
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, the rise of Artificial Intelligence (AI), and its
pervasiveness in our lives, has sparked a flourishing debate about the ethical
principles that should guide its implementation and use in society. Driven by
these concerns, we conduct a rapid review of several frameworks providing
principles, guidelines, and/or tools to help practitioners in the development
and deployment of Responsible AI (RAI) applications. We map each framework
to the different Software Development Life Cycle (SDLC) phases, finding
that most of these frameworks cover only the Requirements Elicitation phase,
leaving the other phases uncovered. Very few of these frameworks offer
supporting tools for practitioners, and those that do are mainly provided by
private companies. Our results reveal that there is no "catch-all" framework
supporting both technical and non-technical stakeholders in the implementation
of real-world projects. Our findings highlight the lack of a comprehensive
framework encompassing all RAI principles and all SDLC phases that could be
navigated by users with different skill sets and different goals.
Related papers
- Framework, Standards, Applications and Best practices of Responsible AI : A Comprehensive Survey [20.554868638297688]
RAI combines the ethics of AI usage with common, standardized frameworks.
Currently, ethical standards and RAI implementation are decoupled, leaving each industry to follow its own standards for using AI ethically.
Social pressure and unethical uses of AI tend to force RAI design rather than implementation.
arXiv Detail & Related papers (2025-04-18T03:23:52Z)
- AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety.
AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques.
We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z)
- Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future [7.976680307696195]
Responsible Artificial Intelligence (RAI) has emerged as a crucial framework for addressing ethical concerns in the development and deployment of AI systems.
This article examines the challenges and opportunities in implementing ethical, transparent, and accountable AI systems in the post-ChatGPT era.
arXiv Detail & Related papers (2025-01-15T20:59:42Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness [110.6921470281479]
We introduce INDICT: a new framework that empowers large language models with Internal Dialogues of Critiques for both safety and helpfulness guidance.
The internal dialogue is a dual cooperative system between a safety-driven critic and a helpfulness-driven critic.
We observe that our approach provides advanced critiques of both safety and helpfulness, significantly improving the quality of the generated code.
arXiv Detail & Related papers (2024-06-23T15:55:07Z)
- Trustworthy AI in practice: an analysis of practitioners' needs and challenges [2.5788518098820337]
A plethora of frameworks and guidelines have appeared to support practitioners in implementing Trustworthy AI applications.
We study the vision AI practitioners have on TAI principles, how they address them, and what they would like to have.
We highlight recommendations to help AI practitioners develop Trustworthy AI applications.
arXiv Detail & Related papers (2024-05-15T13:02:46Z)
- Crossing the principle-practice gap in AI ethics with ethical problem-solving [0.0]
How to bridge the principle-practice gap separating ethical discourse from the technical side of AI development remains an open problem.
Ethical problem-solving (EPS) is a methodology promoting responsible, human-centric, and value-oriented AI development.
We utilize EPS as a blueprint to propose the implementation of Ethics as a Service Platform.
arXiv Detail & Related papers (2024-04-16T14:35:13Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Responsible AI Implementation: A Human-centered Framework for Accelerating the Innovation Process [0.8481798330936974]
This paper proposes a theoretical framework for responsible artificial intelligence (AI) implementation.
The proposed framework emphasizes a synergistic business technology approach for the agile co-creation process.
The framework emphasizes establishing and maintaining trust throughout the human-centered design and agile development of AI.
arXiv Detail & Related papers (2022-09-15T06:24:01Z)
- On the Current and Emerging Challenges of Developing Fair and Ethical AI Solutions in Financial Services [1.911678487931003]
We show how practical considerations reveal the gaps between high-level principles and concrete, deployed AI applications.
arXiv Detail & Related papers (2021-11-02T00:15:04Z)
- Empowered and Embedded: Ethics and Agile Processes [60.63670249088117]
We argue that ethical considerations need to be embedded into the (agile) software development process.
We put emphasis on the possibility to implement ethical deliberations in already existing and well established agile software development processes.
arXiv Detail & Related papers (2021-07-15T11:14:03Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.