MQG4AI: Towards Responsible High-risk AI - Illustrated for Transparency Focusing on Explainability Techniques
- URL: http://arxiv.org/abs/2502.11889v1
- Date: Mon, 17 Feb 2025 15:14:52 GMT
- Title: MQG4AI: Towards Responsible High-risk AI - Illustrated for Transparency Focusing on Explainability Techniques
- Authors: Miriam Elia, Alba Maria Lopez, Katherin Alexandra Corredor, Bernhard Bauer, Esteban Garcia-Cuesta
- Abstract summary: We propose an approach for AI lifecycle planning that bridges the gap between generic guidelines and use case-specific requirements.
Our work aims to contribute to the development of practical tools for implementing Responsible AI (RAI).
- Abstract: As artificial intelligence (AI) systems become increasingly integrated into critical domains, ensuring their responsible design and continuous development is imperative. Effective AI quality management (QM) requires tools and methodologies that address the complexities of the AI lifecycle. In this paper, we propose an approach for AI lifecycle planning that bridges the gap between generic guidelines and use case-specific requirements (MQG4AI). Our work aims to contribute to the development of practical tools for implementing Responsible AI (RAI) by aligning lifecycle planning with technical, ethical, and regulatory demands. Central to our approach is the introduction of a flexible and customizable Methodology based on Quality Gates, whose building blocks incorporate RAI knowledge through information linking along the AI lifecycle in a continuous manner, addressing AI's evolutionary character. For our present contribution, we put a particular emphasis on the Explanation stage during model development, and illustrate how to align a guideline for evaluating the quality of explanations with MQG4AI, contributing to overall Transparency.
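The paper does not publish code, but the Quality Gate idea lends itself to a small sketch: a lifecycle checkpoint that links stage-specific evidence to RAI requirements and only opens when all linked checks pass. The following Python is a hypothetical illustration under that reading; all names, fields, and thresholds are assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class QualityGate:
    """One checkpoint in the AI lifecycle (hypothetical sketch, not the paper's schema)."""
    stage: str                                              # e.g. "data", "model", "explanation"
    criteria: Dict[str, Callable[[Dict[str, Any]], bool]]   # requirement name -> check over evidence
    evidence: Dict[str, Any] = field(default_factory=dict)  # information linked to this stage

    def passes(self) -> bool:
        # The gate opens only when every linked requirement is satisfied by the evidence.
        return all(check(self.evidence) for check in self.criteria.values())

# Hypothetical gate for the Explanation stage: a transparency requirement is
# linked to a measurable evaluation of explanation quality.
explanation_gate = QualityGate(
    stage="explanation",
    criteria={
        "explanation_fidelity": lambda e: e.get("fidelity", 0.0) >= 0.8,
        "user_study_completed": lambda e: bool(e.get("user_study")),
    },
)
explanation_gate.evidence.update(fidelity=0.85, user_study=True)
print(explanation_gate.passes())  # True
```

Chaining such gates per lifecycle stage would give the continuous, re-checkable structure the abstract describes: when a model or its context evolves, the linked evidence is refreshed and the gates are re-evaluated.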
Related papers
- AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development [0.0]
We propose a structure that integrates rules, policies, and AI assistants to ensure responsible AI behavior.
Our approach accommodates ethical pluralism, offering a flexible and adaptable solution for the evolving landscape of AI governance.
arXiv Detail & Related papers (2024-11-05T18:38:30Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- The Foundations of Computational Management: A Systematic Approach to Task Automation for the Integration of Artificial Intelligence into Existing Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three step-by-step procedures for beginning to implement AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z)
- A Vision for Operationalising Diversity and Inclusion in AI [5.4897262701261225]
This study seeks to envision the operationalization of the ethical imperatives of diversity and inclusion (D&I) within AI ecosystems.
A significant challenge in AI development is the effective operationalization of D&I principles.
This paper proposes a vision of a framework for developing a tool that uses persona-based simulation by Generative AI (GenAI).
arXiv Detail & Related papers (2023-12-11T02:44:39Z)
- Towards a Responsible AI Metrics Catalogue: A Collection of Metrics for AI Accountability [28.67753149592534]
This study bridges the accountability gap by introducing our effort towards a comprehensive metrics catalogue.
Our catalogue delineates process metrics that underpin procedural integrity, resource metrics that provide necessary tools and frameworks, and product metrics that reflect the outputs of AI systems.
arXiv Detail & Related papers (2023-11-22T04:43:16Z)
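As a loose illustration of the catalogue's three categories, a sketch of such a structure in code is shown below. The category names come from the abstract; the individual metrics and the gap-finding helper are hypothetical.

```python
# Category names from the abstract; the individual entries are hypothetical examples.
metrics_catalogue = {
    "process":  ["risk assessment documented", "stakeholder sign-off obtained"],
    "resource": ["model card published", "audit tooling available"],
    "product":  ["demographic parity gap", "explanation fidelity score"],
}

def accountability_gaps(evidence: dict) -> list:
    """List catalogue metrics not yet backed by collected evidence."""
    return [m for metrics in metrics_catalogue.values()
            for m in metrics if m not in evidence]

print(accountability_gaps({"model card published": "v1.2"}))
```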
- Responsible Design Patterns for Machine Learning Pipelines [10.184056098238765]
AI ethics involves applying ethical principles to the entire life cycle of AI systems.
This is essential to mitigate potential risks and harms associated with AI, such as biases.
To achieve this goal, responsible design patterns (RDPs) are critical for Machine Learning (ML) pipelines.
arXiv Detail & Related papers (2023-05-31T15:47:12Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
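The abstract names risk scanning as one maintenance activity. As a toy sketch only (not the authors' inspection framework), such a scan could track how accuracy degrades under input perturbation; the function below assumes a scikit-learn-style `model.predict` interface.

```python
import numpy as np

def robustness_scan(model, X, y, noise_levels=(0.0, 0.05, 0.1)):
    """Toy risk scan: measure accuracy degradation under Gaussian input noise.
    Illustrative assumption only; the paper's framework is not reproduced here."""
    report = {}
    for sigma in noise_levels:
        X_noisy = X + np.random.normal(0.0, sigma, X.shape)
        report[sigma] = float((model.predict(X_noisy) == y).mean())
    return report

# A sharp accuracy drop at small noise levels would flag the model for
# "hardening", e.g., retraining on noise- or adversarially-augmented data.
```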
- Designing an AI-Driven Talent Intelligence Solution: Exploring Big Data to extend the TOE Framework [0.0]
This study aims to identify the new requirements for developing AI-oriented artifacts to address talent management issues.
A design science method is adopted for conducting the experimental study with structured machine learning techniques.
arXiv Detail & Related papers (2022-07-25T10:42:50Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of feature changes needed to achieve a desired outcome.
Current approaches rarely take into account the feasibility of the actions needed to realize the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
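The CEILS implementation is not reproduced here; the sketch below only conveys the general idea of searching for a counterfactual in a latent space. The `encode`/`decode`/`classify` callables, the flat latent vector, and the finite-difference search are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def latent_counterfactual(x, encode, decode, classify, target,
                          steps=200, lr=0.1, eps=1e-3):
    """Nudge a flat latent code until the decoded input flips the classifier
    to `target`. Toy illustration of the latent-intervention idea."""
    z = encode(x)
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):                # finite-difference gradient w.r.t. z
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (classify(decode(z + dz))[target]
                       - classify(decode(z - dz))[target]) / (2 * eps)
        z = z + lr * grad                      # ascend the target-class score
        if int(np.argmax(classify(decode(z)))) == target:
            return decode(z)                   # counterfactual candidate found
    return None                                # no counterfactual within budget
```

Searching in latent space rather than raw feature space is what keeps the proposed changes plausible: the decoder only produces points that resemble the training distribution.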
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts of intelligence from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.