Operationalising Responsible AI Using a Pattern-Oriented Approach: A
Case Study on Chatbots in Financial Services
- URL: http://arxiv.org/abs/2301.05517v1
- Date: Tue, 3 Jan 2023 23:11:03 GMT
- Title: Operationalising Responsible AI Using a Pattern-Oriented Approach: A
Case Study on Chatbots in Financial Services
- Authors: Qinghua Lu, Yuxiu Luo, Liming Zhu, Mingjian Tang, Xiwei Xu, Jon
Whittle
- Abstract summary: Responsible AI is the practice of developing and using AI systems in a way that benefits humans, society, and the environment.
Various responsible AI principles have been released recently, but those principles are very abstract and not practical enough.
To bridge the gap, we adopt a pattern-oriented approach and build a responsible AI pattern catalogue.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Responsible AI is the practice of developing and using AI systems in a way
that benefits humans, society, and the environment, while minimising the risk
of negative consequences. Various responsible AI principles have been released
recently. However, those principles are very abstract and not practical enough.
Further, significant efforts have been put on algorithm-level solutions which
are usually confined to a narrow set of principles (such as fairness and
privacy). To bridge the gap, we adopt a pattern-oriented approach and build a
responsible AI pattern catalogue for operationalising responsible AI from a
system perspective. In this article, we first summarise the major challenges in
operationalising responsible AI at scale and introduce how we use the responsible
AI pattern catalogue to address those challenges. Then, we discuss the case
study we have conducted using the chatbot development use case to evaluate the
usefulness of the pattern catalogue.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Responsible AI Pattern Catalogue: A Collection of Best Practices for AI
Governance and Engineering [20.644494592443245]
We present a Responsible AI Pattern Catalogue based on the results of a Multivocal Literature Review (MLR)
Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle.
arXiv Detail & Related papers (2022-09-12T00:09:08Z) - Towards a Roadmap on Software Engineering for Responsible AI [17.46300715928443]
This paper aims to develop a roadmap on software engineering for responsible AI.
The roadmap focuses on (i) establishing multi-level governance for responsible AI systems, (ii) setting up the development processes incorporating process-oriented practices for responsible AI systems, and (iii) building responsible-AI-by-design into AI systems through system-level architectural style, patterns and techniques.
arXiv Detail & Related papers (2022-03-09T07:01:32Z) - Responsible-AI-by-Design: a Pattern Collection for Designing Responsible
AI Systems [12.825892132103236]
Many ethical regulations, principles, and guidelines for responsible AI have been issued recently.
This paper identifies one missing element as the system-level guidance: how to design the architecture of responsible AI systems.
We present a summary of design patterns that can be embedded into the AI systems as product features to contribute to responsible-AI-by-design.
arXiv Detail & Related papers (2022-03-02T07:30:03Z) - Software Engineering for Responsible AI: An Empirical Study and
Operationalised Patterns [20.747681252352464]
We propose a template that enables AI ethics principles to be operationalised in the form of concrete patterns.
These patterns provide concrete, operationalised guidance that facilitate the development of responsible AI systems.
arXiv Detail & Related papers (2021-11-18T02:18:27Z) - ECCOLA -- a Method for Implementing Ethically Aligned AI Systems [11.31664099885664]
We present a method for implementing AI ethics into practice.
The method, ECCOLA, has been iteratively developed using a cyclical action design research approach.
arXiv Detail & Related papers (2020-04-17T17:57:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.