Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects
- URL: http://arxiv.org/abs/2304.08275v4
- Date: Fri, 6 Sep 2024 04:17:47 GMT
- Title: Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects
- Authors: Conrad Sanderson, David Douglas, Qinghua Lu
- Abstract summary: We focus on two-sided interactions, drawing on support spread across a diverse literature.
This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles.
- Score: 21.133468554780404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles, as well as facilitating well-supported judgements by the designers and developers of AI/ML systems.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Crossing the principle-practice gap in AI ethics with ethical problem-solving [0.0]
How to bridge the principle-practice gap separating ethical discourse from the technical side of AI development remains an open problem.
EPS (Ethical Problem-Solving) is a methodology promoting responsible, human-centric, and value-oriented AI development.
We utilize EPS as a blueprint to propose the implementation of an Ethics as a Service platform.
arXiv Detail & Related papers (2024-04-16T14:35:13Z) - Resolving Ethics Trade-offs in Implementing Responsible AI [18.894725256708128]
We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex.
None of the approaches is likely to be appropriate for all organisations, systems, or applications.
We propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, (iii) justification and documentation of trade-off decisions.
arXiv Detail & Related papers (2024-01-16T04:14:23Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Survey on AI Ethics: A Socio-technical Perspective [0.9374652839580183]
Ethical concerns associated with AI are multifaceted, including challenging issues of fairness, privacy and data protection, responsibility and accountability, safety and robustness, transparency and explainability, and environmental impact.
This work unifies the current and future ethical concerns of deploying AI into society.
arXiv Detail & Related papers (2023-11-28T21:00:56Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis [12.031113181911627]
Artificial Intelligence (AI) is transforming our daily life with several applications in healthcare, space exploration, banking and finance.
This rapid progress in AI has brought increasing attention to the potential impacts of AI technologies on society.
Several ethical principles have been released by governments, national and international organisations.
These principles outline high-level precepts to guide the ethical development, deployment, and governance of AI.
arXiv Detail & Related papers (2022-05-12T22:41:08Z) - Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as ways of dealing with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, how they act, and the effect they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z) - Ethics of AI: A Systematic Literature Review of Principles and Challenges [3.7129018407842445]
Transparency, privacy, accountability and fairness are identified as the most common AI ethics principles.
A lack of ethical knowledge and vague principles are reported as significant challenges for considering ethics in AI.
arXiv Detail & Related papers (2021-09-12T15:33:43Z) - Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z) - On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.