Macro Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions
- URL: http://arxiv.org/abs/2208.12616v4
- Date: Tue, 10 Sep 2024 19:44:36 GMT
- Title: Macro Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions
- Authors: Jessica Woodgate, Nirav Ajmeri
- Abstract summary: We develop a taxonomy of 21 normative ethical principles which can be operationalised in AI.
We envision that this taxonomy will facilitate the development of methodologies for incorporating normative ethical principles into the reasoning capacities of responsible AI systems.
- Score: 1.864621482724548
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens incorporating social context. Normative ethical principles inferred from philosophy can be used to methodically reason about ethics and make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey the AI and computer science literature and develop a taxonomy of 21 normative ethical principles which can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles into the reasoning capacities of responsible AI systems.
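As a purely illustrative aid to the idea of operationalising normative ethical principles in a system's reasoning, the sketch below encodes two well-known principles (a utilitarian welfare sum and a Rawlsian-style maximin rule) as scoring functions over candidate actions. The principle names, the `Action` structure, and the aggregation rule are assumptions chosen for illustration; they are not drawn from the paper's taxonomy and do not represent its methodology.

```python
# Illustrative sketch only: one possible way to encode normative ethical
# principles as reusable decision filters. Names and aggregation scheme are
# assumptions for illustration, not the paper's taxonomy.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    name: str
    # Hypothetical per-stakeholder utilities produced by some upstream model.
    utilities: Dict[str, float]


# A "principle" maps a candidate action to a score; higher means more acceptable.
Principle = Callable[[Action], float]


def utilitarian(action: Action) -> float:
    """Example principle: maximise total welfare across stakeholders."""
    return sum(action.utilities.values())


def maximin(action: Action) -> float:
    """Example principle: judge an action by its worst-off stakeholder."""
    return min(action.utilities.values())


def choose(actions: List[Action], principles: List[Principle]) -> Action:
    """Pick the action whose lowest score across principles is highest,
    so that no single principle is badly violated (one possible aggregation rule)."""
    return max(actions, key=lambda a: min(p(a) for p in principles))


if __name__ == "__main__":
    candidates = [
        Action("allocate_equally", {"alice": 0.5, "bob": 0.5}),
        Action("favour_alice", {"alice": 0.9, "bob": 0.1}),
    ]
    best = choose(candidates, [utilitarian, maximin])
    print(best.name)  # -> "allocate_equally" under this toy setup
```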
Related papers
- Quelle éthique pour quelle IA ? (Which ethics for which AI?) [0.0]
This study proposes an analysis of the different types of ethical approaches involved in the ethics of AI.
The author introduces the contemporary need for and meaning of ethics, distinguishes it from other registers of normativity, and underlines its irreducibility to formalization.
The study concludes with a reflection on the reasons why a human ethics of AI based on a pragmatic practice of contextual ethics remains necessary and irreducible to any formalization or automated treatment of the ethical questions that arise for humans.
arXiv Detail & Related papers (2024-05-21T08:13:02Z) - Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics [4.373803477995854]
Deontological ethics, specifically understood through Immanuel Kant, provides a moral framework that emphasizes the importance of duties and principles.
This paper explores the compatibility of a Kantian deontological framework with fairness metrics, a topic within the AI alignment field.
arXiv Detail & Related papers (2023-11-09T09:16:02Z) - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using large sets of annotated data to train models on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles that AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis [12.031113181911627]
Artificial Intelligence (AI) is transforming daily life through applications in healthcare, space exploration, banking, and finance.
This rapid progress has brought increasing attention to the potential impacts of AI technologies on society.
Several sets of ethical principles have been released by governments and by national and international organisations.
These principles outline high-level precepts to guide the ethical development, deployment, and governance of AI.
arXiv Detail & Related papers (2022-05-12T22:41:08Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Ethics of AI: A Systematic Literature Review of Principles and Challenges [3.7129018407842445]
Transparency, privacy, accountability and fairness are identified as the most common AI ethics principles.
A lack of ethical knowledge and vague principles are reported as the most significant challenges to considering ethics in AI.
arXiv Detail & Related papers (2021-09-12T15:33:43Z) - AI virtues -- The missing link in putting AI ethics into practice [0.0]
The paper defines four basic AI virtues, namely justice, honesty, responsibility and care.
It defines two second-order AI virtues, prudence and fortitude, that support the achievement of the basic virtues.
arXiv Detail & Related papers (2020-11-25T14:14:47Z) - Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z) - Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at universities, of ethics committees or commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z) - On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)