On Fairness and Interpretability
- URL: http://arxiv.org/abs/2106.13271v1
- Date: Thu, 24 Jun 2021 18:48:46 GMT
- Title: On Fairness and Interpretability
- Authors: Deepak P, Sanil V, Joemon M. Jose
- Abstract summary: We discuss and elucidate the differences between fairness and interpretability across a variety of dimensions.
We develop two principles-based frameworks towards developing ethical AI for the future.
- Score: 8.732874144276352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ethical AI spans a gamut of considerations. Among these, the most popular
ones, fairness and interpretability, have remained largely distinct in
technical pursuits. We discuss and elucidate the differences between fairness
and interpretability across a variety of dimensions. Further, we develop two
principles-based frameworks towards developing ethical AI for the future that
embrace aspects of both fairness and interpretability. First, interpretability
for fairness proposes instantiating interpretability within the realm of
fairness to develop a new breed of ethical AI. Second, fairness and
interpretability initiates deliberations on bringing the best aspects of both
together. We hope that these two frameworks will contribute to intensifying
scholarly discussions on new frontiers of ethical AI that bring together
fairness and interpretability.
Related papers
- Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment [35.63053777817013]
Friendly AI (FAI) has been proposed to advocate for more equitable and fair development of AI.
This paper provides a thorough review of FAI, focusing on theoretical perspectives both for and against its development.
Key applications are discussed from the perspectives of XAI, privacy, fairness and affective computing.
arXiv Detail & Related papers (2024-12-19T17:56:08Z)
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- (Unfair) Norms in Fairness Research: A Meta-Analysis [6.395584220342517]
We conduct a meta-analysis of algorithmic fairness papers from two leading conferences on AI fairness and ethics.
Our investigation reveals two concerning trends: first, a US-centric perspective dominates throughout fairness research.
Second, fairness studies exhibit a widespread reliance on binary codifications of human identity.
arXiv Detail & Related papers (2024-06-17T17:14:47Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs [0.0]
Two prominent trade-offs in artificial intelligence are between predictive accuracy and fairness, and between predictive accuracy and interpretability.
The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values.
I introduce a sociotechnical approach to examining the value implications of trade-offs.
arXiv Detail & Related papers (2024-03-07T05:03:18Z)
- Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems [12.239090962956043]
The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals.
We present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders.
arXiv Detail & Related papers (2023-08-01T22:38:14Z)
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects [21.133468554780404]
We focus on two-sided interactions between ethics aspects, drawing on support from a diverse literature.
This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles.
arXiv Detail & Related papers (2023-04-17T13:43:13Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.