Monotonicity for AI ethics and society: An empirical study of the
monotonic neural additive model in criminology, education, health care, and
finance
- URL: http://arxiv.org/abs/2301.07060v1
- Date: Tue, 17 Jan 2023 18:21:31 GMT
- Title: Monotonicity for AI ethics and society: An empirical study of the
monotonic neural additive model in criminology, education, health care, and
finance
- Authors: Dangxing Chen and Luyao Zhang
- Abstract summary: Monotonicity axioms are essential in all areas of fairness, including criminology, education, health care, and finance.
Our research contributes to interdisciplinary work at the interface of AI ethics, explainable AI (XAI), and human-computer interaction (HCI).
- Score: 0.8566457170664925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithm fairness in the application of artificial intelligence (AI) is
essential for a better society. As the foundational axiom of social mechanisms,
fairness consists of multiple facets. Although the machine learning (ML)
community has focused on intersectionality as a matter of statistical parity,
especially in discrimination issues, an emerging body of literature addresses
another facet -- monotonicity. Based on domain expertise, monotonicity plays a
vital role in numerous fairness-related areas, where violations could misguide
human decisions and lead to disastrous consequences. In this paper, we first
systematically evaluate the significance of applying monotonic neural additive
models (MNAMs), a fairness-aware ML approach that enforces both individual and
pairwise monotonicity principles, for the fairness of AI ethics
and society. We have found, through a hybrid method of theoretical reasoning,
simulation, and extensive empirical analysis, that considering monotonicity
axioms is essential in all areas of fairness, including criminology, education,
health care, and finance. Our research contributes to interdisciplinary work at
the interface of AI ethics, explainable AI (XAI), and human-computer
interaction (HCI). By evidencing the catastrophic consequences that follow when
monotonicity is violated, we underscore the significance of monotonicity
requirements in AI applications. Furthermore, we demonstrate that MNAMs are an
effective fairness-aware ML approach, imposing monotonicity restrictions that
integrate human intelligence.
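As a concrete illustration of the two axioms the abstract names (our paraphrase, not the paper's formal statement): individual monotonicity requires that increasing a single feature never decreases the output, i.e. x_i <= x_i' implies f(..., x_i, ...) <= f(..., x_i', ...); pairwise monotonicity requires that, for a feature pair where domain expertise ranks x_i as at least as important as x_j, an equal increase applied to x_i raises the output at least as much as the same increase applied to x_j. The sketch below shows one common way to enforce individual monotonicity in an additive architecture: one subnetwork per feature, with non-negative effective weights and a monotone activation. This is a minimal, hypothetical PyTorch sketch for illustration, not the authors' implementation; the class names are invented, and the pairwise constraint is omitted for brevity.

```python
# Minimal sketch of a monotonic neural additive model (MNAM), assuming the
# standard construction: one subnetwork per feature, with individual
# monotonicity enforced by non-negative effective weights and a monotone
# activation. Hypothetical illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MonotoneShapeFunction(nn.Module):
    """Per-feature shape function f_i(x_i) that is non-decreasing in x_i."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        # Raw parameters are unconstrained; softplus maps them to positive
        # effective weights, which preserves monotonicity under ReLU.
        self.w1 = nn.Parameter(torch.randn(1, hidden))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(hidden, 1))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(x @ F.softplus(self.w1) + self.b1)   # (batch, hidden)
        return h @ F.softplus(self.w2) + self.b2            # (batch, 1)


class MNAM(nn.Module):
    """Additive model: y = bias + sum_i f_i(x_i), each f_i monotone."""

    def __init__(self, n_features: int):
        super().__init__()
        self.shape_fns = nn.ModuleList(
            MonotoneShapeFunction() for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features); each subnetwork sees one column.
        contribs = [f(x[:, i:i + 1]) for i, f in enumerate(self.shape_fns)]
        return self.bias + torch.cat(contribs, dim=1).sum(dim=1, keepdim=True)
```

Because softplus keeps every effective weight strictly positive, each shape function is non-decreasing by construction, so the constraint holds throughout training rather than being penalized after the fact. Presumably, only features with a domain-motivated monotone relationship (e.g., prior offenses in recidivism scoring, or debt-to-income ratio in lending) would be routed through constrained subnetworks; the rest could use ordinary unconstrained subnetworks.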
Related papers
- Secondary Bounded Rationality: A Theory of How Algorithms Reproduce Structural Inequality in AI Hiring [0.174048653626208]
This article argues that AI systems inherit and amplify human cognitive and structural biases through technical and sociopolitical constraints. We show how algorithmic processes transform historical inequalities, such as elite credential privileging and network homophily, into ostensibly meritocratic outcomes. We propose mitigation strategies, including counterfactual fairness testing, capital-aware auditing, and regulatory interventions, to disrupt this self-reinforcing inequality.
arXiv Detail & Related papers (2025-07-12T10:03:20Z) - Gaze-Aware AI: Mathematical modeling of epistemic experience of the Marginalized for Human-Computer Interaction & AI Systems [0.0]
This paper attempts to quantify the human conditioning to subconsciously modify authentic self-expression to fit the norms of the dominant culture. The effects of gaze are studied by analyzing a few redacted Reddit posts, which are discussed as discourse rather than endorsement.
arXiv Detail & Related papers (2025-07-06T20:55:18Z) - Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence [59.07578850674114]
Sound deductive reasoning is an indisputably desirable aspect of general intelligence. It is well-documented that even the most advanced frontier systems regularly and consistently falter on easily-solvable reasoning tasks. We argue that their unsound behavior is a consequence of the statistical learning approach powering their development.
arXiv Detail & Related papers (2025-06-30T14:37:50Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto [3.7414804164475983]
Increasing interest in ensuring the safety of next-generation Artificial Intelligence (AI) systems calls for novel approaches to embedding morality into autonomous agents.
We provide a systematization of existing approaches to the problem of introducing morality into machines, modelled as a continuum.
We argue that more hybrid solutions are needed to create adaptable and robust, yet controllable and interpretable agentic systems.
arXiv Detail & Related papers (2023-12-04T11:46:34Z) - A computational framework of human values for ethical AI [3.5027291542274357]
Values provide a means to engineer ethical AI.
No formal, computational definition of values has yet been proposed.
We address this through a formal conceptual framework rooted in the social sciences.
arXiv Detail & Related papers (2023-05-04T11:35:41Z) - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources,
Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination
of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement
Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Rethinking Fairness: An Interdisciplinary Survey of Critiques of
Hegemonic ML Fairness Approaches [0.0]
This survey article assesses and compares critiques of current fairness-enhancing technical interventions into machine learning (ML).
It draws from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies.
The article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
arXiv Detail & Related papers (2022-05-06T14:27:57Z) - Why we need biased AI -- How including cognitive and ethical machine
biases can enhance AI systems [0.0]
We argue for the structure-wise implementation of human cognitive biases in learning algorithms.
In order to achieve ethical machine behavior, filter mechanisms have to be applied.
This paper is the first tentative step to explicitly pursue the idea of a re-evaluation of the ethical significance of machine biases.
arXiv Detail & Related papers (2022-03-18T12:39:35Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained on large-scale multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life
Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.