Debiasing Methods for Fairer Neural Models in Vision and Language
Research: A Survey
- URL: http://arxiv.org/abs/2211.05617v1
- Date: Thu, 10 Nov 2022 14:42:46 GMT
- Title: Debiasing Methods for Fairer Neural Models in Vision and Language
Research: A Survey
- Authors: Otávio Parraga, Martin D. More, Christian M. Oliveira, Nathan S. Gavenski, Lucas S. Kupssinskü, Adilson Medronha, Luis V. Moura, Gabriel S. Simões, Rodrigo C. Barros
- Abstract summary: We provide an in-depth overview of the main debiasing methods for fairness-aware neural networks.
We propose a novel taxonomy to better organize the literature on debiasing methods for fairness.
- Score: 3.4767443062432326
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite being responsible for state-of-the-art results in several computer
vision and natural language processing tasks, neural networks have faced harsh
criticism due to some of their current shortcomings. One of them is that neural
networks are correlation machines prone to modeling biases within the data instead of focusing on actually useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from making unfair decisions, the AI community has concentrated efforts on correcting algorithmic biases, giving
rise to the research area now widely known as fairness in AI. In this survey
paper, we provide an in-depth overview of the main debiasing methods for
fairness-aware neural networks in the context of vision and language research.
We propose a novel taxonomy to better organize the literature on debiasing
methods for fairness, and we discuss the current challenges, trends, and
important future work directions for the interested researcher and
practitioner.
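As a concrete illustration of the in-processing debiasing methods this survey organizes, the sketch below trains a classifier with a demographic-parity penalty added to the task loss. The synthetic data, architecture, and penalty weight are illustrative assumptions, not a method taken from the paper.

```python
# A minimal sketch of in-processing debiasing: add a demographic-parity
# penalty to the task loss. All data and hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 2000
a = torch.randint(0, 2, (n,)).float()               # protected attribute (group 0/1)
x = torch.randn(n, 8) + 0.5 * a.unsqueeze(1)        # features correlated with the group
y = ((x[:, 0] + 0.3 * a + 0.1 * torch.randn(n)) > 0).float()  # biased labels

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness penalty weight: controls the accuracy/fairness trade-off

for step in range(200):
    logits = model(x).squeeze(1)
    p = torch.sigmoid(logits)
    # Demographic parity: positive prediction rates should match across groups.
    dp_gap = (p[a == 1].mean() - p[a == 0].mean()).abs()
    loss = bce(logits, y) + lam * dp_gap
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    p = torch.sigmoid(model(x).squeeze(1))
    gap = (p[a == 1].mean() - p[a == 0].mean()).abs().item()
print(f"demographic parity gap after training: {gap:.3f}")
```

Raising `lam` pushes the group rates together at some cost in task accuracy, which is the trade-off most in-processing methods of this kind expose.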
Related papers
- Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z)
- The Pursuit of Fairness in Artificial Intelligence Models: A Survey [2.124791625488617]
This survey offers a synopsis of the different ways researchers have promoted fairness in AI systems.
A thorough study is conducted of the approaches and techniques employed by researchers to mitigate bias in AI models.
We also delve into the impact of biased models on user experience and the ethical considerations to contemplate when developing and deploying such models.
arXiv Detail & Related papers (2024-03-26T02:33:36Z)
- A Survey on Knowledge Editing of Neural Networks [43.813073385305806]
Even the largest artificial neural networks make mistakes, and once-correct predictions can become invalid as the world changes over time.
Knowledge editing is emerging as a novel area of research that aims to enable reliable, data-efficient, and fast changes to a pre-trained target model.
We first introduce the problem of editing neural networks, formalize it within a common framework, and distinguish it from better-established branches of research such as continual learning.
arXiv Detail & Related papers (2023-10-30T16:29:47Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) that offers an efficient approach to identifying, evaluating, and removing biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
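FMD's unlearning machinery does not fit in a snippet, but its "identify and evaluate" step can be caricatured with a counterfactual probe: flip the protected attribute and measure how far predictions move. The data, model weights, and read-out below are hypothetical stand-ins, not the paper's procedure.

```python
# A minimal counterfactual bias probe (an illustration, not FMD's method):
# flip the protected attribute and measure how far predictions move.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 6
X = rng.normal(size=(n, d))
X[:, 0] = rng.integers(0, 2, size=n)            # column 0: protected attribute (0/1)
w = np.array([0.8, 0.5, -0.3, 0.2, 0.1, 0.0])   # a model that (unfairly) uses column 0

def predict(X, w):
    """Logistic scores of a fixed linear model."""
    return 1.0 / (1.0 + np.exp(-X @ w))

X_cf = X.copy()
X_cf[:, 0] = 1.0 - X_cf[:, 0]                   # counterfactual: flip the group
shift = np.abs(predict(X, w) - predict(X_cf, w)).mean()
print(f"mean counterfactual prediction shift: {shift:.3f}")
# A large shift flags the protected attribute as a bias worth removing.
```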
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
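Benchmarks like the one above report group-fairness metrics that take only a few lines to compute; the sketch below shows the demographic parity difference and equalized-odds gaps on placeholder predictions (all arrays are invented for illustration).

```python
# A minimal sketch of two common group-fairness metrics, computed on
# placeholder predictions; all arrays are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)    # ground-truth labels
y_pred = rng.integers(0, 2, 500)    # model predictions (placeholder)
group = rng.integers(0, 2, 500)     # protected group membership

def pos_rate(mask):
    """Positive prediction rate within a boolean subset."""
    return y_pred[mask].mean() if mask.any() else 0.0

# Demographic parity difference: gap in positive prediction rates.
dp_diff = abs(pos_rate(group == 1) - pos_rate(group == 0))

# Equalized odds: gaps in true-positive and false-positive rates.
tpr_gap = abs(pos_rate((group == 1) & (y_true == 1)) - pos_rate((group == 0) & (y_true == 1)))
fpr_gap = abs(pos_rate((group == 1) & (y_true == 0)) - pos_rate((group == 0) & (y_true == 0)))

print(f"DP diff: {dp_diff:.3f}, TPR gap: {tpr_gap:.3f}, FPR gap: {fpr_gap:.3f}")
```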
- FairSNA: Algorithmic Fairness in Social Network Analysis [17.39106091928567]
We highlight how the structural bias of social networks impacts the fairness of different methods.
We discuss fairness aspects that should be considered while proposing network structure-based solutions for different SNA problems.
We highlight various open research directions that require researchers' attention to bridge the gap between fairness and SNA.
arXiv Detail & Related papers (2022-09-04T19:11:38Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
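D-BIAS itself is an interactive visual tool; the underlying idea of weakening a biased causal edge and simulating a debiased dataset can be caricatured with a two-variable linear structural causal model. The variables and coefficients below are invented for illustration.

```python
# A caricature of the D-BIAS interaction in a linear structural causal model:
# the user weakens a biased causal edge and the system resimulates the data.
# Variables and coefficients are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
gender = rng.integers(0, 2, n).astype(float)     # exogenous protected attribute

def simulate(edge_strength):
    """salary <- edge_strength * gender + noise; the edge carries the bias."""
    return edge_strength * gender + rng.normal(0.0, 1.0, n)

biased = simulate(edge_strength=1.0)     # original (biased) dataset
debiased = simulate(edge_strength=0.2)   # user weakens the gender -> salary edge

for name, salary in [("biased", biased), ("debiased", debiased)]:
    gap = salary[gender == 1].mean() - salary[gender == 0].mean()
    print(f"{name}: mean salary gap between groups = {gap:.3f}")
```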
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy.
We evaluate the performance of the proposed methods in an automated career counseling application.
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
- Technical Challenges for Training Fair Neural Networks [62.466658247995404]
We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives and produce a range of unintended and undesirable consequences.
arXiv Detail & Related papers (2021-02-12T20:36:45Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
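The paper's no-regret analysis cannot be compressed into code, but fairness-constrained training is commonly implemented as a Lagrangian in which a dual variable prices the constraint violation. The sketch below shows that generic pattern under an assumed demographic-parity constraint with slack `eps`; data and hyperparameters are illustrative.

```python
# A generic Lagrangian pattern for fairness-constrained training (not the
# paper's provable procedure): a dual variable prices the constraint violation.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 1000
a = torch.randint(0, 2, (n,)).float()            # protected attribute
x = torch.randn(n, 4) + 0.4 * a.unsqueeze(1)
y = (x[:, 0] + 0.2 * a > 0).float()

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lmbda, eps, dual_lr = 0.0, 0.05, 0.1  # dual variable, allowed slack, dual step

for step in range(300):
    logits = model(x).squeeze(1)
    p = torch.sigmoid(logits)
    # Constraint: |DP gap| <= eps; `violation` is positive when it is broken.
    violation = (p[a == 1].mean() - p[a == 0].mean()).abs() - eps
    loss = bce(logits, y) + lmbda * violation     # primal step on the Lagrangian
    opt.zero_grad()
    loss.backward()
    opt.step()
    lmbda = max(0.0, lmbda + dual_lr * violation.item())  # projected dual ascent
```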
- Bayesian Neural Networks: An Introduction and Survey [22.018605089162204]
This article introduces Bayesian Neural Networks (BNNs) and the seminal research regarding their implementation.
Different approximate inference methods are compared and used to highlight where future research can improve on current methods.
arXiv Detail & Related papers (2020-06-22T06:30:15Z)
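One approximate inference method that introductions to BNNs typically compare, Monte Carlo dropout, fits in a short sketch: keep dropout active at prediction time and read predictive uncertainty off the spread of sampled outputs. The regression task, architecture, and sample count below are arbitrary choices for illustration.

```python
# A minimal sketch of Monte Carlo dropout as approximate Bayesian inference:
# dropout stays active at prediction time, and the spread of sampled outputs
# approximates predictive uncertainty.
import torch
import torch.nn as nn

torch.manual_seed(0)
x_train = torch.linspace(-2, 2, 200).unsqueeze(1)
y_train = torch.sin(3 * x_train) + 0.1 * torch.randn_like(x_train)

model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    loss = ((model(x_train) - y_train) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

model.train()  # keep dropout active at test time (the "MC" in MC dropout)
x_test = torch.tensor([[0.0], [3.0]])            # in-distribution vs. far away
with torch.no_grad():
    samples = torch.stack([model(x_test) for _ in range(100)])
mean, std = samples.mean(0).squeeze(1), samples.std(0).squeeze(1)
print(f"predictive means: {mean.tolist()}, predictive stds: {std.tolist()}")
```

The uncertainty (std) should be visibly larger at the out-of-distribution input, which is the behavior approximate BNN inference is meant to provide.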