Discrimination and Class Imbalance Aware Online Naive Bayes
- URL: http://arxiv.org/abs/2211.04812v1
- Date: Wed, 9 Nov 2022 11:20:19 GMT
- Title: Discrimination and Class Imbalance Aware Online Naive Bayes
- Authors: Maryam Badar, Marco Fisichella, Vasileios Iosifidis, Wolfgang Nejdl
- Abstract summary: Stream learning algorithms are used to replace humans at critical decision-making points.
Recent discrimination-aware learning methods are optimized based on overall accuracy.
We propose a novel adaptation of Naïve Bayes to mitigate discrimination embedded in the streams.
- Score: 5.065947993017157
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness-aware mining of massive data streams is a growing and challenging
concern in the contemporary domain of machine learning. Many stream learning
algorithms are used to replace humans at critical decision-making points, e.g.,
hiring staff or assessing credit risk. This calls for handling massive
incoming information with minimal response delay while ensuring fair,
high-quality decisions. Recent discrimination-aware learning methods are optimized
based on overall accuracy. However, the overall accuracy is biased in favor of
the majority class; therefore, state-of-the-art methods mainly diminish
discrimination by partially or completely ignoring the minority class. In this
context, we propose a novel adaptation of Naïve Bayes to mitigate
discrimination embedded in the streams while maintaining high predictive
performance for both the majority and minority classes. Our proposed algorithm
is simple, fast, and attains multi-objective optimization goals. To handle
class imbalance and concept drifts, a dynamic instance weighting module is
proposed, which gives more importance to recent instances and less importance
to obsolete instances based on their membership in minority or majority class.
We conducted experiments on a range of streaming and static datasets and
found that the proposed methodology outperforms existing state-of-the-art
fairness-aware methods in terms of both discrimination score and balanced
accuracy.
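The dynamic instance weighting described in the abstract can be sketched as an online categorical Naive Bayes whose sufficient statistics are exponentially decayed (so obsolete instances fade) and whose updates are boosted for minority-class instances. This is a minimal illustration under assumed choices, not the paper's algorithm: the class name `WeightedOnlineNB`, the decay factor, and the minority boost are all hypothetical values for the sketch.

```python
import math
from collections import defaultdict

class WeightedOnlineNB:
    """Sketch of an online Naive Bayes with decayed, class-dependent instance weights.

    Illustrative only: `decay` fades old statistics (concept drift), and
    `minority_boost` up-weights minority-class instances (class imbalance).
    """

    def __init__(self, decay=0.99, minority_boost=2.0):
        self.decay = decay
        self.minority_boost = minority_boost
        self.class_weight = defaultdict(float)   # decayed weighted class counts
        self.feat_weight = defaultdict(float)    # decayed weighted (class, feature, value) counts

    def _minority_class(self):
        # Current minority class = class with the smallest decayed weight.
        if not self.class_weight:
            return None
        return min(self.class_weight, key=self.class_weight.get)

    def learn_one(self, x, y):
        # Decay all accumulated statistics so recent instances dominate.
        for k in self.class_weight:
            self.class_weight[k] *= self.decay
        for k in self.feat_weight:
            self.feat_weight[k] *= self.decay
        # Give minority-class instances extra weight.
        w = self.minority_boost if y == self._minority_class() else 1.0
        self.class_weight[y] += w
        for f, v in x.items():
            self.feat_weight[(y, f, v)] += w

    def predict_one(self, x):
        # Standard Naive Bayes posterior with Laplace smoothing on weighted counts.
        total = sum(self.class_weight.values())
        best, best_lp = None, -math.inf
        for c, cw in self.class_weight.items():
            lp = math.log((cw + 1) / (total + len(self.class_weight)))
            for f, v in x.items():
                lp += math.log((self.feat_weight[(c, f, v)] + 1) / (cw + 2))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Because the statistics are plain decayed counts, each `learn_one` is O(number of features), which matches the abstract's emphasis on minimum response delay over the stream.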
Related papers
- Exploring Vacant Classes in Label-Skewed Federated Learning [113.65301899666645]
Label skews, characterized by disparities in local label distribution across clients, pose a significant challenge in federated learning.
This paper introduces FedVLS, a novel approach to label-skewed federated learning that integrates vacant-class distillation and logit suppression simultaneously.
arXiv Detail & Related papers (2024-01-04T16:06:31Z)
- Adversarial Reweighting Guided by Wasserstein Distance for Bias Mitigation [24.160692009892088]
Under-representation of minorities in the data makes the disparate treatment of subpopulations difficult to deal with during learning.
We propose a novel adversarial reweighting method to address such representation bias.
arXiv Detail & Related papers (2023-11-21T15:46:11Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e., Fast-BACG, which greatly shortens the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Imbalanced Classification via Explicit Gradient Learning From Augmented Data [0.0]
We propose a novel deep meta-learning technique to augment a given imbalanced dataset with new minority instances.
The advantage of the proposed method is demonstrated on synthetic and real-world datasets with various imbalance ratios.
arXiv Detail & Related papers (2022-02-21T22:16:50Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Fairness-aware Class Imbalanced Learning [57.45784950421179]
We evaluate long-tail learning methods for tweet sentiment and occupation classification.
We extend a margin-loss based approach with methods to enforce fairness.
arXiv Detail & Related papers (2021-09-21T22:16:30Z)
- Online Fairness-Aware Learning with Imbalanced Data Streams [9.481178205985396]
We propose an online fairness-aware approach that maintains a valid and fair classifier over the stream.
It is an online boosting approach that changes the training distribution in an online fashion by monitoring the stream's class imbalance.
Experiments on eight real-world and one synthetic dataset demonstrate the superiority of this method over state-of-the-art fairness-aware stream approaches.
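The class-imbalance monitoring that this kind of online boosting relies on can be illustrated with a decayed class-count tracker that yields inverse-frequency training weights. This sketch is an assumption for illustration, not the cited method's internals; the class name `ImbalanceMonitor` and the decay value are hypothetical.

```python
class ImbalanceMonitor:
    """Track a stream's class distribution with exponential decay and
    derive inverse-frequency weights for adjusting the training distribution."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.counts = {}

    def update(self, y):
        # Fade old observations so the monitor adapts to drift in the class ratio.
        for c in self.counts:
            self.counts[c] *= self.decay
        self.counts[y] = self.counts.get(y, 0.0) + 1.0

    def weight(self, y):
        # Rarer classes receive larger training weight (inverse frequency).
        total = sum(self.counts.values())
        return total / (len(self.counts) * self.counts[y])
```

Feeding this monitor one instance at a time keeps the per-instance cost constant in the number of classes, which is what makes such distribution adjustment feasible online.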
arXiv Detail & Related papers (2021-08-13T13:31:42Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-train.
We empirically demonstrate that our proposed approach efficiently mitigates biases on model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
- M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them generalize poorly under a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples from more-frequent classes.
Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
- DeBayes: a Bayesian Method for Debiasing Network Embeddings [16.588468396705366]
We propose DeBayes: a conceptually elegant Bayesian method that is capable of learning debiased embeddings by using a biased prior.
Our experiments show that these representations can then be used to perform link prediction that is significantly more fair in terms of popular metrics.
arXiv Detail & Related papers (2020-02-26T12:57:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.