BM-CL: Bias Mitigation through the lens of Continual Learning
- URL: http://arxiv.org/abs/2509.01730v1
- Date: Mon, 01 Sep 2025 19:23:24 GMT
- Title: BM-CL: Bias Mitigation through the lens of Continual Learning
- Authors: Lucas Mansilla, Rodrigo Echeveste, Camila Gonzalez, Diego H. Milone, Enzo Ferrante
- Abstract summary: This study introduces Bias Mitigation through Continual Learning (BM-CL), a novel framework that leverages the principles of continual learning to address the trade-off between improving outcomes for disadvantaged groups and preserving performance for advantaged groups. We postulate that mitigating bias is conceptually similar to domain-incremental continual learning, where the model must adjust to changing fairness conditions. Our approach bridges the fields of fairness and continual learning, offering a promising pathway for developing machine learning systems that are both equitable and effective.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Biases in machine learning pose significant challenges, particularly when models amplify disparities that affect disadvantaged groups. Traditional bias mitigation techniques often lead to a "leveling-down effect", whereby improving outcomes of disadvantaged groups comes at the expense of reduced performance for advantaged groups. This study introduces Bias Mitigation through Continual Learning (BM-CL), a novel framework that leverages the principles of continual learning to address this trade-off. We postulate that mitigating bias is conceptually similar to domain-incremental continual learning, where the model must adjust to changing fairness conditions, improving outcomes for disadvantaged groups without forgetting the knowledge that benefits advantaged groups. Drawing inspiration from techniques such as Learning without Forgetting and Elastic Weight Consolidation, we reinterpret bias mitigation as a continual learning problem. This perspective allows models to incrementally balance fairness objectives, enhancing outcomes for disadvantaged groups while preserving performance for advantaged groups. Experiments on synthetic and real-world image datasets, characterized by diverse sources of bias, demonstrate that the proposed framework mitigates biases while minimizing the loss of original knowledge. Our approach bridges the fields of fairness and continual learning, offering a promising pathway for developing machine learning systems that are both equitable and effective.
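For intuition, here is a minimal sketch of how the EWC-inspired variant of this idea could look in PyTorch: a Fisher-weighted quadratic penalty anchors the weights that mattered for the original model, while a group-reweighted loss drives the fairness update. The upweighting scheme, hyperparameters, and function names are illustrative assumptions, not BM-CL's exact recipe.

```python
import torch
import torch.nn.functional as F

def fisher_diagonal(model, loader):
    """Diagonal Fisher estimate: mean squared loss gradients
    over the reference (pre-mitigation) data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(loader) for n, f in fisher.items()}

def ewc_penalty(model, anchor_params, fisher):
    """Quadratic penalty discouraging drift on weights the original task used."""
    return sum((fisher[n] * (p - anchor_params[n]) ** 2).sum()
               for n, p in model.named_parameters())

def mitigation_step(model, batch, anchor_params, fisher, optimizer,
                    disadvantaged_id=1, lam=100.0):
    x, y, group = batch
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    weights = 1.0 + (group == disadvantaged_id).float()  # upweight the worse-off group
    loss = (weights * per_sample).mean() + lam * ewc_penalty(model, anchor_params, fisher)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here anchor_params would hold a frozen copy of the pre-mitigation weights, e.g. {n: p.detach().clone() for n, p in model.named_parameters()}, so the penalty measures drift from the model that already serves the advantaged group well.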
Related papers
- Your Group-Relative Advantage Is Biased [74.57406620907797]
Group-based learning methods rely on group-relative advantage estimation to avoid the need for learned critics. In this work, we uncover a fundamental issue of group-based RL: the group-relative advantage estimator is inherently biased relative to the true (expected) advantage. We propose History-Aware Adaptive Difficulty Weighting (HA-DW), an adaptive reweighting scheme that adjusts advantage estimates based on an evolving difficulty anchor and training dynamics.
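For context, group-relative methods such as GRPO replace a learned critic by centering each reward on its group's mean. A small numpy experiment, assuming binary rewards and ignoring GRPO's usual std normalization, shows how the finite-group estimate of a success's advantage deviates from the true value 1 - p:

```python
import numpy as np

rng = np.random.default_rng(0)
p, group_size = 0.3, 8           # true success rate; responses per prompt
estimates = []
for _ in range(100_000):
    rewards = rng.binomial(1, p, size=group_size).astype(float)
    if not rewards.any():        # no success in the group: nothing to score
        continue
    adv = rewards - rewards.mean()              # group-relative advantage
    estimates.append(adv[rewards == 1].mean())  # advantage given to successes

print(f"true advantage of a success: {1 - p:.3f}")                # 0.700
print(f"mean group-relative estimate: {np.mean(estimates):.3f}")  # noticeably lower
```

Centering on the group's own mean (which includes the sample being scored) systematically shrinks the estimate; this is one concrete source of the bias the paper analyzes, though its full analysis covers more.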
arXiv Detail & Related papers (2026-01-13T13:03:15Z) - Learning Fair Representations with Kolmogorov-Arnold Networks [0.08594140167290099]
Predictive models often exhibit discriminatory behavior towards marginalized groups. Existing fair learning models aim to mitigate bias, but achieving an optimal trade-off between fairness and accuracy remains a challenge. We propose integrating Kolmogorov-Arnold Networks (KANs) within a fair adversarial learning framework.
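As background, fair adversarial learning trains an encoder so an adversary cannot recover the protected attribute from its representations. A minimal PyTorch sketch of the generic gradient-reversal setup follows; the paper's contribution is to swap the network components for KANs, whereas the linear modules and loss weighting here are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
task_head = nn.Linear(8, 2)   # predicts the target label
adversary = nn.Linear(8, 2)   # tries to predict the protected attribute

def loss_fn(x, y, s, lam=1.0):
    z = encoder(x)
    task_loss = F.cross_entropy(task_head(z), y)
    # The adversary trains to recover s; the reversed gradient pushes the
    # encoder to make s unrecoverable, i.e. to learn a fairer representation.
    adv_loss = F.cross_entropy(adversary(GradReverse.apply(z, lam)), s)
    return task_loss + adv_loss
```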
arXiv Detail & Related papers (2025-11-14T07:51:56Z) - FairContrast: Enhancing Fairness through Contrastive learning and Customized Augmenting Methods on Tabular Data [2.51657752676152]
As AI systems become more embedded in everyday life, the development of fair and unbiased models becomes more critical. We introduce a contrastive learning framework specifically designed to address bias and learn fair representations in datasets. Our results demonstrate the efficacy of our approach in mitigating bias with minimal trade-off in accuracy, and in leveraging the learned fair representations in various downstream tasks.
arXiv Detail & Related papers (2025-10-02T13:43:53Z) - Paying Alignment Tax with Contrastive Learning [6.232983467016873]
Current debiasing approaches often degrade model capabilities such as factual accuracy and knowledge retention. We propose a contrastive learning framework that learns through carefully constructed positive and negative examples.
arXiv Detail & Related papers (2025-05-25T21:26:18Z) - Temporal-Difference Variational Continual Learning [89.32940051152782]
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations. Our approach effectively mitigates catastrophic forgetting, outperforming strong variational CL methods.
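As background, variational continual learning regularizes the current posterior toward earlier ones. A minimal sketch, assuming diagonal Gaussian posteriors over weights and a weighted sum of KL terms to several stored posteriors; the paper's temporal-difference weighting is more refined than this plain sum:

```python
import torch
import torch.distributions as D

def multi_posterior_kl(mu, log_sigma, stored, weights):
    """Weighted sum of KL(q_t || q_{t-k}) terms to several stored diagonal
    Gaussian posteriors, each given as a (mu, log_sigma) pair."""
    q = D.Normal(mu, log_sigma.exp())
    kl = 0.0
    for w, (mu_k, log_sigma_k) in zip(weights, stored):
        kl = kl + w * D.kl_divergence(q, D.Normal(mu_k, log_sigma_k.exp())).sum()
    return kl

mu = torch.zeros(10, requires_grad=True)
log_sigma = torch.zeros(10, requires_grad=True)
stored = [(torch.randn(10), torch.zeros(10)), (torch.randn(10), torch.zeros(10))]
reg = multi_posterior_kl(mu, log_sigma, stored, weights=[0.7, 0.3])
# total loss = expected NLL under q_t + reg   (NLL term omitted here)
```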
arXiv Detail & Related papers (2024-10-10T10:58:41Z) - Normalization and effective learning rates in reinforcement learning [52.59508428613934]
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature.
We show that normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate.
We propose to make the learning rate schedule explicit with a simple reparameterization, which we call Normalize-and-Project.
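The equivalence can be seen with any scale-invariant function: if f(c*theta) = f(theta), then the gradient at c*theta shrinks as 1/c, so a fixed step size eta acts on the function like an effective rate of roughly eta / ||theta||^2. A minimal numerical check (the toy function and scaling factors are illustrative, not from the paper):

```python
import torch

def f(theta):
    # Scale-invariant toy "network": normalization makes f(c*theta) == f(theta).
    z = theta / theta.norm()
    return (z * torch.arange(1.0, 1.0 + len(z))).sum()

theta = torch.randn(8)
for c in [1.0, 2.0, 4.0]:
    t = (c * theta).detach().requires_grad_(True)
    f(t).backward()
    print(f"scale {c}: grad norm {t.grad.norm():.4f}")  # shrinks as 1/c
# A fixed learning rate therefore moves f less as ||theta|| grows:
# effective learning rate ~ lr / ||theta||**2.
```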
arXiv Detail & Related papers (2024-07-01T20:58:01Z) - Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by significant margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z) - Time-Series Contrastive Learning against False Negatives and Class Imbalance [17.43801009251228]
We conduct a theoretical analysis and find that existing methods overlook two fundamental issues: false negatives and the class imbalance inherent in the InfoNCE loss-based framework.
We introduce a straightforward modification, grounded in the SimCLR framework, that applies universally to models engaged in the instance discrimination task.
We perform semi-supervised consistency classification and enhance the representational ability of minority classes.
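For reference, the standard InfoNCE (NT-Xent) loss treats every other sample in the batch as a negative, so two samples from the same latent class become "false negatives". A minimal sketch of the unmodified loss where that issue arises (batch construction and temperature are illustrative):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent: for each anchor, its augmented view is the positive
    and ALL other 2N-2 samples are negatives, including samples that share
    the anchor's class (false negatives)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # 2N x d
    sim = z @ z.t() / temperature                        # 2N x 2N
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))                    # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(32, 64), torch.randn(32, 64)        # two augmented views
loss = nt_xent(z1, z2)
```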
arXiv Detail & Related papers (2023-12-19T08:38:03Z) - Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Integrating Prior Knowledge in Contrastive Learning with Kernel [4.050766659420731]
We use kernel theory to propose a novel loss, called decoupled uniformity, that i) allows the integration of prior knowledge and ii) removes the negative-positive coupling in the original InfoNCE loss.
In an unsupervised setting, we empirically demonstrate that contrastive learning benefits from generative models to improve its representations on both natural and medical images.
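For intuition, InfoNCE couples positives and negatives inside a single log-sum-exp; the alignment/uniformity decomposition of Wang & Isola separates them, which is the kind of decoupling the proposed loss builds on. A minimal sketch of that decomposition (the paper's decoupled uniformity term differs in detail, notably in how positives' centroids enter the uniformity term):

```python
import torch

def align_loss(x, y, alpha=2):
    # Pull positive pairs together (x[i] and y[i] are views of the same sample).
    return (x - y).norm(dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # Spread all embeddings over the unit sphere, independently of the positives.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

x = torch.nn.functional.normalize(torch.randn(128, 32), dim=1)
y = torch.nn.functional.normalize(torch.randn(128, 32), dim=1)
loss = align_loss(x, y) + uniform_loss(x)
```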
arXiv Detail & Related papers (2022-06-03T15:43:08Z) - Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long-tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data-generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z) - Fairness in Forecasting and Learning Linear Dynamical Systems [10.762748665074794]
We introduce two natural notions of subgroup fairness and instantaneous fairness to address under-representation bias in time-series forecasting problems.
In particular, we consider the subgroup-fair and instant-fair learning of a linear dynamical system from multiple trajectories of varying lengths.
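As background, one common reading of subgroup fairness is a minimax objective: fit shared dynamics while bounding the worst subgroup's prediction error. A minimal least-squares sketch under that assumption; the paper's exact fairness notions and estimator may differ:

```python
import numpy as np

def fit_lds_subgroup_fair(trajectories, groups, n_iters=200, lr=0.01):
    """Fit x_{t+1} ~ A x_t by gradient descent on the WORST subgroup's
    mean squared one-step prediction error (minimax subgroup fairness)."""
    d = trajectories[0].shape[1]
    A = np.eye(d)
    for _ in range(n_iters):
        errs, grads = {}, {}
        for traj, g in zip(trajectories, groups):   # trajectories may vary in length
            X, Y = traj[:-1], traj[1:]
            R = X @ A.T - Y                          # one-step residuals
            errs.setdefault(g, []).append((R ** 2).mean())
            grads.setdefault(g, []).append(2 * R.T @ X / X.size)
        worst = max(errs, key=lambda g: np.mean(errs[g]))
        A -= lr * np.mean(grads[worst], axis=0)      # descend on the worst group
    return A

trajs = [np.cumsum(np.random.randn(50, 3), axis=0),
         np.cumsum(np.random.randn(30, 3), axis=0)]
A_hat = fit_lds_subgroup_fair(trajs, groups=[0, 1])
```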
arXiv Detail & Related papers (2020-06-12T16:53:27Z)