Towards Fair Disentangled Online Learning for Changing Environments
- URL: http://arxiv.org/abs/2306.01007v2
- Date: Mon, 17 Jul 2023 02:57:27 GMT
- Title: Towards Fair Disentangled Online Learning for Changing Environments
- Authors: Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Christan
Grant, Feng Chen
- Abstract summary: We argue that changing environments in online learning can be attributed to partial changes in learned parameters that are specific to environments.
We propose a novel algorithm under the assumption that data collected at each time can be disentangled with two representations.
A novel regret metric is proposed that takes a mixed form of dynamic and static regret, combined with a fairness-aware long-term constraint.
- Score: 28.207499975916324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the problem of online learning for changing environments, data are
sequentially received one after another over time, and their distribution
assumptions may vary frequently. Although existing methods demonstrate the
effectiveness of their learning algorithms by providing a tight bound on either
dynamic regret or adaptive regret, most of them completely ignore learning with
model fairness, defined as the statistical parity across different
sub-populations (e.g., race and gender). Another drawback is that when adapting
to a new environment, an online learner needs to update model parameters with a
global change, which is costly and inefficient. Inspired by the sparse
mechanism shift hypothesis, we claim that changing environments in online
learning can be attributed to partial changes in learned parameters that are
specific to environments and the rest remain invariant to changing
environments. To this end, in this paper, we propose a novel algorithm under
the assumption that data collected at each time can be disentangled with two
representations, an environment-invariant semantic factor and an
environment-specific variation factor. The semantic factor is further used for
fair prediction under a group fairness constraint. To evaluate the sequence of
model parameters generated by the learner, a novel regret is proposed that
takes a mixed form of dynamic and static regret metrics, combined with a
fairness-aware long-term constraint. The detailed analysis provides theoretical
guarantees for loss regret and violation of cumulative fairness constraints.
Empirical evaluations on real-world datasets demonstrate that our proposed
method sequentially outperforms baseline methods in model accuracy and fairness.
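The group-fairness notion the abstract references, statistical parity across sub-populations, can be sketched as the gap in positive-prediction rates between two groups. This is a minimal illustration of the general concept; the function name and data below are illustrative and not taken from the paper.

```python
def statistical_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups.

    predictions: iterable of 0/1 model outputs.
    groups: iterable of 0/1 protected-attribute labels (e.g., two
            sub-populations defined by race or gender).
    """
    rates = {}
    for g in (0, 1):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members) if members else 0.0
    return abs(rates[0] - rates[1])


# Toy data: group 0 receives positive predictions at rate 0.75,
# group 1 at rate 0.25, so the parity gap is 0.5.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = [0, 0, 0, 0, 1, 1, 1, 1]
gap = statistical_parity_gap(preds, grps)
```

A fairness-aware long-term constraint of the kind the abstract describes would bound the cumulative sum of such gaps over the online rounds.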
Related papers
- Counterfactual Fairness through Transforming Data Orthogonal to Bias [7.109458605736819]
We propose a novel data pre-processing algorithm, Orthogonal to Bias (OB).
OB is designed to eliminate the influence of a group of continuous sensitive variables, thus promoting counterfactual fairness in machine learning applications.
OB is model-agnostic, making it applicable to a wide range of machine learning models and tasks.
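The OB summary above describes eliminating the influence of continuous sensitive variables. One common way to remove a *linear* influence is to project the data orthogonal to the sensitive variable via least-squares residuals; the sketch below illustrates that generic idea and is not the paper's algorithm.

```python
import numpy as np


def project_orthogonal(X, s):
    """Return the residuals of X after regressing out sensitive vector s.

    X: (n, d) data matrix; s: (n,) continuous sensitive variable.
    The residuals are orthogonal to s, removing its linear influence.
    """
    s = s.reshape(-1, 1)
    # Least-squares coefficients of each column of X on s.
    beta = np.linalg.lstsq(s, X, rcond=None)[0]
    return X - s @ beta


rng = np.random.default_rng(0)
s = rng.normal(size=100)
# Toy data whose columns are linearly contaminated by s.
X = np.outer(s, [2.0, -1.0]) + rng.normal(size=(100, 2))
X_clean = project_orthogonal(X, s)
# Each column of X_clean is now numerically orthogonal to s.
```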
arXiv Detail & Related papers (2024-03-26T16:40:08Z) - Seeing Unseen: Discover Novel Biomedical Concepts via
Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z) - Dynamic Environment Responsive Online Meta-Learning with Fairness
Awareness [30.44174123736964]
We introduce an innovative adaptive fairness-aware online meta-learning algorithm, referred to as FairSAOML.
Our experimental evaluation on various real-world datasets in dynamic environments demonstrates that our proposed FairSAOML algorithm consistently outperforms alternative approaches.
arXiv Detail & Related papers (2024-02-19T17:44:35Z) - Adaptive Robust Learning using Latent Bernoulli Variables [50.223140145910904]
We present an adaptive approach for learning from corrupted training sets.
We identify corrupted and non-corrupted samples with latent Bernoulli variables.
The resulting problem is solved via variational inference.
arXiv Detail & Related papers (2023-12-01T13:50:15Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Environment Invariant Linear Least Squares [18.387614531869826]
This paper considers a multi-environment linear regression model in which data from multiple experimental settings are collected.
We construct a novel environment invariant linear least squares (EILLS) objective function, a multi-environment version of linear least-squares regression.
arXiv Detail & Related papers (2023-03-06T13:10:54Z) - Adaptive Fairness-Aware Online Meta-Learning for Changing Environments [29.073555722548956]
The fairness-aware online learning framework has arisen as a powerful tool for the continual lifelong learning setting.
Existing methods make heavy use of the i.i.d. assumption for data and hence provide static regret analysis for the framework.
We propose a novel adaptive fairness-aware online meta-learning algorithm, namely FairSAOML, which is able to adapt to changing environments in both bias control and model precision.
arXiv Detail & Related papers (2022-05-20T15:29:38Z) - On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z) - Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z) - Learning Neural Models for Natural Language Processing in the Face of
Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z) - Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.