Dynamic Environment Responsive Online Meta-Learning with Fairness Awareness
- URL: http://arxiv.org/abs/2402.12319v1
- Date: Mon, 19 Feb 2024 17:44:35 GMT
- Title: Dynamic Environment Responsive Online Meta-Learning with Fairness Awareness
- Authors: Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Feng Chen
- Abstract summary: We introduce an innovative adaptive fairness-aware online meta-learning algorithm, referred to as FairSAOML.
Our experimental evaluation on various real-world datasets in dynamic environments demonstrates that our proposed FairSAOML algorithm consistently outperforms alternative approaches.
- Score: 30.44174123736964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fairness-aware online learning framework has emerged as a potent tool
within the context of continuous lifelong learning. In this scenario, the
learner's objective is to progressively acquire new tasks as they arrive over
time, while also guaranteeing statistical parity among various protected
sub-populations, such as race and gender, when it comes to the newly introduced
tasks. A significant limitation of current approaches lies in their heavy
reliance on the i.i.d. (independent and identically distributed) assumption
concerning data, leading to a static regret analysis of the framework.
Nevertheless, it's crucial to note that achieving low static regret does not
necessarily translate to strong performance in dynamic environments
characterized by tasks sampled from diverse distributions. In this paper, to
tackle the fairness-aware online learning challenge in evolving settings, we
introduce a unique regret measure, FairSAR, by incorporating long-term fairness
constraints into a strongly adapted loss regret framework. Moreover, to
determine an optimal model parameter at each time step, we introduce an
innovative adaptive fairness-aware online meta-learning algorithm, referred to
as FairSAOML. This algorithm possesses the ability to adjust to dynamic
environments by effectively managing bias control and model accuracy. The
problem is framed as a bi-level convex-concave optimization, considering both
the model's primal and dual parameters, which pertain to its accuracy and
fairness attributes, respectively. Theoretical analysis yields sub-linear upper
bounds for both loss regret and the cumulative violation of fairness
constraints. Our experimental evaluation on various real-world datasets in
dynamic environments demonstrates that our proposed FairSAOML algorithm
consistently outperforms alternative approaches rooted in the most advanced
prior online learning methods.
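The bi-level convex-concave formulation described in the abstract can be illustrated with a minimal primal-dual sketch. This is not the authors' implementation: the function names, step sizes, and the toy disparity measure below are all hypothetical. The idea it demonstrates is standard in constrained online learning: at each round the primal (model) parameters take a gradient-descent step on the loss plus the dual-weighted fairness violation, while the nonnegative dual variable takes a gradient-ascent step on the violation, so sustained constraint violations raise the price of unfairness over time.

```python
import numpy as np

def primal_dual_step(theta, lam, grad_loss, violation, grad_violation,
                     eta_p=0.1, eta_d=0.1):
    """One round of a projected primal-dual update for constrained online learning.

    theta: model (primal) parameters
    lam:   nonnegative dual variable attached to the fairness constraint
    """
    # Primal descent on the Lagrangian L(theta, lam) = loss + lam * violation
    theta = theta - eta_p * (grad_loss + lam * grad_violation)
    # Dual ascent on the constraint violation, projected back to lam >= 0
    lam = max(0.0, lam + eta_d * violation)
    return theta, lam

# Toy run: squared loss with a linear, statistical-parity-style disparity term.
rng = np.random.default_rng(0)
theta, lam = np.zeros(2), 0.0
for t in range(200):
    x, y = rng.normal(size=2), 1.0
    g = 0.1 * rng.normal(size=2)          # hypothetical group-disparity direction
    loss_grad = 2.0 * (theta @ x - y) * x
    violation = float(theta @ g)          # signed disparity of the current model
    theta, lam = primal_dual_step(theta, lam, loss_grad, violation, g)
```

Under convexity in the primal variables and concavity in the dual, updates of this shape are what typically yield the kind of sub-linear bounds on both loss regret and cumulative constraint violation that the abstract reports; the paper's actual algorithm additionally adapts across tasks in the meta-learning sense.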
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning [71.44986275228747]
In-context learning (ICL) has become an efficient approach propelled by the recent advancements in large language models (LLMs).
However, both paradigms are prone to suffer from the critical problem of overconfidence (i.e., miscalibration).
arXiv Detail & Related papers (2023-12-21T11:55:10Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Towards Fair Disentangled Online Learning for Changing Environments [28.207499975916324]
We argue that changing environments in online learning can be attributed to partial changes in learned parameters that are specific to environments.
We propose a novel algorithm under the assumption that data collected at each time can be disentangled with two representations.
A novel regret is proposed that takes a mixed form of dynamic and static regret metrics, followed by a fairness-aware long-term constraint.
arXiv Detail & Related papers (2023-05-31T19:04:16Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- Adaptive Fairness-Aware Online Meta-Learning for Changing Environments [29.073555722548956]
The fairness-aware online learning framework has arisen as a powerful tool for the continual lifelong learning setting.
Existing methods make heavy use of the i.i.d. assumption for data and hence provide static regret analysis for the framework.
We propose a novel adaptive fairness-aware online meta-learning algorithm, namely FairSAOML, which is able to adapt to changing environments in both bias control and model precision.
arXiv Detail & Related papers (2022-05-20T15:29:38Z)
- Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity.
arXiv Detail & Related papers (2022-02-28T15:39:36Z)
- Dynamic Regret Analysis for Online Meta-Learning [0.0]
The online meta-learning framework has arisen as a powerful tool for the continual lifelong learning setting.
This formulation involves two levels: outer level which learns meta-learners and inner level which learns task-specific models.
We establish performance in terms of dynamic regret, which handles changing environments from a global perspective.
Our analyses, carried out in expectation, establish a logarithmic local dynamic regret that explicitly depends on the total number of iterations.
arXiv Detail & Related papers (2021-09-29T12:12:59Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair, fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training.
We empirically demonstrate that our proposed approach efficiently mitigates biases on model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.