Fairness in Forecasting of Observations of Linear Dynamical Systems
- URL: http://arxiv.org/abs/2209.05274v4
- Date: Mon, 15 May 2023 20:13:53 GMT
- Title: Fairness in Forecasting of Observations of Linear Dynamical Systems
- Authors: Quan Zhou, Jakub Marecek, Robert N. Shorten
- Abstract summary: We introduce two natural notions of fairness in time-series forecasting problems: subgroup fairness and instantaneous fairness.
We show globally convergent methods for the resulting fairness-constrained learning problems.
Our results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods.
- Score: 10.762748665074794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In machine learning, training data often capture the behaviour of multiple
subgroups of some underlying human population. This behaviour can often be
modelled as observations of an unknown dynamical system with an unobserved
state. When the training data for the subgroups are not controlled carefully,
however, under-representation bias arises. To counter under-representation
bias, we introduce two natural notions of fairness in time-series forecasting
problems: subgroup fairness and instantaneous fairness. These notions extend
predictive parity to the learning of dynamical systems. We also show globally
convergent methods for the fairness-constrained learning problems using
hierarchies of convexifications of non-commutative polynomial optimisation
problems. Further, we show that by exploiting sparsity in the convexifications, we
can reduce the run time of our methods considerably. Our empirical results on a
biased data set motivated by insurance applications and the well-known COMPAS
data set demonstrate the efficacy of our methods.
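A minimal Python sketch of the subgroup-fairness idea, assuming a scalar AR(1)-style one-step forecaster and a min-max surrogate over subgroup forecast errors; the paper's actual method solves hierarchies of convexified non-commutative polynomial optimisation problems, which this toy does not attempt, and all names and simulated data below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(n_traj, T, a, c=1.0, q=0.05, r=0.1):
    """Observations of a scalar linear dynamical system x' = a x + w, y = c x + v."""
    trajs = []
    for _ in range(n_traj):
        x = rng.normal()
        y = np.empty(T)
        for t in range(T):
            x = a * x + rng.normal(scale=np.sqrt(q))
            y[t] = c * x + rng.normal(scale=np.sqrt(r))
        trajs.append(y)
    return trajs

# Group B is under-represented and behaves differently: the bias.
group_A = simulate(n_traj=50, T=100, a=0.9)
group_B = simulate(n_traj=3, T=20, a=0.5)

def mse(theta, trajs):
    # Mean one-step-ahead forecast error with y_hat[t+1] = theta * y[t].
    return float(np.mean([np.mean((y[1:] - theta * y[:-1]) ** 2) for y in trajs]))

# Pooled (unconstrained) fit: dominated by the over-represented group A.
pooled = minimize(lambda th: mse(th[0], group_A + group_B), x0=[0.0]).x[0]

# Subgroup-fair fit: minimise the *worst* subgroup's forecast error.
fair = minimize(lambda th: max(mse(th[0], group_A), mse(th[0], group_B)),
                x0=[0.0], method="Nelder-Mead").x[0]

for name, th in (("pooled", pooled), ("subgroup-fair", fair)):
    print(f"{name:>13}: theta={th:+.3f}  "
          f"MSE_A={mse(th, group_A):.4f}  MSE_B={mse(th, group_B):.4f}")
```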
Related papers
- A Mathematical Model of the Hidden Feedback Loop Effect in Machine Learning Systems [44.99833362998488]
We introduce a repeated learning process to jointly describe several phenomena attributed to unintended hidden feedback loops.
A distinctive feature of such repeated learning setting is that the state of the environment becomes causally dependent on the learner itself over time.
We present a novel dynamical systems model of the repeated learning process and prove the limiting set of probability distributions for positive and negative feedback loop modes.
arXiv Detail & Related papers (2024-05-04T17:57:24Z) - Pessimistic Causal Reinforcement Learning with Mediators for Confounded Offline Data [17.991833729722288]
- Pessimistic Causal Reinforcement Learning with Mediators for Confounded Offline Data [17.991833729722288]
We propose a novel policy learning algorithm, PESsimistic CAusal Learning (PESCAL).
Our key observation is that, by incorporating auxiliary variables that mediate the effect of actions on system dynamics, it is sufficient to learn a lower bound of the mediator distribution function, instead of the Q-function.
We provide theoretical guarantees for the algorithms we propose, and demonstrate their efficacy through simulations, as well as real-world experiments utilizing offline datasets from a leading ride-hailing platform.
arXiv Detail & Related papers (2024-03-18T14:51:19Z) - Learning invariant representations of time-homogeneous stochastic dynamical systems [27.127773672738535]
We study the problem of learning a representation of the state that faithfully captures its dynamics.
This is instrumental to learning the transfer operator or the generator of the system.
We show that the search for a good representation can be cast as an optimization problem over neural networks.
arXiv Detail & Related papers (2023-07-19T11:32:24Z) - A Unified Framework of Policy Learning for Contextual Bandit with
- A Unified Framework of Policy Learning for Contextual Bandit with Confounding Bias and Missing Observations [108.89353070722497]
We study the offline contextual bandit problem, where we aim to acquire an optimal policy using observational data.
We present a new algorithm called Causal-Adjusted Pessimistic (CAP) policy learning, which forms the reward function as the solution of an integral equation system.
arXiv Detail & Related papers (2023-03-20T15:17:31Z) - Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how biasing models can be applied to remedy bias in the context of visual recognition.
Based on the (approximate) knowledge of the biasing mechanisms at work, our approach consists in reweighting the observations.
We propose to use a low dimensional image representation, shared across the image databases.
arXiv Detail & Related papers (2021-09-06T10:56:58Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective unsupervised debiasing technique.
We perform clustering in the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representation.
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - A Sober Look at the Unsupervised Learning of Disentangled
- A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation [63.042651834453544]
We show that the unsupervised learning of disentangled representations is impossible without inductive biases on both the models and the data.
We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision.
arXiv Detail & Related papers (2020-10-27T10:17:15Z) - On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z) - Fairness in Forecasting and Learning Linear Dynamical Systems [10.762748665074794]
We introduce two natural notions of fairness, subgroup fairness and instantaneous fairness, to address such under-representation bias in time-series forecasting problems.
In particular, we consider the subgroup-fair and instant-fair learning of a linear dynamical system from multiple trajectories of varying lengths.
arXiv Detail & Related papers (2020-06-12T16:53:27Z)