FairDRL-ST: Disentangled Representation Learning for Fair Spatio-Temporal Mobility Prediction
- URL: http://arxiv.org/abs/2508.07518v1
- Date: Mon, 11 Aug 2025 00:36:19 GMT
- Title: FairDRL-ST: Disentangled Representation Learning for Fair Spatio-Temporal Mobility Prediction
- Authors: Sichen Zhao, Wei Shao, Jeffrey Chan, Ziqi Xu, Flora Salim
- Abstract summary: Deep spatio-temporal neural networks are increasingly utilised in urban computing contexts. We propose a novel framework, FairDRL-ST, based on disentangled representation learning, to address fairness concerns in spatio-temporal prediction. By leveraging adversarial learning and disentangled representation learning, our framework learns to separate attributes that contain sensitive information.
- Score: 7.126500197418756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep spatio-temporal neural networks are increasingly utilised in urban computing contexts, the deployment of such methods can have a direct impact on users of critical urban infrastructure, such as public transport, emergency services, and traffic management systems. While many spatio-temporal methods focus on improving accuracy, fairness has recently gained attention due to growing evidence that biased predictions in spatio-temporal applications can disproportionately disadvantage certain demographic or geographic groups, thereby reinforcing existing socioeconomic inequalities and undermining the ethical deployment of AI in public services. In this paper, we propose a novel framework, FairDRL-ST, based on disentangled representation learning, to address fairness concerns in spatio-temporal prediction, with a particular focus on mobility demand forecasting. By leveraging adversarial learning and disentangled representation learning, our framework learns to separate attributes that contain sensitive information. Unlike existing methods that enforce fairness through supervised learning, which may lead to overcompensation and degraded performance, our framework achieves fairness in an unsupervised manner with minimal performance loss. We apply our framework to real-world urban mobility datasets and demonstrate its ability to close fairness gaps while delivering competitive predictive performance compared to state-of-the-art fairness-aware methods.
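The abstract's core mechanism, learning a representation from which an adversary cannot recover the sensitive attribute, can be illustrated with a minimal sketch. This is not the paper's architecture: the linear "encoder", the toy data, and the gradient-reversal update are illustrative assumptions, and a full method would add a prediction loss to preserve utility on the downstream forecasting task.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: a binary sensitive attribute s leaks into feature 0.
n, d = 200, 4
s = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * s

W_enc = rng.normal(scale=0.1, size=(d, 2))  # linear "encoder" -> content code z_c
w_adv = rng.normal(scale=0.1, size=2)       # adversary: predicts s from z_c
lr, lam = 0.1, 1.0                          # lam scales the reversed gradient

for _ in range(500):
    z_c = X @ W_enc
    p = sigmoid(z_c @ w_adv)
    err = (p - s) / n                        # d(BCE)/d(logit), averaged
    # Adversary step: descend its own prediction loss.
    w_adv -= lr * (z_c.T @ err)
    # Encoder step: *ascend* the adversary's loss (gradient reversal),
    # pushing z_c to be uninformative about s. A real system would add
    # a task loss here so z_c stays useful for demand forecasting.
    W_enc += lam * lr * (X.T @ np.outer(err, w_adv))

z_c = X @ W_enc
acc = float(((sigmoid(z_c @ w_adv) > 0.5) == (s > 0.5)).mean())
print(f"adversary accuracy on debiased code: {acc:.2f}")
```

If the adversarial game succeeds, the adversary's accuracy on the learned code drifts toward chance, which is the unsupervised route to fairness the abstract contrasts with supervised fairness constraints.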
Related papers
- Learning Fair Representations with Kolmogorov-Arnold Networks [0.08594140167290099]
Predictive models often exhibit discriminatory behavior towards marginalized groups. Existing fair learning models aim to mitigate bias, but achieving an optimal trade-off between fairness and accuracy remains a challenge. We propose integrating Kolmogorov-Arnold Networks (KANs) within a fair adversarial learning framework.
arXiv Detail & Related papers (2025-11-14T07:51:56Z) - Adversarial Bias: Data Poisoning Attacks on Fairness [48.17618627431355]
There is relatively little research on how an AI system's fairness can be intentionally compromised. In this work, we provide a theoretical analysis demonstrating that a simple adversarial poisoning strategy is sufficient to induce maximally unfair behavior. Our attack significantly outperforms existing methods in degrading fairness metrics across multiple models and datasets.
arXiv Detail & Related papers (2025-11-11T15:09:53Z) - Fairness in Federated Learning: Trends, Challenges, and Opportunities [12.707158627881968]
Federated Learning (FL), with its distributed architecture, stands at the forefront of efforts to facilitate collaborative model training across multiple clients. However, fairness concerns arise from numerous sources of heterogeneity that can introduce bias and undermine a system's effectiveness. This survey explores the diverse sources of bias, including but not limited to data, client, and model biases, and thoroughly discusses the strengths and limitations of the state-of-the-art techniques used in the literature to mitigate such disparities in the FL training process.
arXiv Detail & Related papers (2025-08-31T11:16:16Z) - Fairness Overfitting in Machine Learning: An Information-Theoretic Perspective [28.68227117674221]
This paper proposes a theoretical framework for analyzing fairness generalization error through an information-theoretic lens. Our empirical results validate the tightness and practical relevance of these bounds across diverse fairness-aware learning algorithms.
arXiv Detail & Related papers (2025-06-09T15:24:56Z) - Deep Fair Learning: A Unified Framework for Fine-tuning Representations with Sufficient Networks [8.616743904155419]
We propose a framework that integrates sufficient dimension reduction with deep learning to construct fair and informative representations. By introducing a novel penalty term during fine-tuning, our method enforces conditional independence between sensitive attributes and learned representations. Our approach achieves a superior balance between fairness and utility, significantly outperforming state-of-the-art baselines.
arXiv Detail & Related papers (2025-04-08T22:24:22Z) - Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically identical agents. Our goal is to minimize time-average estimation error and/or age of information with decentralized, scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness [8.958956425857878]
We argue that machine learning risks reinforcing biases present in data and in what is absent from data. The way we address missingness in healthcare can have detrimental impacts on algorithmic fairness. We propose a framework for empirically guiding imputation choices, and an accompanying reporting framework.
arXiv Detail & Related papers (2022-08-13T13:34:05Z) - Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO), that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key questions: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks? (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z) - Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Learning Representations that Support Extrapolation [39.84463809100903]
We consider the challenge of learning representations that support extrapolation.
We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation.
We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects.
arXiv Detail & Related papers (2020-07-09T20:53:45Z) - Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.