Equality of opportunity in travel behavior prediction with deep neural
networks and discrete choice models
- URL: http://arxiv.org/abs/2109.12422v1
- Date: Sat, 25 Sep 2021 19:02:23 GMT
- Title: Equality of opportunity in travel behavior prediction with deep neural
networks and discrete choice models
- Authors: Yunhan Zheng, Shenhao Wang, Jinhua Zhao
- Abstract summary: This study introduces an important missing dimension - computational fairness - to travel behavior analysis.
We first operationalize computational fairness by equality of opportunity, then differentiate between the bias inherent in data and the bias introduced by modeling.
- Score: 3.4806267677524896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although researchers increasingly adopt machine learning to model travel
behavior, they predominantly focus on prediction accuracy, ignoring the ethical
challenges embedded in machine learning algorithms. This study introduces an
important missing dimension - computational fairness - to travel behavior
analysis. We first operationalize computational fairness by equality of
opportunity, then differentiate between the bias inherent in data and the bias
introduced by modeling. We then demonstrate the prediction disparities in
travel behavior modeling using the 2017 National Household Travel Survey (NHTS)
and the 2018-2019 My Daily Travel Survey in Chicago. Empirically, deep neural
network (DNN) and discrete choice models (DCM) reveal consistent prediction
disparities across multiple social groups: both exhibit higher false negative
rates when predicting frequent driving for ethnic minorities and for low-income
and disabled populations, and both predict a higher travel burden for socially
disadvantaged groups and rural populations than is observed. Comparing DNN with
DCM, we find that DNNs can produce smaller prediction disparities than DCMs
because of their smaller misspecification error. To mitigate prediction
disparities, this study introduces an absolute correlation regularization
method, which is evaluated with synthetic and real-world data. The results
demonstrate the prevalence of prediction disparities in travel behavior
modeling, and these disparities persist across a variety of model
specifications such as the number of DNN layers, batch size, and weight
initialization. Because these prediction disparities can exacerbate social
inequity if predictions are used for transportation policy making without
fairness adjustment, we advocate careful consideration of the fairness problem
in travel behavior modeling and the use of bias-mitigation algorithms for fair
transport decisions.
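The abstract's two technical ingredients, equality of opportunity as the fairness criterion and the absolute correlation regularization used to mitigate disparities, can be sketched briefly in code. The snippet below is not the authors' implementation; it is a minimal PyTorch illustration on synthetic data, with assumed names (X for features, y for a binary frequent-driving label, z for a binary protected-group indicator), showing (i) a per-group false negative rate comparison in the spirit of equality of opportunity and (ii) a training loss augmented with the absolute correlation between per-sample prediction error and the protected attribute.

```python
import torch
import torch.nn as nn

def false_negative_rate(y_true, y_pred, group_mask):
    """Share of actual positives in the group that the model predicts as negative."""
    pos = (y_true == 1) & group_mask
    if pos.sum() == 0:
        return torch.tensor(float("nan"))
    return ((y_pred == 0) & pos).sum().float() / pos.sum().float()

def abs_corr_penalty(err, z, eps=1e-8):
    """Absolute Pearson-style correlation between per-sample errors and attribute z."""
    e = err - err.mean()
    g = z.float() - z.float().mean()
    return (e * g).mean().abs() / (e.pow(2).mean().sqrt() * g.pow(2).mean().sqrt() + eps)

# Hypothetical synthetic data: features X, frequent-driving label y, protected group z.
torch.manual_seed(0)
n, d = 1000, 8
X = torch.randn(n, d)
z = (torch.rand(n) < 0.3).long()                      # 1 = protected group
y = ((X[:, 0] + 0.5 * z.float() + torch.randn(n)) > 0).long()

model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                             # fairness-penalty weight

for epoch in range(200):
    opt.zero_grad()
    logits = model(X).squeeze(1)
    err = (torch.sigmoid(logits) - y.float()).abs()   # per-sample prediction error
    loss = bce(logits, y.float()) + lam * abs_corr_penalty(err, z)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = (torch.sigmoid(model(X).squeeze(1)) > 0.5).long()
    gap = false_negative_rate(y, pred, z == 1) - false_negative_rate(y, pred, z == 0)
    print(f"FNR gap (protected minus reference): {gap.item():+.3f}")
```

In such a setup, the penalty weight (lam here) controls how strongly the error-attribute correlation is suppressed relative to overall predictive accuracy; the paper's evaluation on synthetic and survey data examines exactly this kind of trade-off.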
Related papers
- The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language models (PLMs) have been acknowledged to contain harmful information, such as social biases.
We propose Social Bias Neurons to accurately pinpoint the units (i.e., neurons) in a language model that can be attributed to undesirable behaviors such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability at low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z)
- DemOpts: Fairness corrections in COVID-19 case prediction models [0.24999074238880484]
We show that state-of-the-art deep learning models output mean prediction errors that differ significantly across racial and ethnic groups.
We propose a novel de-biasing method, DemOpts, to increase the fairness of deep learning based forecasting models trained on potentially biased datasets.
arXiv Detail & Related papers (2024-05-15T16:22:46Z)
- A deep causal inference model for fully-interpretable travel behaviour analysis [4.378407481656902]
We present the deep CAusal infeRence mOdel for traveL behavIour aNAlysis (CAROLINA), a framework that explicitly models causality in travel behaviour.
Within this framework, we introduce a Generative Counterfactual model for forecasting human behaviour.
We demonstrate the effectiveness of our proposed models in uncovering causal relationships, prediction accuracy, and assessing policy interventions.
arXiv Detail & Related papers (2024-05-02T20:06:06Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but predict less accurately.
We show that the proposed BNSP-SFM achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Fairness-enhancing deep learning for ride-hailing demand prediction [3.911105164672852]
Short-term demand forecasting for on-demand ride-hailing services is one of the fundamental issues in intelligent transportation systems.
Previous travel demand forecasting research predominantly focused on improving prediction accuracy, ignoring fairness issues.
This study investigates how to measure, evaluate, and enhance prediction fairness between disadvantaged and privileged communities.
arXiv Detail & Related papers (2023-03-10T04:37:14Z)
- Travel Demand Forecasting: A Fair AI Approach [0.9383397937755517]
We propose a novel methodology to develop fairness-aware, highly accurate travel demand forecasting models.
Specifically, we introduce a new fairness regularization term, which is explicitly designed to measure the correlation between prediction accuracy and protected attributes.
Results highlight that our proposed methodology can effectively enhance fairness for multiple protected attributes while preserving prediction accuracy.
arXiv Detail & Related papers (2023-03-03T03:16:54Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)