A Fairness-Oriented Reinforcement Learning Approach for the Operation and Control of Shared Micromobility Services
- URL: http://arxiv.org/abs/2403.15780v1
- Date: Sat, 23 Mar 2024 09:32:23 GMT
- Title: A Fairness-Oriented Reinforcement Learning Approach for the Operation and Control of Shared Micromobility Services
- Authors: Luca Vittorio Piron, Matteo Cederle, Marina Ceccon, Federico Chiariotti, Alessandro Fabris, Marco Fabris, Gian Antonio Susto
- Abstract summary: We introduce a pioneering investigation into the balance between performance optimization and algorithmic fairness in the operation and control of Shared Micromobility Services.
Our methodology stands out for its ability to achieve equitable outcomes, as measured by the Gini index, across different station categories.
This paper underscores the critical importance of fairness considerations in shaping control strategies for Shared Micromobility Services.
- Score: 46.1428063182192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Machine Learning systems become increasingly popular across diverse application domains, including those with direct human implications, equity and algorithmic fairness have risen to prominence in the Artificial Intelligence community. In contrast, fairness-oriented approaches in the context of Shared Micromobility Systems remain largely unexplored. Addressing this gap, we introduce a pioneering investigation into the balance between performance optimization and algorithmic fairness in the operation and control of Shared Micromobility Services. Our study leverages the Q-Learning algorithm from Reinforcement Learning, benefiting from its convergence guarantees to ensure the robustness of the proposed approach. Notably, our methodology achieves equitable outcomes, as measured by the Gini index, across different station categories (central, peripheral, and remote). Through strategic rebalancing of vehicle distribution, our approach maximizes operator performance while simultaneously upholding fairness principles for users. In addition to theoretical insights, we substantiate our findings with a simulation case study based on synthetic data, validating the efficacy of our approach. This paper underscores the critical importance of fairness considerations in shaping control strategies for Shared Micromobility Services, offering a pragmatic framework for enhancing equity in urban transportation systems.
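To make the general shape of the technique concrete, below is a minimal Python sketch of tabular Q-Learning on a toy three-station rebalancing problem with a Gini-penalized reward. The station categories, fleet size, demand rates, penalty weight LAMBDA, and transition model are illustrative assumptions and do not come from the paper; the sketch only mirrors the ingredients the abstract names (rebalancing actions, an operator performance term, and a Gini fairness term).

```python
import numpy as np
from itertools import product

def gini(x):
    """Gini index of a non-negative vector (0 = perfect equality, 1 = max inequality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n, total = x.size, x.sum()
    if total == 0:
        return 0.0
    return (n + 1 - 2.0 * np.cumsum(x).sum() / total) / n

rng = np.random.default_rng(0)
N_STATIONS = 3                      # central, peripheral, remote (assumed categories)
FLEET = 12                          # total vehicles (toy value)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # Q-Learning hyperparameters (assumed)
LAMBDA = 0.5                        # weight of the Gini penalty in the reward (assumed)
DEMAND_RATES = [5.0, 2.0, 1.0]      # mean demand per station category (assumed)

# States: all allocations of FLEET vehicles over the stations.
states = [s for s in product(range(FLEET + 1), repeat=N_STATIONS) if sum(s) == FLEET]
state_idx = {s: i for i, s in enumerate(states)}
# Actions: move one vehicle from station i to station j, or do nothing (None).
actions = [(i, j) for i in range(N_STATIONS) for j in range(N_STATIONS) if i != j] + [None]

Q = np.zeros((len(states), len(actions)))

def step(state, action):
    """One toy transition: apply the move, draw demand, score service and fairness.
    The allocation persists across steps; real trip dynamics are omitted."""
    alloc = list(state)
    if action is not None:
        i, j = action
        if alloc[i] > 0:            # a move is only feasible if a vehicle is present
            alloc[i] -= 1
            alloc[j] += 1
    demand = rng.poisson(DEMAND_RATES)
    served = np.minimum(alloc, demand)
    # Operator term: total demand served. Fairness term: Gini index of the
    # per-station service ratios, so starving remote stations is penalized.
    ratios = served / np.maximum(demand, 1)
    reward = served.sum() - LAMBDA * FLEET * gini(ratios)
    return tuple(alloc), reward

for episode in range(2000):
    s = states[rng.integers(len(states))]
    for _ in range(20):
        # Epsilon-greedy action selection, then a standard Q-Learning update.
        a = rng.integers(len(actions)) if rng.random() < EPS else int(Q[state_idx[s]].argmax())
        s_next, r = step(s, actions[a])
        td_target = r + GAMMA * Q[state_idx[s_next]].max()
        Q[state_idx[s], a] += ALPHA * (td_target - Q[state_idx[s], a])
        s = s_next
```

After training, taking the argmax of Q[state_idx[s]] yields a rebalancing move for each allocation, and sweeping LAMBDA traces the performance-fairness trade-off the abstract describes.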
Related papers
- Benchmarking Mutual Information-based Loss Functions in Federated Learning [2.79786165508341]
Federated Learning (FL) has attracted considerable interest due to growing privacy regulations.
This paper examines the use of Mutual Information (MI)-based loss functions to address these concerns.
arXiv Detail & Related papers (2025-04-16T08:58:44Z)
- Fairness in Reinforcement Learning with Bisimulation Metrics [45.674943127750595]
By maximizing their reward without consideration of fairness, AI agents can introduce disparities in their treatment of groups or individuals.
We propose a novel approach that leverages bisimulation metrics to learn reward functions and observation dynamics.
arXiv Detail & Related papers (2024-12-22T18:23:06Z)
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- LOQA: Learning with Opponent Q-Learning Awareness [1.1666234644810896]
We introduce Learning with Opponent Q-Learning Awareness (LOQA), a decentralized reinforcement learning algorithm tailored to optimize an agent's individual utility.
LOQA achieves state-of-the-art performance in benchmark scenarios such as the Iterated Prisoner's Dilemma and the Coin Game.
arXiv Detail & Related papers (2024-05-02T06:33:01Z)
- Augmenting Unsupervised Reinforcement Learning with Self-Reference [63.68018737038331]
Humans possess the ability to draw on past experiences explicitly when learning new tasks.
We propose the Self-Reference (SR) approach, an add-on module explicitly designed to leverage historical information.
Our approach achieves state-of-the-art results in terms of Interquartile Mean (IQM) performance and Optimality Gap reduction on the Unsupervised Reinforcement Learning Benchmark.
arXiv Detail & Related papers (2023-11-16T09:07:34Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- FAL-CUR: Fair Active Learning using Uncertainty and Representativeness on Fair Clustering [16.808400593594435]
We propose a novel strategy, named Fair Active Learning using fair Clustering, Uncertainty, and Representativeness (FAL-CUR).
FAL-CUR achieves a 15% - 20% improvement in fairness compared to the best state-of-the-art method in terms of equalized odds.
An ablation study highlights the crucial roles of fair clustering in preserving fairness and the acquisition function in stabilizing the accuracy performance.
arXiv Detail & Related papers (2022-09-21T08:28:43Z)
- Fair and Consistent Federated Learning [48.19977689926562]
Federated learning (FL) has gained growing interest for its capability of learning collectively from distributed data sources.
We propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients.
arXiv Detail & Related papers (2021-08-19T01:56:08Z)
- LiMIIRL: Lightweight Multiple-Intent Inverse Reinforcement Learning [5.1779694507922835]
Multiple-Intent Inverse Reinforcement Learning seeks to find a reward function ensemble to rationalize demonstrations of different but unlabelled intents.
We present a warm-start strategy based on up-front clustering of the demonstrations in feature space.
We also propose an MI-IRL performance metric that generalizes the popular Expected Value Difference measure.
arXiv Detail & Related papers (2021-06-03T12:00:38Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Softmax with Regularization: Better Value Estimation in Multi-Agent Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2021-03-22T14:18:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.