Deployment Optimization for Shared e-Mobility Systems with Multi-agent
Deep Neural Search
- URL: http://arxiv.org/abs/2111.02149v1
- Date: Wed, 3 Nov 2021 11:37:11 GMT
- Title: Deployment Optimization for Shared e-Mobility Systems with Multi-agent
Deep Neural Search
- Authors: Man Luo, Bowen Du, Konstantin Klemmer, Hongming Zhu, Hongkai Wen
- Abstract summary: Shared e-mobility services have been widely tested and piloted in cities across the globe.
This paper studies how to deploy and manage their infrastructure across space and time, so that the services are ubiquitous to users while remaining sustainably profitable.
We tackle this by designing a high-fidelity simulation environment that abstracts the key operational details of shared e-mobility systems at fine granularity.
- Score: 15.657420177295624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shared e-mobility services have been widely tested and piloted in cities
across the globe, and already woven into the fabric of modern urban planning.
This paper studies a practical yet important problem in those systems: how to
deploy and manage their infrastructure across space and time, so that the
services are ubiquitous to users while remaining sustainably profitable.
However, in real-world systems, evaluating the performance of different
deployment strategies and then finding the optimal plan is prohibitively
expensive, as it is often infeasible to conduct many iterations of
trial-and-error. We tackle this by designing a high-fidelity simulation
environment, which abstracts the key operation details of the shared e-mobility
systems at fine granularity, and is calibrated using data collected from the
real world. This allows us to try out arbitrary deployment plans and learn the
optimal one for a given context, before actually implementing any in the
real-world systems. In particular, we propose a novel multi-agent neural search
approach, in which we design a hierarchical controller to produce tentative
deployment plans. The generated deployment plans are then tested using a
multi-simulation paradigm, i.e., evaluated in parallel, where the results are
used to train the controller with deep reinforcement learning. With this closed
loop, the controller can be steered to have higher probability of generating
better deployment plans in future iterations. The proposed approach has been
evaluated extensively in our simulation environment, and experimental results
show that it outperforms baselines, e.g., human expert knowledge and
state-of-the-art heuristic-based optimization approaches, in both service
coverage and net revenue.
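To make the closed loop described above concrete, here is a minimal, hypothetical sketch, not the authors' implementation: the region count, per-region demand, reward function, learning rate, and iteration budget are all invented, and the paper's hierarchical neural controller is reduced to independent per-region categorical policies trained with a REINFORCE-style update against a toy simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

N_REGIONS = 6    # hypothetical number of city regions
N_LEVELS = 4     # discrete deployment levels per region (e.g. 0-3 charging stations)
N_PARALLEL = 8   # plans evaluated per iteration (the "multi-simulation" paradigm)

# Stand-in for the hierarchical neural controller: one categorical
# policy (logits over deployment levels) per region.
logits = np.zeros((N_REGIONS, N_LEVELS))

def softmax(x):
    z = np.exp(x - x.max(axis=1, keepdims=True))  # numerically stable
    return z / z.sum(axis=1, keepdims=True)

def sample_plans(logits, n):
    """Sample n tentative deployment plans, one level per region."""
    probs = softmax(logits)
    return np.array([[rng.choice(N_LEVELS, p=probs[r]) for r in range(N_REGIONS)]
                     for _ in range(n)])

DEMAND = np.array([3, 1, 2, 0, 2, 1])  # invented per-region demand

def simulate(plan):
    """Toy stand-in for the calibrated simulator: reward = coverage - cost."""
    coverage = np.minimum(plan, DEMAND).sum()
    cost = 0.3 * plan.sum()
    return coverage - cost

for _ in range(300):
    plans = sample_plans(logits, N_PARALLEL)           # tentative plans
    rewards = np.array([simulate(p) for p in plans])   # evaluated independently
    baseline = rewards.mean()                          # variance-reduction baseline
    # REINFORCE-style update: make levels from better-than-average
    # plans more likely in future iterations (the closed loop).
    for plan, r in zip(plans, rewards):
        for region, level in enumerate(plan):
            logits[region, level] += 0.05 * (r - baseline)

print("learned deployment levels per region:", logits.argmax(axis=1))
```

In a real system the `simulate` call would be the expensive, calibrated simulator, so evaluating the `N_PARALLEL` plans concurrently is what makes the search loop tractable.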
Related papers
- Differentiable Discrete Event Simulation for Queuing Network Control [7.965453961211742]
Queueing network control poses distinct challenges, including high stochasticity, large state and action spaces, and lack of stability.
We propose a scalable framework for policy optimization based on differentiable discrete event simulation.
Our methods can flexibly handle realistic scenarios, including systems operating in non-stationary environments.
arXiv Detail & Related papers (2024-09-05T17:53:54Z)
- Learning-Initialized Trajectory Planning in Unknown Environments [4.2960463890487555]
Planning for autonomous flight in unknown environments requires precise planning for both the spatial and temporal trajectories.
We introduce a novel approach that guides optimization using a neural trajectory planner.
We propose a framework that supports robust online replanning with tolerance to planning latency.
arXiv Detail & Related papers (2023-09-19T15:07:26Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Finding Needles in Haystack: Formal Generative Models for Efficient Massive Parallel Simulations [0.0]
The authors propose a method based on Bayesian optimization to efficiently learn generative models of scenarios that deliver desired outcomes.
The methodology is integrated in an end-to-end framework, which uses the OpenSCENARIO standard to describe scenarios.
arXiv Detail & Related papers (2023-01-03T16:55:06Z)
- Estimating the Robustness of Public Transport Systems Using Machine Learning [62.997667081978825]
Planning public transport systems is a highly complex process involving many steps.
Integrating robustness from a passenger's point of view makes the task even more challenging.
In this paper, we explore a new way of approximating such scenario-based robustness using methods from machine learning.
arXiv Detail & Related papers (2021-06-10T05:52:56Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
Until today, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- Learning to Plan Optimally with Flow-based Motion Planner [29.124322674133]
We introduce a conditional normalising-flow-based distribution, learned from previous experiences, to improve sampling in sampling-based motion planners.
Our distribution can be conditioned on the current problem instance to provide an informative prior for sampling configurations within promising regions.
By using our normalising-flow-based distribution, a solution can be found faster, with fewer samples and better overall runtime performance.
arXiv Detail & Related papers (2020-10-21T21:46:08Z)
- From Simulation to Real World Maneuver Execution using Deep Reinforcement Learning [69.23334811890919]
Deep Reinforcement Learning has proved to be able to solve many control tasks in different fields, but the behavior of these systems is not always as expected when deployed in real-world scenarios.
This is mainly due to the lack of domain adaptation between simulated and real-world data together with the absence of distinction between train and test datasets.
We present a system based on multiple environments in which agents are trained simultaneously, evaluating the behavior of the model in different scenarios.
arXiv Detail & Related papers (2020-05-13T14:22:20Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)
- Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.