Idle Vehicle Relocation Strategy through Deep Learning for Shared
Autonomous Electric Vehicle System Optimization
- URL: http://arxiv.org/abs/2010.09847v1
- Date: Fri, 16 Oct 2020 05:06:58 GMT
- Title: Idle Vehicle Relocation Strategy through Deep Learning for Shared
Autonomous Electric Vehicle System Optimization
- Authors: Seongsin Kim, Ungki Lee, Ikjin Lee, Namwoo Kang
- Abstract summary: This study proposes a deep learning-based algorithm that can instantly predict the optimal solution to idle vehicle relocation problems.
We present an optimal service system including the design of SAEV vehicles and charging stations.
- Score: 2.580765958706854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In optimization of a shared autonomous electric vehicle (SAEV) system, idle
vehicle relocation strategies are important to reduce operation costs and
customers' wait time. However, for an on-demand service, continuous
optimization for idle vehicle relocation is computationally expensive, and
thus, not effective. This study proposes a deep learning-based algorithm that
can instantly predict the optimal solution to idle vehicle relocation problems
under various traffic conditions. The proposed relocation process comprises
three steps. First, a deep learning-based passenger demand prediction model
using taxi big data is built. Second, idle vehicle relocation problems are
solved based on predicted demands, and optimal solution data are collected.
Finally, a deep learning model using the optimal solution data is built to
estimate the optimal strategy without solving the relocation problem. In addition, the
proposed idle vehicle relocation model is validated by applying it to optimize
the SAEV system. We present an optimal service system including the design of
SAEV vehicles and charging stations. Further, we demonstrate that the proposed
strategy can drastically reduce operation costs and wait times for on-demand
services.
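The three-step pipeline in the abstract culminates in a surrogate model that predicts relocation decisions directly from demand, skipping the expensive optimization at serving time. The sketch below illustrates that last step under invented assumptions: the zone counts, network size, and the synthetic "optimal" targets (an oracle that shifts vehicles toward above-average-demand zones) all stand in for the paper's real taxi data and solver output.

```python
# Hypothetical sketch of the paper's step 3: fit a small neural surrogate
# that maps a zone-level demand snapshot to relocation counts, so the
# optimization need not be re-solved online. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_zones, n_samples = 8, 512

# Synthetic training data: zone-level demand -> "optimal" relocation counts.
# In the paper these targets come from actually solving the relocation
# problem; here an invented oracle moves vehicles toward high-demand zones.
demand = rng.poisson(5.0, size=(n_samples, n_zones)).astype(float)
target = demand - demand.mean(axis=1, keepdims=True)

# Standardize inputs so the tanh hidden layer is not saturated.
x = (demand - demand.mean(axis=0)) / (demand.std(axis=0) + 1e-8)

# Tiny one-hidden-layer MLP trained with full-batch gradient descent.
W1 = rng.normal(0.0, 0.1, (n_zones, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, n_zones)); b2 = np.zeros(n_zones)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(x)
loss0 = np.mean((pred - target) ** 2)   # error before training

lr = 0.01
for _ in range(2000):
    h, pred = forward(x)
    grad_out = 2.0 * (pred - target) / n_samples
    gW2, gb2 = h.T @ grad_out, grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1, gb1 = x.T @ grad_h, grad_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = np.mean((pred - target) ** 2)    # error of the trained surrogate
print(f"surrogate MSE: {loss0:.3f} -> {loss:.3f}")
```

Once trained, a single forward pass replaces one solve of the relocation problem, which is what makes continuous on-demand operation tractable.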
Related papers
- Eco-Driving Control of Connected and Automated Vehicles using Neural
Network based Rollout [0.0]
Connected and autonomous vehicles have the potential to minimize energy consumption.
Existing deterministic and stochastic methods created to solve the eco-driving problem generally suffer from high computational and memory requirements.
This work proposes a hierarchical multi-horizon optimization framework implemented via a neural network.
arXiv Detail & Related papers (2023-10-16T23:13:51Z) - TranDRL: A Transformer-Driven Deep Reinforcement Learning Enabled Prescriptive Maintenance Framework [58.474610046294856]
Industrial systems demand reliable predictive maintenance strategies to enhance operational efficiency and reduce downtime.
This paper introduces an integrated framework that leverages the capabilities of the Transformer model-based neural networks and deep reinforcement learning (DRL) algorithms to optimize system maintenance actions.
arXiv Detail & Related papers (2023-09-29T02:27:54Z) - Resource Constrained Vehicular Edge Federated Learning with Highly
Mobile Connected Vehicles [41.02566275644629]
We propose a vehicular edge federated learning (VEFL) solution, where an edge server leverages highly mobile connected vehicles' (CVs') onboard central processing units (CPUs) and local datasets to train a global model.
We devise joint VEFL and radio access technology (RAT) parameters optimization problems under delay, energy and cost constraints to maximize the probability of successful reception of the locally trained models.
arXiv Detail & Related papers (2022-10-27T14:33:06Z) - Improving Operational Efficiency In EV Ridepooling Fleets By Predictive
Exploitation of Idle Times [0.0]
We present a real-time predictive charging method for ridepooling services with a single operator, called Idle Time Exploitation (ITX).
ITX predicts the periods where vehicles are idle and exploits these periods to harvest energy.
It relies on Graph Convolutional Networks and a linear assignment algorithm to devise an optimal pairing of vehicles and charging stations.
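The pairing step ITX describes is a classic linear assignment: match idle vehicles to charging stations so total travel cost is minimal. As a hedged illustration (the coordinates below are invented, and the brute-force search stands in for the Hungarian-style algorithm the paper relies on, giving the same optimum for small instances):

```python
# Illustrative sketch of optimal vehicle-to-charging-station pairing.
# For a handful of vehicles, exhaustively scoring every permutation of
# station assignments finds the same minimum-cost matching that a linear
# assignment algorithm would. All positions are made up for the example.
from itertools import permutations
import math

vehicles = [(0.0, 0.0), (5.0, 1.0), (2.0, 4.0)]   # idle vehicle positions
stations = [(1.0, 1.0), (4.0, 0.0), (2.0, 5.0)]   # charging station positions

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_assignment(vehicles, stations):
    """Return (total_cost, mapping) minimizing summed travel distance,
    where mapping[i] is the station index assigned to vehicle i."""
    best_cost, best_map = math.inf, None
    for perm in permutations(range(len(stations))):
        cost = sum(dist(vehicles[i], stations[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_map = cost, perm
    return best_cost, best_map

cost, mapping = best_assignment(vehicles, stations)
print(f"optimal pairing: {mapping}, total distance {cost:.2f}")
```

For realistic fleet sizes the factorial search is infeasible, which is why a polynomial-time assignment solver (as in ITX) is the right tool; the brute force here only makes the objective concrete.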
arXiv Detail & Related papers (2022-08-30T08:41:40Z) - Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic
Prior [135.78858513845233]
STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions.
To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE.
A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful to improve the given planner.
arXiv Detail & Related papers (2021-12-09T18:03:27Z) - Route Optimization via Environment-Aware Deep Network and Reinforcement
Learning [7.063811319445716]
We develop a mobile sequential recommendation system to maximize the profitability of vehicle service providers (e.g., taxi drivers).
A reinforcement-learning framework is proposed to tackle this problem, by integrating a self-check mechanism and a deep neural network for customer pick-up point monitoring.
Based on the yellow taxi data in New York City and vicinity before and after the COVID-19 outbreak, we have conducted comprehensive experiments to evaluate the effectiveness of our method.
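The core reinforcement-learning idea above, repeatedly choosing the next pick-up point to maximize long-run profit, can be sketched in tabular form. Everything in this toy is an assumption: the zone layout, per-zone fares, relocation cost, and hyperparameters are invented, and a tabular Q-table stands in for the paper's deep network trained on real NYC taxi data.

```python
# Toy tabular Q-learning for next-pick-up-zone selection: the driver's
# state is the current zone, the action is the zone to drive to next,
# and the reward is an invented fare minus a relocation cost.
import random

random.seed(0)
n_zones = 4
profit = [1.0, 2.0, 8.0, 3.0]            # expected fare per pick-up zone
gamma, alpha, eps = 0.9, 0.1, 0.2        # discount, learning rate, exploration

Q = [[0.0] * n_zones for _ in range(n_zones)]   # Q[state][action]

for _ in range(5000):
    s = random.randrange(n_zones)
    for _ in range(20):                  # one "shift" of 20 trips
        if random.random() < eps:        # epsilon-greedy exploration
            a = random.randrange(n_zones)
        else:
            a = max(range(n_zones), key=lambda x: Q[s][x])
        r = profit[a] - 0.5 * abs(a - s)           # fare minus travel cost
        Q[s][a] += alpha * (r + gamma * max(Q[a]) - Q[s][a])
        s = a                             # the chosen zone becomes the state

policy = [max(range(n_zones), key=lambda a: Q[s][a]) for s in range(n_zones)]
print("greedy next-zone policy:", policy)
```

With these made-up numbers the learned policy steers every zone toward the high-fare zone 2; the paper's contribution is doing this at scale with a deep network and a self-check mechanism rather than a four-zone table.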
arXiv Detail & Related papers (2021-11-16T02:19:13Z) - Model-based Decision Making with Imagination for Autonomous Parking [50.41076449007115]
The proposed algorithm consists of three parts: an imaginative model for anticipating results before parking, an improved rapid-exploring random tree (RRT) and a path smoothing module.
Our algorithm is based on a real kinematic vehicle model, which makes it more suitable for application on real autonomous cars.
In order to evaluate the algorithm's effectiveness, we have compared our algorithm with traditional RRT, within three different parking scenarios.
arXiv Detail & Related papers (2021-08-25T18:24:34Z) - Safe Model-based Off-policy Reinforcement Learning for Eco-Driving in
Connected and Automated Hybrid Electric Vehicles [3.5259944260228977]
This work proposes a Safe Off-policy Model-Based Reinforcement Learning algorithm for the eco-driving problem.
The proposed algorithm leads to a policy with a higher average speed and a better fuel economy compared to the model-free agent.
arXiv Detail & Related papers (2021-05-25T03:41:29Z) - A Distributed Model-Free Ride-Sharing Approach for Joint Matching,
Pricing, and Dispatching using Deep Reinforcement Learning [32.0512015286512]
We present a dynamic, demand aware, and pricing-based vehicle-passenger matching and route planning framework.
Our framework is validated using the New York City Taxi dataset.
Experimental results show the effectiveness of our approach in real-time and large scale settings.
arXiv Detail & Related papers (2020-10-05T03:13:47Z) - Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep
Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z) - Reinforcement Learning Based Vehicle-cell Association Algorithm for
Highly Mobile Millimeter Wave Communication [53.47785498477648]
This paper investigates the problem of vehicle-cell association in millimeter wave (mmWave) communication networks.
We first formulate the vehicle user (VU) association problem as a discrete non-convex optimization problem.
The proposed solution achieves up to 15% gains in terms of users' sum rate and a 20% reduction in VUE complexity compared to several baseline designs.
arXiv Detail & Related papers (2020-01-22T08:51:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.