VEGA: Electric Vehicle Navigation Agent via Physics-Informed Neural Operator and Proximal Policy Optimization
- URL: http://arxiv.org/abs/2509.13386v1
- Date: Tue, 16 Sep 2025 11:27:50 GMT
- Title: VEGA: Electric Vehicle Navigation Agent via Physics-Informed Neural Operator and Proximal Policy Optimization
- Authors: Hansol Lim, Minhyeok Im, Jonathan Boyack, Jee Won Lee, Jongseong Brad Choi
- Abstract summary: We present VEGA, a charge-aware EV navigation agent that plans over a charger-annotated road graph. VEGA consists of two modules. First, a physics-informed neural operator (PINO) is trained on real vehicle speed and battery-power logs to learn vehicle-custom dynamics. Second, a Reinforcement Learning (RL) agent uses these dynamics to optimize a path with optimal charging stops and dwell times.
- Score: 1.4524691281000133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Demand for software-defined vehicles (SDVs) is rising, and electric vehicles (EVs) are increasingly equipped with powerful onboard computers. This enables onboard AI systems to perform charge-aware path optimization customized to the vehicle's current condition and environment. We present VEGA, a charge-aware EV navigation agent that plans over a charger-annotated road graph using Proximal Policy Optimization (PPO) with budgeted A* teacher-student guidance under state-of-charge (SoC) feasibility. VEGA consists of two modules. First, a physics-informed neural operator (PINO), trained on real vehicle speed and battery-power logs, uses recent vehicle speed logs to estimate aerodynamic drag, rolling resistance, mass, motor and regenerative-braking efficiencies, and auxiliary load by learning vehicle-custom dynamics. Second, a Reinforcement Learning (RL) agent uses these dynamics to optimize a path with optimal charging stops and dwell times under SoC constraints. VEGA requires no additional sensors and uses only vehicle speed signals; it may therefore serve as a virtual sensor for power and efficiency, potentially reducing EV cost. In evaluation on long routes such as San Francisco to New York, VEGA's stops, dwell times, SoC management, and total travel time closely track Tesla Trip Planner while being slightly more conservative, presumably due to real vehicle conditions such as parameter drift caused by deterioration. Although trained only on U.S. regions, VEGA was able to compute optimal charge-aware paths in France and Japan, demonstrating generalizability. It achieves a practical integration of physics-informed learning and RL for EV eco-routing.
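The SoC-feasibility constraint in the abstract can be illustrated with a minimal label-correcting shortest-path search over (node, state-of-charge) states, where charging at a station trades dwell time for added range. This is only a sketch of the underlying routing problem: the toy graph, charger powers, reserve level, and energy discretization below are illustrative assumptions, not the paper's actual method (which trains a PPO agent with budgeted A* teacher-student guidance on learned PINO dynamics).

```python
import heapq

# Hypothetical toy road graph: node -> list of (neighbor, travel_time_h, energy_kwh).
# All numbers are illustrative, not from the paper.
GRAPH = {
    "A": [("B", 1.0, 20.0)],
    "B": [("C", 1.0, 20.0)],
    "C": [("D", 1.0, 20.0)],
}
CHARGERS = {"B": 50.0, "C": 50.0}  # charger node -> charging power (kW)
BATTERY_KWH = 40.0                 # assumed usable pack capacity


def plan(start, goal, soc0_kwh, reserve_kwh=5.0, step_kwh=10.0):
    """Label-correcting search over (node, SoC) states.

    Cost = driving time + charging dwell time. An edge is feasible only if
    the SoC after traversal stays above reserve_kwh; at charger nodes the
    search may add energy in step_kwh increments (dwell = energy / power).
    Returns the minimum total time in hours, or None if no SoC-feasible
    route exists.
    """
    dist = {(start, soc0_kwh): 0.0}
    pq = [(0.0, start, soc0_kwh)]
    while pq:
        t, node, soc = heapq.heappop(pq)
        if node == goal:
            return t  # first pop of the goal is optimal (non-negative costs)
        if t > dist.get((node, soc), float("inf")):
            continue  # stale queue entry
        # Option 1: drive to a neighbor if SoC stays above the reserve.
        for nxt, dt, de in GRAPH.get(node, []):
            if soc - de >= reserve_kwh:
                state = (nxt, soc - de)
                if t + dt < dist.get(state, float("inf")):
                    dist[state] = t + dt
                    heapq.heappush(pq, (t + dt, nxt, soc - de))
        # Option 2: charge one increment at a charger node.
        if node in CHARGERS and soc + step_kwh <= BATTERY_KWH:
            dwell = step_kwh / CHARGERS[node]
            state = (node, soc + step_kwh)
            if t + dwell < dist.get(state, float("inf")):
                dist[state] = t + dwell
                heapq.heappush(pq, (t + dwell, node, soc + step_kwh))
    return None  # no SoC-feasible route
```

On this toy graph, starting full (40 kWh) the search inserts charging stops at B and C to keep the SoC above the reserve, while a 20 kWh start is correctly reported infeasible. VEGA's RL agent addresses the same feasibility structure but amortizes the search into a learned policy over dynamics estimated by the PINO.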
Related papers
- Overtake Detection in Trucks Using CAN Bus Signals: A Comparative Study of Machine Learning Methods [51.28632782308621]
We focus on overtake detection using Controller Area Network (CAN) bus data collected from five in-service trucks provided by the Volvo Group. We evaluate three common classifiers for vehicle manoeuvre detection: Artificial Neural Networks (ANN), Random Forest (RF), and Support Vector Machines (SVM). Our per-truck analysis also reveals that classification accuracy, especially for overtakes, depends on the amount of training data per vehicle.
arXiv Detail & Related papers (2025-07-01T09:20:41Z) - Smart Ride and Delivery Services with Electric Vehicles: Leveraging Bidirectional Charging for Profit Optimisation [8.899491864225464]
We introduce the Electric Vehicle Orienteering Problem with V2G (EVOP-V2G), which involves navigating dynamic electricity prices, charging station selection, and route constraints. We propose two near-optimal metaheuristic algorithms: one evolutionary (EA) and the other based on large neighborhood search (LNS).
arXiv Detail & Related papers (2025-06-25T13:15:52Z) - Green vehicle routing problem that jointly optimizes delivery speed and routing based on the characteristics of electric vehicles [0.0]
This paper establishes an energy consumption model using real electric vehicles.
The energy consumption model also incorporates the effects of vehicle start/stop, speed, distance, and load on energy consumption.
An improved Adaptive Genetic Algorithm is used to solve for the most energy-efficient route.
arXiv Detail & Related papers (2024-10-04T08:08:15Z) - GARLIC: GPT-Augmented Reinforcement Learning with Intelligent Control for Vehicle Dispatching [81.82487256783674]
This paper introduces GARLIC: a framework of GPT-Augmented Reinforcement Learning with Intelligent Control for vehicle dispatching.
arXiv Detail & Related papers (2024-08-19T08:23:38Z) - Deep Learning-Based Vehicle Speed Prediction for Ecological Adaptive Cruise Control in Urban and Highway Scenarios [0.5161531917413706]
In a typical car-following scenario, target vehicle speed fluctuations act as an external disturbance to the host vehicle and in turn affect its energy consumption.
Deep recurrent neural network-based vehicle speed prediction using long short-term memory (LSTM) and gated recurrent units (GRU) is studied in this work.
The proposed speed prediction models are evaluated for long-term predictions (up to 10 s) of target vehicle future velocities.
arXiv Detail & Related papers (2022-11-30T22:50:43Z) - Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wirelessly connected ATSC increases cyber-attack surfaces and increases their vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion on one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z) - Motion Planning and Control for Multi Vehicle Autonomous Racing at High Speeds [100.61456258283245]
This paper presents a multi-layer motion planning and control architecture for autonomous racing.
The proposed solution has been applied on a Dallara AV-21 racecar and tested at oval race tracks, achieving lateral accelerations up to 25 $m/s^2$.
arXiv Detail & Related papers (2022-07-22T15:16:54Z) - An Intelligent Self-driving Truck System For Highway Transportation [81.12838700312308]
In this paper, we introduce an intelligent self-driving truck system.
Our presented system consists of three main components: 1) a realistic traffic simulation module for generating realistic traffic flow in testing scenarios, and 2) a high-fidelity truck model designed and evaluated to mimic real truck response in real-world deployment.
We also deploy our proposed system on a real truck and conduct real-world experiments, which show our system's capacity for mitigating the sim-to-real gap.
arXiv Detail & Related papers (2021-12-31T04:54:13Z) - Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z) - A Physics Model-Guided Online Bayesian Framework for Energy Management of Extended Range Electric Delivery Vehicles [3.927161292818792]
This paper improves an in-use rule-based EMS that is used in a delivery vehicle fleet equipped with two-way vehicle-to-cloud connectivity.
A physics model-guided online Bayesian framework is described and validated on a large number of in-use driving samples of EREVs used for last-mile package delivery.
Results show an average of 12.8% fuel use reduction among tested vehicles for 155 real delivery trips.
arXiv Detail & Related papers (2020-06-01T08:43:23Z) - End-to-End Vision-Based Adaptive Cruise Control (ACC) Using Deep Reinforcement Learning [12.100265694989627]
This paper presents a deep reinforcement learning method, Double Deep Q-Networks, to design an end-to-end vision-based adaptive cruise control (ACC) system.
Well-designed reward functions associated with the following distance and throttle/brake force were implemented in the reinforcement learning model.
The proposed system adapts well to different speed trajectories of the preceding vehicle and operates in real time.
arXiv Detail & Related papers (2020-01-24T20:02:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.