DClEVerNet: Deep Combinatorial Learning for Efficient EV Charging
Scheduling in Large-scale Networked Facilities
- URL: http://arxiv.org/abs/2305.11195v2
- Date: Tue, 22 Aug 2023 15:35:25 GMT
- Title: DClEVerNet: Deep Combinatorial Learning for Efficient EV Charging
Scheduling in Large-scale Networked Facilities
- Authors: Bushra Alshehhi, Areg Karapetyan, Khaled Elbassioni, Sid Chi-Kin Chau,
and Majid Khonji
- Abstract summary: Electric vehicles (EVs) might stress distribution networks significantly, leaving their performance degraded and their stability jeopardized.
Modern power grids require coordinated or ``smart'' charging strategies capable of optimizing EV charging scheduling in a scalable and efficient fashion.
We formulate a time-coupled binary optimization problem that maximizes EV users' total welfare gain while accounting for the network's available power capacity and stations' occupancy limits.
- Score: 5.78463306498655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the electrification of transportation, the rising uptake of electric
vehicles (EVs) might stress distribution networks significantly, leaving their
performance degraded and stability jeopardized. To accommodate these new loads
cost-effectively, modern power grids require coordinated or ``smart'' charging
strategies capable of optimizing EV charging scheduling in a scalable and
efficient fashion. With this in view, the present work focuses on reservation
management programs for large-scale, networked EV charging stations. We
formulate a time-coupled binary optimization problem that maximizes EV users'
total welfare gain while accounting for the network's available power capacity
and stations' occupancy limits. To tackle the problem at scale while retaining
high solution quality, a data-driven optimization framework combining
techniques from the fields of Deep Learning and Approximation Algorithms is
introduced. The framework's key ingredient is a novel input-output processing
scheme for neural networks that allows direct extrapolation to problem sizes
substantially larger than those included in the training set. Extensive
numerical simulations based on synthetic and real-world data traces verify the
effectiveness and superiority of the presented approach over two representative
scheduling algorithms. Lastly, we round up the contributions by listing several
immediate extensions to the proposed framework and outlining the prospects for
further exploration.
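As a concrete illustration of the scheduling problem described above, the following is a minimal sketch of a time-coupled binary welfare-maximization program consistent with the abstract; the symbols x_i, w_i, p_{i,t}, C_t, and M_t are introduced here for illustration and are not taken from the paper:

\max_{x \in \{0,1\}^{n}} \; \sum_{i=1}^{n} w_i x_i
\quad \text{s.t.} \quad
\sum_{i:\, t \in \mathcal{T}_i} p_{i,t}\, x_i \le C_t \;\; \forall t \in \mathcal{T},
\qquad
\sum_{i:\, t \in \mathcal{T}_i} x_i \le M_t \;\; \forall t \in \mathcal{T},

where x_i = 1 admits EV i's reservation over its requested window \mathcal{T}_i, w_i is the welfare gain of user i, p_{i,t} the power drawn in slot t, C_t the network's available power capacity, and M_t the stations' occupancy limit in slot t. The coupling of admitted requests across time slots makes this a multi-dimensional knapsack-type problem, which is what motivates combining approximation algorithms with learned models.

The abstract does not detail the input-output processing scheme that lets the trained network extrapolate to instances far larger than those seen during training. One common way to obtain such size-invariance, sketched below purely as an assumption (the per-request scorer, its features, and the greedy repair pass are hypothetical and are not the paper's method), is to score each charging request with a shared, fixed-dimension model and then recover a feasible schedule with a constraint-aware greedy pass:

import numpy as np

rng = np.random.default_rng(0)

def score_requests(features, W1, W2):
    # Shared two-layer scorer applied per request; its parameter count
    # does not depend on how many requests the instance contains.
    hidden = np.tanh(features @ W1)        # (n_requests, hidden_dim)
    return (hidden @ W2).ravel()           # (n_requests,)

def greedy_schedule(scores, windows, power, capacity, occupancy):
    # Admit requests in decreasing score order while the time-coupled
    # power-capacity and station-occupancy constraints stay satisfied.
    n_slots = capacity.shape[0]
    load = np.zeros(n_slots)
    count = np.zeros(n_slots, dtype=int)
    admitted = np.zeros(len(scores), dtype=bool)
    for i in np.argsort(-scores):
        slots = windows[i]
        feasible = (np.all(load[slots] + power[i] <= capacity[slots])
                    and np.all(count[slots] + 1 <= occupancy[slots]))
        if feasible:
            load[slots] += power[i]
            count[slots] += 1
            admitted[i] = True
    return admitted

# Toy instance: 8 requests, 6 time slots, 3-dimensional request features.
n_req, n_slots, d = 8, 6, 3
features = rng.normal(size=(n_req, d))
windows = [rng.choice(n_slots, size=2, replace=False) for _ in range(n_req)]
power = rng.uniform(3.0, 11.0, size=n_req)    # kW drawn per request
capacity = np.full(n_slots, 20.0)             # kW available per slot
occupancy = np.full(n_slots, 3)               # chargers available per slot

# Hypothetical (untrained) scorer weights standing in for a learned model.
W1, W2 = rng.normal(size=(d, 16)), rng.normal(size=(16, 1))
admitted = greedy_schedule(score_requests(features, W1, W2),
                           windows, power, capacity, occupancy)
print("admitted requests:", np.flatnonzero(admitted))

Because the scorer acts on one request at a time and the repair pass only checks the shared capacity and occupancy counters, the same trained weights can in principle be applied to instances with many more requests or time slots than those in the training set.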
Related papers
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Energy-Efficient Federated Edge Learning with Streaming Data: A Lyapunov Optimization Approach [34.00679567444125]
We develop a dynamic scheduling and resource allocation algorithm to address the inherent randomness in data arrivals and resource availability under long-term energy constraints.
Our proposed algorithm makes adaptive decisions on device scheduling, computational capacity adjustment, and allocation of bandwidth and transmit power in every round.
The effectiveness of our scheme is verified through simulation results, demonstrating improved learning performance and energy efficiency as compared to baseline schemes.
arXiv Detail & Related papers (2024-05-20T14:13:22Z)
- Machine Learning for Scalable and Optimal Load Shedding Under Power System Contingency [6.201026565902282]
An optimal load shedding (OLS) scheme that accounts for network limits has the potential to address the diverse system-wide impacts of contingency scenarios.
We propose a decentralized design that leverages offline training of a neural network (NN) model for individual load centers to autonomously construct the OLS solutions.
Our learning-for-OLS approach can greatly reduce the computation and communication needs during online emergency responses.
arXiv Detail & Related papers (2024-05-09T03:19:20Z)
- Safety-Aware Reinforcement Learning for Electric Vehicle Charging Station Management in Distribution Network [4.842172685255376]
Electric vehicles (EVs) pose a significant risk to the distribution system operation in the absence of coordination.
This paper presents a safety-aware reinforcement learning (RL) algorithm designed to manage EV charging stations.
Our proposed algorithm does not rely on explicit penalties for constraint violations, eliminating the need to tune a penalty coefficient.
arXiv Detail & Related papers (2024-03-20T01:57:38Z)
- Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [60.17407932691429]
Open Radio Access Network systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and ``challenging'' environments.
We prove the proposed solutions achieve sub-linear regret, providing a zero average optimality gap even in challenging environments.
arXiv Detail & Related papers (2023-09-04T17:30:21Z)
- Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual Antenna Arrays [55.736718475856726]
Unmanned aerial vehicle (UAV) networks are a promising technology for assisting the Internet of Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoT devices and access points.
We introduce collaborative beamforming into both the IoT devices and the UAVs simultaneously to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z)
- Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z)
- Scalable Learning for Optimal Load Shedding Under Power Grid Emergency Operations [4.922268203017287]
This work puts forth an innovative learning-for-OLS approach by constructing the optimal decision rules of load shedding under a variety of potential contingency scenarios.
The proposed NN-based OLS decisions are fully decentralized, enabling individual load centers to quickly react to the specific contingency.
Numerical studies on the IEEE 14-bus system have demonstrated the effectiveness of our scalable OLS design for real-time responses to severe grid emergency events.
arXiv Detail & Related papers (2021-11-23T16:14:58Z)
- A Deep Value-network Based Approach for Multi-Driver Order Dispatching [55.36656442934531]
We propose a deep reinforcement learning based solution for order dispatching.
We conduct large scale online A/B tests on DiDi's ride-dispatching platform.
Results show that CVNet consistently outperforms other recently proposed dispatching methods.
arXiv Detail & Related papers (2021-06-08T16:27:04Z)
- JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps limitations by querying additional data.
We show that it achieves no-regret under conditions analogous to GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
arXiv Detail & Related papers (2021-06-02T05:03:38Z)
- Threshold-Based Data Exclusion Approach for Energy-Efficient Federated Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
FEEL might significantly shorten the lifetime of energy-constrained participating devices due to the power consumed during model training rounds.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.