Online Dynamic Pricing for Electric Vehicle Charging Stations with Reservations
- URL: http://arxiv.org/abs/2410.05538v2
- Date: Wed, 13 Nov 2024 14:34:10 GMT
- Title: Online Dynamic Pricing for Electric Vehicle Charging Stations with Reservations
- Authors: Jan Mrkos, Antonín Komenda, David Fiedler, Jiří Vokřínek
- Abstract summary: The transition to electric vehicles (EVs) will significantly impact the electric grid.
Unlike conventional fuel sources, electricity for EVs is constrained by grid capacity, price fluctuations, and long EV charging times.
This paper proposes a model for online dynamic pricing of reserved EV charging services.
- Score: 0.3374875022248865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The transition to electric vehicles (EVs), coupled with the rise of renewable energy sources, will significantly impact the electric grid. Unlike conventional fuel sources, electricity for EVs is constrained by grid capacity, price fluctuations, and long EV charging times, requiring new pricing solutions to manage demand and supply. This paper proposes a model for online dynamic pricing of reserved EV charging services, including reservation, parking, and charging as a bundled service priced as a whole. Our approach focuses on the individual charging station operator, employing a stochastic demand model and online dynamic pricing based on expected demand. The proposed model uses a Markov Decision Process (MDP) formulation to optimize sequential pricing decisions for charging session requests. A key contribution is the novel definition and quantification of discretization error introduced by the discretization of the Poisson process for use in the MDP. The model's viability is demonstrated with a heuristic solution method based on Monte-Carlo tree search, offering a viable path for real-world application.
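The sequential pricing problem described in the abstract can be illustrated with a toy finite-horizon MDP: the state is (remaining decision epochs, remaining charging capacity), the action is the posted price, and arrivals follow a discretized Poisson process with a price-dependent acceptance probability. This is a hedged sketch, not the paper's model; the arrival probability, the acceptance function `exp(-p / beta)`, and the price grid are all invented for illustration.

```python
import math

# Toy finite-horizon MDP for dynamic pricing of charging sessions.
# State: (t, c) = remaining decision epochs, remaining capacity.
# Action: posted price p. An arrival occurs with probability Q_ARRIVE
# per epoch (a discretized Poisson process) and accepts price p with
# probability exp(-p / BETA) -- both are illustrative assumptions.

Q_ARRIVE = 0.6          # per-epoch arrival probability (assumed)
BETA = 10.0             # price-sensitivity scale (assumed)
PRICES = [2.0, 4.0, 6.0, 8.0, 10.0, 14.0, 20.0]

def solve(horizon, capacity):
    """Backward value iteration; returns the value table and best prices."""
    V = [[0.0] * (capacity + 1) for _ in range(horizon + 1)]
    policy = [[None] * (capacity + 1) for _ in range(horizon + 1)]
    for t in range(1, horizon + 1):
        for c in range(1, capacity + 1):
            best, best_p = -1.0, None
            for p in PRICES:
                sell = Q_ARRIVE * math.exp(-p / BETA)
                val = sell * (p + V[t-1][c-1]) + (1 - sell) * V[t-1][c]
                if val > best:
                    best, best_p = val, p
            V[t][c], policy[t][c] = best, best_p
    return V, policy

V, policy = solve(horizon=20, capacity=5)
```

In this toy model the expected revenue grows with both remaining time and remaining capacity, and the optimal posted price for the last unit is higher when many epochs remain, since selling early carries an opportunity cost.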
Related papers
- Dynamic Pricing in High-Speed Railways Using Multi-Agent Reinforcement Learning [4.800138615859937]
This paper addresses the challenge of designing effective dynamic pricing strategies in the context of competing and cooperating operators.
A reinforcement learning framework based on a non-zero-sum Markov game is proposed, incorporating random utility models to capture passenger decision making.
arXiv Detail & Related papers (2025-01-14T16:19:25Z)
- Multi-agent reinforcement learning strategy to maximize the lifetime of Wireless Rechargeable [0.32634122554913997]
The thesis proposes a generalized charging framework for multiple mobile chargers to maximize the network lifetime.
A multi-point charging model is leveraged to enhance charging efficiency, where the MC can charge multiple sensors simultaneously at each charging location.
The proposal allows reinforcement algorithms to be applied to different networks without requiring extensive retraining.
arXiv Detail & Related papers (2024-11-21T02:18:34Z)
- Coherent Hierarchical Probabilistic Forecasting of Electric Vehicle Charging Demand [3.7690784039257292]
This paper studies the forecasting problem of multiple electric vehicle charging stations (EVCSs) in a hierarchical probabilistic manner.
For each charging station, a deep learning model based on a partial input convex neural network (PICNN) is trained to predict the day-ahead charging demand's conditional distribution.
Differentiable convex optimization layers (DCLs) are used to reconcile the scenarios sampled from the distributions to yield coherent scenarios.
arXiv Detail & Related papers (2024-11-01T03:35:04Z)
- Learning and Optimization for Price-based Demand Response of Electric Vehicle Charging [0.9124662097191375]
We propose a new decision-focused end-to-end framework for PBDR modeling.
We evaluate the effectiveness of our method on a simulation of charging station operation with synthetic PBDR patterns of EV customers.
arXiv Detail & Related papers (2024-04-16T06:39:30Z)
- MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning [52.101643259906915]
We study the problem of offline pre-training and online fine-tuning for reinforcement learning from high-dimensional observations.
Existing model-based offline RL methods are not suitable for offline-to-online fine-tuning in high-dimensional domains.
We propose an on-policy model-based method that can efficiently reuse prior data through model-based value expansion and policy regularization.
arXiv Detail & Related papers (2024-01-06T21:04:31Z)
- A Deep Q-Learning based Smart Scheduling of EVs for Demand Response in Smart Grids [0.0]
We propose a model-free solution, leveraging Deep Q-Learning to schedule the charging and discharging activities of EVs within a microgrid.
We adapted the Bellman equation to assess the value of a state based on specific rewards for EV scheduling actions, used a neural network to estimate Q-values for the available actions, and applied the epsilon-greedy algorithm to balance exploration and exploitation while meeting the target energy profile.
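The Bellman-update-plus-epsilon-greedy loop described above can be sketched as follows. This is a hedged illustration only: it uses a tabular Q-function instead of the paper's neural network, and a toy single-EV state-of-charge model rather than a microgrid; all constants are invented.

```python
import random

random.seed(0)

# Toy EV scheduling: reach a target state of charge (SoC) by the end
# of the horizon. States: (t, soc); actions: discharge, idle, charge.
# A tabular Q-function stands in for the paper's neural Q-estimator.

HORIZON, MAX_SOC, TARGET = 5, 4, 3
ACTIONS = [-1, 0, 1]                 # discharge, idle, charge
ALPHA, GAMMA, EPS = 0.5, 1.0, 0.2
Q = {}                               # Q[(t, soc)] -> {action: value}

def q(s):
    return Q.setdefault(s, {a: 0.0 for a in ACTIONS})

def step(t, soc, a):
    """Deterministic toy transition with a terminal shaping reward."""
    soc2 = min(MAX_SOC, max(0, soc + a))
    done = t + 1 == HORIZON
    reward = -abs(soc2 - TARGET) if done else 0.0
    return (t + 1, soc2), reward, done

for _ in range(3000):
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(q(s), key=q(s).get)
        s2, r, done = step(*s, a)
        # Bellman update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
        target = r if done else r + GAMMA * max(q(s2).values())
        q(s)[a] += ALPHA * (target - q(s)[a])
        s = s2
```

Rolling out the learned greedy policy from an empty battery then yields a schedule that hits the target SoC at the end of the horizon.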
arXiv Detail & Related papers (2024-01-05T06:04:46Z)
- Maximum flow-based formulation for the optimal location of electric vehicle charging stations [2.340830801548167]
We propose a model for the assignment of EV charging demand to stations, framing it as a maximum flow problem.
We showcase our methodology for the city of Montreal, demonstrating the scalability of our approach to handle real-world scenarios.
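The demand-to-station assignment can be sketched as a standard max-flow instance: a source feeds demand zones, each zone connects to the stations it can reach, and station capacities drain into a sink. The graph below and its capacities are invented for illustration; the paper's Montreal instance is far larger.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on an adjacency-dict capacity graph."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # collect the path, find its bottleneck, and push flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += bottleneck   # residual (reverse) capacity
        flow += bottleneck

# Demand zones d1 (3 EVs) and d2 (2 EVs); stations s1, s2 with two
# charging spots each; d2 can only reach s2 (all values assumed).
graph = {
    "src": {"d1": 3, "d2": 2},
    "d1": {"s1": 2, "s2": 2},
    "d2": {"s2": 2},
    "s1": {"sink": 2},
    "s2": {"sink": 2},
}
served = max_flow(graph, "src", "sink")   # number of EVs assignable
```

Here the max flow is 4: station capacity limits s1 and s2 to two EVs each, so one EV from d1 goes unserved even though total station capacity equals its demand plus d2's.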
arXiv Detail & Related papers (2023-12-10T19:49:09Z)
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? [74.19749699665216]
Generative, multi-purpose AI systems promise a unified approach to building machine learning (ML) models into technology.
This ambition of "generality" comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon they emit.
We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models.
We conclude with a discussion of the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions.
arXiv Detail & Related papers (2023-11-28T15:09:36Z)
- Charge Manipulation Attacks Against Smart Electric Vehicle Charging Stations and Deep Learning-based Detection Mechanisms [49.37592437398933]
"Smart" electric vehicle charging stations (EVCSs) will be a key step toward achieving green transportation.
We investigate charge manipulation attacks (CMAs) against EV charging, in which an attacker manipulates the information exchanged during smart charging operations.
We propose an unsupervised deep learning-based mechanism to detect CMAs by monitoring the parameters involved in EV charging.
arXiv Detail & Related papers (2023-10-18T18:38:59Z)
- COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL [50.385005413810084]
Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning, and real-environment exploration.
COPlanner is a planning-driven framework for model-based methods that addresses the problem of inaccurately learned dynamics models.
arXiv Detail & Related papers (2023-10-11T06:10:07Z)
- Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model [50.06663781566795]
We consider a dynamic model in which consumers' preferences and price sensitivity vary over time.
We measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance.
Our regret analysis results not only demonstrate optimality of the proposed policy but also show that for policy planning it is essential to incorporate available structural information.
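The regret notion used in this entry can be written in standard notation (a generic definition sketched here, not copied from the paper):

```latex
% Regret of pricing policy \pi over horizon T, against a clairvoyant
% that knows the parameter sequence \theta_1, \dots, \theta_T in advance:
\mathrm{Regret}_T(\pi)
  = \sum_{t=1}^{T} \mathbb{E}\!\left[ r_t^*(\theta_t) - r_t(p_t^{\pi}, \theta_t) \right],
\qquad
r_t^*(\theta_t) = \max_{p} \; r_t(p, \theta_t),
```

where $p_t^{\pi}$ is the price posted by the policy and $r_t$ the expected revenue under the time-varying parameters $\theta_t$.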
arXiv Detail & Related papers (2023-03-28T00:23:23Z)
- Federated Reinforcement Learning for Real-Time Electric Vehicle Charging and Discharging Control [42.17503767317918]
This paper develops an optimal EV charging/discharging control strategy for different EV users under dynamic environments.
A horizontal federated reinforcement learning (HFRL)-based method is proposed to fit various users' behaviors and dynamic environments.
Simulation results illustrate that the proposed real-time EV charging/discharging control strategy performs well across various factors.
arXiv Detail & Related papers (2022-10-04T08:22:46Z)
- A Deep Reinforcement Learning-Based Charging Scheduling Approach with Augmented Lagrangian for Electric Vehicle [2.686271754751717]
This paper formulates the EV charging scheduling problem as a constrained Markov decision process (CMDP).
A novel safe off-policy reinforcement learning (RL) approach is proposed in this paper to solve the CMDP.
Comprehensive numerical experiments with real-world electricity price demonstrate that our proposed algorithm can achieve high solution optimality and constraints compliance.
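The augmented-Lagrangian machinery behind such constrained formulations can be shown on a minimal scalar problem (not the paper's CMDP; the objective, constraint, and all constants are invented): inner minimization of the penalized objective alternates with an outer multiplier update.

```python
# Minimize f(x) = x^2 subject to g(x) = 1 - x <= 0, via an augmented
# Lagrangian: inner gradient descent on the penalized objective, then
# outer multiplier update lam <- max(0, lam + rho * g(x)).
# KKT solution for this toy problem: x* = 1, lam* = 2.

RHO, LR = 10.0, 0.02
x, lam = 0.0, 0.0
for _ in range(50):                      # outer multiplier updates
    for _ in range(300):                 # inner minimization
        g = 1.0 - x
        m = max(0.0, lam / RHO + g)      # active part of the penalty
        grad = 2.0 * x - RHO * m         # d/dx [ x^2 + (RHO/2) * m^2 ]
        x -= LR * grad
    lam = max(0.0, lam + RHO * (1.0 - x))
```

The same primal-dual pattern (policy improvement inside, multiplier ascent on constraint violation outside) underlies augmented-Lagrangian treatments of CMDPs.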
arXiv Detail & Related papers (2022-09-20T14:56:51Z)
- Research: Modeling Price Elasticity for Occupancy Prediction in Hotel Dynamic Pricing [13.768319677863259]
We propose a novel hotel demand function that explicitly models the price elasticity of demand for occupancy prediction.
Our model is composed of carefully designed elasticity learning modules to alleviate the endogeneity problem, and trained in a multi-task framework to tackle the data sparseness.
We conduct comprehensive experiments on real-world datasets and validate the superiority of our method over the state-of-the-art baselines for both occupancy prediction and dynamic pricing.
arXiv Detail & Related papers (2022-08-04T13:58:04Z)
- Optimized cost function for demand response coordination of multiple EV charging stations using reinforcement learning [6.37470346908743]
We build on previous RL research based on a Markov decision process (MDP) to simultaneously coordinate multiple charging stations.
We propose an improved cost function that essentially forces the learned control policy to always fulfill any charging demand that does not offer flexibility.
We rigorously compare the newly proposed batch RL fitted Q-iteration implementation with the original (costly) one, using real-world data.
arXiv Detail & Related papers (2022-03-03T11:22:27Z)
- Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization [60.73540999409032]
We show that expressive autoregressive dynamics models generate each dimension of the next state and reward sequentially, conditioned on previously generated dimensions.
We also show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer.
arXiv Detail & Related papers (2021-04-28T16:48:44Z)
- Modular Deep Reinforcement Learning for Continuous Motion Planning with Temporal Logic [59.94347858883343]
This paper investigates the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDPs).
The novelty is to design an embedded product MDP (EP-MDP) between a limit-deterministic generalized Büchi automaton (LDGBA) and the MDP.
The proposed LDGBA-based reward shaping and discounting schemes for the model-free reinforcement learning (RL) only depend on the EP-MDP states.
arXiv Detail & Related papers (2021-02-24T01:11:25Z)
- Hybrid Modelling Approaches for Forecasting Energy Spot Prices in EPEC market [62.997667081978825]
We consider several hybrid modelling approaches for forecasting energy spot prices in the EPEC market.
Training data consisted of electricity prices for the years 2013-2014, with the year 2015 used as test data.
arXiv Detail & Related papers (2020-10-14T12:45:53Z)
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.