Data-Driven Online Interactive Bidding Strategy for Demand Response
- URL: http://arxiv.org/abs/2202.04236v1
- Date: Wed, 9 Feb 2022 02:44:20 GMT
- Title: Data-Driven Online Interactive Bidding Strategy for Demand Response
- Authors: Kuan-Cheng Lee, Hong-Tzer Yang, and Wenjun Tang
- Abstract summary: Demand response (DR) provides peak-shaving services and enhances the efficiency of renewable energy utilization, with a short response period and low cost.
Various categories of DR have been established, e.g., automated DR, incentive-based DR, emergency DR, and demand bidding.
This paper determines the bidding and purchasing strategies simultaneously, employing smart meter data and functions.
The results show that, when facing diverse situations, the proposed model can earn the optimal profit via offline/online learning of the bidding rules and robustly placing proper bids.
- Score: 0.30586855806896046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Demand response (DR), as one of the important energy resources in the
future's grid, provides peak-shaving services and enhances the efficiency of
renewable energy utilization, with a short response period and low cost.
Various categories of DR have been established, e.g., automated DR, incentive-based DR,
emergency DR, and demand bidding. However, owing to the practical issue that the
utility models of residential and commercial consumers are unknown, research on
demand-bidding aggregators participating in the electricity market is still at an
early stage. For this problem, the bidding price and bidding quantity are the two
required decision variables, to be chosen under uncertainties arising from the
market and its participants. In this paper, we determine the bidding and purchasing
strategies simultaneously, employing smart meter data and functions. A two-agent
deep deterministic policy gradient (DDPG) method is developed to optimize the
decisions by learning from historical bidding experience. Online learning further
exploits the newest daily bidding experience to ensure trend tracking and
self-adaptation. Two environment simulators are adopted to test the robustness of
the model. The results show that, when facing diverse situations, the proposed
model can earn the optimal profit via offline/online learning of the bidding rules
and robustly place proper bids.
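The two-agent setup and the daily online update described in the abstract can be illustrated structurally. The sketch below is not the paper's implementation: it replaces DDPG's neural actor-critic networks with linear policies and a crude "move toward the most profitable replayed bid" update, and the market simulator, profit function, and every parameter are invented purely for illustration.

```python
import random
from collections import deque

class Agent:
    """Deterministic linear policy a = w*s + b, one agent per decision
    variable (bidding price or bidding quantity)."""
    def __init__(self, lr=0.1):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def act(self, state, noise=0.0):
        # Gaussian exploration noise around the deterministic action.
        return self.w * state + self.b + random.gauss(0.0, noise)

    def update(self, batch, idx):
        # Crude surrogate for DDPG's critic-driven actor update: nudge
        # the policy toward the action this agent took in the most
        # profitable replayed transition.
        best = max(batch, key=lambda t: t[-1])
        s, a = best[0], best[idx]
        err = a - self.act(s)
        self.w += self.lr * err * s
        self.b += self.lr * err

def profit(price, qty):
    # Toy market simulator (purely illustrative): bids clear only below
    # a price cap, and the margin shrinks as the bid price rises.
    cleared = max(qty, 0.0) if price <= 1.0 else 0.0
    return cleared * (1.0 - 0.5 * price)

random.seed(0)
replay = deque(maxlen=1000)               # shared bidding experience
price_agent, qty_agent = Agent(), Agent()

for day in range(300):                    # offline phase flows into daily online updates
    state = 1.0                           # e.g. a normalized load forecast
    p = price_agent.act(state, noise=0.2)
    q = qty_agent.act(state, noise=0.2)
    r = profit(p, q)
    replay.append((state, p, q, r))       # the newest daily experience is retained
    batch = random.sample(replay, min(len(replay), 32))
    price_agent.update(batch, idx=1)      # each agent learns its own decision
    qty_agent.update(batch, idx=2)

final_profit = profit(price_agent.act(1.0), qty_agent.act(1.0))
```

The shared replay buffer plus the daily `append` mirrors how the online phase keeps incorporating the newest bidding experience while still reusing historical transitions.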
Related papers
- Dynamic Pricing for Electric Vehicle Charging [6.1003048508889535]
We develop a novel formulation for the dynamic pricing problem by addressing multiple conflicting objectives efficiently.
A dynamic pricing model quantifies the relationship between demand and price while simultaneously solving multiple conflicting objectives.
Two California charging sites' real-world data validates our approach.
arXiv Detail & Related papers (2024-08-26T10:32:21Z)
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning [54.51890573369637]
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
- Insurance pricing on price comparison websites via reinforcement learning [7.023335262537794]
This paper introduces a reinforcement learning framework that learns an optimal pricing policy by integrating model-based and model-free methods.
The paper also highlights the importance of evaluating pricing policies using an offline dataset in a consistent fashion.
arXiv Detail & Related papers (2023-08-14T04:44:56Z)
- Equitable Time-Varying Pricing Tariff Design: A Joint Learning and Optimization Approach [0.0]
Time-varying pricing tariffs incentivize consumers to shift their electricity demand and reduce costs, but may increase the energy burden for consumers with limited response capability.
This paper proposes a joint learning-based identification and optimization method to design equitable time-varying tariffs.
arXiv Detail & Related papers (2023-07-26T20:14:23Z)
- Dual policy as self-model for planning [71.73710074424511]
We refer to the model used to simulate one's decisions as the agent's self-model.
Inspired by current reinforcement learning approaches and neuroscience, we explore the benefits and limitations of using a distilled policy network as the self-model.
arXiv Detail & Related papers (2023-06-07T13:58:45Z)
- HireVAE: An Online and Adaptive Factor Model Based on Hierarchical and Regime-Switch VAE [113.47287249524008]
It remains an open question how to build a factor model that can perform stock prediction in an online and adaptive setting.
We propose the first deep-learning-based online and adaptive factor model, HireVAE, at the core of which is a hierarchical latent space that embeds the relationship between the market situation and stock-wise latent factors.
Across four commonly used real stock market benchmarks, the proposed HireVAE demonstrates superior performance in terms of active returns over previous methods.
arXiv Detail & Related papers (2023-06-05T12:58:13Z)
- Adaptive Risk-Aware Bidding with Budget Constraint in Display Advertising [47.14651340748015]
We propose a novel adaptive risk-aware bidding algorithm with budget constraint via reinforcement learning.
We theoretically unveil the intrinsic relation between uncertainty and risk tendency based on value at risk (VaR).
arXiv Detail & Related papers (2022-12-06T18:50:09Z)
- An Artificial Intelligence Framework for Bidding Optimization with Uncertainty in Multiple Frequency Reserve Markets [0.32622301272834525]
Frequency reserves are resources that adjust power production or consumption in real time to react to a power grid frequency deviation.
We propose three bidding strategies to capitalise on price peaks in multi-stage markets.
We also propose an AI-based bidding optimization framework that implements these three strategies.
arXiv Detail & Related papers (2021-04-05T12:04:29Z)
- Demand Responsive Dynamic Pricing Framework for Prosumer Dominated Microgrids using Multiagent Reinforcement Learning [59.28219519916883]
This paper proposes a new multiagent Reinforcement Learning based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
arXiv Detail & Related papers (2020-09-23T01:44:57Z)
- MoTiAC: Multi-Objective Actor-Critics for Real-Time Bidding [47.555870679348416]
We propose a Multi-Objective Actor-Critics algorithm named MoTiAC for the problem of bidding optimization with various goals.
Unlike previous RL models, the proposed MoTiAC can simultaneously fulfill multi-objective tasks in complicated bidding environments.
arXiv Detail & Related papers (2020-02-18T07:16:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.