Incentivising Demand Side Response through Discount Scheduling using Hybrid Quantum Optimization
- URL: http://arxiv.org/abs/2309.05502v2
- Date: Wed, 29 May 2024 12:52:44 GMT
- Title: Incentivising Demand Side Response through Discount Scheduling using Hybrid Quantum Optimization
- Authors: David Bucher, Jonas Nüßlein, Corey O'Meara, Ivan Angelov, Benedikt Wimmer, Kumar Ghosh, Giorgio Cortiana, Claudia Linnhoff-Popien
- Abstract summary: Demand Side Response (DSR) is a strategy that enables consumers to actively participate in managing electricity demand.
We propose a hybrid quantum computing approach, using D-Wave's Leap Hybrid Cloud runtime.
We observe that the classical decomposition method obtains the best overall solution quality for problem sizes up to 3200 consumers.
- Score: 3.6021182997326022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Demand Side Response (DSR) is a strategy that enables consumers to actively participate in managing electricity demand. It aims to alleviate strain on the grid during high demand and promote a more balanced and efficient use of (renewable) electricity resources. We implement DSR through discount scheduling, which involves offering discrete price incentives to consumers to shift their electricity consumption to times when their local energy mix contains more renewable energy. Since we tailor the discounts to individual customers' consumption, the Discount Scheduling Problem (DSP) becomes a large combinatorial optimization task. Consequently, we adopt a hybrid quantum computing approach, using D-Wave's Leap Hybrid Cloud. We benchmark Leap against Gurobi, a classical Mixed Integer optimizer, comparing solution quality at fixed runtime and fairness of discount allocation. Furthermore, we propose a large-scale decomposition algorithm/heuristic for the DSP, applied with either quantum or classical computers running the subroutines, which significantly reduces the problem size while maintaining solution quality. Using synthetic data generated from real-world data, we observe that the classical decomposition method obtains the best overall solution quality for problem sizes up to 3200 consumers; however, the hybrid quantum approach provides more evenly distributed discounts across consumers.
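The abstract names the ingredients (a binary discount-scheduling model, D-Wave's Leap Hybrid runtime, Gurobi as a classical baseline) but gives no formulation. The following is only a minimal sketch, under assumed simplifications, of how such a problem could be expressed as a binary quadratic model with D-Wave's Ocean tools (dimod, dwave-system). The variables x[c, t], the per-slot "greenness" weights, the toy loads, the one-discount-per-consumer constraint, and the penalty weight are all illustrative choices, not the paper's actual DSP.

```python
# Illustrative sketch only -- NOT the paper's actual DSP model.
# Assumption: x[c, t] = 1 means consumer c is offered a discount in time slot t.
# A linear term rewards discounts in slots with a high renewable share, and a
# one-hot penalty (hypothetical modelling choice) limits each consumer to one
# discounted slot.
import itertools
import dimod

n_consumers, n_slots = 3, 4                      # toy instance size
greenness = [0.2, 0.9, 0.4, 0.7]                 # assumed renewable share per slot
load = [[1.0 + 0.1 * c for _ in range(n_slots)]  # assumed per-consumer load
        for c in range(n_consumers)]

bqm = dimod.BinaryQuadraticModel("BINARY")

# Objective: reward shifting discounted consumption into "green" slots.
for c, t in itertools.product(range(n_consumers), range(n_slots)):
    bqm.add_variable((c, t), -load[c][t] * greenness[t])

# Soft constraint: exactly one discounted slot per consumer (penalty weight is a guess).
penalty = 2.0
for c in range(n_consumers):
    bqm.add_linear_equality_constraint(
        [((c, t), 1.0) for t in range(n_slots)],
        constant=-1.0,
        lagrange_multiplier=penalty,
    )

# Tiny instances can be checked by exhaustive enumeration.
best = dimod.ExactSolver().sample(bqm).first.sample
print(sorted(var for var, val in best.items() if val == 1))

# With Leap credentials configured, the same BQM could be sent to the hybrid runtime:
#   from dwave.system import LeapHybridSampler
#   result = LeapHybridSampler().sample(bqm, time_limit=10)
```

Exhaustive enumeration only works at toy sizes; instances anywhere near the 3200 consumers discussed in the abstract would instead be handed to the Leap hybrid solver (commented lines above), to Gurobi, or to the decomposition heuristic the paper proposes.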
Related papers
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z) - Equitable Time-Varying Pricing Tariff Design: A Joint Learning and Optimization Approach [0.0]
Time-varying pricing tariffs incentivize consumers to shift their electricity demand and reduce costs, but may increase the energy burden for consumers with limited response capability.
This paper proposes a joint learning-based identification and optimization method to design equitable time-varying tariffs.
arXiv Detail & Related papers (2023-07-26T20:14:23Z) - Elastic Entangled Pair and Qubit Resource Management in Quantum Cloud Computing [73.7522199491117]
Quantum cloud computing (QCC) offers a promising approach to efficiently provide quantum computing resources.
Fluctuations in user demand and quantum circuit requirements make efficient resource provisioning challenging.
We propose a resource allocation model to provision quantum computing and networking resources.
arXiv Detail & Related papers (2023-07-25T00:38:46Z) - Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z) - Targeted demand response for flexible energy communities using clustering techniques [2.572906392867547]
The goal is to alter the consumption behavior of the prosumers within a distributed energy community in Italy.
Three popular machine learning algorithms are employed, namely k-means, k-medoids and agglomerative clustering.
We evaluate the methods using multiple metrics, including a novel metric proposed in this study, the peak performance score (PPS).
arXiv Detail & Related papers (2023-03-01T02:29:30Z) - Power network optimization: a quantum approach [0.0]
We show how to optimize transmission power networks with quantum annealing.
First, we define the QUBO problem for the partitioning of the network, and test the implementation on purely quantum and hybrid architectures.
We then solve the problem on the D-Wave hybrid CQM and BQM solvers, as well as on classical solvers available on Azure Quantum cloud.
arXiv Detail & Related papers (2022-12-03T14:49:09Z) - Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z) - Energy Efficient Edge Computing: When Lyapunov Meets Distributed Reinforcement Learning [12.845204986571053]
In this work, we study the problem of energy-efficient offloading enabled by edge computing.
In the considered scenario, multiple users simultaneously compete for radio and edge computing resources.
The proposed solution also increases the network's energy efficiency compared to a benchmark approach.
arXiv Detail & Related papers (2021-03-31T11:02:29Z) - Threshold-Based Data Exclusion Approach for Energy-Efficient Federated Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
However, FEEL can significantly shorten the lifetime of energy-constrained participating devices due to the power consumed during model training rounds.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.