Data-Driven Learning and Load Ensemble Control
- URL: http://arxiv.org/abs/2004.09675v1
- Date: Mon, 20 Apr 2020 23:32:10 GMT
- Title: Data-Driven Learning and Load Ensemble Control
- Authors: Ali Hassan, Deepjyoti Deka, Michael Chertkov and Yury Dvorkin
- Abstract summary: This study aims to engage distributed small-scale flexible loads, such as thermostatically controllable loads (TCLs) to provide grid support services.
The efficiency of this data-driven learning is demonstrated through simulations on Heating, Cooling & Ventilation units in a testbed neighborhood of residential houses.
- Score: 1.647866856596524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Demand response (DR) programs aim to engage distributed small-scale flexible
loads, such as thermostatically controllable loads (TCLs), to provide various
grid support services. Linearly Solvable Markov Decision Process (LS-MDP), a
variant of the traditional MDP, is used to model aggregated TCLs. Then, a
model-free reinforcement learning technique called Z-learning is applied to
learn the value function and derive the optimal policy for the DR aggregator to
control TCLs. The learning process is robust against uncertainty that arises
from estimating the passive dynamics of the aggregated TCLs. The efficiency of
this data-driven learning is demonstrated through simulations on Heating,
Cooling & Ventilation (HVAC) units in a testbed neighborhood of residential
houses.
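The abstract's core idea — Z-learning on a linearly solvable MDP — can be illustrated with a short sketch. This is a hedged toy example on a 5-state chain (a generic first-exit LS-MDP, not the paper's aggregated-TCL model; all parameters here are illustrative assumptions). In an LS-MDP the desirability function z(s) = exp(-v(s)) satisfies z(s) = exp(-q(s)) E_p[z(s')], where q is the state cost and p the *passive* dynamics; Z-learning estimates z purely from sampled passive transitions, which is what makes it model-free.

```python
import numpy as np

# Toy first-exit LS-MDP: a 5-state chain with an absorbing zero-cost
# goal state. Passive dynamics = random walk with a sticky left boundary.
rng = np.random.default_rng(0)
n = 5
terminal = n - 1
q = np.array([0.5, 0.5, 0.5, 0.5, 0.0])   # state costs; goal costs nothing

P = np.zeros((n, n))
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[terminal, terminal] = 1.0

z = np.ones(n)                             # z(terminal) stays fixed at 1
for step in range(20_000):
    alpha = 50.0 / (50.0 + step)           # decaying learning rate
    s = rng.integers(n - 1)                # any non-terminal state
    s_next = rng.choice(n, p=P[s])         # one sampled passive transition
    # Z-learning update: no model of P is ever built or used explicitly.
    z[s] = (1 - alpha) * z[s] + alpha * np.exp(-q[s]) * z[s_next]

v = -np.log(z)                             # recovered value function
# Optimal controlled transitions: u*(s'|s) proportional to p(s'|s) * z(s').
u = P * z[None, :]
u /= u.sum(axis=1, keepdims=True)
```

The recovered value v increases with distance from the goal, and the optimal policy u simply reweights the passive dynamics by desirability — the property that makes the LS-MDP variant attractive for aggregators that can only nudge, not dictate, TCL behavior.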
Related papers
- Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models [79.41139393080736]
Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities.
We propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning.
arXiv Detail & Related papers (2024-09-30T10:48:20Z)
- Self-Expansion of Pre-trained Models with Mixture of Adapters for Continual Learning [21.19820308728003]
Continual learning (CL) aims to continually accumulate knowledge from a non-stationary data stream without catastrophic forgetting of learned knowledge.
Current PTM-based CL methods perform effective continual adaptation on downstream tasks by adding learnable adapters or prompts upon the frozen PTMs.
We propose Self-Expansion of pre-trained models with Modularized Adaptation (SEMA), a novel approach to enhance the control of stability-plasticity balance in PTM-based CL.
arXiv Detail & Related papers (2024-03-27T17:59:21Z)
- An LLM-Based Digital Twin for Optimizing Human-in-the-Loop Systems [13.388869442538399]
We present a case study that employs large language models (LLMs) to mimic the behaviors and thermal preferences of various population groups in a shopping mall.
The aggregated thermal preferences are integrated into an agent-in-the-loop based reinforcement learning algorithm AitL-RL.
Our results show that LLMs are capable of simulating complex population movements within large open spaces.
arXiv Detail & Related papers (2024-03-25T14:32:28Z)
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and gated sub-network from scratch in a SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Deep Reinforcement Learning for Computational Fluid Dynamics on HPC Systems [17.10464381844892]
Reinforcement learning (RL) is highly suitable for devising control strategies in the context of dynamical systems.
Recent research results indicate that RL-augmented computational fluid dynamics (CFD) solvers can exceed the current state of the art.
We present Relexi as a scalable RL framework that bridges the gap between machine learning and modern CFD solvers on HPC systems.
arXiv Detail & Related papers (2022-05-13T08:21:18Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL receiver outperforms the SIC receiver with imperfect CSIR by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Transferring Reinforcement Learning for DC-DC Buck Converter Control via Duty Ratio Mapping: From Simulation to Implementation [0.0]
This paper presents a transferring methodology via a delicately designed duty ratio mapping (DRM) for a DC-DC buck converter.
A detailed sim-to-real process is presented to enable the implementation of a model-free deep reinforcement learning (DRL) controller.
arXiv Detail & Related papers (2021-10-20T11:08:17Z)
- Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation [91.05073136215886]
"Actor-Learner Distillation" transfers learning progress from a large capacity learner model to a small capacity actor model.
We demonstrate in several challenging memory environments that using Actor-Learner Distillation recovers the clear sample-efficiency gains of the transformer learner model.
arXiv Detail & Related papers (2021-04-04T17:56:34Z)
- Reinforcement Learning for Thermostatically Controlled Loads Control using Modelica and Python [0.0]
The project aims to investigate and assess opportunities for applying reinforcement learning (RL) for power system control.
As a proof of concept (PoC), voltage control of thermostatically controlled loads (TCLs) for power consumption was developed using a Modelica-based pipeline.
The paper shows the influence of Q-learning parameters, including discretization of state-action space, on the controller performance.
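The Q-learning approach above can be sketched in a few lines. This is a hedged toy example, not the paper's Modelica pipeline: a single TCL with an assumed first-order thermal model, the state being the discretized indoor temperature and the actions {off, on} — the discretization granularity is exactly the kind of parameter whose influence that paper studies.

```python
import numpy as np

# Illustrative first-order TCL model (parameters are assumptions):
# dT = dt * (a * (T_amb - T) + b * heater_on) + noise.
rng = np.random.default_rng(1)
dt, a, b = 1.0, 0.1, 2.5
T_amb, setpoint = 10.0, 21.0
bins = np.linspace(15.0, 27.0, 25)     # temperature discretization edges

def discretize(T):
    # Map a continuous temperature to one of len(bins)+1 discrete states.
    return int(np.clip(np.digitize(T, bins), 0, len(bins)))

Q = np.zeros((len(bins) + 1, 2))       # tabular Q over (state, action)
gamma, alpha, eps = 0.95, 0.1, 0.1
T = 21.0
for step in range(100_000):
    s = discretize(T)
    # Epsilon-greedy over the cost-minimizing action.
    act = int(rng.integers(2)) if rng.random() < eps else int(np.argmin(Q[s]))
    T = T + dt * (a * (T_amb - T) + b * act) + rng.normal(0, 0.05)
    cost = (T - setpoint) ** 2 + 0.5 * act   # comfort deviation + energy use
    s2 = discretize(T)
    Q[s, act] += alpha * (cost + gamma * Q[s2].min() - Q[s, act])

# Greedy policy: heater on when cold, off when warm.
policy = Q.argmin(axis=1)
```

Coarsening or refining `bins` changes both the learned switching band and the sample efficiency, which is the trade-off the state-action discretization study quantifies.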
arXiv Detail & Related papers (2020-05-09T13:35:49Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.