RL-Controller: a reinforcement learning framework for active structural control
- URL: http://arxiv.org/abs/2103.07616v1
- Date: Sat, 13 Mar 2021 04:42:13 GMT
- Title: RL-Controller: a reinforcement learning framework for active structural control
- Authors: Soheila Sadeghi Eshkevari, Soheil Sadeghi Eshkevari, Debarshi Sen, Shamim N. Pakzad
- Abstract summary: We present a novel RL-based approach for designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment.
We show that the proposed framework is easily trainable for a five-story benchmark building, achieving an average 65% reduction in inter-story drifts.
In a comparative study with the LQG active control method, we demonstrate that the proposed model-free algorithm learns more effective actuator forcing strategies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To maintain structural integrity and functionality over a structure's
design life cycle, engineers are expected to account for natural hazards as well
as operational load levels. Active control systems are an efficient solution for
structural response control when a structure is subjected to unexpected extreme
loads. However, the development of these systems through traditional means is
limited by their model-dependent nature. Recent advances in adaptive learning
methods, in particular reinforcement learning (RL) for real-time decision-making
problems, along with the rapid growth of high-performance computing resources,
allow structural engineers to transform the classic model-based active control
problem into a purely data-driven one. In this paper, we present a novel RL-based
approach for designing active controllers by introducing RL-Controller, a
flexible and scalable simulation environment. RL-Controller includes attributes
and functionalities defined to model active structural control mechanisms in
detail. We show that the proposed framework is easily trainable for a five-story
benchmark building, achieving an average 65% reduction in inter-story drift (ISD)
when subjected to strong ground motions. In a comparative study with the
linear-quadratic-Gaussian (LQG) active control method, we demonstrate that the
proposed model-free algorithm learns more effective actuator forcing strategies
that yield higher performance, e.g., 25% greater ISD reduction on average
relative to LQG, without using prior information about the mechanical properties
of the system.
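To make the setup concrete, the sketch below shows what a minimal gym-style environment of this kind might look like: a linear five-story shear building excited by a recorded ground motion, with a single ideal actuator and a reward that penalizes inter-story drift. All class names, numerical values, and modeling choices here are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

class FiveStoryBuildingEnv:
    """Hypothetical stand-in for an RL-Controller-style environment.

    Linear five-story shear building under ground excitation, with one
    ideal actuator at the first story. All parameter values are assumed.
    """

    def __init__(self, ground_accel, dt=0.02, max_force=1e5):
        n = 5
        m, k, c = 1.0e5, 2.0e8, 2.0e5           # story mass, stiffness, damping (assumed)
        self.M = m * np.eye(n)
        # Tridiagonal shear-building stiffness/damping pattern
        band = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        band[-1, -1] = 1.0                       # roof story connects to one spring only
        self.K, self.C = k * band, c * band
        self.b = np.zeros(n)
        self.b[0] = 1.0                          # actuator force enters at story 1
        self.ground_accel = np.asarray(ground_accel)
        self.dt, self.max_force = dt, max_force
        self.reset()

    def reset(self):
        self.x = np.zeros(5)                     # story displacements (relative to ground)
        self.v = np.zeros(5)                     # story velocities
        self.t = 0
        return np.concatenate([self.x, self.v])

    def step(self, action):
        # action: scalar in [-1, 1], scaled to the actuator capacity
        f = float(np.clip(action, -1.0, 1.0)) * self.max_force
        ag = self.ground_accel[self.t]
        # Equation of motion: M x'' + C x' + K x = -M 1 a_g + b f
        acc = np.linalg.solve(self.M,
                              -self.C @ self.v - self.K @ self.x
                              - self.M @ np.ones(5) * ag + self.b * f)
        self.v += acc * self.dt                  # semi-implicit Euler time stepping
        self.x += self.v * self.dt
        self.t += 1
        drifts = np.diff(np.concatenate([[0.0], self.x]))   # inter-story drifts
        reward = -float(np.sum(drifts ** 2))     # ISD penalty; control-effort term omitted
        done = self.t >= len(self.ground_accel)
        return np.concatenate([self.x, self.v]), reward, done, {}
```

Under these assumptions, an off-the-shelf policy-gradient agent (e.g., PPO from a standard RL library) could be trained episode by episode on a suite of recorded ground motions, whereas an LQG baseline would instead compute a fixed feedback gain from an assumed (M, C, K) model.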
Related papers
- Traffic expertise meets residual RL: Knowledge-informed model-based residual reinforcement learning for CAV trajectory control [1.5361702135159845]
This paper introduces a knowledge-informed model-based residual reinforcement learning framework.
It integrates traffic expert knowledge into a virtual environment model, employing the Intelligent Driver Model (IDM) for basic dynamics and neural networks for residual dynamics.
We propose a novel strategy that combines traditional control methods with residual RL, facilitating efficient learning and policy optimization without the need to learn from scratch (a minimal sketch of the residual formulation follows).
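As a rough, hypothetical sketch of the residual-dynamics idea (IDM for the nominal car-following behavior plus a learned correction; the IDM constants and the placeholder residual model below are assumptions, not the paper's values):

```python
import numpy as np

def idm_accel(v, gap, dv, v0=30.0, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model: nominal car-following acceleration.

    v: ego speed, gap: bumper-to-bumper distance, dv: approach rate.
    """
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

class ResidualDynamics:
    """Nominal IDM dynamics plus a learned residual correction.

    The residual model here is a placeholder linear map; in the paper's
    framework it would be a trained neural network.
    """

    def __init__(self, state_dim=3):
        self.W = np.zeros(state_dim)   # untrained residual weights (placeholder)

    def predict_accel(self, v, gap, dv):
        nominal = idm_accel(v, gap, dv)
        residual = self.W @ np.array([v, gap, dv])   # learned correction term
        return nominal + residual
```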
arXiv Detail & Related papers (2024-08-30T16:16:57Z)
- RL + Model-based Control: Using On-demand Optimal Control to Learn Versatile Legged Locomotion [16.800984476447624]
This paper presents a control framework that combines model-based optimal control and reinforcement learning.
We validate the robustness and controllability of the framework through a series of experiments.
Our framework effortlessly supports the training of control policies for robots with diverse dimensions.
arXiv Detail & Related papers (2023-05-29T01:33:55Z)
- A Dynamic Feedforward Control Strategy for Energy-efficient Building System Operation [59.56144813928478]
Most current control strategies and optimization algorithms rely on receiving information from real-time feedback.
We propose an engineer-friendly control strategy framework that embeds dynamic prior knowledge of building system characteristics into system control.
We tested it on a heating system control case against typical control strategies, showing that our framework offers a further energy-saving potential of 15%.
arXiv Detail & Related papers (2023-01-23T09:07:07Z)
- Structure-Enhanced DRL for Optimal Transmission Scheduling [43.801422320012286]
This paper focuses on the transmission scheduling problem of a remote estimation system.
We develop a structure-enhanced deep reinforcement learning framework for optimal scheduling of the system.
In particular, we propose a structure-enhanced action selection method, which tends to select actions that obey the policy structure.
arXiv Detail & Related papers (2022-12-24T10:18:38Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Imposing Robust Structured Control Constraint on Reinforcement Learning of Linear Quadratic Regulator [0.0]
This paper presents a design for any generic controller structure, paving the way for distributed learning control.
Ideas from reinforcement learning (RL) are used in conjunction with control-theoretic sufficient stability and performance guarantees to develop the methodology.
We validate our theoretical results with a simulation on a multi-agent network with 6 agents (a toy sketch of structure-constrained LQR gains follows).
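As a toy illustration of imposing a sparsity structure on an LQR feedback gain (this is the general idea only, not the paper's algorithm; the system matrices and structure mask below are arbitrary assumptions):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Arbitrary two-agent discrete-time system (assumed for illustration)
A = np.array([[1.0, 0.1, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
Q, R = np.eye(4), np.eye(2)

# Unstructured LQR gain from the discrete algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Block-diagonal structure mask: each agent feeds back only its own states
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]])
K_structured = K * mask   # naive projection onto the allowed structure
# A structured-control method would instead optimize over gains that
# satisfy the mask while certifying closed-loop stability.
```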
arXiv Detail & Related papers (2020-11-12T00:31:39Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum, resulting in optimal yet physically feasible robotic control behavior without the need for precise reward-function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information-theoretic MPC and entropy-regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)