Partitioned Active Learning for Heterogeneous Systems
- URL: http://arxiv.org/abs/2105.08547v1
- Date: Fri, 14 May 2021 02:05:31 GMT
- Title: Partitioned Active Learning for Heterogeneous Systems
- Authors: Cheolhei Lee, Kaiwen Wang, Jianguo Wu, Wenjun Cai, and Xiaowei Yue
- Abstract summary: We propose the partitioned active learning strategy established upon partitioned GP (PGP) modeling.
The global searching scheme accelerates the exploration aspect of active learning.
The local searching scheme exploits the active learning criterion induced by the local GP model.
- Score: 5.331649110169476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cost-effective and high-precision surrogate modeling is a cornerstone of
automated industrial and engineering systems. Active learning coupled with
Gaussian process (GP) surrogate modeling is an indispensable tool for demanding
and complex systems, while the existence of heterogeneity in underlying systems
may adversely affect the modeling process. In order to improve the learning
efficiency under this regime, we propose the partitioned active learning
strategy established upon partitioned GP (PGP) modeling. Our strategy seeks the
most informative design point for PGP modeling systematically in two steps. The
global searching scheme accelerates the exploration aspect of active learning
by investigating the most uncertain design space, and the local searching
exploits the active learning criterion induced by the local GP model. We also
provide numerical remedies to alleviate the computational cost of active
learning, thereby allowing the proposed method to incorporate a large amount of
candidates. The proposed method is applied to numerical simulations and
real-world cases endowed with heterogeneities, in which surrogate models are
constructed to be embedded in (i) the cost-efficient automatic fuselage shape
control system; and (ii) the optimal design system of tribocorrosion-resistant alloys.
The results show that our approach outperforms benchmark methods.
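The two-step selection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the RBF kernel, the fixed two-region partition of a 1-D design space, and the maximum-predictive-variance local criterion are all simplifying assumptions introduced here.

```python
import numpy as np

def rbf_kernel(A, B, length=0.2):
    # Squared-exponential kernel between row sets A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior_var(X_train, X_cand, noise=1e-6):
    # Predictive variance of a zero-mean GP with an RBF kernel.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)
    Kss = np.ones(len(X_cand))  # k(x, x) = 1 for the RBF kernel
    v = np.linalg.solve(K, Ks)
    return Kss - np.einsum('ij,ij->j', Ks, v)

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(200, 1))
# Toy partition of the design space into two local regions,
# each with its own local GP fitted on its own observations.
regions = [candidates[:, 0] < 0.5, candidates[:, 0] >= 0.5]
X_obs = [rng.uniform(0.0, 0.5, (5, 1)), rng.uniform(0.5, 1.0, (5, 1))]

# Step 1 (global search): pick the region whose local GP is most
# uncertain on average over its candidate points.
region_scores = [gp_posterior_var(X_obs[k], candidates[mask]).mean()
                 for k, mask in enumerate(regions)]
k_star = int(np.argmax(region_scores))

# Step 2 (local search): within the chosen region, pick the candidate
# maximizing the local GP's predictive variance.
local_var = gp_posterior_var(X_obs[k_star], candidates[regions[k_star]])
x_next = candidates[regions[k_star]][np.argmax(local_var)]
```

Restricting the expensive per-candidate criterion to a single region is also where the paper's numerical remedies for large candidate sets would apply: only the selected local GP needs to score its candidates in each iteration.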
Related papers
- Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z) - Active Learning for Control-Oriented Identification of Nonlinear Systems [26.231260751633307]
We present the first finite sample analysis of an active learning algorithm suitable for a general class of nonlinear dynamics.
In certain settings, the excess control cost of our algorithm achieves the optimal rate, up to logarithmic factors.
We validate our approach in simulation, showcasing the advantage of active, control-oriented exploration for controlling nonlinear systems.
arXiv Detail & Related papers (2024-04-13T15:40:39Z) - FLEX: an Adaptive Exploration Algorithm for Nonlinear Systems [6.612035830987298]
We introduce FLEX, an exploration algorithm for nonlinear dynamics based on optimal experimental design.
Our policy maximizes the information of the next step and results in an adaptive exploration algorithm.
The performance achieved by FLEX is competitive and its computational cost is low.
arXiv Detail & Related papers (2023-04-26T10:20:55Z) - Active Learning of Piecewise Gaussian Process Surrogates [2.5399204134718096]
We develop a method for active learning of piecewise, Jump GP surrogates.
Jump GPs are continuous within, but discontinuous across, regions of a design space.
We develop an estimator for bias and variance of Jump GP models.
arXiv Detail & Related papers (2023-01-20T20:25:50Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Active Learning of Markov Decision Processes using Baum-Welch algorithm (Extended) [0.0]
This paper revisits and adapts the classic Baum-Welch algorithm for learning Markov decision processes and Markov chains.
We empirically compare our approach with state-of-the-art tools and demonstrate that the proposed active learning procedure can significantly reduce the number of observations required to obtain accurate models.
arXiv Detail & Related papers (2021-10-06T18:54:19Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Model-free Representation Learning and Exploration in Low-rank MDPs [64.72023662543363]
We present the first model-free representation learning algorithms for low rank MDPs.
The key algorithmic contribution is a new minimax representation learning objective.
The result can accommodate general function approximation to scale to complex environments.
arXiv Detail & Related papers (2021-02-14T00:06:54Z) - Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.