Active learning for structural reliability analysis with multiple limit
state functions through variance-enhanced PC-Kriging surrogate models
- URL: http://arxiv.org/abs/2302.12074v1
- Date: Thu, 23 Feb 2023 15:01:06 GMT
- Authors: J. Moran A., P.G. Morato and P. Rigo
- Abstract summary: Existing active learning strategies yield accurate structural reliability estimates by targeting the vicinity of a specified limit state function. We investigate the capability of active learning approaches to efficiently select training samples under a limited computational budget when multiple limit states must be surrogated.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing active learning strategies for training surrogate models yield
accurate structural reliability estimates by targeting design space regions in the
vicinity of a specified limit state function. In many practical engineering
applications, however, several damage conditions, e.g., repair or failure, must be
probabilistically characterized, thus demanding the estimation of multiple
performance functions. In this work, we investigate the capability of active
learning approaches to efficiently select training samples under a limited
computational budget while preserving the accuracy of multiple surrogate limit
states. Specifically, PC-Kriging-based surrogate models are actively trained with a
variance correction derived from leave-one-out cross-validation error information,
while the sequential learning scheme relies on U-function-derived metrics. The
proposed active learning approaches are tested both in a highly nonlinear
structural reliability setting and in a more practical application, in which
failure and repair events are stochastically predicted in the aftermath of a ship
collision against an offshore wind substructure. The results show that a balanced
administration of the computational budget can be effectively achieved by
successively targeting the specified multiple limit state functions within a
unified active learning scheme.
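The U-function acquisition that drives this kind of sequential scheme can be sketched compactly. The following is a minimal numpy illustration, not the paper's method: the surrogate posteriors are analytic stand-ins rather than trained PC-Kriging models, and the multi-limit-state rule (pick the candidate with the smallest U over all surrogates) is one plausible reading of "successively targeting" several limit states.

```python
import numpy as np

def u_function(mean, std):
    """U-learning metric: small U means the sample lies close to the
    limit state (mean near zero) and/or is poorly resolved (large std)."""
    return np.abs(mean) / np.maximum(std, 1e-12)

def select_next_sample(candidates, surrogates):
    """Pick the candidate with the smallest U value over all limit state
    surrogates, so each iteration targets whichever limit state is
    currently worst resolved.

    `surrogates` is a list of callables returning (mean, std) arrays for
    the candidate pool -- stand-ins for trained PC-Kriging models.
    """
    u_all = np.stack([u_function(*s(candidates)) for s in surrogates])
    per_candidate = u_all.min(axis=0)  # most critical limit state per point
    best = int(np.argmin(per_candidate))
    return best, candidates[best]

# Toy example: two analytic "limit states" with a constant fake std.
pool = np.linspace(-3.0, 3.0, 61)
g1 = lambda x: (x**2 - 1.0, 0.3 * np.ones_like(x))  # critical near |x| = 1
g2 = lambda x: (x - 2.0, 0.3 * np.ones_like(x))     # critical near x = 2
idx, x_next = select_next_sample(pool, [g1, g2])
```

The selected point lands on one of the zero crossings of the two toy limit states, which is exactly the behavior the U-function is designed to produce.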
Related papers
- ICL-TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models [103.45785408116146]
Continual learning (CL) aims to train a model that can solve multiple tasks presented sequentially.
Recent CL approaches have achieved strong performance by leveraging large pre-trained models that generalize well to downstream tasks.
However, such methods lack theoretical guarantees, making them prone to unexpected failures.
We bridge this gap by integrating an empirically strong approach into a principled framework, designed to prevent forgetting.
arXiv Detail & Related papers (2024-10-01T12:58:37Z)
- Impacts of floating-point non-associativity on reproducibility for HPC and deep learning applications [0.0]
Run-to-run variability in parallel programs caused by floating-point non-associativity has been known to significantly affect numerical algorithms.
We investigate the statistical properties of floating-point non-associativity within modern parallel programming models.
We examine the recently-added deterministic options in PyTorch within the context of GPU deployment for deep learning.
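The core issue behind that paper's topic is easy to reproduce: IEEE-754 addition is not associative, so the same reduction performed in a different operation order (as happens across threads or GPU blocks) yields a bitwise-different result.

```python
# Floating-point addition is not associative: the grouping changes the
# rounding error, which is why parallel reductions with a
# nondeterministic operation order show run-to-run variability.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6
print(left == right)  # False
```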
arXiv Detail & Related papers (2024-08-09T16:07:37Z)
- LTAU-FF: Loss Trajectory Analysis for Uncertainty in Atomistic Force Fields [5.396675151318325]
Model ensembles are effective tools for estimating prediction uncertainty in deep learning atomistic force fields.
However, their widespread adoption is hindered by high computational costs and overconfident error estimates.
We address these challenges by leveraging distributions of per-sample errors obtained during training and employing a distance-based similarity search in the model latent space.
Our method, which we call LTAU, efficiently estimates the full probability distribution function (PDF) of errors for any test point using the logged training errors.
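The idea as summarized can be sketched in a few lines of numpy. This is a hypothetical illustration of the described mechanism (pool the logged training-error trajectories of a test point's nearest neighbors in latent space), not the authors' implementation; the function name and shapes are invented for the example.

```python
import numpy as np

def ltau_error_samples(z_test, z_train, train_errors, k=5):
    """Estimate the error distribution for a test point by pooling the
    logged per-epoch training errors of its k nearest neighbors in the
    model's latent space.

    z_train:      (n_train, d) latent embeddings of training samples
    train_errors: (n_train, n_epochs) per-sample errors logged in training
    Returns the pooled error samples; any summary (mean, quantiles) can
    be read off them as an uncertainty estimate.
    """
    d2 = np.sum((z_train - z_test) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]          # distance-based similarity search
    return train_errors[nearest].ravel()  # empirical error distribution

# Toy data: 100 training points, 8-d latent space, 20 logged epochs.
rng = np.random.default_rng(0)
z_train = rng.normal(size=(100, 8))
train_errors = np.abs(rng.normal(scale=0.1, size=(100, 20)))
pooled = ltau_error_samples(z_train[0], z_train, train_errors, k=5)
```

The appeal of this design is that it needs only one trained model plus logs that training already produces, avoiding the cost of a full ensemble.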
arXiv Detail & Related papers (2024-02-01T18:50:42Z)
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- Efficiently Controlling Multiple Risks with Pareto Testing [34.83506056862348]
We propose a two-stage process which combines multi-objective optimization with multiple hypothesis testing.
We demonstrate the effectiveness of our approach to reliably accelerate the execution of large-scale Transformer models in natural language processing (NLP) applications.
arXiv Detail & Related papers (2022-10-14T15:54:39Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We take a step forward by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Failure-averse Active Learning for Physics-constrained Systems [7.701064815584088]
We develop a novel active learning method that avoids failures by considering the implicit physics constraints that govern the system.
The proposed approach is driven by two tasks: safe variance reduction explores the safe region to reduce the variance of the target model, and safe region expansion aims to extend the explorable region by exploiting the probabilistic model of the constraints.
The method is applied to the composite fuselage assembly process with consideration of material failure using the Tsai-Wu criterion, and it achieves zero failures without knowledge of the explicit failure regions.
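The "safe variance reduction" task described above can be sketched as constrained uncertainty sampling. This is a minimal numpy/scipy illustration under assumed interfaces (both models are stand-in callables returning a Gaussian posterior mean and std), not the paper's algorithm.

```python
import numpy as np
from scipy.stats import norm

def safe_variance_reduction(candidates, target_model, constraint_model,
                            safety_level=0.95):
    """Among candidates predicted to satisfy the constraint with high
    probability, query the point where the target model is most
    uncertain.

    Convention assumed here: constraint g(x) <= 0 means "safe", and
    both models return (mean, std) arrays over the candidate pool.
    """
    g_mean, g_std = constraint_model(candidates)
    p_safe = norm.cdf(-g_mean / np.maximum(g_std, 1e-12))
    _, f_std = target_model(candidates)
    # Mask out unsafe candidates, then take the highest-variance one.
    var = np.where(p_safe >= safety_level, f_std**2, -np.inf)
    best = int(np.argmax(var))
    return best, candidates[best]

# Toy example: safe region is roughly x <= 0; target variance grows with |x|.
pool = np.linspace(-2.0, 2.0, 41)
target = lambda x: (np.zeros_like(x), np.abs(x) + 0.1)
constraint = lambda x: (x, 0.1 * np.ones_like(x))
idx, x_next = safe_variance_reduction(pool, target, constraint)
```

In the toy setup the rule picks the deep-safe, high-variance end of the pool, illustrating how the safety filter and the variance objective interact.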
arXiv Detail & Related papers (2021-10-27T14:01:03Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Active Learning for Sequence Tagging with Deep Pre-trained Models and Bayesian Uncertainty Estimates [52.164757178369804]
Recent advances in transfer learning for natural language processing in conjunction with active learning open the possibility to significantly reduce the necessary annotation budget.
We conduct an empirical study of various Bayesian uncertainty estimation methods and Monte Carlo dropout options for deep pre-trained models in the active learning framework.
We also demonstrate that to acquire instances during active learning, a full-size Transformer can be substituted with a distilled version, which yields better computational performance.
arXiv Detail & Related papers (2021-01-20T13:59:25Z)
- Scalable Uncertainty for Computer Vision with Functional Variational Inference [18.492485304537134]
We leverage the formulation of variational inference in function space.
We obtain predictive uncertainty estimates at the cost of a single forward pass through any chosen CNN architecture.
We propose numerically efficient algorithms which enable fast training in the context of high-dimensional tasks.
arXiv Detail & Related papers (2020-03-06T19:09:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.