Parameterized Consistency Learning-based Deep Polynomial Chaos Neural
Network Method for Reliability Analysis in Aerospace Engineering
- URL: http://arxiv.org/abs/2203.15655v1
- Date: Tue, 29 Mar 2022 15:15:12 GMT
- Title: Parameterized Consistency Learning-based Deep Polynomial Chaos Neural
Network Method for Reliability Analysis in Aerospace Engineering
- Authors: Xiaohu Zheng, Wen Yao, Yunyang Zhang, Xiaoya Zhang
- Abstract summary: Polynomial chaos expansion (PCE) is a powerful surrogate model-based reliability analysis method in aerospace engineering.
To alleviate this problem, this paper proposes a parameterized consistency learning-based deep polynomial chaos neural network (Deep PCNN) method.
The Deep PCNN method can significantly reduce the training data cost in constructing a high-order PCE model.
- Score: 3.541245871465521
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Polynomial chaos expansion (PCE) is a powerful surrogate model-based
reliability analysis method in aerospace engineering. Generally, a PCE model
with a higher expansion order is required to obtain an accurate surrogate
model for non-linear complex stochastic systems. However, the
high-order PCE increases the labeled training data cost for solving the
expansion coefficients. To alleviate this problem, this paper proposes a
parameterized consistency learning-based deep polynomial chaos neural network
(Deep PCNN) method, including the low-order adaptive PCE model (the auxiliary
model) and the high-order polynomial chaos neural network (the main model). The
expansion coefficients of the high-order main model are parameterized into the
learnable weights of the polynomial chaos neural network. The auxiliary model
uses a proposed unsupervised consistency loss function to assist in training
the main model. The Deep PCNN method can significantly reduce the training data
cost in constructing a high-order PCE model without losing surrogate model
accuracy by using a small amount of labeled data and a large amount of unlabeled data. A
numerical example validates the effectiveness of the Deep PCNN method, and the
Deep PCNN method is applied to analyze the reliability of two aerospace
engineering systems.
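As a rough toy illustration of the idea described in the abstract (not the authors' implementation), the sketch below fits a low-order Hermite PCE as the auxiliary model on a small labeled set, then solves for the high-order coefficients by minimizing a labeled fit loss plus an unsupervised consistency loss against the auxiliary predictions on unlabeled points. The response function, expansion orders, and sample sizes are invented for the example; since this toy main model is linear in its coefficients, the combined loss is minimized in closed form rather than by the gradient training a real PCNN would use.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def hermite_basis(x, order):
    """Orthonormal probabilists' Hermite polynomials He_0..He_order at x."""
    H = [np.ones_like(x), x]
    for n in range(1, order):
        H.append(x * H[n] - n * H[n - 1])   # He_{n+1} = x He_n - n He_{n-1}
    Phi = np.stack(H[: order + 1], axis=1)
    return Phi / np.sqrt([factorial(n) for n in range(order + 1)])

def response(x):                            # hypothetical nonlinear system
    return np.sin(2.0 * x) + 0.3 * x**2

# Small labeled set, large unlabeled set (standard-normal input germ).
x_lab = rng.standard_normal(15)
y_lab = response(x_lab)
x_unl = rng.standard_normal(500)

# Auxiliary model: low-order PCE fitted on the labeled data alone.
order_lo, order_hi, lam = 3, 8, 0.5
c_lo, *_ = np.linalg.lstsq(hermite_basis(x_lab, order_lo), y_lab, rcond=None)
y_aux = hermite_basis(x_unl, order_lo) @ c_lo   # auxiliary predictions

# Main model: high-order PCE whose coefficients play the role of the
# network's learnable weights.  Loss = labeled MSE + lam * consistency MSE
# versus the auxiliary model; for a linear-in-coefficients model the
# minimizer satisfies the normal equations A c = b.
P_lab = hermite_basis(x_lab, order_hi)
P_unl = hermite_basis(x_unl, order_hi)
A = P_lab.T @ P_lab / len(x_lab) + lam * P_unl.T @ P_unl / len(x_unl)
b = P_lab.T @ y_lab / len(x_lab) + lam * P_unl.T @ y_aux / len(x_unl)
c_hi = np.linalg.solve(A, b)

x_test = rng.standard_normal(2000)
test_mse = np.mean((hermite_basis(x_test, order_hi) @ c_hi
                    - response(x_test)) ** 2)
```

The consistency term lets the 500 unlabeled points constrain the high-order coefficients, so only 15 labeled evaluations of the expensive system are needed; the real method applies the same principle to a neural network trained by gradient descent.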
Related papers
- Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships.
Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z)
- Transfer Learning on Multi-Dimensional Data: A Novel Approach to Neural Network-Based Surrogate Modeling [0.0]
Convolutional neural networks (CNNs) have gained popularity as the basis for such surrogate models.
We propose training a CNN surrogate model on a mixture of numerical solutions to both the $d$-dimensional problem and its $(d-1)$-dimensional approximation.
We demonstrate our approach on a multiphase flow test problem, using transfer learning to train a dense fully-convolutional encoder-decoder CNN on the two classes of data.
arXiv Detail & Related papers (2024-10-16T05:07:48Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for solving the aforementioned problems.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Evaluation of machine learning architectures on the quantification of epistemic and aleatoric uncertainties in complex dynamical systems [0.0]
Uncertainty Quantification (UQ) is a self-assessed estimate of the model error.
We examine several machine learning techniques, including both Gaussian processes and a family of UQ-augmented neural networks.
We evaluate UQ accuracy (distinct from model accuracy) using two metrics: the distribution of normalized residuals on validation data, and the distribution of estimated uncertainties.
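The normalized-residual check mentioned in this summary can be sketched in a few lines: if a model's predicted uncertainties are calibrated, the residuals divided by the predicted standard deviations should be distributed roughly as a standard normal. The model means, noise level, and candidate sigmas below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

mu_pred = rng.uniform(-1.0, 1.0, n)         # hypothetical model means
sigma_true = 0.5                            # actual observation noise
y_obs = mu_pred + sigma_true * rng.standard_normal(n)

def normalized_residual_std(sigma_pred):
    """Std of (y - mu)/sigma_pred; ~1.0 when sigma_pred is calibrated."""
    r = (y_obs - mu_pred) / sigma_pred
    return r.std()

calibrated = normalized_residual_std(0.5)   # sigma matches the noise
overconfident = normalized_residual_std(0.25)  # sigma underestimated
```

A spread clearly above 1 flags overconfidence (uncertainties too small), while a spread below 1 flags underconfidence; this measures UQ accuracy independently of how accurate the mean predictions are.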
arXiv Detail & Related papers (2023-06-27T02:35:25Z)
- Mini-data-driven Deep Arbitrary Polynomial Chaos Expansion for Uncertainty Quantification [9.586968666707529]
This paper proposes a deep arbitrary polynomial chaos expansion (Deep aPCE) method to improve the balance between surrogate model accuracy and training data cost.
Four numerical examples and an actual engineering problem are used to verify the effectiveness of the Deep aPCE method.
arXiv Detail & Related papers (2021-07-22T02:49:07Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
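The core trick this summary refers to can be sketched briefly: a hidden unit's matrix-valued weight is stored as CP factors rather than as a full (vectorized) weight matrix, so the response is computed directly from the factors. The sizes, rank, and activation below are toy choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
I, J, R = 6, 5, 2                     # toy input size and CP rank

# CP factors of the weight matrix W = sum_r a_r b_r^T.
# Storing the factors takes R*(I+J) parameters instead of I*J.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))

def rank_r_response(X):
    """Inner product <W, X> computed from the factors, never forming W."""
    return sum(A[:, r] @ X @ B[:, r] for r in range(R))

X = rng.standard_normal((I, J))       # one matrix-valued input sample
y = np.tanh(rank_r_response(X))       # a single Rank-R FNN hidden unit
```

Because the input is consumed as a matrix, structure along each mode is preserved, and the parameter count grows additively rather than multiplicatively in the mode dimensions.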
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Latent Space Data Assimilation by using Deep Learning [0.0]
Performing Data Assimilation (DA) at a low cost is of prime concern in Earth system modeling.
We incorporate Deep Learning (DL) methods into a DA framework.
We exploit the latent structure provided by autoencoders (AEs) to design an Ensemble Transform Kalman Filter with model error (ETKF-Q) in the latent space.
arXiv Detail & Related papers (2021-04-01T12:25:55Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product Belief Propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs)
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.