A Neural Network-Based Enrichment of Reproducing Kernel Approximation
for Modeling Brittle Fracture
- URL: http://arxiv.org/abs/2307.01937v1
- Date: Tue, 4 Jul 2023 21:52:09 GMT
- Authors: Jonghyuk Baek, Jiun-Shyan Chen
- Abstract summary: An improved version of the neural network-enhanced Reproducing Kernel Particle Method (NN-RKPM) is proposed for modeling brittle fracture.
The effectiveness of the proposed method is demonstrated by a series of numerical examples involving damage propagation and branching.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical modeling of localization is a challenging task because the
solution develops evolving, rough features along paths that are not known in
advance. Despite decades of effort, there remains a need for innovative, discretization-independent
computational methods to predict the evolution of localizations. In this work,
an improved version of the neural network-enhanced Reproducing Kernel Particle
Method (NN-RKPM) is proposed for modeling brittle fracture. In the proposed
method, a background reproducing kernel (RK) approximation defined on a coarse
and uniform discretization is enriched by a neural network (NN) approximation
under a Partition of Unity framework. In the NN approximation, the deep neural
network automatically locates and inserts regularized discontinuities in the
function space. The NN-based enrichment functions are then patched together
with RK approximation functions using RK as a Partition of Unity patching
function. The optimum NN parameters defining the location, orientation, and
displacement distribution across the localization, together with the RK
approximation coefficients, are obtained by minimizing an energy-based loss
function. To
regularize the NN-RK approximation, a constraint on the spatial gradient of the
parametric coordinates is imposed in the loss function. Analysis of the
convergence properties shows that the solution convergence of the proposed
method is guaranteed. The effectiveness of the proposed method is demonstrated
by a series of numerical examples involving damage propagation and branching.
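To make the construction concrete, below is a minimal 1D sketch in JAX of the
overall idea: linearly consistent RK shape functions on a coarse uniform grid
serve as the Partition of Unity, a three-parameter regularized jump
s*tanh((x - loc)/delta) stands in for the paper's deep-network enrichment, and
the RK coefficients and enrichment parameters are fit jointly by minimizing a
Ritz-style energy. The toy boundary-value problem, the penalty weights, and all
names (rk_shape, u_fn, ENRICHED) are illustrative assumptions, not the authors'
implementation; the width penalty is a 1D stand-in for the paper's constraint
on the spatial gradient of the parametric coordinates.

```python
import jax
import jax.numpy as jnp

NODES = jnp.linspace(0.0, 1.0, 11)      # coarse, uniform RK discretization
SUPPORT = 0.35                          # kernel support size (assumed)

def kernel(z):
    """Compactly supported cubic window; any smooth RK kernel would do."""
    r = jnp.abs(z) / SUPPORT
    return jnp.where(r < 1.0, (1.0 - r) ** 3, 0.0)

def rk_shape(x):
    """Linearly consistent RK shape functions Psi_I(x) at a scalar point x."""
    z = x - NODES                                   # distances to nodes
    phi = kernel(z)
    Hz = jnp.stack([jnp.ones_like(z), z])           # linear basis H(z)
    M = (Hz * phi) @ Hz.T                           # 2x2 moment matrix
    c = jnp.linalg.solve(M, jnp.array([1.0, 0.0]))  # M^{-1} H(0)
    return (c @ Hz) * phi                           # Psi_I(x); sums to 1

ENRICHED = (jnp.abs(NODES - 0.5) < 0.25).astype(jnp.float32)  # patched nodes

def u_fn(theta, x):
    """PoU-patched approximation: u = sum_I Psi_I(x) (d_I + m_I * jump(x))."""
    delta = jax.nn.softplus(theta["raw_delta"])     # regularization width > 0
    xi = (x - theta["loc"]) / delta                 # parametric coordinate
    jump = theta["s"] * jnp.tanh(xi)                # regularized discontinuity
    return rk_shape(x) @ (theta["d"] + ENRICHED * jump)

XQ = jnp.linspace(0.0, 1.0, 201)                    # quadrature grid
LAM, BETA, G_MAX = 1e-2, 1e2, 50.0                  # penalty weights (assumed)

def loss(theta):
    u = lambda x: u_fn(theta, x)
    du = jax.vmap(jax.grad(u))(XQ)
    energy = jnp.mean(0.5 * du ** 2)                # energy, E = 1, no body force
    dxi = 1.0 / jax.nn.softplus(theta["raw_delta"]) # |d(xi)/dx| in 1D
    reg = LAM * jax.nn.relu(dxi - G_MAX) ** 2       # bound the xi gradient
    bc = BETA * (u(0.0) ** 2 + (u(1.0) - 1.0) ** 2) # u(0)=0, u(1)=1 by penalty
    return energy + reg + bc

theta = {"d": jnp.zeros(NODES.size), "loc": jnp.array(0.4),
         "raw_delta": jnp.array(0.0), "s": jnp.array(0.1)}
step = jax.jit(lambda t: jax.tree_util.tree_map(
    lambda p, g: p - 1e-2 * g, t, jax.grad(loss)(t)))
for _ in range(2000):
    theta = step(theta)
```

In the paper the enrichment is a deep network able to represent multiple
interacting discontinuities and the energy includes a damage model; the sketch
only shows how the RK background, the PoU patching, and the energy-based
training fit together.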
Related papers
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- N-Adaptive Ritz Method: A Neural Network Enriched Partition of Unity for Boundary Value Problems [1.2200609701777907]
This work introduces a novel neural network-enriched Partition of Unity (NN-PU) approach for solving boundary value problems via artificial neural networks.
The NN enrichment is constructed by combining pre-trained feature-encoded NN blocks with an untrained NN block.
The proposed method offers accurate solutions while notably reducing the computational cost compared to the conventional adaptive refinement in the mesh-based methods.
arXiv Detail & Related papers (2024-01-16T18:11:14Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- A Neural Network-enhanced Reproducing Kernel Particle Method for Modeling Strain Localization [0.0]
In this work, a neural network-enhanced reproducing kernel particle method (NN-RKPM) is proposed.
The location, orientation, and shape of the solution transition near a localization are automatically captured by the NN approximation.
The effectiveness of the proposed NN-RKPM is verified by a series of numerical examples.
arXiv Detail & Related papers (2022-04-28T23:59:38Z)
- Non-intrusive reduced order modeling of poroelasticity of heterogeneous media based on a discontinuous Galerkin approximation [0.0]
We present a non-intrusive model reduction framework for linear poroelasticity problems in heterogeneous porous media.
We utilize the interior penalty discontinuous Galerkin (DG) method as a full order solver to handle discontinuity.
We show that our framework provides reasonable approximations of the DG solution while being significantly faster.
arXiv Detail & Related papers (2021-01-28T04:21:06Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime [50.510421854168065]
We show that averaged stochastic gradient descent can achieve the minimax optimal convergence rate.
We show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate.
arXiv Detail & Related papers (2020-06-22T14:31:37Z)
- Nonconvex sparse regularization for deep neural networks and its optimality [1.9798034349981162]
Deep neural network (DNN) estimators can attain optimal convergence rates for regression and classification problems.
We propose a novel penalized estimation method for sparse DNNs.
We prove that the sparse-penalized estimator can adaptively attain minimax convergence rates for various nonparametric regression problems.
arXiv Detail & Related papers (2020-03-26T07:15:28Z)