A Neural Network-enhanced Reproducing Kernel Particle Method for
Modeling Strain Localization
- URL: http://arxiv.org/abs/2204.13821v1
- Date: Thu, 28 Apr 2022 23:59:38 GMT
- Title: A Neural Network-enhanced Reproducing Kernel Particle Method for
Modeling Strain Localization
- Authors: Jonghyuk Baek, Jiun-Shyan Chen, Kristen Susuki
- Abstract summary: In this work, a neural network-enhanced reproducing kernel particle method (NN-RKPM) is proposed.
The location, orientation, and shape of the solution transition near a localization are automatically captured by the NN approximation.
The effectiveness of the proposed NN-RKPM is verified through a series of numerical examples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling the localized intensive deformation in a damaged solid requires
highly refined discretization for accurate prediction, which significantly
increases the computational cost. Although adaptive model refinement can be
employed for enhanced effectiveness, it is cumbersome for the traditional
mesh-based methods to perform while modeling the evolving localizations. In
this work, a neural network-enhanced reproducing kernel particle method
(NN-RKPM) is proposed, in which the location, orientation, and shape of the
solution transition near a localization are automatically captured by the NN
approximation via a block-level neural network optimization. The weights and
biases in the blocked parametrization network control the location and
orientation of the localization. The designed basic four-kernel NN block is
capable of capturing a triple junction or a quadruple junction topological
pattern, while more complicated localization topological patterns are captured
by the superposition of multiple four-kernel NN blocks. The standard RK
approximation is then utilized to approximate the smooth part of the solution,
which permits a much coarser discretization than the high-resolution
discretization needed to capture sharp solution transitions with the
conventional methods. A regularization of the neural network approximation is
additionally introduced for discretization-independent material responses. The
effectiveness of the proposed NN-RKPM is verified through a series of
numerical examples.
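To make the additive decomposition concrete, the following is a minimal, self-contained 1D sketch in Python/NumPy of the idea the abstract describes: a coarse reproducing-kernel (RK) approximation carries the smooth part of the solution, while a small kernel block with a trainable transition location captures the sharp transition. The paper's four-kernel 2D blocks and block-level network optimization are simplified here to a two-kernel softmax block and a plain grid search; every name, kernel choice, and parameter value below is an illustrative assumption, not the paper's implementation.

import numpy as np

def cubic_bspline(z):
    """Cubic B-spline window with support |z| < 1."""
    z = np.abs(z)
    return np.where(z < 0.5, 2/3 - 4*z**2 + 4*z**3,
                    np.where(z < 1.0, 4/3*(1 - z)**3, 0.0))

def rk_shapes(x, nodes, a):
    """1D RK shape functions with a linear basis H(s) = [1, s]^T:
    Psi_I(x) = H(0)^T M(x)^{-1} H(x - x_I) w((x - x_I)/a)."""
    s = x - nodes
    w = cubic_bspline(s / a)
    H = np.stack([np.ones_like(s), s])            # (2, n_nodes)
    M = (H * w) @ H.T                             # 2x2 moment matrix
    b = np.linalg.solve(M, np.array([1.0, 0.0]))  # H(0) = [1, 0]^T
    return (b @ H) * w

def block_kernels(x, center, eps=0.02):
    """Two softmax-normalized kernels forming a partition of unity with a
    sharp transition at 'center'; this parameter plays the role the abstract
    assigns to the parametrization network's weights and biases: it locates
    the solution transition."""
    z = -(((x - center) + np.array([-eps, eps])) / eps) ** 2
    z -= z.max()                                  # stabilize the softmax
    g = np.exp(z)
    return g / g.sum()

def fit_error(xs, u, nodes, a, center=None):
    """Least-squares fit of all coefficients for a fixed transition
    location; returns the RMS residual of the combined approximation."""
    rows = []
    for x in xs:
        phi = rk_shapes(x, nodes, a)
        if center is not None:
            phi = np.concatenate([phi, block_kernels(x, center)])
        rows.append(phi)
    A = np.array(rows)
    c, *_ = np.linalg.lstsq(A, u, rcond=None)
    return np.sqrt(np.mean((A @ c - u) ** 2))

# Smooth field plus a sharp transition at x = 0.53.
xs = np.linspace(0.0, 1.0, 401)
u = np.tanh((xs - 0.53) / 0.01) + 0.3 * np.sin(2 * np.pi * xs)
nodes = np.linspace(0.0, 1.0, 11)                 # deliberately coarse RK grid
a = 0.25                                          # RK support size

# Crude stand-in for the block-level optimization: pick the candidate
# transition location with the smallest fitted residual.
candidates = np.linspace(0.05, 0.95, 181)
center = min(candidates, key=lambda c: fit_error(xs, u, nodes, a, c))

print(f"located transition near x = {center:.3f}")
print(f"RMS error, coarse RK only : {fit_error(xs, u, nodes, a):.3e}")
print(f"RMS error, NN-enhanced RK : {fit_error(xs, u, nodes, a, center):.3e}")

Running this sketch shows the behavior the abstract claims: the deliberately coarse 11-node RK grid cannot resolve the tanh transition on its own, while the same coarse grid plus the optimized two-kernel block recovers it almost exactly, since the enhancement, not the discretization, carries the sharp part of the solution.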
Related papers
- Residual resampling-based physics-informed neural network for neutron diffusion equations [7.105073499157097]
The neutron diffusion equation plays a pivotal role in the analysis of nuclear reactors.
Traditional PINN approaches often utilize a fully connected network (FCN) architecture.
The proposed residual resampling-based PINN (R2-PINN) effectively overcomes the limitations inherent in current methods, providing more accurate and robust solutions for neutron diffusion equations.
arXiv Detail & Related papers (2024-06-23T13:49:31Z) - N-Adaptive Ritz Method: A Neural Network Enriched Partition of Unity for
Boundary Value Problems [1.2200609701777907]
This work introduces a novel neural network-enriched Partition of Unity (NN-PU) approach for solving boundary value problems.
The NN enrichment is constructed by combining pre-trained feature-encoded NN blocks with an untrained NN block.
The proposed method offers accurate solutions while notably reducing the computational cost compared to conventional adaptive refinement in mesh-based methods.
arXiv Detail & Related papers (2024-01-16T18:11:14Z) - A Neural Network-Based Enrichment of Reproducing Kernel Approximation
for Modeling Brittle Fracture [0.0]
An improved version of the neural network-enhanced Reproducing Kernel Particle Method (NN-RKPM) is proposed for modeling brittle fracture.
The effectiveness of the proposed method is demonstrated by a series of numerical examples involving damage propagation and branching.
arXiv Detail & Related papers (2023-07-04T21:52:09Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - Orthogonal Stochastic Configuration Networks with Adaptive Construction
Parameter for Data Analytics [6.940097162264939]
Randomness makes stochastic configuration networks (SCNs) more likely to generate approximately linearly correlated nodes that are redundant and of low quality.
This matters in light of a fundamental principle in machine learning: a model with fewer parameters tends to generalize better.
This paper proposes an orthogonal SCN, termed OSCN, to filter out low-quality hidden nodes and reduce the network structure.
arXiv Detail & Related papers (2022-05-26T07:07:26Z) - De-homogenization using Convolutional Neural Networks [1.0323063834827415]
This paper presents a deep learning-based de-homogenization method for structural compliance minimization.
For an appropriate choice of parameters, the de-homogenized designs perform within 7-25% of the homogenization-based solution.
arXiv Detail & Related papers (2021-05-10T09:50:06Z) - LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks, called LocalDrop, based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z) - Optimizing Mode Connectivity via Neuron Alignment [84.26606622400423]
Empirically, the local minima of loss functions can be connected by a learned curve in model space along which the loss remains nearly constant.
We propose a more general framework to investigate the effect of symmetry on landscape connectivity by accounting for the weight permutations of the networks being connected.
arXiv Detail & Related papers (2020-09-05T02:25:23Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Optimal Rates for Averaged Stochastic Gradient Descent under Neural
Tangent Kernel Regime [50.510421854168065]
We show that averaged stochastic gradient descent can achieve the minimax-optimal convergence rate.
We show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate.
arXiv Detail & Related papers (2020-06-22T14:31:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.