A Deep Collocation Method for the Bending Analysis of Kirchhoff Plate
- URL: http://arxiv.org/abs/2102.02617v1
- Date: Thu, 4 Feb 2021 14:01:05 GMT
- Title: A Deep Collocation Method for the Bending Analysis of Kirchhoff Plate
- Authors: Hongwei Guo, Xiaoying Zhuang, Timon Rabczuk
- Abstract summary: In this paper, a deep collocation method (DCM) for thin plate bending problems is proposed.
This method takes advantage of computational graphs and backpropagation algorithms involved in deep learning.
The proposed DCM uses a deep neural network to approximate the continuous deflection, and is shown to be suitable for the bending analysis of Kirchhoff plates of various geometries.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a deep collocation method (DCM) for thin plate bending
problems is proposed. This method takes advantage of computational graphs and
backpropagation algorithms involved in deep learning. Moreover, the proposed DCM
is based on a feedforward deep neural network (DNN) and differs from most
previous applications of deep learning to mechanical problems. First, batches
of randomly distributed collocation points are initially generated inside the
domain and along the boundaries. A loss function is built so that the residuals
of the governing partial differential equations (PDEs) of Kirchhoff plate
bending problems and of the boundary/initial conditions are minimised at those
collocation points. A combination of optimizers is adopted in the
backpropagation process to minimize the loss function so as to obtain the
optimal hyperparameters. In Kirchhoff plate bending problems, the C1 continuity
requirement poses significant difficulties in traditional mesh-based methods.
The proposed DCM avoids this difficulty, as it uses a deep neural network to
approximate the continuous transversal deflection, and is proved to be suitable
for the bending analysis of Kirchhoff plates of various geometries.
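The recipe above maps almost line-for-line onto a physics-informed training loop. The following is a minimal sketch, assuming PyTorch (the abstract does not prescribe a framework) and an illustrative unit-square plate with simply supported edges, unit flexural rigidity D and unit load q; these specific choices are assumptions, not taken from the paper:

```python
import torch

torch.manual_seed(0)

# Feedforward DNN approximating the continuous transversal deflection w(x, y).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)

D, q = 1.0, 1.0  # illustrative flexural rigidity and uniform load (assumptions)

def grad(u, x):
    # du/dx with the graph kept, so higher-order derivatives can be taken.
    return torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]

def deflection_and_laplacian(xy):
    w = net(xy)
    g = grad(w, xy)                              # [w_x, w_y]
    w_xx = grad(g[:, 0:1], xy)[:, 0:1]
    w_yy = grad(g[:, 1:2], xy)[:, 1:2]
    return w, w_xx + w_yy

def pde_residual(xy):
    # Kirchhoff plate equation D * biharmonic(w) = q, as nested Laplacians.
    _, lap = deflection_and_laplacian(xy)
    gl = grad(lap, xy)
    lap_xx = grad(gl[:, 0:1], xy)[:, 0:1]
    lap_yy = grad(gl[:, 1:2], xy)[:, 1:2]
    return D * (lap_xx + lap_yy) - q

# Randomly distributed collocation points inside the unit square and on its edges.
x_in = torch.rand(1000, 2, requires_grad=True)
t = torch.rand(250, 1)
z0, z1 = torch.zeros_like(t), torch.ones_like(t)
x_bc = torch.cat([torch.cat([t, z0], 1), torch.cat([t, z1], 1),
                  torch.cat([z0, t], 1), torch.cat([z1, t], 1)]).requires_grad_(True)

def loss_fn():
    res = pde_residual(x_in)
    w_b, lap_b = deflection_and_laplacian(x_bc)  # simply supported: w = 0, lap(w) = 0
    return (res ** 2).mean() + (w_b ** 2).mean() + (lap_b ** 2).mean()

# "Combination of optimizers": Adam for a rough fit, then L-BFGS to refine.
adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    adam.zero_grad()
    loss_fn().backward()
    adam.step()

lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=500)
def closure():
    lbfgs.zero_grad()
    loss = loss_fn()
    loss.backward()
    return loss
lbfgs.step(closure)
```

The Adam-then-L-BFGS sequence stands in for the "combination of optimizers" mentioned in the abstract; the boundary condition w = 0, lap(w) = 0 is the standard Navier form of simple support for rectangular plates.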
Related papers
- A Nonoverlapping Domain Decomposition Method for Extreme Learning Machines: Elliptic Problems [0.0]
Extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) using a single hidden layer feed-forward neural network.
In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs, but is also suitable for parallel computation.
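As a concrete illustration of the ELM building block (the nonoverlapping DDM would run one such least-squares solve per subdomain, coupled through interface conditions), here is a minimal NumPy sketch for a toy 1-D Poisson problem -u'' = f with homogeneous Dirichlet conditions; the toy problem, network width and point counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 100

# Random, fixed hidden-layer parameters: in an ELM only the linear output
# layer is trained, by solving a least-squares problem at collocation points.
w = rng.normal(scale=4.0, size=n_hidden)
b = rng.normal(scale=4.0, size=n_hidden)

x = np.linspace(0.0, 1.0, 200)[:, None]       # collocation points
f = np.pi ** 2 * np.sin(np.pi * x)            # -u'' = f, exact u = sin(pi x)

tau = np.tanh(w * x + b)                      # hidden activations, (200, n_hidden)
# d^2/dx^2 tanh(w x + b) = w^2 * (-2 tau (1 - tau^2)).
phi_xx = w ** 2 * (-2.0 * tau * (1.0 - tau ** 2))

# Stack PDE rows (-u'' = f) and Dirichlet rows (u(0) = u(1) = 0), then solve.
A = np.vstack([-phi_xx, np.tanh(b)[None, :], np.tanh(w + b)[None, :]])
rhs = np.vstack([f, [[0.0]], [[0.0]]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = tau @ c                                   # ELM solution at the collocation points
print(np.abs(u - np.sin(np.pi * x)).max())    # approximation error
```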
arXiv Detail & Related papers (2024-06-22T23:25:54Z)
- Deep Backward and Galerkin Methods for the Finite State Master Equation [12.570464662548787]
This paper proposes and analyzes two neural network methods to solve the master equation for finite-state mean field games.
We prove two types of results: there exist neural networks that make the algorithms' loss functions arbitrarily small, and conversely, if the losses are small, then the neural networks are good approximations of the master equation's solution.
arXiv Detail & Related papers (2024-03-08T01:12:11Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
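A minimal sketch of the interpolation scheme, on the classic bilinear min-max toy problem f(x, y) = x * y, where plain simultaneous gradient descent-ascent spirals away from the equilibrium; the step size, inner-step count and interpolation weight are illustrative, not taken from the paper:

```python
def gda_step(x, y, lr=0.1):
    # Simultaneous gradient descent-ascent on f(x, y) = x * y, the classic
    # toy min-max problem on which plain GDA spirals away from (0, 0).
    return x - lr * y, y + lr * x

x, y, alpha = 1.0, 1.0, 0.5
for _ in range(200):
    xi, yi = x, y
    for _ in range(5):                 # a few steps of the base optimizer...
        xi, yi = gda_step(xi, yi)
    # ...then interpolate linearly back toward the previous iterate.
    x, y = x + alpha * (xi - x), y + alpha * (yi - y)

print(x, y)                            # approaches (0, 0) instead of diverging
```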
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Good Lattice Training: Physics-Informed Neural Networks Accelerated by Number Theory [7.462336024223669]
We propose a new technique called good lattice training (GLT) for PINNs.
GLT offers a set of collocation points that are effective even with a small number of points and for multi-dimensional spaces.
Our experiments demonstrate that GLT requires 2--20 times fewer collocation points than uniformly random sampling or Latin hypercube sampling.
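For intuition, a rank-1 (good) lattice point set is cheap to generate. The sketch below uses the classic Fibonacci pair (n, z) = (1597, (1, 987)) for two dimensions, which may differ from the generating vectors chosen in the GLT paper:

```python
import numpy as np

def good_lattice_points(n, gen):
    # Rank-1 lattice: the i-th point is frac(i * gen / n), i = 0 .. n-1.
    i = np.arange(n)[:, None]
    return (i * np.asarray(gen)[None, :] % n) / n

# Consecutive Fibonacci numbers (1597, 987) give a classic 2-D good lattice.
pts = good_lattice_points(1597, (1, 987))
# 'pts' would replace uniformly random collocation points in a PINN loss.
```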
arXiv Detail & Related papers (2023-07-26T00:01:21Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
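A simplified sketch of this reduced-order pipeline follows. Here the POD modes are kept as fixed vectors rather than regressed onto neural networks, toy sine snapshots stand in for Euler-equation solutions, and a small coefficient network plays the role of the branch network; all of these simplifications are assumptions for illustration:

```python
import numpy as np
import torch

# Placeholder snapshot matrix: one solution column per PDE parameter value.
grid = np.linspace(0.0, np.pi, 128)
params = np.linspace(0.5, 2.0, 40)
snapshots = np.stack([np.sin(p * grid) for p in params], axis=1)   # (128, 40)

# POD basis = leading left singular vectors of the snapshot matrix.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = torch.tensor(U[:, :5], dtype=torch.float32)               # keep 5 modes

# Small network mapping the PDE parameter to the 5 reduced coefficients.
coef_net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 5))

p = torch.tensor(params, dtype=torch.float32)[:, None]            # (40, 1)
target = torch.tensor(U[:, :5].T @ snapshots, dtype=torch.float32).T  # (40, 5)

opt = torch.optim.Adam(coef_net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((coef_net(p) - target) ** 2).mean()
    loss.backward()
    opt.step()

# Reduced-order approximation of the field for an unseen parameter.
u_hat = basis @ coef_net(torch.tensor([[1.3]])).T                  # (128, 1)
```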
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- A coarse space acceleration of deep-DDM [0.0]
We present an extension of the recently proposed deep-DDM approach to solving PDEs.
We show that a coarse space correction is able to alleviate the deterioration of the convergence of the solver.
Experimental results demonstrate that our approach induces a remarkable acceleration of the deep-DDM method.
arXiv Detail & Related papers (2021-12-07T14:41:28Z)
- Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z)
- Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph networks.
We propose two restoration methods based on deep algorithm unrolling of the alternating direction method of multipliers (ADMM).
The parameters in the proposed restoration methods are trainable in an end-to-end manner.
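A single-level sketch of the unrolling idea (the nesting in the paper, an inner unrolled solver inside each outer iteration, is omitted here): ADMM for graph total-variation denoising with a fixed number of iterations treated as layers and per-layer parameters trained end-to-end. The objective, layer count and initial values are illustrative assumptions:

```python
import torch

class UnrolledADMM(torch.nn.Module):
    # ADMM for graph total-variation denoising,
    #   min_x 0.5 * ||x - y||^2 + lam * ||D x||_1,
    # with a fixed number of iterations treated as network layers and the
    # per-layer (lam, rho) learned end-to-end.
    def __init__(self, D, n_layers=10):
        super().__init__()
        self.D = D  # graph incidence matrix, shape (n_edges, n_nodes)
        self.lam = torch.nn.Parameter(torch.full((n_layers,), 0.1))
        self.rho = torch.nn.Parameter(torch.full((n_layers,), 1.0))

    def forward(self, y):
        m, n = self.D.shape
        x, z, u = y.clone(), torch.zeros(m), torch.zeros(m)
        I = torch.eye(n)
        for lam, rho in zip(self.lam, self.rho):
            # x-update: solve the quadratic subproblem.
            A = I + rho * self.D.T @ self.D
            x = torch.linalg.solve(A, y + rho * self.D.T @ (z - u))
            # z-update: soft-thresholding (proximal map of the l1 norm).
            v = self.D @ x + u
            z = torch.sign(v) * torch.clamp(v.abs() - lam / rho, min=0.0)
            # Dual update.
            u = u + self.D @ x - z
        return x
```

Training would then minimize, for example, the mean squared error between model(y) and clean signals over a dataset of noisy/clean pairs.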
arXiv Detail & Related papers (2021-06-30T08:57:01Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points efficiently.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
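To illustrate that insight, a minimal sketch of perturbed gradient descent on a toy function with a strict saddle; the function, step size and perturbation radius are illustrative, not taken from the paper:

```python
import numpy as np

def perturbed_gd(grad, x, lr=0.05, radius=0.1, tol=1e-4, steps=5000, seed=0):
    # Plain gradient descent, except that a small random perturbation is
    # injected whenever the gradient is tiny (a candidate saddle point).
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            x = x + rng.uniform(-radius, radius, size=x.shape)
        else:
            x = x - lr * g
    return x

# f(x) = (x0^2 - 1)^2 + x1^2 has a strict saddle at the origin; unperturbed
# descent started exactly there never moves, while perturbed descent escapes
# toward one of the minima at (+/-1, 0).
grad_f = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])
print(perturbed_gd(grad_f, np.zeros(2)))
```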