Time-Continuous Energy-Conservation Neural Network for Structural
Dynamics Analysis
- URL: http://arxiv.org/abs/2012.14334v1
- Date: Wed, 16 Dec 2020 01:00:56 GMT
- Title: Time-Continuous Energy-Conservation Neural Network for Structural
Dynamics Analysis
- Authors: Yuan Feng, Hexiang Wang, Han Yang, Fangbo Wang
- Abstract summary: A new family of energy-conservation neural networks is introduced that respects physical laws.
The proposed model uses the system energy as the last layer of the neural network.
As a case study, a 3-story building earthquake simulation is conducted with realistic earthquake records.
- Score: 2.0952223808496164
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fast and accurate structural dynamics analysis is important for structural
design and damage assessment. Structural dynamics analysis leveraging machine
learning techniques has become a popular research focus in recent years.
Although the basic neural network provides an alternative approach for
structural dynamics analysis, the lack of physics law inside the neural network
limits the model accuracy and fidelity. In this paper, a new family of
energy-conservation neural networks is introduced that respects physical
laws. The neural network is explored from a fundamental
single-degree-of-freedom system to a complicated multiple-degrees-of-freedom
system. The damping force and external forces are also considered step by step.
To improve the parallelization of the algorithm, the derivatives of the
structural states are parameterized with the novel energy-conservation neural
network instead of specifying the discrete sequence of structural states. The
proposed model uses the system energy as the last layer of the neural network
and leverages the underlying automatic differentiation graph to incorporate the
system energy naturally, which ultimately improves the accuracy and long-term
stability of structural dynamics response calculation under an earthquake
impact. The trade-off between computation accuracy and speed is discussed. As a
case study, a 3-story building earthquake simulation is conducted with
realistic earthquake records.
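The core idea described in the abstract, using the system energy as the network's final output and deriving the equations of motion from its gradients, can be sketched for a single-degree-of-freedom system. The sketch below is illustrative only and is not the paper's implementation: it replaces the trained energy network with a fixed analytic energy function, replaces automatic differentiation with central finite differences, and assumes the mass, stiffness, and function names.

```python
import numpy as np

# Hypothetical stand-in for the paper's energy network: a fixed analytic
# energy for a single-degree-of-freedom (SDOF) oscillator. In the paper,
# H(q, p) would be the output (last layer) of a trained neural network and
# its gradients would come from the automatic differentiation graph; here
# central finite differences serve purely to illustrate the idea.
m, k = 1.0, 4.0  # assumed mass and stiffness (not from the paper)

def energy(q, p):
    """Total system energy: kinetic + elastic potential."""
    return 0.5 * p**2 / m + 0.5 * k * q**2

def grad_energy(q, p, eps=1e-6):
    """Central-difference stand-in for autodiff gradients dH/dq, dH/dp."""
    dHdq = (energy(q + eps, p) - energy(q - eps, p)) / (2 * eps)
    dHdp = (energy(q, p + eps) - energy(q, p - eps)) / (2 * eps)
    return dHdq, dHdp

def step(q, p, dt):
    """One symplectic Euler step driven by Hamilton's equations:
    dq/dt = dH/dp, dp/dt = -dH/dq."""
    dHdq, _ = grad_energy(q, p)
    p = p - dt * dHdq            # update momentum first (symplectic)
    _, dHdp = grad_energy(q, p)
    q = q + dt * dHdp
    return q, p

q, p = 1.0, 0.0                  # initial displacement and momentum
E0 = energy(q, p)
for _ in range(10_000):
    q, p = step(q, p, 1e-3)
drift = abs(energy(q, p) - E0) / E0
print(f"relative energy drift after 10 s: {drift:.2e}")
```

Because the dynamics are generated from the energy rather than fit to a discrete state sequence, the integrator's energy error stays bounded over long horizons, which mirrors the long-term stability the abstract claims for the energy-as-last-layer design.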
Related papers
- Physics Informed Recurrent Neural Networks for Seismic Response
Evaluation of Nonlinear Systems [0.0]
This paper proposes a novel approach for evaluating the dynamic response of multi-degree-of-freedom (MDOF) systems.
The focus of this paper is to evaluate the seismic (earthquake) response of nonlinear structures.
The predicted response will be compared to state-of-the-art methods such as FEA to assess the efficacy of the physics-informed RNN model.
arXiv Detail & Related papers (2023-08-16T20:06:41Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Dynamic Analysis of Nonlinear Civil Engineering Structures using
Artificial Neural Network with Adaptive Training [2.1202971527014287]
In this study, artificial neural networks are developed with adaptive training algorithms.
The networks can successfully predict the time-history response of the shear frame and the rock structure to real ground motion records.
arXiv Detail & Related papers (2021-11-21T21:14:48Z) - Constructing Neural Network-Based Models for Simulating Dynamical
Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to their global approximation, physics informed neural networks have difficulty resolving localized effects and strongly nonlinear solutions via optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT-scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z) - Explainable artificial intelligence for mechanics: physics-informing
neural networks for constitutive models [0.0]
In mechanics, the new and active field of physics-informed neural networks attempts to mitigate this disadvantage by designing deep neural networks on the basis of mechanical knowledge.
We propose a first step towards a physics-informing approach, which explains neural networks trained on mechanical data a posteriori.
Therein, the principal component analysis decorrelates the distributed representations in cell states of RNNs and allows the comparison to known and fundamental functions.
arXiv Detail & Related papers (2021-04-20T18:38:52Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - Learning the ground state of a non-stoquastic quantum Hamiltonian in a
rugged neural network landscape [0.0]
We investigate a class of universal variational wave-functions based on artificial neural networks.
In particular, we show that in the present setup the neural network expressivity and Monte Carlo sampling are not primary limiting factors.
arXiv Detail & Related papers (2020-11-23T05:25:47Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - Geometry Perspective Of Estimating Learning Capability Of Neural
Networks [0.0]
The paper considers a broad class of neural networks with generalized architecture performing simple least-squares regression with stochastic gradient descent (SGD).
The relationship between the generalization capability and the stability of the neural network has also been discussed.
By correlating the principles of high-energy physics with the learning theory of neural networks, the paper establishes a variant of the Complexity-Action conjecture from an artificial neural network perspective.
arXiv Detail & Related papers (2020-11-03T12:03:19Z) - Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as their stability and performance advantages over Euclidean counterparts.
arXiv Detail & Related papers (2020-06-15T08:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.