Dynamic Analysis of Nonlinear Civil Engineering Structures using
Artificial Neural Network with Adaptive Training
- URL: http://arxiv.org/abs/2111.13759v1
- Date: Sun, 21 Nov 2021 21:14:48 GMT
- Title: Dynamic Analysis of Nonlinear Civil Engineering Structures using
Artificial Neural Network with Adaptive Training
- Authors: Xiao Pan, Zhizhao Wen, T.Y. Yang
- Abstract summary: In this study, artificial neural networks are developed with adaptive training algorithms.
The networks successfully predict the time-history response of the shear frame and the rocking structure to real ground motion records.
- Score: 2.1202971527014287
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dynamic analysis of structures subjected to earthquake excitation is a
time-consuming process, particularly when extremely small time steps are
required or when high geometric and material nonlinearity is present.
Performing parametric studies in such cases is even more tedious. The
advancement of computer graphics hardware in recent years enables efficient
training of artificial neural networks, which are well known to be capable of
learning highly nonlinear mappings. In this study, artificial neural networks
are developed with adaptive training algorithms that enable automatic node
generation and layer addition. The hyperbolic tangent function is selected as
the activation function, and the Stochastic Gradient Descent and Back
Propagation algorithms are adopted to train the networks. Each neural network
starts with a small number of hidden layers and nodes. During training, the
performance of the network is continuously tracked, and new nodes or layers
are added to the hidden layers whenever the network reaches its capacity. At
the end of the training process, a network with an appropriate architecture
is formed automatically. The performance of the networks has been validated
for inelastic shear frames as well as rocking structures, both of which are
first built in a finite element program for dynamic analysis to generate
training data. Results show that the developed networks can successfully
predict the time-history response of the shear frame and the rocking
structure subjected to real ground motion records. The efficiency of the
proposed neural networks is also examined: the neural network method reduces
computational time by 43% compared with the FE models. This indicates that
the trained networks can be used to generate rocking spectra, which demand a
large number of time-history analyses, more efficiently.
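The growth procedure described in the abstract can be sketched in a few lines. The following Python/PyTorch snippet is an illustration only, not the authors' code: the stall-based growth criterion and the hyperparameters (patience, grow_nodes, max_width) are assumptions, and for brevity the grown network is re-initialized rather than inheriting the trained weights.

```python
import torch
import torch.nn as nn

def build_mlp(sizes):
    """Fully connected network with tanh activations; sizes = [in, h1, ..., out]."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:          # no activation after the output layer
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

def adaptive_train(x, y, x_val, y_val, steps=5000, batch=32,
                   patience=200, grow_nodes=4, max_width=64):
    sizes = [x.shape[1], 8, y.shape[1]]            # start with a small network
    net, loss_fn = build_mlp(sizes), nn.MSELoss()
    opt = torch.optim.SGD(net.parameters(), lr=1e-2)
    best, stall = float("inf"), 0
    for step in range(steps):
        idx = torch.randint(0, x.shape[0], (batch,))   # mini-batch for SGD
        opt.zero_grad()
        loss = loss_fn(net(x[idx]), y[idx])
        loss.backward()                                # back propagation
        opt.step()
        with torch.no_grad():                          # track performance
            val = loss_fn(net(x_val), y_val).item()
        if val < best - 1e-6:
            best, stall = val, 0
        else:
            stall += 1
        if stall >= patience:                          # capacity reached: grow
            if sizes[-2] < max_width:
                sizes[-2] += grow_nodes                # add nodes to hidden layer
            else:
                sizes.insert(-1, 8)                    # add a new hidden layer
            net = build_mlp(sizes)                     # re-initialize grown net
            opt = torch.optim.SGD(net.parameters(), lr=1e-2)
            best, stall = float("inf"), 0
    return net, sizes
```

In practice the existing weights would be transplanted into the enlarged layers so that training resumes where it left off; the re-initialization here only keeps the sketch short.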
Related papers
- Peer-to-Peer Learning Dynamics of Wide Neural Networks [10.179711440042123]
We provide an explicit, non-asymptotic characterization of the learning dynamics of wide neural networks trained using popular DGD algorithms.
We validate our analytical results by accurately predicting the error for classification tasks.
arXiv Detail & Related papers (2024-09-23T17:57:58Z) - Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Dynamical stability and chaos in artificial neural network trajectories along training [3.379574469735166]
We study the dynamical properties of this process by analyzing, through this lens, the network trajectories of a shallow neural network.
We find hints of regular and chaotic behavior depending on the learning rate regime.
This work also contributes to the cross-fertilization of ideas between dynamical systems theory, network theory and machine learning.
arXiv Detail & Related papers (2024-04-08T17:33:11Z) - Understanding Activation Patterns in Artificial Neural Networks by
Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Efficient Neural Architecture Search with Performance Prediction [0.0]
We use neural architecture search (NAS) to find the best network architecture for the task at hand.
Existing NAS algorithms generally evaluate the fitness of a new architecture by fully training from scratch.
An end-to-end offline performance predictor is proposed to accelerate the evaluation of sampled architectures.
arXiv Detail & Related papers (2021-08-04T05:44:16Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - Firefly Neural Architecture Descent: a General Approach for Growing
Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z) - Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.