Hysteretic Behavior Simulation Based on Pyramid Neural Network: Principle, Network Architecture, Case Study and Explanation
- URL: http://arxiv.org/abs/2206.03990v2
- Date: Mon, 19 Jun 2023 15:52:11 GMT
- Title: Hysteretic Behavior Simulation Based on Pyramid Neural Network: Principle, Network Architecture, Case Study and Explanation
- Authors: Yongjia Xu, Xinzheng Lu, Yifan Fei, Yuli Huang
- Abstract summary: A surrogate model based on neural networks shows significant potential in balancing efficiency and accuracy.
However, its serial information flow and prediction based on single-level features adversely affect network performance.
A weighted stacked pyramid neural network architecture is therefore proposed herein.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An accurate and efficient simulation of the hysteretic behavior of materials
and components is essential for structural analysis. The surrogate model based
on neural networks shows significant potential in balancing efficiency and
accuracy. However, its serial information flow and prediction based on
single-level features adversely affect the network performance. Therefore, a
weighted stacked pyramid neural network architecture is proposed herein. This
network establishes a pyramid architecture by introducing multi-level shortcuts
to integrate features directly in the output module. In addition, a weighted
stacked strategy is proposed to enhance the conventional feature fusion method.
Subsequently, the redesigned architectures are compared with other commonly
used network architectures. Results show that the redesigned architectures
outperform the alternatives in 87.5% of cases. Meanwhile, the long- and
short-term memory abilities of different basic network architectures are
analyzed through a specially designed experiment, which can offer valuable
guidance for network selection.
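As a concrete illustration of the architecture the abstract describes, the PyTorch sketch below stacks several recurrent levels, exposes every level's features to the output module through shortcuts (the "pyramid"), and fuses them with learnable weights (the "weighted stacked" strategy). The choice of LSTM cells, the softmax-normalized fusion weights, and all dimensions are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class WeightedStackedPyramid(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=32, num_levels=3, output_dim=1):
        super().__init__()
        # A stack of recurrent levels; every level's output is kept so that
        # multi-level shortcuts can feed the output module directly.
        self.levels = nn.ModuleList(
            nn.LSTM(input_dim if i == 0 else hidden_dim, hidden_dim,
                    batch_first=True)
            for i in range(num_levels)
        )
        # One learnable fusion weight per level ("weighted stacked" fusion).
        self.fusion_weights = nn.Parameter(torch.ones(num_levels))
        self.head = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # x: (batch, time, input_dim), e.g. a displacement/strain history.
        features, h = [], x
        for lstm in self.levels:
            h, _ = lstm(h)
            features.append(h)  # shortcut from this level to the output
        w = torch.softmax(self.fusion_weights, dim=0)
        fused = sum(wi * fi for wi, fi in zip(w, features))
        return self.head(fused)  # e.g. predicted restoring-force history

model = WeightedStackedPyramid()
force = model(torch.randn(8, 100, 1))  # 8 sequences, 100 time steps each
```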
Related papers
- Neuromorphic on-chip reservoir computing with spiking neural network architectures (arXiv, 2024-07-30)
Reservoir computing is a promising approach for harnessing the computational power of recurrent neural networks.
This paper investigates the application of integrate-and-fire neurons within reservoir computing frameworks for two distinct tasks.
We study reservoir computing performance using a custom integrate-and-fire implementation, Intel's Lava neuromorphic computing framework, and an on-chip deployment on Loihi.
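As a hedged illustration of the integrate-and-fire reservoir idea (independent of Lava/Loihi, whose APIs are not shown here), the NumPy toy below drives a random leaky integrate-and-fire reservoir with a signal and trains only a ridge-regression readout. Every constant and the random topology are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 500                                # neurons, time steps
W = rng.normal(0.0, 0.5 / np.sqrt(N), (N, N))  # random recurrent weights
w_in = rng.normal(0.0, 1.0, N)                 # input weights
u = np.sin(np.linspace(0, 8 * np.pi, T))       # example input signal

v = np.zeros(N)          # membrane potentials
spikes = np.zeros(N)     # spikes from the previous step
tau, v_th = 0.9, 1.0     # leak factor and firing threshold
states = np.zeros((T, N))
for t in range(T):
    v = tau * v + W @ spikes + w_in * u[t]     # leaky integration
    spikes = (v >= v_th).astype(float)         # fire ...
    v = np.where(spikes > 0, 0.0, v)           # ... and reset
    states[t] = spikes

# Standard reservoir computing: only a linear readout is trained, here by
# ridge regression, to predict the input one step ahead.
X, y = states[:-1], u[1:]
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ y)
prediction = X @ w_out
```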
- Mechanistic Design and Scaling of Hybrid Architectures (arXiv, 2024-03-26)
We identify and test new hybrid architectures constructed from a variety of computational primitives.
We experimentally validate the resulting architectures via an extensive compute-optimal and a new state-optimal scaling law analysis.
We find MAD (mechanistic architecture design) synthetics to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures.
- Principled Architecture-aware Scaling of Hyperparameters (arXiv, 2024-02-27)
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can easily change when the networks are trained with better-suited hyperparameters.
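The snippet below sketches one generic architecture-aware heuristic in this spirit: fan-in-scaled initialization and per-layer learning rates that shrink with width. It is a rule-of-thumb sketch, not the paper's actual prescription; `init_and_lr`, `base_width`, and all constants are made up for illustration.

```python
import math
import torch
import torch.nn as nn

def init_and_lr(model: nn.Module, base_lr: float = 1e-3, base_width: int = 64):
    """Hypothetical helper: fan-in-scaled init, width-scaled per-layer LR."""
    param_groups = []
    for layer in model.modules():
        if isinstance(layer, nn.Linear):
            fan_in = layer.in_features
            # Initialize with std ~ 1/sqrt(fan_in) so activations stay O(1).
            nn.init.normal_(layer.weight, std=1.0 / math.sqrt(fan_in))
            nn.init.zeros_(layer.bias)
            # Heuristic: wider layers get proportionally smaller learning rates.
            param_groups.append(
                {"params": layer.parameters(),
                 "lr": base_lr * base_width / fan_in}
            )
    return torch.optim.SGD(param_groups, lr=base_lr)

net = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = init_and_lr(net)  # per-layer LRs: 1e-3 and 1e-3 * 64 / 256
```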
- Set-based Neural Network Encoding Without Weight Tying (arXiv, 2023-05-26)
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks drawn from a model zoo of mixed architectures.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
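To illustrate the flavor of set-based weight encoding (a hypothetical reduction, not the paper's actual encoder): flatten each network's parameters into an unordered set of per-weight tokens and apply a permutation-invariant pooling, so networks of different architectures map to codes of the same size. `encode_network` and the Deep Sets-style mean pooling below are assumptions.

```python
import torch
import torch.nn as nn

def encode_network(model: nn.Module, embed: nn.Linear) -> torch.Tensor:
    """Permutation-invariant (Deep Sets-style) encoding of all weights."""
    tokens = torch.cat([p.detach().flatten() for p in model.parameters()])
    feats = embed(tokens.unsqueeze(-1))  # one embedding per scalar weight
    return feats.mean(dim=0)             # mean-pool over the weight set

embed = nn.Linear(1, 32)
# Two models with different architectures map to same-size codes:
code_a = encode_network(nn.Linear(10, 5), embed)
code_b = encode_network(nn.Sequential(nn.Linear(3, 7), nn.Linear(7, 2)), embed)
assert code_a.shape == code_b.shape == (32,)
```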
- Analyze and Design Network Architectures by Recursion Formulas (arXiv, 2021-08-18)
This work attempts to find an effective way to design new network architectures.
It is discovered that the main differences between network architectures are reflected in their recursion formulas.
A case study is provided to generate an improved architecture based on ResNet.
Extensive experiments on CIFAR and ImageNet demonstrate significant performance improvements.
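As a small illustration of the recursion-formula view of architectures, the sketch below contrasts the well-known ResNet and DenseNet update rules; these two recursions are stand-ins chosen here, not the paper's derivations.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(16, 16), nn.ReLU())  # stand-in block F
x = torch.randn(4, 16)

# ResNet-style recursion:   x_{k+1} = x_k + F(x_k)
resnet_step = x + f(x)
# DenseNet-style recursion: x_{k+1} = concat(x_k, F(x_k))
densenet_step = torch.cat([x, f(x)], dim=-1)
```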
- An End to End Network Architecture for Fundamental Matrix Estimation (arXiv, 2020-10-29)
We present a novel end-to-end network architecture that estimates the fundamental matrix directly from stereo images.
Separate deep neural networks, responsible for finding correspondences in images, performing outlier rejection, and calculating the fundamental matrix, are integrated into a single end-to-end architecture.
- Adversarially Robust Neural Architectures (arXiv, 2020-09-02)
This paper aims to improve the adversarial robustness of networks from the architecture perspective, using a neural architecture search (NAS) framework.
We explore the relationship among adversarial robustness, Lipschitz constant, and architecture parameters.
Our algorithm empirically achieves the best performance among all the models under various attacks on different datasets.
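One standard way to relate Lipschitz constants to architecture parameters is to upper-bound a feed-forward network's Lipschitz constant by the product of its layers' spectral norms; the sketch below shows that generic bound, which is not necessarily the objective used in the paper.

```python
import torch
import torch.nn as nn

def lipschitz_upper_bound(model: nn.Sequential) -> float:
    """Product of spectral norms; valid when activations are 1-Lipschitz
    (e.g. ReLU), since composition multiplies Lipschitz constants."""
    bound = 1.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            # Largest singular value = spectral norm of the weight matrix.
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
print(lipschitz_upper_bound(net))
```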
- Learning Connectivity of Neural Networks from a Topological Perspective (arXiv, 2020-08-19)
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
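A minimal sketch of this differentiable-connectivity idea: each edge of a small complete DAG gets a learnable gate, so the strength of every connection is trained by ordinary backpropagation. The node operations, sigmoid gating, and dimensions below are placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    """Complete DAG over `num_nodes` nodes with one learnable gate per edge."""
    def __init__(self, num_nodes: int = 4, dim: int = 16):
        super().__init__()
        self.num_nodes = num_nodes
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_nodes))
        # edge_logits[i, j] gates the connection from node i to node j (i < j).
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, x):
        nodes = [torch.relu(self.ops[0](x))]
        for j in range(1, self.num_nodes):
            gates = torch.sigmoid(self.edge_logits[:j, j])  # edges i -> j
            agg = sum(g * h for g, h in zip(gates, nodes))  # weighted fan-in
            nodes.append(torch.relu(self.ops[j](agg)))
        return nodes[-1]

net = LearnableConnectivity()
out = net(torch.randn(8, 16))  # edge gates are learned jointly with weights
```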
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures (arXiv, 2020-06-29)
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
- A Semi-Supervised Assessor of Neural Architectures (arXiv, 2020-05-14)
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
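A compact sketch of a graph-convolutional performance predictor of this kind, with an adjacency matrix standing in for an architecture's DAG: the propagation rule is the standard GCN layer with normalization omitted, and `GCNAssessor` and all sizes are placeholders rather than the paper's exact predictor.

```python
import torch
import torch.nn as nn

class GCNAssessor(nn.Module):
    """Two GCN layers over an architecture graph, then a performance head."""
    def __init__(self, in_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, adj, feats):
        # adj: (nodes, nodes) adjacency with self-loops; feats: (nodes, in_dim)
        h = torch.relu(adj @ self.w1(feats))  # propagate, then transform
        h = torch.relu(adj @ self.w2(h))
        return self.head(h.mean(dim=0))       # graph-level accuracy estimate

nodes = 6
adj = torch.eye(nodes) + (torch.rand(nodes, nodes) > 0.7).float()
pred = GCNAssessor()(adj, torch.randn(nodes, 8))
```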
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.