ModeConv: A Novel Convolution for Distinguishing Anomalous and Normal Structural Behavior
- URL: http://arxiv.org/abs/2407.00140v1
- Date: Fri, 28 Jun 2024 14:46:17 GMT
- Title: ModeConv: A Novel Convolution for Distinguishing Anomalous and Normal Structural Behavior
- Authors: Melanie Schaller, Daniel Schlör, Andreas Hotho
- Abstract summary: Eigenmodes provide insights into structural dynamics and deviations from expected states.
We propose ModeConv to automatically capture and analyze changes in eigenmodes.
ModeConv demonstrates computational efficiency improvements, resulting in reduced runtime for model calculations.
- Score: 2.6236811900685706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: External influences such as traffic and environmental factors induce vibrations in structures, leading to material degradation over time. These vibrations result in cracks due to the material's lack of plasticity, compromising structural integrity. Detecting such damage requires the installation of vibration sensors to capture the internal dynamics. However, distinguishing relevant eigenmodes from external noise necessitates the use of Deep Learning models. The detection of changes in eigenmodes can be used to anticipate such shifts in material properties and to discern between normal and anomalous structural behavior. Eigenmodes, representing characteristic vibration patterns, provide insights into structural dynamics and deviations from expected states. Thus, we propose ModeConv to automatically capture and analyze changes in eigenmodes, facilitating effective anomaly detection in structures and material properties. In the conducted experiments, ModeConv demonstrates computational efficiency improvements, resulting in reduced runtime for model calculations. The novel ModeConv neural network layer is tailored for temporal graph neural networks, in which every node represents one sensor. ModeConv employs a singular value decomposition based convolutional filter design for complex numbers and leverages modal transformation in lieu of Fourier or Laplace transformations in spectral graph convolutions. We include a mathematical complexity analysis illustrating the runtime reduction.
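The abstract describes the mechanism only at a high level. Below is a minimal sketch of a spectral graph convolution that filters sensor signals in a modal (eigenmode) basis rather than a Fourier basis, which is the general idea behind ModeConv; the function name `modal_graph_conv`, the plain real-valued eigendecomposition, and the per-mode filter `theta` are illustrative assumptions, not the authors' SVD-based complex filter implementation.

```python
# Simplified sketch (not the authors' implementation): a spectral graph
# convolution that filters node signals in a modal basis obtained from the
# graph Laplacian's eigendecomposition, loosely mirroring the idea of using
# modal transforms instead of Fourier/Laplace transforms.
import torch

def modal_graph_conv(x, adj, theta):
    """x: (num_nodes, in_feats) sensor features, adj: (N, N) adjacency,
    theta: (N,) learnable spectral filter coefficients (one per mode)."""
    deg = torch.diag(adj.sum(dim=1))
    laplacian = deg - adj                          # combinatorial graph Laplacian
    eigvals, modes = torch.linalg.eigh(laplacian)  # modal decomposition (symmetric L)
    x_modal = modes.T @ x                          # project signals onto eigenmodes
    x_filtered = theta.unsqueeze(1) * x_modal      # per-mode spectral filtering
    return modes @ x_filtered                      # back to the node (sensor) domain

# Toy usage: 4 sensors on a line graph, 2 features each.
adj = torch.tensor([[0., 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
x = torch.randn(4, 2)
theta = torch.nn.Parameter(torch.ones(4))
out = modal_graph_conv(x, adj, theta)
print(out.shape)  # torch.Size([4, 2])
```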
Related papers
- Spectral Normalization and Voigt-Reuss net: A universal approach to microstructure-property forecasting with physical guarantees [0.0]
A crucial step in the design process is the rapid evaluation of effective mechanical, thermal, or, in general, elasticity properties.
The classical simulation-based approach, which uses, e.g., finite elements and FFT-based solvers, can require substantial computational resources.
We propose a novel spectral normalization scheme that a priori enforces these bounds.
arXiv Detail & Related papers (2025-04-01T12:21:57Z)
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Efficient dynamic modal load reconstruction using physics-informed Gaussian processes based on frequency-sparse Fourier basis functions [0.0]
This paper presents an efficient dynamic load reconstruction method using physics-informed Gaussian processes (GPs).
The GP's covariance matrices are built using the description of the system dynamics, and the model is trained using structural response measurements.
The developed model holds potential for applications in structural health monitoring, damage prognosis, and load model validation.
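As a rough illustration of building a GP covariance from a handful of Fourier basis functions, here is a minimal numpy sketch; the chosen frequencies, prior variances, and the plain Bayesian linear-model form are assumptions, not the paper's physics-informed construction.

```python
# Minimal sketch (assumptions, not the paper's model): a GP whose covariance is
# spanned by a few Fourier basis functions, K = Phi diag(s) Phi^T, with standard
# GP regression conditioning on noisy response measurements.
import numpy as np

def fourier_features(t, freqs):
    """Stack sin/cos basis functions evaluated at times t for given frequencies."""
    cols = [f(2 * np.pi * fr * t) for fr in freqs for f in (np.sin, np.cos)]
    return np.stack(cols, axis=1)              # (len(t), 2 * len(freqs))

rng = np.random.default_rng(0)
t_train = np.linspace(0.0, 2.0, 80)
freqs = np.array([1.5, 3.2])                   # assumed "sparse" dominant frequencies
Phi = fourier_features(t_train, freqs)
s = np.full(Phi.shape[1], 1.0)                 # prior variances of basis weights
noise = 0.05

K = Phi @ np.diag(s) @ Phi.T                   # frequency-sparse covariance
y = np.sin(2 * np.pi * 1.5 * t_train) + noise * rng.standard_normal(t_train.size)

# GP posterior mean at test times (standard Gaussian conditioning).
t_test = np.linspace(0.0, 2.0, 200)
K_star = fourier_features(t_test, freqs) @ np.diag(s) @ Phi.T
alpha = np.linalg.solve(K + noise**2 * np.eye(len(t_train)), y)
mean_test = K_star @ alpha
print(mean_test.shape)                         # (200,)
```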
arXiv Detail & Related papers (2025-03-12T14:16:27Z)
- Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning [30.781578037476347]
We introduce a novel approach to modeling transformer architectures using highly flexible non-autonomous neural ordinary differential equations (ODEs).
Our proposed model parameterizes all weights of attention and feed-forward blocks through neural networks, expressing these weights as functions of a continuous layer index.
Our neural ODE transformer demonstrates performance comparable to or better than vanilla transformers across various configurations and datasets.
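A minimal sketch of the underlying idea of expressing weights as a function of a continuous layer index: a small hypernetwork maps a depth index t to a weight matrix, and the hidden state is integrated over depth with fixed-step Euler. The single linear block and the Euler solver are stand-ins; the paper parameterizes full attention and feed-forward weights and is not reproduced here.

```python
# Illustrative sketch only: hidden states evolved by an ODE whose "layer" weights
# are produced by a small hypernetwork from a continuous depth index t.
import torch
import torch.nn as nn

class DepthHyperBlock(nn.Module):
    def __init__(self, dim, hyper_hidden=32):
        super().__init__()
        self.dim = dim
        # Maps a scalar depth index t to a (dim x dim) weight matrix.
        self.hyper = nn.Sequential(nn.Linear(1, hyper_hidden), nn.Tanh(),
                                   nn.Linear(hyper_hidden, dim * dim))

    def forward(self, t, h):
        w = self.hyper(t.view(1, 1)).view(self.dim, self.dim)
        return torch.tanh(h @ w.T)             # dh/dt = f(h, W(t))

def euler_depth_integrate(block, h0, steps=8, t1=1.0):
    h, dt = h0, t1 / steps
    for i in range(steps):
        t = torch.tensor(i * dt)
        h = h + dt * block(t, h)               # explicit Euler step over depth
    return h

block = DepthHyperBlock(dim=16)
tokens = torch.randn(4, 16)                    # 4 tokens, 16-dim embeddings
out = euler_depth_integrate(block, tokens)
print(out.shape)                               # torch.Size([4, 16])
```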
arXiv Detail & Related papers (2025-03-03T09:12:14Z)
- Neural Network Modeling of Microstructure Complexity Using Digital Libraries [1.03590082373586]
We evaluate the performance of artificial and spiking neural networks in learning and predicting fatigue crack growth and Turing pattern development.
Our assessment suggests that the leaky integrate-and-fire neuron model offers superior predictive accuracy with fewer parameters and less memory usage.
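For reference, here is a textbook discrete-time simulation of the leaky integrate-and-fire neuron mentioned above; the parameters are arbitrary, and the snippet does not reflect the paper's networks or training setup.

```python
# Textbook leaky integrate-and-fire (LIF) neuron in discrete time; illustrative
# of the neuron model credited in the abstract, not of the paper's models.
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate dV/dt = (-(V - v_rest) + I) / tau; spike and reset at threshold."""
    v = v_rest
    voltages, spikes = [], []
    for i_t in current:
        v = v + dt * (-(v - v_rest) + i_t) / tau   # leaky integration
        spiked = v >= v_thresh
        if spiked:
            v = v_reset                            # reset after spike
        voltages.append(v)
        spikes.append(spiked)
    return np.array(voltages), np.array(spikes)

rng = np.random.default_rng(1)
input_current = 1.2 + 0.3 * rng.standard_normal(500)   # noisy constant drive
v_trace, spike_train = simulate_lif(input_current)
print(int(spike_train.sum()), "spikes in", len(input_current), "steps")
```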
arXiv Detail & Related papers (2025-01-30T07:44:21Z)
- Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes [24.723536390322582]
Tensor decomposition is an important tool for multiway data analysis.
We propose Dynamic EMbedIngs fOr dynamic Tensor dEcomposition (DEMOTE).
We show the advantage of our approach in both simulation study and real-world applications.
arXiv Detail & Related papers (2023-10-30T15:49:45Z)
- Impact of conditional modelling for a universal autoregressive quantum state [0.0]
We introduce filters as analogues to convolutional layers in neural networks to incorporate translationally symmetrized correlations in arbitrary quantum states.
We analyze the impact of the resulting inductive biases on variational flexibility, symmetries, and conserved quantities.
arXiv Detail & Related papers (2023-06-09T14:17:32Z)
- Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
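As a rough illustration, the sketch below clips the singular values of a convolution kernel reshaped into a matrix; this is a common approximation rather than the paper's method, which controls the singular values of the full convolution operator.

```python
# Simplified sketch: clip the singular values of a conv kernel reshaped to a
# matrix. This is a crude approximation of controlling a conv layer's spectrum,
# not the paper's approach.
import torch

def clip_kernel_singular_values(conv, max_sv=1.0):
    w = conv.weight.data                            # (out_c, in_c, kh, kw)
    out_c = w.shape[0]
    mat = w.reshape(out_c, -1)                      # flatten to (out_c, in_c*kh*kw)
    u, s, vh = torch.linalg.svd(mat, full_matrices=False)
    s_clipped = torch.clamp(s, max=max_sv)          # enforce an upper bound
    conv.weight.data = (u @ torch.diag(s_clipped) @ vh).reshape_as(w)
    return s, s_clipped

conv = torch.nn.Conv2d(8, 16, kernel_size=3)
before, after = clip_kernel_singular_values(conv, max_sv=0.5)
print(float(before.max()), "->", float(after.max()))
```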
arXiv Detail & Related papers (2022-11-24T19:09:44Z)
- Learning Deep Implicit Fourier Neural Operators (IFNOs) with Applications to Heterogeneous Material Modeling [3.9181541460605116]
We propose to use data-driven modeling to predict a material's response without using conventional models.
The material response is modeled by learning the implicit mappings between loading conditions and the resultant displacement and/or damage fields.
We demonstrate the performance of our proposed method for a number of examples, including hyperelastic, anisotropic and brittle materials.
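A generic sketch of the spectral-convolution building block that Fourier neural operators rest on: FFT, truncation to the lowest modes, multiplication by learned complex weights, inverse FFT. It is not the paper's implicit (IFNO) fixed-point architecture or its material-modeling setup.

```python
# Generic 1D Fourier layer sketch: transform to frequency space, keep the lowest
# modes, multiply by learned complex weights, and transform back.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x, dim=-1)       # real FFT along the grid
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1), dim=-1)

layer = SpectralConv1d(channels=4, modes=8)
field = torch.randn(2, 4, 64)                  # e.g. a sampled loading field
print(layer(field).shape)                      # torch.Size([2, 4, 64])
```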
arXiv Detail & Related papers (2022-03-15T19:08:13Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that combines the detailed spatial information captured by CNNs with the global context provided by transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- A Novel Approach for Deterioration and Damage Identification in Building Structures Based on Stockwell-Transform and Deep Convolutional Neural Network [11.596550916365574]
A deterioration and damage identification procedure (DIP) is presented and applied to building models.
The DIP relies on low-cost ambient vibration measurements, analyzing the acceleration responses with the Stockwell transform (ST) to generate spectrograms.
To the best of our knowledge, this is the first time that both damage and deterioration are evaluated on building models through a combination of ST and CNN with high accuracy.
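Purely as an illustration of a spectrogram-plus-CNN pipeline, the sketch below computes a time-frequency image of a synthetic acceleration signal and feeds it to a toy classifier; scipy's short-time Fourier transform is used as a stand-in because the Stockwell transform is not part of scipy's stable API, and the CNN is not the paper's DIP.

```python
# Illustrative pipeline only: a time-frequency image of an acceleration signal
# fed to a small CNN. The STFT is a stand-in for the Stockwell transform (ST),
# and the classifier below is a toy.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

fs = 200.0                                     # assumed sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
accel = np.sin(2 * np.pi * 3.0 * t) + 0.1 * np.random.randn(t.size)

_, _, Zxx = stft(accel, fs=fs, nperseg=128)    # stand-in for the ST spectrogram
spec = np.abs(Zxx).astype(np.float32)

cnn = nn.Sequential(                           # toy damage/deterioration classifier
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2))                           # e.g. {normal, deteriorated}

logits = cnn(torch.from_numpy(spec)[None, None])   # add batch and channel dims
print(logits.shape)                            # torch.Size([1, 2])
```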
arXiv Detail & Related papers (2021-11-11T11:31:37Z)
- Data Augmentation Through Monte Carlo Arithmetic Leads to More Generalizable Classification in Connectomics [0.0]
We use Monte Carlo Arithmetic to perturb a structural connectome estimation pipeline.
The perturbed networks were captured in an augmented dataset, which was then used for an age classification task.
We find that this benefit does not hinge on a large number of perturbations, suggesting that even minimally perturbing a dataset adds meaningful variance which can be captured in the subsequently designed models.
arXiv Detail & Related papers (2021-09-20T16:06:05Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non commutative convolutional neural networks.
We show that non commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
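A tiny illustration of parameterizing a per-timestep Gaussian with a neural network and adding a smoothness term, in the spirit of (but not reproducing) the SISVAE model; the GRU encoder, the linear decoder, and the penalty weight are assumptions.

```python
# Tiny illustration of the idea (not the SISVAE architecture): a GRU emits a
# Gaussian mean/variance per time step; the loss combines reconstruction, a KL
# term, and a smoothness penalty on consecutive means.
import torch
import torch.nn as nn

class TinySeqVAE(nn.Module):
    def __init__(self, x_dim=1, h_dim=32, z_dim=4):
        super().__init__()
        self.enc_rnn = nn.GRU(x_dim, h_dim, batch_first=True)
        self.to_mu, self.to_logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Linear(z_dim, 2 * x_dim)     # per-step mean and log-variance of x

    def forward(self, x):                          # x: (batch, T, x_dim)
        h, _ = self.enc_rnn(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        x_mu, x_logvar = self.dec(z).chunk(2, dim=-1)
        return x_mu, x_logvar, mu, logvar

def loss_fn(x, x_mu, x_logvar, mu, logvar, lam=1.0):
    nll = 0.5 * (x_logvar + (x - x_mu) ** 2 / x_logvar.exp()).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
    smooth = (x_mu[:, 1:] - x_mu[:, :-1]).pow(2).mean()           # smoothness prior
    return nll + kl + lam * smooth

model = TinySeqVAE()
x = torch.sin(torch.linspace(0, 6.28, 100)).view(1, 100, 1)
loss = loss_fn(x, *model(x))
print(float(loss))
```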
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.