Synergistic Signal Denoising for Multimodal Time Series of Structure
Vibration
- URL: http://arxiv.org/abs/2308.11644v1
- Date: Thu, 17 Aug 2023 00:41:50 GMT
- Title: Synergistic Signal Denoising for Multimodal Time Series of Structure
Vibration
- Authors: Yang Yu, Han Chen
- Abstract summary: This paper introduces a novel deep learning algorithm tailored for the complexities inherent in multimodal vibration signals prevalent in Structural Health Monitoring (SHM).
By amalgamating convolutional and recurrent architectures, the algorithm adeptly captures both localized and prolonged structural behaviors.
Our results showcase significant improvements in predictive accuracy, early damage detection, and adaptability across multiple SHM scenarios.
- Score: 9.144905626316534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Structural Health Monitoring (SHM) plays an indispensable role in ensuring
the longevity and safety of infrastructure. With the rapid growth of sensor
technology, the volume of data generated from various structures has seen an
unprecedented surge, bringing forth challenges in efficient analysis and
interpretation. This paper introduces a novel deep learning algorithm tailored
for the complexities inherent in multimodal vibration signals prevalent in SHM.
By amalgamating convolutional and recurrent architectures, the algorithm
adeptly captures both localized and prolonged structural behaviors. The pivotal
integration of attention mechanisms further enhances the model's capability,
allowing it to discern and prioritize salient structural responses from
extraneous noise. Our results showcase significant improvements in predictive
accuracy, early damage detection, and adaptability across multiple SHM
scenarios. In light of the critical nature of SHM, the proposed approach not
only offers a robust analytical tool but also paves the way for more
transparent and interpretable AI-driven SHM solutions. Future prospects include
real-time processing, integration with external environmental factors, and a
deeper emphasis on model interpretability.
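The pipeline the abstract describes (convolutional feature extraction for localized behavior, recurrence for prolonged behavior, and attention to favor salient responses over extraneous noise) can be caricatured in a few lines. The sketch below is an illustrative NumPy analogue, not the authors' model: the moving-average kernel, the exponential-smoothing coefficient, and the residual-energy attention score are all assumptions chosen for clarity.

```python
import numpy as np

def conv1d_same(x, kernel):
    """Localized features: 1-D convolution with edge padding ('same' length)."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, kernel, mode="valid")[: len(x)]

def recurrent_smooth(x, alpha=0.5):
    """Prolonged behavior: a stand-in for an RNN, here exponential smoothing."""
    out, h = np.empty_like(x), x[0]
    for t, v in enumerate(x):
        h = alpha * h + (1.0 - alpha) * v
        out[t] = h
    return out

def denoise(signals, kernel=None):
    """signals: (n_modalities, T) array of synchronized vibration channels."""
    if kernel is None:
        kernel = np.ones(5) / 5.0            # simple moving-average kernel
    local = np.array([conv1d_same(s, kernel) for s in signals])
    smooth = np.array([recurrent_smooth(c) for c in local])
    # Attention over modalities: softmax of (scaled) negative residual-noise
    # energy, so channels with small high-frequency residuals get more weight.
    noise = np.array([np.var(s - l) for s, l in zip(signals, local)])
    w = np.exp(-noise / (noise.mean() + 1e-12))
    w /= w.sum()
    return w @ smooth                        # fused, denoised series of length T
```

On two copies of the same sine wave corrupted by different noise levels, the fused output tracks the clean signal more closely than a naive channel average, because the attention weights down-weight the noisier modality.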
Related papers
- SCENT: Robust Spatiotemporal Learning for Continuous Scientific Data via Scalable Conditioned Neural Fields [11.872753517172555]
We present SCENT, a novel framework for scalable, continuity-informed spatiotemporal learning.
SCENT unifies representation, reconstruction, and forecasting within a single architecture.
We validate SCENT through extensive simulations and real-world experiments, demonstrating state-of-the-art performance.
arXiv Detail & Related papers (2025-04-16T17:17:31Z)
- Towards Explainable Fusion and Balanced Learning in Multimodal Sentiment Analysis [14.029574339845476]
KAN-MCP is a novel framework that integrates the interpretability of Kolmogorov-Arnold Networks (KAN) with the robustness of the Multimodal Clean Pareto (MCPareto) framework.
We introduce the Dimensionality Reduction and Denoising Modal Information Bottleneck (DRD-MIB) method, which jointly denoises and reduces feature dimensionality.
This synergy of interpretability and robustness achieves superior performance on benchmark datasets such as CMU-MOSI, CMU-MOSEI, and CH-SIMS v2.
arXiv Detail & Related papers (2025-04-16T15:00:06Z)
- Hybrid machine learning models based on physical patterns to accelerate CFD simulations: a short guide on autoregressive models [3.780691701083858]
This study presents an innovative integration of High-Order Singular Value Decomposition (HOSVD) with Long Short-Term Memory (LSTM) architectures to address the complexities of reduced-order modeling (ROM) in fluid dynamics.
The methodology is tested across numerical and experimental data sets, including two- and three-dimensional (2D and 3D) cylinder wake flows, spanning both laminar and turbulent regimes.
The results demonstrate that HOSVD outperforms SVD in all tested scenarios, as evidenced by multiple error metrics.
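For readers unfamiliar with SVD-based reduced-order modeling, the core idea is to stack solution snapshots into a matrix and truncate its SVD, keeping only the leading modes; HOSVD generalizes this to higher-order tensors. The sketch below illustrates only the generic matrix-SVD step on a synthetic field (the snapshot matrix, rank, and noise level are arbitrary choices), not the paper's HOSVD+LSTM pipeline.

```python
import numpy as np

def truncated_svd_rom(snapshots, rank):
    """Project a snapshot matrix (n_space, n_time) onto its leading modes.

    Returns the rank-r reconstruction and the retained spatial modes.
    """
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    Ur, sr, Vtr = U[:, :rank], s[:rank], Vt[:rank]
    return Ur @ np.diag(sr) @ Vtr, Ur

# Synthetic "flow": two coherent structures plus small-scale noise.
x = np.linspace(0, 2 * np.pi, 64)[:, None]   # space
t = np.linspace(0, 2 * np.pi, 100)[None, :]  # time
field = np.sin(x) * np.cos(3 * t) + 0.5 * np.cos(2 * x) * np.sin(t)
noisy = field + 0.01 * np.random.default_rng(1).standard_normal(field.shape)

recon, modes = truncated_svd_rom(noisy, rank=2)
```

Because the clean field is exactly rank two, the rank-2 reconstruction recovers it to within the noise level while discarding the small-scale perturbations.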
arXiv Detail & Related papers (2025-04-09T10:56:03Z)
- Model Hemorrhage and the Robustness Limits of Large Language Models [119.46442117681147]
Large language models (LLMs) demonstrate strong performance across natural language processing tasks, yet undergo significant performance degradation when modified for deployment.
We define this phenomenon as model hemorrhage - performance decline caused by parameter alterations and architectural changes.
arXiv Detail & Related papers (2025-03-31T10:16:03Z)
- Deconstructing Recurrence, Attention, and Gating: Investigating the transferability of Transformers and Gated Recurrent Neural Networks in forecasting of dynamical systems [0.0]
We decompose the key architectural components of the most powerful neural architectures, namely gating and recurrence in RNNs, and attention mechanisms in transformers.
A key finding is that neural gating and attention improve the accuracy of all standard RNNs in most tasks, while adding a notion of recurrence to transformers is detrimental.
arXiv Detail & Related papers (2024-10-03T16:41:51Z)
- Fast and Reliable Probabilistic Reflectometry Inversion with Prior-Amortized Neural Posterior Estimation [73.81105275628751]
Finding all structures compatible with reflectometry data is computationally prohibitive for standard algorithms.
We address this lack of reliability with a probabilistic deep learning method that identifies all realistic structures in seconds.
Our method, Prior-Amortized Neural Posterior Estimation (PANPE), combines simulation-based inference with novel adaptive priors.
arXiv Detail & Related papers (2024-07-26T10:29:16Z)
- Investigating the Role of Instruction Variety and Task Difficulty in Robotic Manipulation Tasks [50.75902473813379]
This work introduces a comprehensive evaluation framework that systematically examines the role of instructions and inputs in the generalisation abilities of such models.
The proposed framework uncovers the resilience of multimodal models to extreme instruction perturbations and their vulnerability to observational changes.
arXiv Detail & Related papers (2024-07-04T14:36:49Z)
- Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- SFANet: Spatial-Frequency Attention Network for Weather Forecasting [54.470205739015434]
Weather forecasting plays a critical role in various sectors, driving decision-making and risk management.
Traditional methods often struggle to capture the complex dynamics of meteorological systems.
We propose a novel framework designed to address these challenges and enhance the accuracy of weather prediction.
arXiv Detail & Related papers (2024-05-29T08:00:15Z)
- Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning [11.19088022423885]
We propose a novel MoST learning framework via Self-Supervised Learning, namely MoSSL.
Results on two real-world MoST datasets verify the superiority of our approach compared with the state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-06T08:24:06Z)
- Neural Harmonium: An Interpretable Deep Structure for Nonlinear Dynamic System Identification with Application to Audio Processing [4.599180419117645]
Interpretability helps us understand a model's ability to generalize and reveal its limitations.
We introduce a causal interpretable deep structure for modeling dynamic systems.
Our proposed model makes use of harmonic analysis by modeling the system in the time-frequency domain.
arXiv Detail & Related papers (2023-10-10T21:32:15Z)
- Echotune: A Modular Extractor Leveraging the Variable-Length Nature of Speech in ASR Tasks [4.132793413136553]
We introduce Echo-MSA, a nimble module equipped with a variable-length attention mechanism.
The proposed design captures the variable length feature of speech and addresses the limitations of fixed-length attention.
arXiv Detail & Related papers (2023-09-14T14:51:51Z)
- Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting [52.47493322446537]
We develop an adaptive, interpretable and scalable forecasting framework, which seeks to individually model each component of the spatial-temporal patterns.
SCNN works with a pre-defined generative process of MTS, which arithmetically characterizes the latent structure of the spatial-temporal patterns.
Extensive experiments are conducted to demonstrate that SCNN can achieve superior performance over state-of-the-art models on three real-world datasets.
arXiv Detail & Related papers (2023-05-22T13:39:44Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the noise in its success is still unclear.
We show that multiplicative noise commonly arises in the parameter updates and leads to heavy-tailed behavior.
A detailed analysis of the key factors, including step size and data, shows that state-of-the-art neural network models exhibit similar heavy-tailed behavior.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
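The heavy-tail claim above is easy to reproduce in a toy setting: an iterate multiplied by a random factor at each step (a Kesten-type recursion, a common stand-in for optimization dynamics with multiplicative noise) develops far heavier tails than a comparable additive-noise chain. The distribution parameters below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 50_000

# Additive-noise baseline: x_{t+1} = 0.7 * x_t + b_t   (light tails)
# Multiplicative noise:    x_{t+1} = a_t * x_t + b_t   (heavy tails),
# with E[log a_t] = -0.3 < 0 so the recursion stays stable.
a = np.exp(rng.normal(-0.3, 0.5, n_steps))   # random contraction factors
b = rng.normal(0.0, 1.0, n_steps)            # additive driving noise

x_add = np.empty(n_steps)
x_mul = np.empty(n_steps)
ha = hm = 0.0
for t in range(n_steps):
    ha = 0.7 * ha + b[t]
    hm = a[t] * hm + b[t]
    x_add[t], x_mul[t] = ha, hm

def kurtosis(x):
    """Plain fourth-moment kurtosis (about 3.0 for a Gaussian)."""
    z = x - x.mean()
    return np.mean(z ** 4) / np.mean(z ** 2) ** 2
```

Because a_t occasionally exceeds 1, the multiplicative chain has a power-law stationary tail: its sample kurtosis rises far above the Gaussian value of 3, while the additive chain stays close to it.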
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.