Chirp Localization via Fine-Tuned Transformer Model: A Proof-of-Concept Study
- URL: http://arxiv.org/abs/2503.22713v1
- Date: Mon, 24 Mar 2025 14:27:07 GMT
- Title: Chirp Localization via Fine-Tuned Transformer Model: A Proof-of-Concept Study
- Authors: Nooshin Bahador, Milad Lankarany
- Abstract summary: Chirp-like patterns in EEG spectrograms are key biomarkers for seizure dynamics. This study bridges this gap by fine-tuning a Vision Transformer (ViT) model augmented with Low-Rank Adaptation (LoRA). We generated 100,000 spectrograms with labeled chirp parameters, creating the first large-scale benchmark for chirp localization.
- Score: 0.23020018305241333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spectrograms are pivotal in time-frequency signal analysis, widely used in audio processing and computational neuroscience. Chirp-like patterns in electroencephalogram (EEG) spectrograms (marked by linear or exponential frequency sweeps) are key biomarkers for seizure dynamics, but automated tools for their detection, localization, and feature extraction are lacking. This study bridges that gap by fine-tuning a Vision Transformer (ViT) model on synthetic spectrograms, augmented with Low-Rank Adaptation (LoRA) to boost adaptability. We generated 100,000 synthetic spectrograms with labeled chirp parameters, creating the first large-scale benchmark for chirp localization. These spectrograms mimic neural chirps using linear or exponential frequency sweeps, Gaussian noise, and smoothing. A ViT model, adapted for regression, predicted the chirp parameters. LoRA fine-tuned the attention layers, enabling efficient updates to the pre-trained backbone. Training used MSE loss and the AdamW optimizer, with a learning rate scheduler and early stopping to curb overfitting. Only three features were targeted: Chirp Start Time (Onset Time), Chirp Start Frequency (Onset Frequency), and Chirp End Frequency (Offset Frequency). Performance was evaluated via the Pearson correlation between predicted and actual labels. Results showed strong alignment: a 0.9841 correlation for chirp start time, with stable inference times (137 to 140 s) and minimal bias in error distributions. This approach offers a tool for chirp analysis in EEG time-frequency representations, filling a critical methodological void.
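The synthetic data pipeline described in the abstract (a linear or exponential frequency sweep embedded in Gaussian noise, converted to a spectrogram and smoothed) can be sketched as below. This is a minimal illustration of the idea, not the authors' released code; the function name, default sampling rate, window sizes, and noise level are all assumptions.

```python
import numpy as np
from scipy.signal import chirp, spectrogram
from scipy.ndimage import gaussian_filter

def synth_chirp_spectrogram(onset_time, f0, f1, fs=256.0, duration=10.0,
                            noise_std=0.1, smooth_sigma=1.0, seed=0):
    """Synthesize a noisy spectrogram containing one linear chirp.

    onset_time : chirp start time (s); f0/f1 : onset/offset frequencies (Hz).
    These three values are the regression targets named in the abstract.
    All defaults here are illustrative, not the paper's exact settings.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    x = rng.normal(0.0, noise_std, t.size)            # Gaussian background noise
    mask = t >= onset_time                            # chirp is active after onset
    x[mask] += chirp(t[mask] - onset_time, f0=f0, f1=f1,
                     t1=duration - onset_time, method="linear")
    f, tau, Sxx = spectrogram(x, fs=fs, nperseg=64, noverlap=48)
    Sxx = gaussian_filter(Sxx, sigma=smooth_sigma)    # smoothing, as described
    return f, tau, Sxx

f, tau, Sxx = synth_chirp_spectrogram(onset_time=2.0, f0=5.0, f1=40.0)
```

Each generated image would then be paired with its (onset time, onset frequency, offset frequency) label for the ViT regression head.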
Related papers
- Graph-Based Fault Diagnosis for Rotating Machinery: Adaptive Segmentation and Structural Feature Integration [0.0]
This paper proposes a graph-based framework for robust and interpretable multiclass fault diagnosis in rotating machinery.
It integrates entropy-optimized signal segmentation, time-frequency feature extraction, and graph-theoretic modeling to transform vibration signals into structured representations.
The proposed method achieves high diagnostic accuracy when evaluated on two benchmark datasets.
arXiv Detail & Related papers (2025-04-29T13:34:52Z) - RhythmFormer: Extracting Patterned rPPG Signals based on Periodic Sparse Attention [18.412642801957197]
Remote photoplethysmography (rPPG) is a non-contact method for detecting physiological signals from video. This paper proposes a periodic attention mechanism based on temporal attention sparsity induced by periodicity. It achieves state-of-the-art performance in both intra-dataset and cross-dataset evaluations.
arXiv Detail & Related papers (2024-02-20T07:56:02Z) - Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - Time-to-Green predictions for fully-actuated signal control systems with supervised learning [56.66331540599836]
This paper proposes a time series prediction framework using aggregated traffic signal and loop detector data.
We utilize state-of-the-art machine learning models to predict future signal phases' duration.
Results based on an empirical data set from a fully-actuated signal control system in Zurich, Switzerland, show that machine learning models outperform conventional prediction methods.
arXiv Detail & Related papers (2022-08-24T07:50:43Z) - Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation [0.0]
Analyzing vibration data using deep neural network algorithms is an effective way to detect damages in rotating machinery at an early stage.
This work investigates the application of explainable AI (XAI) algorithms to convolutional neural networks for vibration-based condition monitoring.
arXiv Detail & Related papers (2022-07-21T19:57:36Z) - SpecGrad: Diffusion Probabilistic Model based Neural Vocoder with Adaptive Noise Spectral Shaping [51.698273019061645]
SpecGrad adapts the diffusion noise so that its time-varying spectral envelope becomes close to the conditioning log-mel spectrogram.
It is processed in the time-frequency domain to keep the computational cost almost the same as the conventional DDPM-based neural vocoders.
arXiv Detail & Related papers (2022-03-31T02:08:27Z) - Simpler is better: spectral regularization and up-sampling techniques for variational autoencoders [1.2234742322758418]
Characterization of the spectral behavior of generative models based on neural networks remains an open issue.
Recent research has focused heavily on generative adversarial networks and the high-frequency discrepancies between real and generated images.
We propose a simple 2D Fourier transform-based spectral regularization loss for variational autoencoders (VAEs).
arXiv Detail & Related papers (2022-01-19T11:49:57Z) - Hankel-structured Tensor Robust PCA for Multivariate Traffic Time Series Anomaly Detection [9.067182100565695]
This study proposes a Hankel-structured tensor version of RPCA for anomaly detection in spatial data.
We decompose the corrupted matrix into a low-rank Hankel tensor and a sparse matrix.
We evaluate the method by synthetic data and passenger flow time series.
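The Hankel structure this paper exploits comes from delay embedding: stacking shifted copies of a series so that anti-diagonals are constant. A minimal 1D sketch is below (the paper builds a Hankel *tensor* over multivariate series and decomposes it within an RPCA formulation, which is not shown here).

```python
import numpy as np

def hankelize(series, window):
    """Delay-embed a 1D series into a (window x cols) Hankel matrix.

    H[i, j] = series[i + j], so anti-diagonals (i + j constant) hold the
    same sample; a smooth series yields an approximately low-rank H,
    which is the property the RPCA decomposition relies on.
    """
    cols = series.size - window + 1
    return np.stack([series[i:i + cols] for i in range(window)])

H = hankelize(np.arange(10.0), window=4)   # shape (4, 7)
```

The anomaly-detection step would then split such a structure into a low-rank (regular traffic) part and a sparse (anomaly) part.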
arXiv Detail & Related papers (2021-10-08T19:35:39Z) - Deep Autoregressive Models with Spectral Attention [74.08846528440024]
We propose a forecasting architecture that combines deep autoregressive models with a Spectral Attention (SA) module.
By characterizing in the spectral domain the embedding of the time series as occurrences of a random process, our method can identify global trends and seasonality patterns.
Two spectral attention models, global and local to the time series, integrate this information within the forecast and perform spectral filtering to remove time series's noise.
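The idea of spectral filtering to remove a time series's noise can be illustrated with a fixed low-pass mask in the Fourier domain. This is a toy stand-in: in the paper the filtering is driven by learned spectral attention weights, whereas here the mask is hard-coded.

```python
import numpy as np

def spectral_lowpass(x, keep_fraction=0.1):
    """Zero out the highest-frequency rFFT coefficients of a 1D series
    and invert. A fixed-mask sketch of spectral filtering; the paper
    learns the filter via attention instead of hard-coding it.
    """
    X = np.fft.rfft(x)
    cutoff = max(1, int(keep_fraction * X.size))
    X[cutoff:] = 0.0                          # discard high-frequency content
    return np.fft.irfft(X, n=x.size)

t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)                    # slow global trend
noisy = clean + 0.3 * np.sin(2 * np.pi * 60 * t)     # high-frequency "noise"
denoised = spectral_lowpass(noisy, keep_fraction=0.1)
```

Because both components fall on exact FFT bins here, the low-pass mask recovers the slow trend almost exactly while removing the 60 Hz term.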
arXiv Detail & Related papers (2021-07-13T11:08:47Z) - Deep learning for gravitational-wave data analysis: A resampling white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited to noise reduction than to generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.