DyWPE: Signal-Aware Dynamic Wavelet Positional Encoding for Time Series Transformers
- URL: http://arxiv.org/abs/2509.14640v1
- Date: Thu, 18 Sep 2025 05:37:33 GMT
- Title: DyWPE: Signal-Aware Dynamic Wavelet Positional Encoding for Time Series Transformers
- Authors: Habib Irani, Vangelis Metsis
- Abstract summary: We introduce Dynamic Wavelet Positional Encoding (DyWPE), a novel signal-aware framework that generates positional embeddings directly from the input time series using the Discrete Wavelet Transform (DWT). DyWPE consistently outperforms eight existing state-of-the-art positional encoding methods, achieving an average relative improvement of 9.1% over baseline sinusoidal absolute positional encoding on biomedical signals.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing positional encoding methods in transformers are fundamentally signal-agnostic, deriving positional information solely from sequence indices while ignoring the underlying signal characteristics. This limitation is particularly problematic for time series analysis, where signals exhibit complex, non-stationary dynamics across multiple temporal scales. We introduce Dynamic Wavelet Positional Encoding (DyWPE), a novel signal-aware framework that generates positional embeddings directly from the input time series using the Discrete Wavelet Transform (DWT). Comprehensive experiments on ten diverse time series datasets demonstrate that DyWPE consistently outperforms eight existing state-of-the-art positional encoding methods, achieving an average relative improvement of 9.1% over baseline sinusoidal absolute positional encoding on biomedical signals, while maintaining competitive computational efficiency.
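The abstract does not specify the exact architecture, so as a rough illustration of the signal-aware idea, here is a minimal NumPy sketch that derives positional embeddings from multi-level Haar DWT coefficients of the input signal. The function names, the nearest-neighbour upsampling, and the random per-level projections are illustrative assumptions, not the paper's implementation (which would learn the projections end to end):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]                 # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dywpe_sketch(signal, d_model=8, levels=3, seed=0):
    """Build signal-aware positional embeddings from DWT coefficients.

    Each decomposition level's coefficients are upsampled back to the
    original sequence length and projected into d_model dimensions with
    a (here random, in practice learned) per-level projection.
    """
    rng = np.random.default_rng(seed)
    L = len(signal)
    pe = np.zeros((L, d_model))
    coeffs = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        coeffs.append(detail)
    coeffs.append(approx)                    # coarsest approximation
    for c in coeffs:
        # upsample level coefficients to length L (nearest neighbour)
        idx = np.minimum((np.arange(L) * len(c)) // L, len(c) - 1)
        upsampled = c[idx]                   # shape (L,)
        w = rng.standard_normal((1, d_model))  # per-level projection
        pe += upsampled[:, None] @ w         # (L, 1) @ (1, d_model)
    return pe
```

Because the embeddings are computed from the signal's own wavelet coefficients rather than from bare indices, two different inputs of the same length yield different positional codes, which is the core contrast with sinusoidal encoding.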
Related papers
- WaveFormer: Wavelet Embedding Transformer for Biomedical Signals [1.2922946578413579]
We propose a transformer architecture that integrates wavelet decomposition at two critical stages: embedding construction and positional encoding. We evaluate WaveFormer on eight diverse datasets spanning human activity recognition and brain signal analysis, with sequence lengths ranging from 50 to 3000 timesteps and channel counts from 1 to 144.
arXiv Detail & Related papers (2026-02-12T17:20:43Z) - WaveFormer: A Lightweight Transformer Model for sEMG-based Gesture Recognition [18.978031999678507]
WaveFormer is a lightweight transformer-based architecture tailored for sEMG gesture recognition. Our model integrates time-domain and frequency-domain features through a novel learnable wavelet transform, enhancing feature extraction. With just 3.1 million parameters, WaveFormer achieves 95% classification accuracy on the EPN612 dataset, outperforming larger models.
arXiv Detail & Related papers (2025-06-12T04:07:11Z) - Positional Encoding in Transformer-Based Time Series Models: A Survey [1.4524096882720263]
This survey systematically examines existing techniques for positional encoding in transformer-based time series models. Data characteristics like sequence length, signal complexity, and dimensionality significantly influence method effectiveness. We outline key challenges and suggest potential research directions to enhance positional encoding strategies.
arXiv Detail & Related papers (2025-02-17T23:21:42Z) - Toward Relative Positional Encoding in Spiking Transformers [52.62008099390541]
Spiking neural networks (SNNs) are bio-inspired networks that mimic how neurons in the brain communicate through discrete spikes. We introduce several strategies to approximate relative positional encoding (RPE) in spiking Transformers.
arXiv Detail & Related papers (2025-01-28T06:42:37Z) - PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting [82.03373838627606]
Self-attention mechanism in Transformer architecture requires positional embeddings to encode temporal order in time series prediction.
We argue that this reliance on positional embeddings restricts the Transformer's ability to effectively represent temporal sequences.
We present a model integrating PRE with a standard Transformer encoder, demonstrating state-of-the-art performance on various real-world datasets.
arXiv Detail & Related papers (2024-08-20T01:56:07Z) - Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals such as computer vision or biomedical data often face challenges due to the variability that exists across hardware devices or session recordings.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
arXiv Detail & Related papers (2024-07-19T13:33:38Z) - Improving Transformers using Faithful Positional Encoding [55.30212768657544]
We propose a new positional encoding method for a neural network architecture called the Transformer.
Unlike the standard sinusoidal positional encoding, our approach has a guarantee of not losing information about the positional order of the input sequence.
arXiv Detail & Related papers (2024-05-15T03:17:30Z) - Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals [7.381259294661687]
We propose a frequency-aware masked autoencoder that learns to parameterize the representation of biosignals in the frequency space.
The resulting architecture effectively utilizes multimodal information during pretraining, and can be seamlessly adapted to diverse tasks and modalities at test time.
arXiv Detail & Related papers (2023-09-12T02:59:26Z) - Improving Position Encoding of Transformers for Multivariate Time Series Classification [5.467400475482668]
We propose a new absolute position encoding method dedicated to time series data called time Absolute Position Encoding (tAPE).
We then propose a novel multivariate time series classification (MTSC) model combining tAPE/eRPE and convolution-based input encoding, named ConvTran, to improve the position and data embedding of time series data.
arXiv Detail & Related papers (2023-05-26T05:30:04Z) - WavSpA: Wavelet Space Attention for Boosting Transformers' Long Sequence Learning Ability [31.791279777902957]
Recent works show that learning attention in the Fourier space can improve the long sequence learning capability of Transformers.
We argue that wavelet transform shall be a better choice because it captures both position and frequency information with linear time complexity.
We propose Wavelet Space Attention (WavSpA) that facilitates attention learning in a learnable wavelet coefficient space.
arXiv Detail & Related papers (2022-10-05T02:37:59Z) - Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding [96.9752763607738]
We propose a novel positional encoding method based on learnable Fourier features.
Our experiments show that our learnable feature representation for multi-dimensional positional encoding outperforms existing methods.
arXiv Detail & Related papers (2021-06-05T04:40:18Z) - Modulated Periodic Activations for Generalizable Local Functional Representations [113.64179351957888]
We present a new representation that generalizes to multiple instances and achieves state-of-the-art fidelity.
Our approach produces general functional representations of images, videos and shapes, and achieves higher reconstruction quality than prior works that are optimized for a single signal.
arXiv Detail & Related papers (2021-04-08T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.