Wavelet Networks: Scale-Translation Equivariant Learning From Raw Time-Series
- URL: http://arxiv.org/abs/2006.05259v2
- Date: Sun, 21 Jan 2024 11:58:49 GMT
- Title: Wavelet Networks: Scale-Translation Equivariant Learning From Raw Time-Series
- Authors: David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn
- Abstract summary: We find that scale-translation equivariant mappings share a strong resemblance with the wavelet transform.
Inspired by this resemblance, we term our networks Wavelet Networks, and show that they perform nested non-linear wavelet-like time-frequency transforms.
- Score: 31.73386289965465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Leveraging the symmetries inherent to specific data domains for the
construction of equivariant neural networks has led to remarkable improvements
in terms of data efficiency and generalization. However, most existing research
focuses on symmetries arising from planar and volumetric data, leaving a
crucial data source largely underexplored: time-series. In this work, we fill
this gap by leveraging the symmetries inherent to time-series for the
construction of equivariant neural networks. We identify two core symmetries:
*scale and translation*, and construct scale-translation equivariant neural
networks for time-series learning. Intriguingly, we find that scale-translation
equivariant mappings share a strong resemblance with the wavelet transform.
Inspired by this resemblance, we term our networks Wavelet Networks, and show
that they perform nested non-linear wavelet-like time-frequency transforms.
Empirical results show that Wavelet Networks outperform conventional CNNs on
raw waveforms, and match strongly engineered spectrogram techniques across
several tasks and time-series types, including audio, environmental sounds, and
electrical signals. Our code is publicly available at
https://github.com/dwromero/wavelet_networks.
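To make the wavelet connection concrete, here is a minimal, hypothetical sketch (not the authors' implementation, which is in the repository above) of a scale-translation lifting convolution: one learnable filter is dilated over dyadic scales and normalized like a wavelet family psi_s(t) = s^{-1/2} psi(t/s), so the layer computes a learnable analogue of the wavelet transform W(s, tau) = s^{-1/2} * integral x(t) psi((t - tau)/s) dt.

```python
# Minimal sketch of a scale-translation lifting convolution (illustrative,
# not the paper's code): convolve the input with dyadically dilated copies
# of a single learnable filter and stack the responses along a scale axis.
import torch
import torch.nn.functional as F


class ScaleTranslationConv1d(torch.nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 15, num_scales: int = 4):
        super().__init__()
        self.base = torch.nn.Parameter(0.1 * torch.randn(out_ch, in_ch, kernel_size))
        self.num_scales = num_scales

    def forward(self, x):  # x: (batch, in_ch, time)
        outs = []
        for s in range(self.num_scales):
            factor = 2 ** s
            # Stretch the filter by 2^s and apply the wavelet normalization
            # 1/sqrt(2^s), mirroring psi_s(t) = s^{-1/2} psi(t / s).
            k = F.interpolate(self.base, scale_factor=factor, mode="linear",
                              align_corners=False) / factor ** 0.5
            outs.append(F.conv1d(x, k, padding=k.shape[-1] // 2))
        # New scale axis: (batch, scale, out_ch, time); trim to a common length.
        t = min(o.shape[-1] for o in outs)
        return torch.stack([o[..., :t] for o in outs], dim=1)


# Dilating the input approximately shifts responses along the scale axis,
# which is the scale-translation equivariance the paper builds on.
feats = ScaleTranslationConv1d(1, 8)(torch.randn(2, 1, 1024))
print(feats.shape)  # torch.Size([2, 4, 8, 1024])
```

Stacking such layers with pointwise non-linearities yields the nested, non-linear wavelet-like time-frequency transforms the abstract describes.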
Related papers
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- Time Scale Network: A Shallow Neural Network For Time Series Data [18.46091267922322]
Time series data is often composed of information at multiple time scales.
Deep learning strategies exist to capture this information, but many make networks larger, require more data and computation, and are difficult to interpret.
We present a minimal, computationally efficient Time Scale Network combining the translation and dilation sequence used in discrete wavelet transforms with traditional convolutional neural networks and back-propagation.
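As a toy, hypothetical illustration of the translation-and-dilation sequence such an approach builds on (the actual Time Scale Network differs), a Haar-style DWT cascade produces detail coefficients at dyadic scales that per-scale convolutions could then process:

```python
# Hypothetical sketch: a Haar DWT cascade (translation + dyadic dilation).
# Names and sizes are illustrative, not the paper's architecture.
import torch
import torch.nn.functional as F

def haar_dwt_levels(x, levels: int = 3):
    """x: (batch, 1, time) with time divisible by 2**levels."""
    lo = torch.tensor([[[0.5, 0.5]]])   # averaging (scaling) filter
    hi = torch.tensor([[[0.5, -0.5]]])  # differencing (wavelet) filter
    details = []
    for _ in range(levels):
        details.append(F.conv1d(x, hi, stride=2))  # detail coefficients
        x = F.conv1d(x, lo, stride=2)              # coarser approximation
    return details

for d in haar_dwt_levels(torch.randn(4, 1, 1024)):
    print(d.shape)  # (4, 1, 512), (4, 1, 256), (4, 1, 128)
```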
arXiv Detail & Related papers (2023-11-10T16:39:55Z)
- WFTNet: Exploiting Global and Local Periodicity in Long-term Time Series Forecasting [61.64303388738395]
We propose a Wavelet-Fourier Transform Network (WFTNet) for long-term time series forecasting.
Tests on various time series datasets show WFTNet consistently outperforms other state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-20T13:44:18Z)
- Frequency and Scale Perspectives of Feature Extraction [5.081561820537235]
We analyze the sensitivity of neural networks to frequencies and scales.
We find that neural networks have low- and medium-frequency biases but also prefer different frequency bands for different classes.
These observations lead to the hypothesis that neural networks must learn the ability to extract features at various scales and frequencies.
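A hedged illustration of one way such frequency sensitivity can be probed (our construction, not necessarily the paper's protocol): feed pure tones of increasing frequency through a 1D CNN and record the output energy per input frequency.

```python
# Illustrative frequency-sensitivity probe (assumed setup, not the paper's):
# measure how strongly an untrained 1D CNN responds to pure tones.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, 9, padding=4), torch.nn.ReLU(),
    torch.nn.Conv1d(16, 16, 9, padding=4), torch.nn.ReLU(),
)
t = torch.arange(1024).float() / 1024.0
for f in (2, 8, 32, 128):
    tone = torch.sin(2 * torch.pi * f * t)[None, None, :]  # (1, 1, 1024)
    with torch.no_grad():
        energy = net(tone).pow(2).mean().item()
    print(f"freq={f:4d}  mean output energy={energy:.4f}")
```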
arXiv Detail & Related papers (2023-02-24T06:37:36Z)
- Fast Temporal Wavelet Graph Neural Networks [7.477634824955323]
We propose Fast Temporal Wavelet Graph Neural Networks (FTWGNN) for learning tasks on time-series data.
We employ Multiresolution Matrix Factorization (MMF) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis.
Experimental results on the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art.
arXiv Detail & Related papers (2023-02-17T01:21:45Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
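For context, below is a hedged sketch of the kind of INR whose activation choice such work compares: a small sine-activated MLP (SIREN-style) fit to one series by regressing values at their timestamps. All names and sizes are illustrative, not HyperTime's architecture.

```python
# Illustrative sine-activated INR for a single time series (assumed setup).
import torch

class SineINR(torch.nn.Module):
    """Tiny MLP mapping a timestamp t in [0, 1] to a signal value."""
    def __init__(self, hidden: int = 64, w0: float = 30.0):
        super().__init__()
        self.l1 = torch.nn.Linear(1, hidden)
        self.l2 = torch.nn.Linear(hidden, hidden)
        self.l3 = torch.nn.Linear(hidden, 1)
        self.w0 = w0

    def forward(self, t):
        h = torch.sin(self.w0 * self.l1(t))  # high-frequency first layer
        h = torch.sin(self.l2(h))
        return self.l3(h)

# Fit the INR to a toy series; the representation is resolution-independent.
t = torch.linspace(0, 1, 256).unsqueeze(-1)
x = torch.sin(2 * torch.pi * 8 * t)
inr = SineINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((inr(t) - x) ** 2).mean()
    loss.backward()
    opt.step()
```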
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Trainable Wavelet Neural Network for Non-Stationary Signals [0.0]
This work introduces a wavelet neural network to learn a filter-bank specialized to fit non-stationary signals and improve interpretability and performance for digital signal processing.
The network uses a wavelet transform as the first layer of a neural network where the convolution is a parameterized function of the complex Morlet wavelet.
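A hedged sketch of such a first layer: kernels are parameterized complex Morlet wavelets psi(t) = exp(i*w*t) * exp(-t^2 / (2*s^2)), with the center frequency w and width s learned per filter (parameter names and sizes here are illustrative, not the paper's).

```python
# Illustrative learnable Morlet filterbank layer (assumed parameterization).
import torch
import torch.nn.functional as F

class MorletLayer(torch.nn.Module):
    def __init__(self, n_filters: int = 16, kernel_size: int = 129):
        super().__init__()
        self.freq = torch.nn.Parameter(torch.linspace(0.1, 3.0, n_filters))
        self.width = torch.nn.Parameter(5.0 * torch.ones(n_filters))
        t = torch.arange(kernel_size) - kernel_size // 2
        self.register_buffer("t", t.float())

    def forward(self, x):  # x: (batch, 1, time)
        t = self.t[None, :]                                   # (1, K)
        env = torch.exp(-t ** 2 / (2 * self.width[:, None] ** 2))
        real = torch.cos(self.freq[:, None] * t) * env        # (F, K)
        imag = torch.sin(self.freq[:, None] * t) * env
        k = torch.cat([real, imag], dim=0).unsqueeze(1)       # (2F, 1, K)
        y = F.conv1d(x, k, padding=k.shape[-1] // 2)
        # Magnitude of the complex response: a scalogram-like output.
        n = real.shape[0]
        return torch.sqrt(y[:, :n] ** 2 + y[:, n:] ** 2 + 1e-8)
```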
arXiv Detail & Related papers (2022-05-06T16:41:27Z)
- TTS-GAN: A Transformer-based Time-Series Generative Adversarial Network [4.989480853499916]
Time-series data is one of the most common types of data used in medical machine learning applications.
We introduce TTS-GAN, a transformer-based GAN which can successfully generate realistic synthetic time-series data sequences.
We use visualizations and dimensionality reduction techniques to demonstrate the similarity of real and generated time-series data.
arXiv Detail & Related papers (2022-02-06T03:05:47Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Three-Way Deep Neural Network for Radio Frequency Map Generation and Source Localization [67.93423427193055]
Monitoring wireless spectrum over spatial, temporal, and frequency domains will become a critical feature in beyond-5G and 6G communication technologies.
In this paper, we present a Generative Adversarial Network (GAN) machine learning model to interpolate irregularly distributed measurements across the spatial domain.
arXiv Detail & Related papers (2021-11-23T22:25:10Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)