DWTRec: An Efficient Wavelet Enhanced Model for Sequential Recommendation
- URL: http://arxiv.org/abs/2503.23436v4
- Date: Thu, 01 May 2025 06:04:46 GMT
- Title: DWTRec: An Efficient Wavelet Enhanced Model for Sequential Recommendation
- Authors: Sheng Lu, Mingxi Ge, Jiuyi Zhang, Wanli Zhu, Guanjin Li, Fangming Gu
- Abstract summary: The self-attention mechanism used in Transformer-based sequential recommender models is inherently a low-pass filter. We propose a novel wavelet-enhanced hybrid attention network, DWTRec, for sequential recommendation. Our findings suggest a promising approach to improving the modeling power of self-attention.
- Score: 0.8246494848934447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformer-based sequential recommender systems have achieved notable success. Despite their effectiveness, recent studies reveal that the self-attention mechanism used in current Transformer-based sequential recommendation models is inherently a low-pass filter, which causes several problems: it struggles to learn evolving user patterns and to capture users' abrupt interest shifts. To integrate low- and high-frequency information effectively and overcome these problems, we propose DWTRec, a novel wavelet-enhanced hybrid attention network for sequential recommendation. Owing to the excellent properties of the Wavelet Transform in signal processing, we can perform fine-grained signal decomposition while removing noise and capturing the trends behind non-stationary sequences of user-item interactions. To address the boundary distortion problem in the signal reconstructed by the Discrete Wavelet Transform, we design a learning algorithm that adapts to different datasets. Our findings suggest a promising approach to improving the modeling power of self-attention. We evaluate our method on six datasets spanning different domains, sparsity levels, and average sequence lengths. Experiments show that our method outperforms all eight baseline models on all datasets.
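To make the wavelet idea concrete, the minimal sketch below shows how a single-level Haar DWT can split a sequence of item-embedding vectors into a low-frequency (trend) band and a high-frequency (detail) band, rescale the details with a learnable gain, and reconstruct the sequence. This is an illustration of wavelet-based filtering in PyTorch, not the authors' DWTRec implementation; the module name `HaarWaveletFilter` and its parameters are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class HaarWaveletFilter(nn.Module):
    """Single-level Haar DWT filter over the sequence (time) dimension.

    Splits an even-length sequence of hidden states into approximation
    (low-frequency) and detail (high-frequency) coefficients, rescales the
    detail band with a learnable gain, and reconstructs the sequence.
    Illustrative only -- not the DWTRec architecture from the paper.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Learnable per-dimension gain on the high-frequency band.
        self.detail_gain = nn.Parameter(torch.ones(hidden_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim); seq_len is assumed even.
        even, odd = x[:, 0::2, :], x[:, 1::2, :]
        approx = (even + odd) / 2 ** 0.5      # low-frequency trend
        detail = (even - odd) / 2 ** 0.5      # high-frequency fluctuations
        detail = detail * self.detail_gain    # emphasize or attenuate details
        # Inverse Haar transform interleaves the reconstructed samples.
        rec_even = (approx + detail) / 2 ** 0.5
        rec_odd = (approx - detail) / 2 ** 0.5
        out = torch.stack((rec_even, rec_odd), dim=2)
        return out.reshape(x.shape)

# Example: filter a batch of 2 sequences of 8 items with 16-dim embeddings.
if __name__ == "__main__":
    layer = HaarWaveletFilter(hidden_dim=16)
    states = torch.randn(2, 8, 16)
    print(layer(states).shape)  # torch.Size([2, 8, 16])
```

Stacking such a band-split operation with attention over each band is one simple way to mix low- and high-frequency information; DWTRec's actual hybrid attention design may differ.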
Related papers
- Multivariate Long-term Time Series Forecasting with Fourier Neural Filter [55.09326865401653]
We introduce FNF as the backbone and DBD as the architecture to provide excellent learning capabilities and optimal learning pathways for spatial-temporal modeling. We show that FNF unifies local time-domain and global frequency-domain information processing within a single backbone that extends naturally to spatial modeling.
arXiv Detail & Related papers (2025-06-10T18:40:20Z) - MFRS: A Multi-Frequency Reference Series Approach to Scalable and Accurate Time-Series Forecasting [51.94256702463408]
Time series predictability is derived from periodic characteristics at different frequencies. We propose a novel time series forecasting method based on multi-frequency reference series correlation analysis. Experiments on major open and synthetic datasets show state-of-the-art performance.
arXiv Detail & Related papers (2025-03-11T11:40:14Z) - LMS-AutoTSF: Learnable Multi-Scale Decomposition and Integrated Autocorrelation for Time Series Forecasting [4.075971633195745]
We introduce LMS-AutoTSF, a novel time series forecasting architecture that incorporates autocorrelation. Unlike models that rely on predefined trend and seasonal components, LMS-AutoTSF employs two separate encoders per scale. A key innovation in our approach is the integration of autocorrelation, achieved by computing lagged differences in time steps.
arXiv Detail & Related papers (2024-12-09T09:31:58Z) - FilterNet: Harnessing Frequency Filters for Time Series Forecasting [34.83702192033196]
FilterNet is built upon our proposed learnable frequency filters to extract key informative temporal patterns by selectively passing or attenuating certain components of time series signals.
Equipped with the two filters, FilterNet can approximately serve as a surrogate for the linear and attention mappings widely adopted in the time series literature.
arXiv Detail & Related papers (2024-11-03T16:20:41Z) - TSLANet: Rethinking Transformers for Time Series Representation Learning [19.795353886621715]
Time series data is characterized by its intrinsic long and short-range dependencies.
We introduce a novel Time Series Lightweight Network (TSLANet) as a universal convolutional model for diverse time series tasks.
Our experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2024-04-12T13:41:29Z) - Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting [46.63798583414426]
Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis.
Our study demonstrates, through both analytical and empirical evidence, that decomposition is key to containing excessive model inflation.
Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks.
arXiv Detail & Related papers (2024-01-22T13:15:40Z) - Augmenting Radio Signals with Wavelet Transform for Deep Learning-Based
Modulation Recognition [6.793444383222236]
Deep learning for radio modulation recognition has become prevalent in recent years.
In real-world scenarios, it may not be feasible to gather sufficient training data in advance.
Data augmentation is a method used to increase the diversity and quantity of the training dataset.
arXiv Detail & Related papers (2023-11-07T06:55:39Z) - Differentiable Grey-box Modelling of Phaser Effects using Frame-based
Spectral Processing [21.053861381437827]
This work presents a differentiable digital signal processing approach to modelling phaser effects.
The proposed model processes audio in short frames to implement a time-varying filter in the frequency domain.
We show that the model can be trained to emulate an analog reference device, while retaining interpretable and adjustable parameters.
arXiv Detail & Related papers (2023-06-02T07:53:41Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have notable drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Combining Slow and Fast: Complementary Filtering for Dynamics Learning [9.11991227308599]
We propose a learning-based approach to dynamics model learning.
We also propose a hybrid model that requires an additional physics-based simulator.
arXiv Detail & Related papers (2023-02-27T13:32:47Z) - Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z) - Towards Long-Term Time-Series Forecasting: Feature, Pattern, and
Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity owing to the self-attention mechanism, albeit at high computational cost.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z) - Decision Forest Based EMG Signal Classification with Low Volume Dataset
Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We rely on a set of more elementary methods, such as the use of random bounds on a signal, and aim to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z) - FAMLP: A Frequency-Aware MLP-Like Architecture For Domain Generalization [73.41395947275473]
We propose a novel frequency-aware architecture, in which the domain-specific features are filtered out in the transformed frequency domain.
Experiments on three benchmarks demonstrate significant performance gains, outperforming the state-of-the-art methods by margins of 3%, 4%, and 9%, respectively.
arXiv Detail & Related papers (2022-03-24T07:26:29Z) - Filter-enhanced MLP is All You Need for Sequential Recommendation [89.0974365344997]
In online platforms, logged user behavior data inevitably contains noise.
We borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain.
We propose FMLP-Rec, an all-MLP model with learnable filters for the sequential recommendation task.
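As a rough sketch of this frequency-domain filtering idea (not the official FMLP-Rec code; the layer name and shapes are assumptions), a learnable complex filter can be applied to the hidden-state sequence via the FFT:

```python
import torch
import torch.nn as nn

class LearnableFrequencyFilter(nn.Module):
    """Learnable filter applied in the frequency domain (illustrative sketch).

    The sequence of hidden states is transformed with a real FFT along the
    time axis, multiplied elementwise by a learnable complex filter, and
    transformed back, so noisy frequency components can be attenuated
    during training.
    """

    def __init__(self, max_seq_len: int, hidden_dim: int):
        super().__init__()
        n_freqs = max_seq_len // 2 + 1  # rfft output length
        # Real and imaginary parts of the filter, one weight per frequency/dim.
        self.filter = nn.Parameter(torch.randn(n_freqs, hidden_dim, 2) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        spec = torch.fft.rfft(x, dim=1)              # complex spectrum
        weight = torch.view_as_complex(self.filter)  # (n_freqs, hidden_dim)
        filtered = spec * weight[: spec.shape[1]]    # elementwise filtering
        return torch.fft.irfft(filtered, n=x.shape[1], dim=1)

# Example usage: a batch of 4 sequences of 50 items with 64-dim embeddings.
if __name__ == "__main__":
    layer = LearnableFrequencyFilter(max_seq_len=50, hidden_dim=64)
    out = layer(torch.randn(4, 50, 64))
    print(out.shape)  # torch.Size([4, 50, 64])
```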
arXiv Detail & Related papers (2022-02-28T05:49:35Z) - Signal Classification using Smooth Coefficients of Multiple wavelets [2.7907613804877283]
Time series signals have become an important construct and have many practical applications.
We propose an approach that chooses suitable wavelets to transform the data, then combines the outputs of these transforms into a dataset to which ensemble classifiers are applied.
Our experimental results demonstrate the effectiveness of the proposed technique, compared to the approaches that use either raw signal data or a single wavelet transform.
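A minimal sketch of such a pipeline, assuming PyWavelets and scikit-learn and using illustrative wavelet choices and synthetic data rather than the paper's exact configuration, might look like this:

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def multi_wavelet_features(signal, wavelets=("db4", "sym5", "coif3")):
    """Concatenate single-level DWT coefficients from several wavelets."""
    feats = []
    for name in wavelets:
        approx, detail = pywt.dwt(signal, name)  # low- and high-frequency bands
        feats.extend([approx, detail])
    return np.concatenate(feats)

# Illustrative usage on synthetic two-class signals (noise vs. noise + sine).
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 128)
X = np.vstack([multi_wavelet_features(rng.normal(size=128) + label * np.sin(t))
               for label in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```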
arXiv Detail & Related papers (2021-09-21T06:36:56Z) - Contrastive Self-supervised Sequential Recommendation with Robust
Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data-sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec)
arXiv Detail & Related papers (2021-08-14T07:15:25Z) - Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph networks.
We propose two restoration methods based on convex-independent deep ADMM.
Parameters in the proposed restoration methods are trainable in an end-to-end manner.
arXiv Detail & Related papers (2021-06-30T08:57:01Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Solving Sparse Linear Inverse Problems in Communication Systems: A Deep
Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.