Robust Tensor Completion via Gradient Tensor Nuclear L1-L2 Norm for Traffic Data Recovery
- URL: http://arxiv.org/abs/2506.22732v1
- Date: Sat, 28 Jun 2025 02:38:01 GMT
- Title: Robust Tensor Completion via Gradient Tensor Nuclear L1-L2 Norm for Traffic Data Recovery
- Authors: Hao Shu, Jicheng Li, Tianyv Lei, Lijun Sun
- Abstract summary: We propose a Robust Tensor Completion via Gradient Tensor Nuclear L1-L2 Norm (RTC-GTNLN) model, which not only exploits both global low-rankness and local consistency without a trade-off parameter, but also effectively handles the dual challenges of missing data and noise in traffic data.
- Score: 14.96194593196997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world scenarios, spatiotemporal traffic data frequently experiences dual degradation from missing values and noise caused by sensor malfunctions and communication failures. Therefore, effective data recovery methods are essential to ensure the reliability of downstream data-driven applications. While classical tensor completion methods have been widely adopted, they are incapable of modeling noise, making them unsuitable for complex scenarios involving simultaneous data missingness and noise interference. Existing Robust Tensor Completion (RTC) approaches offer potential solutions by separately modeling the actual tensor data and noise. However, their effectiveness is often constrained by the over-relaxation of convex rank surrogates and the suboptimal utilization of local consistency, leading to inadequate model accuracy. To address these limitations, we first introduce the tensor L1-L2 norm, a novel non-convex tensor rank surrogate that functions as an effective low-rank representation tool. Leveraging an advanced feature fusion strategy, we further develop the gradient tensor L1-L2 norm by incorporating the tensor L1-L2 norm in the gradient domain. By integrating the gradient tensor nuclear L1-L2 norm into the RTC framework, we propose the Robust Tensor Completion via Gradient Tensor Nuclear L1-L2 Norm (RTC-GTNLN) model, which not only fully exploits both global low-rankness and local consistency without a trade-off parameter, but also effectively handles the dual degradation challenges of missing data and noise in traffic data. Extensive experiments conducted on multiple real-world traffic datasets demonstrate that the RTC-GTNLN model consistently outperforms existing state-of-the-art methods in complex recovery scenarios involving simultaneous missing values and noise.
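The abstract does not spell out the tensor L1-L2 norm, but the vector L1-L2 penalty, ||s||_1 - ||s||_2 applied to singular values, is a standard non-convex rank surrogate. The sketch below is a minimal illustration under that assumption, evaluating the penalty over the frontal slices of a third-order tensor in the Fourier domain along the third mode (as in the t-SVD framework). The function name, the slice averaging, and the `alpha` weight are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def tensor_l1_l2_norm(T, alpha=1.0):
    """Illustrative tensor L1-L2 surrogate: for each frontal slice of the
    tensor in the Fourier domain along mode 3, accumulate
    ||sigma||_1 - alpha * ||sigma||_2 over the slice's singular values."""
    That = np.fft.fft(T, axis=2)  # transform along the third mode (t-SVD style)
    total = 0.0
    for k in range(T.shape[2]):
        s = np.linalg.svd(That[:, :, k], compute_uv=False)
        total += s.sum() - alpha * np.sqrt((s ** 2).sum())  # L1 minus L2
    return total / T.shape[2]  # average over slices (one possible convention)

# A (tubal-)rank-1 tensor scores (near) zero; a noisy full-rank one does not.
rng = np.random.default_rng(0)
u = rng.standard_normal((20, 1))
v = rng.standard_normal((20, 1))
low_rank = np.repeat((u @ v.T)[:, :, None], 5, axis=2)  # same rank-1 slice, repeated
noisy = rng.standard_normal((20, 20, 5))
print(tensor_l1_l2_norm(low_rank) < tensor_l1_l2_norm(noisy))  # True
```

Intuition for the example: a rank-1 matrix has one nonzero singular value, so its L1 and L2 norms of the singular-value vector coincide and the penalty vanishes, while a full-rank noisy slice has many comparable singular values and a strictly positive penalty.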
Related papers
- Latent Factorization of Tensors with Threshold Distance Weighted Loss for Traffic Data Estimation [4.079031335530995]
In real-world traffic data collection processes, issues such as communication failures often lead to incomplete or corrupted datasets. The Latent Factorization of Tensors (LFT) model has emerged as a widely adopted and effective solution. This paper proposes a threshold distance weighted (TDW) loss-based latent factorization of tensors (TDFTWL) model. The proposed TDFTWL model consistently outperforms state-of-the-art approaches in terms of both accuracy and computational efficiency.
arXiv Detail & Related papers (2025-06-11T05:36:13Z) - Restoration Score Distillation: From Corrupted Diffusion Pretraining to One-Step High-Quality Generation [82.39763984380625]
We propose Restoration Score Distillation (RSD), a principled generalization of Denoising Score Distillation (DSD). RSD accommodates a broader range of corruption types, such as blurred, incomplete, or low-resolution images. It consistently surpasses its teacher model across diverse restoration tasks on both natural and scientific datasets.
arXiv Detail & Related papers (2025-05-19T17:21:03Z) - Beyond the convexity assumption: Realistic tabular data generation under quantifier-free real linear constraints [4.956977275061968]
The Disjunctive Refinement Layer (DRL) is a layer designed to enforce alignment of generated data with background knowledge specified in user-defined constraints. DRL is the first method able to automatically make deep learning models inherently compliant with constraints as expressive as quantifier-free linear formulas.
arXiv Detail & Related papers (2025-02-25T14:20:05Z) - ResFlow: Fine-tuning Residual Optical Flow for Event-based High Temporal Resolution Motion Estimation [50.80115710105251]
Event cameras hold significant promise for high-temporal-resolution (HTR) motion estimation. We propose a residual-based paradigm for estimating HTR optical flow with event data.
arXiv Detail & Related papers (2024-12-12T09:35:47Z) - On the Wasserstein Convergence and Straightness of Rectified Flow [54.580605276017096]
Rectified Flow (RF) is a generative model that aims to learn straight flow trajectories from noise to data. We provide a theoretical analysis of the Wasserstein distance between the sampling distribution of RF and the target distribution. We present general conditions guaranteeing uniqueness and straightness of 1-RF, which is in line with previous empirical findings.
arXiv Detail & Related papers (2024-10-19T02:36:11Z) - A PID-Controlled Non-Negative Tensor Factorization Model for Analyzing Missing Data in NILM [0.0]
Non-Intrusive Load Monitoring (NILM) has become an essential tool in smart grid and energy management.
Traditional imputation methods, such as linear and matrix factorization, struggle with nonlinear relationships and are sensitive to sparse data.
This paper proposes a Proportional-Integral-Derivative (PID) Non-Negative Latent Factorization of tensor (PNLF) model, which dynamically adjusts parameter gradients to improve convergence, stability, and accuracy.
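The PID idea in the summary above is to treat the current gradient as the controller's error signal and blend its proportional, integral (accumulated), and derivative (change) components into the update. The summary gives no concrete update rule, so the following is a hypothetical minimal sketch of a PID-adjusted gradient step on a toy quadratic; the coefficients `kp`, `ki`, `kd`, the learning rate, and the objective are illustrative assumptions, not the PNLF model itself.

```python
import numpy as np

def pid_step(theta, grad, state, kp=1.0, ki=0.05, kd=0.1, lr=0.1):
    """One PID-controlled gradient update: the gradient plays the role of
    the error signal in a discrete PID controller."""
    state["integral"] += grad                # accumulated past gradients (I term)
    derivative = grad - state["prev"]        # change since the last step (D term)
    state["prev"] = grad
    adjusted = kp * grad + ki * state["integral"] + kd * derivative
    return theta - lr * adjusted, state

# Toy demo: minimize f(theta) = ||theta - target||^2 with PID-adjusted steps.
target = np.array([1.0, -2.0])
theta = np.zeros(2)
state = {"integral": np.zeros(2), "prev": np.zeros(2)}
for _ in range(200):
    grad = 2.0 * (theta - target)            # exact gradient of the toy objective
    theta, state = pid_step(theta, grad, state)
print(np.allclose(theta, target, atol=1e-3))  # True
```

The design intuition is that the integral term smooths out oscillating gradients while the derivative term reacts to abrupt gradient changes, which is the convergence/stability trade-off the PNLF summary alludes to.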
arXiv Detail & Related papers (2024-03-09T10:01:49Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have known limitations: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - A Parameter-free Nonconvex Low-rank Tensor Completion Model for
Spatiotemporal Traffic Data Recovery [14.084532939272766]
Traffic data chronically suffer from missing values and corruption, leading to accuracy and utility reduction in subsequent Intelligent Transportation System (ITS) applications.
The proposed method outperforms other methods in both missing data imputation and corrupted data recovery.
arXiv Detail & Related papers (2022-09-28T02:29:34Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Noisy Tensor Completion via Low-rank Tensor Ring [41.86521269183527]
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations.
Existing methods often make the explicit or implicit assumption that the observed entries are noise-free to provide a theoretical guarantee of exact recovery of missing entries.
This paper proposes a novel noisy tensor completion model, which complements the incompetence of existing works in handling the degeneration of high-order and noisy observations.
arXiv Detail & Related papers (2022-03-14T14:09:43Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD settings.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.