Auto Tensor Singular Value Thresholding: A Non-Iterative and Rank-Free Framework for Tensor Denoising
- URL: http://arxiv.org/abs/2505.06203v1
- Date: Fri, 09 May 2025 17:30:16 GMT
- Title: Auto Tensor Singular Value Thresholding: A Non-Iterative and Rank-Free Framework for Tensor Denoising
- Authors: Hiroki Hasegawa, Yukihiko Okada
- Abstract summary: Rising dimensionality of real-world datasets limits conventional matrix-based methods in preserving data structure and accuracy. We propose a novel low-rank approximation method for tensor data that avoids these limitations. Our approach applies statistically grounded singular value thresholding to mode-wise matricizations, enabling automatic extraction of significant components.
- Score: 2.002741592555996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In modern data-driven tasks such as classification, optimization, and forecasting, mitigating the effects of intrinsic noise is crucial for improving predictive accuracy. While numerous denoising techniques have been developed, the rising dimensionality of real-world datasets limits conventional matrix-based methods in preserving data structure and accuracy. This challenge has led to increasing interest in tensor-based approaches, which naturally capture multi-way data relationships. However, classical tensor decomposition methods (e.g., HOSVD, HOOI) typically require pre-specified ranks and iterative optimization, making them computationally expensive and less practical. In this work, we propose a novel low-rank approximation method for tensor data that avoids these limitations. Our approach applies statistically grounded singular value thresholding to mode-wise matricizations, enabling automatic extraction of significant components without requiring prior rank specification or iterative refinement. Experiments on synthetic and real-world tensors show that our method consistently outperforms existing techniques in terms of estimation accuracy and computational efficiency, especially in noisy high-dimensional settings.
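To make the procedure concrete, the following is a minimal NumPy sketch of mode-wise singular value thresholding, under stated assumptions: it uses the Gavish-Donoho optimal hard threshold (unknown noise level) as the statistically grounded rule and averages the mode-wise estimates, whereas the paper's exact threshold and aggregation may differ; all function names are illustrative, not from the paper.

```python
import numpy as np

def unfold(T, mode):
    # Mode-wise matricization: move `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of unfold for the same tensor shape.
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def svt_mode(T, mode):
    # SVD of one matricization, then hard-threshold the singular values.
    X = unfold(T, mode)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    beta = min(X.shape) / max(X.shape)  # aspect ratio of the unfolding
    # Gavish-Donoho approximation of the optimal hard threshold when the
    # noise level is unknown (an assumption; the paper's rule may differ).
    omega = 0.56 * beta**3 - 0.95 * beta**2 + 1.82 * beta + 1.43
    tau = omega * np.median(s)
    s = np.where(s > tau, s, 0.0)  # keep only significant components
    return fold((U * s) @ Vt, mode, T.shape)

def auto_tensor_svt(T):
    # One plausible aggregation: average the mode-wise estimates.
    return np.mean([svt_mode(T, m) for m in range(T.ndim)], axis=0)

# Usage on a synthetic noisy low-rank 3-way tensor.
rng = np.random.default_rng(0)
core = rng.standard_normal((3, 3, 3))
A, B, C = (rng.standard_normal((20, 3)) for _ in range(3))
clean = np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
print(np.linalg.norm(auto_tensor_svt(noisy) - clean) / np.linalg.norm(clean))
```

Note that no rank is supplied and nothing iterates: each mode needs a single SVD, and the threshold is derived from the data itself, which is what makes such a framework non-iterative and rank-free.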
Related papers
- Outlier-aware Tensor Robust Principal Component Analysis with Self-guided Data Augmentation [21.981038455329013]
We propose a self-guided data augmentation approach that employs adaptive weighting to suppress outlier influence. We show improvements in both accuracy and computational efficiency compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-04-25T13:03:35Z)
- Triply Laplacian Scale Mixture Modeling for Seismic Data Noise Suppression [51.87076090814921]
Sparsity-based tensor recovery methods have shown great potential in suppressing seismic data noise. We propose a novel triply Laplacian scale mixture (TLSM) approach for seismic data noise suppression.
arXiv Detail & Related papers (2025-02-20T08:28:01Z)
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private stochastic convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications. Current methods, such as those based on differentially private stochastic gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility. We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
arXiv Detail & Related papers (2025-02-13T02:05:45Z)
- Beyond Fixed Horizons: A Theoretical Framework for Adaptive Denoising Diffusions [1.9116784879310031]
We introduce a new class of generative diffusion models that achieve a time-homogeneous structure for both the noising and denoising processes. A key feature of the model is its adaptability to the target data, enabling a variety of downstream tasks using a pre-trained unconditional generative model.
arXiv Detail & Related papers (2025-01-31T18:23:27Z)
- Privacy without Noisy Gradients: Slicing Mechanism for Generative Model Training [10.229653770070202]
Training generative models with differential privacy (DP) typically involves injecting noise into gradient updates or adapting the discriminator's training procedure.
We consider the slicing privacy mechanism that injects noise into random low-dimensional projections of the private data; a minimal sketch of this idea appears after the list below.
We present a kernel-based estimator for this divergence, circumventing the need for adversarial training.
arXiv Detail & Related papers (2024-10-25T19:32:58Z)
- Impact of Noisy Supervision in Foundation Model Learning [91.56591923244943]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets. We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution [62.71425232332837]
We show that training amortized models with noisy labels is inexpensive and surprisingly effective.
This approach significantly accelerates several feature attribution and data valuation methods, often yielding an order of magnitude speedup over existing approaches.
arXiv Detail & Related papers (2024-01-29T03:42:37Z)
- Fine-tuning Pre-trained Models for Robustness Under Noisy Labels [34.68018860186995]
The presence of noisy labels in a training dataset can significantly impact the performance of machine learning models.
We introduce a novel algorithm called TURN, which robustly and efficiently transfers the prior knowledge of pre-trained models.
arXiv Detail & Related papers (2023-10-24T20:28:59Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers. Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
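For concreteness, here is a minimal sketch of the slicing idea from the "Privacy without Noisy Gradients" entry above: noise is injected into random low-dimensional projections of the private data rather than into gradients. The projection scheme and the noise scale `sigma` are illustrative assumptions; calibrating `sigma` to a concrete privacy budget is what the paper's mechanism specifies and is omitted here.

```python
import numpy as np

def noisy_slices(X, n_slices, sigma, rng):
    # Project the private data onto random unit directions ("slices")
    # and release the projections with additive Gaussian noise.
    W = rng.standard_normal((X.shape[1], n_slices))
    W /= np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm directions
    S = X @ W  # each column is a one-dimensional projection of the data
    return S + sigma * rng.standard_normal(S.shape), W

rng = np.random.default_rng(0)
private = rng.standard_normal((1000, 32))  # stand-in for private data
released, directions = noisy_slices(private, n_slices=64, sigma=1.0, rng=rng)
print(released.shape)  # (1000, 64): only noisy projections are released
```

A generative model could then be trained to match these noisy projections, e.g. via the kernel-based divergence estimator the entry describes, without adversarial training on the raw data.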
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.