Robust Tensor Decomposition for Image Representation Based on
Generalized Correntropy
- URL: http://arxiv.org/abs/2005.04605v1
- Date: Sun, 10 May 2020 08:46:52 GMT
- Title: Robust Tensor Decomposition for Image Representation Based on
Generalized Correntropy
- Authors: Miaohua Zhang, Yongsheng Gao, Changming Sun, Michael Blumenstein
- Abstract summary: We propose a new robust tensor decomposition method using the generalized correntropy criterion (Corr-Tensor).
A Lagrange multiplier method is used to effectively optimize the generalized correntropy objective function in an iterative manner.
Experimental results demonstrate that the proposed method significantly reduces the reconstruction error in face reconstruction and improves accuracy on handwritten digit recognition and facial image clustering.
- Score: 37.968665739578185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional tensor decomposition methods, e.g., two-dimensional
principal component analysis and two-dimensional singular value decomposition,
minimize mean square error and are therefore sensitive to outliers. To
overcome this problem, in this paper we propose a new robust tensor
decomposition method using the generalized correntropy criterion
(Corr-Tensor). A Lagrange multiplier method is used to effectively optimize
the generalized correntropy objective function in an iterative manner.
Corr-Tensor effectively improves the robustness of tensor decomposition in the
presence of outliers without introducing any extra computational cost.
Experimental results demonstrate that the proposed method significantly
reduces the reconstruction error in face reconstruction and improves accuracy
on handwritten digit recognition and facial image clustering.
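
Correntropy replaces the squared-error loss with a bounded, kernel-induced similarity, so gross outliers contribute exponentially little to the objective. The paper optimizes this with a Lagrange multiplier scheme; the snippet below is only a minimal illustrative sketch of the same idea as an iteratively reweighted 2D-PCA, where the generalized Gaussian kernel exp(-(|e|/sigma)^alpha) supplies per-image weights. The 2D-PCA backbone and the parameters alpha, sigma, and n_iter are assumptions, not the authors' algorithm.

```python
import numpy as np

def corr_weights(residuals, alpha=2.0, sigma=1.0):
    # Generalized correntropy kernel exp(-|e/sigma|^alpha) as a per-sample
    # weight: large residuals (outliers) receive exponentially small influence.
    return np.exp(-(np.abs(residuals) / sigma) ** alpha)

def robust_2dpca(X, k, alpha=2.0, sigma=1.0, n_iter=20):
    """Iteratively reweighted 2D-PCA sketch (illustrative, not Corr-Tensor).
    X: (n_samples, h, w) stack of images; k: number of projection directions."""
    n = X.shape[0]
    w = np.ones(n)                              # per-image weights
    for _ in range(n_iter):
        mean = np.einsum('n,nhw->hw', w, X) / w.sum()
        C = sum(wi * (xi - mean).T @ (xi - mean) for wi, xi in zip(w, X))
        _, V = np.linalg.eigh(C)                # eigenvectors, ascending order
        U = V[:, -k:]                           # top-k projection basis (w x k)
        recon = (X - mean) @ U @ U.T + mean     # rank-k reconstruction
        errs = np.linalg.norm(X - recon, axis=(1, 2))
        w = corr_weights(errs, alpha, sigma)    # re-weight by correntropy kernel
    return U, w
```

Setting alpha = 2 recovers the standard Gaussian correntropy; other values of alpha control how sharply large residuals are down-weighted.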
Related papers
- Enabling Tensor Decomposition for Time-Series Classification via A Simple Pseudo-Laplacian Contrast [26.28414569796961]
We propose a novel Pseudo Laplacian Contrast (PLC) tensor decomposition framework.
It integrates data augmentation and a cross-view Laplacian to enable the extraction of class-aware representations.
Experiments on various datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-23T16:48:13Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point computation within the Deep Equilibrium framework (a toy fixed-point iteration is sketched below).
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
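
As a toy illustration of the fixed-point view underlying Deep Equilibrium models (the linear-tanh layer, damping factor, and tolerance below are assumptions, not the paper's solver):

```python
import numpy as np

def deq_fixed_point(f, x, z0, tol=1e-8, max_iter=500):
    """Find z* with z* = f(z*, x) by damped Picard iteration."""
    z = z0
    for _ in range(max_iter):
        z_next = 0.5 * z + 0.5 * f(z, x)   # damping improves stability
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive layer: f(z, x) = tanh(W z + x), with W scaled so its
# spectral norm is (typically) below 1, making the iteration convergent.
rng = np.random.default_rng(0)
W = 0.4 * rng.standard_normal((8, 8)) / np.sqrt(8)
x = rng.standard_normal(8)
z_star = deq_fixed_point(lambda z, x: np.tanh(W @ z + x), x, np.zeros(8))
```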
- Scalable and Robust Tensor Ring Decomposition for Large-scale Data [12.02023514105999]
We propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions.
We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process.
arXiv Detail & Related papers (2023-05-15T22:08:47Z)
- Fast Learnings of Coupled Nonnegative Tensor Decomposition Using Optimal Gradient and Low-rank Approximation [7.265645216663691]
We introduce a novel coupled nonnegative CANDECOMP/PARAFAC decomposition algorithm optimized by the alternating gradient method (CoNCPD-APG).
By integrating low-rank approximation into CoNCPD-APG, the algorithm significantly decreases the computational burden without compromising decomposition quality.
arXiv Detail & Related papers (2023-02-10T08:49:36Z)
- Vector-Valued Least-Squares Regression under Output Regularity Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite-dimensional output.
We derive learning bounds for our method and study the settings in which its statistical performance improves on the full-rank method (a finite-dimensional sketch follows below).
arXiv Detail & Related papers (2022-11-16T15:07:00Z)
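
A hedged, finite-dimensional sketch of reduced-rank least squares: fit ordinary least squares, then project the coefficients onto the leading singular subspace of the fitted values (the classical closed form under an identity output metric). The paper's infinite-dimensional setting and learning bounds are not reproduced here.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained least squares: min ||Y - X B|| s.t. rank(B) <= rank."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # full-rank OLS solution
    fitted = X @ B_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]        # projector onto top-r output directions
    return B_ols @ P                   # reduced-rank coefficient matrix

# Usage: X (n x d), Y (n x p) with a low-rank signal in the outputs.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
B_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 6))  # rank 2
Y = X @ B_true + 0.1 * rng.standard_normal((200, 6))
B_rr = reduced_rank_regression(X, Y, rank=2)
```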
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms (a simplified matrix-case sketch follows below).
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
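
A simplified matrix-case sketch of RPCA via scaled gradient descent, under assumptions not taken from the paper: a global quantile-based sparsification step, a fixed step size, and a known corruption fraction. The paper's tensor algorithm and its exact sparsification rule differ.

```python
import numpy as np

def sparsify(M, frac):
    """Keep the largest |entries| (fraction `frac`), zero the rest."""
    thresh = np.quantile(np.abs(M), 1.0 - frac)
    return M * (np.abs(M) >= thresh)

def rpca_scaled_gd(Y, rank, alpha=0.1, eta=0.5, n_iter=100):
    """Y ≈ U V^T + S with S sparse. Preconditioning the gradient steps by
    (V^T V)^{-1} and (U^T U)^{-1} makes the step size insensitive to the
    conditioning of the low-rank factors."""
    S = sparsify(Y, alpha)                      # crude initial sparse estimate
    U0, s0, V0t = np.linalg.svd(Y - S, full_matrices=False)
    U = U0[:, :rank] * np.sqrt(s0[:rank])       # balanced spectral init
    V = V0t[:rank].T * np.sqrt(s0[:rank])
    for _ in range(n_iter):
        S = sparsify(Y - U @ V.T, alpha)        # update sparse component
        R = U @ V.T + S - Y                     # residual of the model fit
        U_new = U - eta * R @ V @ np.linalg.inv(V.T @ V)
        V_new = V - eta * R.T @ U @ np.linalg.inv(U.T @ U)
        U, V = U_new, V_new
    return U @ V.T, S
```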
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation that provides a high-fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of a learned block iterative shrinkage thresholding algorithm that learns the choice of regularization parameters (the plain, non-learned iteration is sketched below).
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
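
For reference, the plain (non-learned) block ISTA iteration that the paper unfolds into a network: a gradient step on the data-fidelity term followed by block soft-thresholding. The group structure, lam, and step-size choice below are illustrative assumptions.

```python
import numpy as np

def block_soft_threshold(x, groups, t):
    """Block (group) soft-thresholding: shrink each group's l2 norm by t."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

def block_ista(A, y, groups, lam, n_iter=200):
    """Classical block ISTA for min_x 0.5*||Ax - y||^2 + lam * sum_g ||x_g||_2."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = block_soft_threshold(x - grad / L, groups, lam / L)
    return x

# Usage: 4 blocks of 5 coefficients; only one block is active.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 20))
groups = [slice(i, i + 5) for i in range(0, 20, 5)]
x_true = np.zeros(20)
x_true[5:10] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = block_ista(A, y, groups, lam=0.5)
```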
- Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network [19.717842489217684]
This paper is the first study on degeneracy in the tensor decomposition of convolutional kernels.
We present a novel method, which can stabilize the low-rank approximation of convolutional kernels and ensure efficient compression.
We evaluate our approach on popular CNN architectures for image classification and show that our method yields much lower accuracy degradation and consistent performance (a generic SVD-based compression sketch follows below).
arXiv Detail & Related papers (2020-08-12T17:10:12Z)
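
A generic truncated-SVD sketch of conv-kernel compression. The factorization shape is an assumption for illustration; the paper's actual contribution, stabilizing the decomposition against degeneracy, is not implemented here.

```python
import numpy as np

def compress_conv_kernel(W, rank):
    """Compress a conv kernel W of shape (c_out, c_in, kh, kw) into two
    factors: a rank-filter 'spatial' conv followed by a 1x1 conv, whose
    composition approximates the original layer."""
    c_out, c_in, kh, kw = W.shape
    M = W.reshape(c_out, c_in * kh * kw)           # mode-1 unfolding
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]                   # (c_out, rank)
    first = Vt[:rank].reshape(rank, c_in, kh, kw)  # spatial conv, rank filters
    second = U_r.reshape(c_out, rank, 1, 1)        # 1x1 conv mixing channels
    return first, second

# Parameter count drops from c_out*c_in*kh*kw to rank*(c_in*kh*kw + c_out).
W = np.random.default_rng(3).standard_normal((64, 32, 3, 3))
first, second = compress_conv_kernel(W, rank=16)
```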
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)