Cubic-Regularized Newton for Spectral Constrained Matrix Optimization
and its Application to Fairness
- URL: http://arxiv.org/abs/2209.01229v1
- Date: Fri, 2 Sep 2022 18:11:05 GMT
- Title: Cubic-Regularized Newton for Spectral Constrained Matrix Optimization
and its Application to Fairness
- Authors: Casey Garner, Gilad Lerman, Shuzhong Zhang
- Abstract summary: Matrix functions are utilized to rewrite smooth spectral constrained matrix optimization problems as smooth unconstrained problems.
A new convergence analysis is provided for cubic-regularized Newton over matrix vector spaces.
- Score: 9.649070872824957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Matrix functions are utilized to rewrite smooth spectral constrained matrix
optimization problems as smooth unconstrained problems over the set of
symmetric matrices, which are then solved via the cubic-regularized Newton
method. A second-order chain rule identity for matrix functions is proven in
order to compute the higher-order derivatives needed to implement
cubic-regularized Newton, and a new convergence analysis of cubic-regularized
Newton over matrix vector spaces is provided. We demonstrate the applicability
of our approach by conducting
numerical experiments on both synthetic and real datasets. In our experiments,
we formulate a new model for estimating fair and robust covariance matrices in
the spirit of Tyler's M-estimator (TME) model and demonstrate its
advantage.
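To make the solver concrete, below is a minimal sketch of a single cubic-regularized Newton step, in the generic textbook form rather than the authors' implementation. The inputs `grad`, `hess`, and the cubic weight `sigma` are assumed; in the paper's setting they would come from the proven second-order chain rule applied to the matrix-function reformulation, and the so-called hard case of the subproblem is ignored here.

```python
import numpy as np

def crn_step(grad, hess, sigma):
    """One cubic-regularized Newton step:
        s* = argmin_s  grad's + 0.5 s'Hs + (sigma/3)||s||^3.
    The minimizer satisfies (H + sigma*||s||*I) s = -grad with
    H + sigma*||s||*I positive semidefinite, so we bisect on r = ||s||."""
    eigvals, Q = np.linalg.eigh(hess)
    g = Q.T @ grad

    def s_norm(r):
        # ||s(r)|| with s(r) = -(H + sigma*r*I)^{-1} grad, in the eigenbasis
        return np.sqrt(np.sum((g / (eigvals + sigma * r)) ** 2))

    # restrict to r where H + sigma*r*I is positive definite (no hard case)
    r_lo = max(0.0, -eigvals.min() / sigma) + 1e-12
    r_hi = r_lo + 1.0
    while s_norm(r_hi) > r_hi:          # grow until the root is bracketed
        r_hi *= 2.0
    for _ in range(100):                # bisect phi(r) = s_norm(r) - r
        r = 0.5 * (r_lo + r_hi)
        if s_norm(r) > r:
            r_lo = r
        else:
            r_hi = r
    return Q @ (-g / (eigvals + sigma * r_hi))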
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
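As a point of reference for what "truncated eigen-decomposition" means here, a minimal numpy sketch of a rank-r kernel approximation and its entrywise error; this is illustrative only, and the paper's bounds and delocalisation argument are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
sq = (X ** 2).sum(axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel, gamma=1

eigvals, U = np.linalg.eigh(K)       # eigenvalues in ascending order
r = 20
Ur, lam = U[:, -r:], eigvals[-r:]    # keep the r largest eigenpairs
K_r = (Ur * lam) @ Ur.T              # truncated eigendecomposition

print("max entrywise error :", np.abs(K - K_r).max())
print("spectral-norm error :", eigvals[-r - 1])  # best possible in op. norm
```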
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Semi-supervised Symmetric Non-negative Matrix Factorization with Low-Rank Tensor Representation [27.14442336413482]
We study semi-supervised symmetric non-negative matrix factorization (SNMF).
We propose a novel SNMF model by seeking low-rank representation for the tensor synthesized by the pairwise constraint matrix.
We then propose an enhanced SNMF model, making the embedding matrix tailored to the above tensor low-rank representation.
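For background only, a plain symmetric NMF K ≈ HHᵀ via a damped multiplicative update; the semi-supervised pairwise-constraint tensor and its low-rank model from this paper are not reproduced in the sketch.

```python
import numpy as np

def symnmf(K, r, iters=300, beta=0.5, eps=1e-10):
    """Plain symmetric NMF: find H >= 0 with K ~ H @ H.T,
    using a damped multiplicative update."""
    rng = np.random.default_rng(0)
    H = rng.random((K.shape[0], r))
    for _ in range(iters):
        H *= (1.0 - beta) + beta * (K @ H) / (H @ (H.T @ H) + eps)
    return H
```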
arXiv Detail & Related papers (2024-05-04T14:58:47Z)
- On confidence intervals for precision matrices and the eigendecomposition of covariance matrices [20.20416580970697]
This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension.
We derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix.
As an application of these results, we demonstrate a new statistical test, which allows us to test for non-zero values of the precision matrix.
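The object under study is easy to state in code: the sketch below computes the precision matrix and a generic percentile-bootstrap confidence interval for one entry. It illustrates the setting only; the paper's bounds and test statistic are different.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 4
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

def precision(sample):
    # precision matrix = inverse of the sample covariance
    return np.linalg.inv(np.cov(sample, rowvar=False))

boot = np.stack([precision(X[rng.integers(0, n, size=n)])
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% CI for Theta[0,1]:", (lo[0, 1], hi[0, 1]))
# a test for a non-zero entry: reject Theta[i,j] = 0 when the CI excludes 0
```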
arXiv Detail & Related papers (2022-08-25T10:12:53Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
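A crude rendering of that min-max idea under a Frobenius-norm budget, illustrative only and not the paper's algorithm: the inner maximization has the closed form Δ = ε·R/‖R‖_F for the residual R = X − WH, after which the factors take a projected-gradient step on the attacked data.

```python
import numpy as np

def adversarial_nmf(X, r, eps=0.1, iters=500, lr=1e-3):
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    for _ in range(iters):
        R = X - W @ H
        Delta = eps * R / (np.linalg.norm(R) + 1e-12)  # worst-case attack
        E = X + Delta - W @ H           # residual on the attacked data
        W = np.maximum(W + lr * E @ H.T, 0.0)   # projected-gradient updates
        H = np.maximum(H + lr * W.T @ E, 0.0)   # (fixed step, for brevity)
    return W, H
```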
arXiv Detail & Related papers (2021-04-10T13:13:17Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-...
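The two proximal maps behind that convex formulation are compact enough to sketch; these are the standard operators, not the alternating manifold method of this paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau*||.||_* (low-rankness)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding: the prox of tau*||.||_1 (sparsity)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
```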
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation in terms of square-root, low-rank, and sparsity.
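For context, matrix power (p = 1/2) normalization is commonly applied with a few Newton-Schulz iterations, as in the sketch below; this is a standard routine, not MOMN itself.

```python
import numpy as np

def matrix_sqrt_ns(A, iters=8):
    """Newton-Schulz iteration for the square root of an SPD matrix A.
    Trace pre-normalization puts the spectrum of Y in (0, 1], which the
    iteration needs in order to converge."""
    n = A.shape[0]
    t = np.trace(A)
    Y, Z, I = A / t, np.eye(n), np.eye(n)
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * np.sqrt(t)              # undo the pre-normalization
```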
arXiv Detail & Related papers (2020-03-30T08:40:35Z)