Riemannian Nearest-Regularized Subspace Classification for Polarimetric
SAR images
- URL: http://arxiv.org/abs/2201.00337v1
- Date: Sun, 2 Jan 2022 11:21:59 GMT
- Title: Riemannian Nearest-Regularized Subspace Classification for Polarimetric
SAR images
- Authors: Junfei Shi, Haiyan Jin
- Abstract summary: The proposed method can outperform state-of-the-art algorithms even when using fewer features.
A new Tikhonov regularization term is designed to reduce the differences within the same class.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a representation learning method, the nearest regularized
subspace (NRS) algorithm is an effective tool for obtaining both accuracy
and speed in PolSAR image classification. However, existing NRS methods
take the polarimetric feature vector rather than the original PolSAR
covariance matrix (a Hermitian positive definite (HPD) matrix) as the
input. Without considering the matrix structure, existing NRS-based
methods cannot learn the correlation among channels. How to utilize the
original covariance matrix in the NRS method is a key problem. To address
this limitation, a Riemannian NRS method is proposed, which treats the
HPD matrices as points in a Riemannian space. Firstly, to utilize the
original PolSAR data, a Riemannian NRS method (RNRS) is proposed by
constructing an HPD dictionary and an HPD distance metric. Secondly, a
new Tikhonov regularization term is designed to reduce the differences
within the same class. Finally, the optimization method is developed and
the first-order derivative is derived. In the experiments, only the
coherency matrix T is used in the proposed method, while multiple
features are utilized by the compared methods. Experimental results
demonstrate that the proposed method can outperform state-of-the-art
algorithms even when using fewer features.
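To make the pipeline concrete, here is a minimal, hypothetical sketch of
an NRS-style classifier on HPD inputs. It is not the authors'
implementation: it uses a log-Euclidean surrogate (matrix logarithm plus
Euclidean NRS) in place of the paper's Riemannian formulation, and the
function names and the `lam` parameter are illustrative assumptions.

```python
# Hypothetical log-Euclidean surrogate for Riemannian NRS; illustrative only.
import numpy as np
from scipy.linalg import logm

def hpd_to_vector(T):
    """Map an HPD matrix into a Euclidean space via the matrix logarithm."""
    L = logm(T)
    # Stack real and imaginary parts so ordinary least squares applies.
    return np.concatenate([L.real.ravel(), L.imag.ravel()])

def nrs_residual(x, D, lam):
    """Reconstruction residual of x under dictionary D with a Tikhonov term.

    Solves min_a ||x - D a||^2 + lam * ||G a||^2, where the diagonal of G
    holds the distances from x to each atom (the distance-weighted
    Tikhonov matrix used by classical NRS).
    """
    dists = np.linalg.norm(D - x[:, None], axis=0)  # distance of x to each atom
    G = np.diag(dists)
    a = np.linalg.solve(D.T @ D + lam * G.T @ G, D.T @ x)
    return np.linalg.norm(x - D @ a)

def classify(T_test, class_dicts, lam=0.1):
    """Assign the class whose regularized subspace best reconstructs T_test."""
    x = hpd_to_vector(T_test)
    residuals = {c: nrs_residual(x, D, lam) for c, D in class_dicts.items()}
    return min(residuals, key=residuals.get)
```

Here each value of `class_dicts` is a matrix whose columns are log-mapped
HPD training samples of one class, e.g.
`classify(T_test, {"urban": D_urban, "water": D_water})`.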
Related papers
- Regularized Linear Discriminant Analysis Using a Nonlinear Covariance
Matrix Estimator [11.887333567383239]
Linear discriminant analysis (LDA) is a widely used technique for data classification.
LDA becomes inefficient when the data covariance matrix is ill-conditioned.
Regularized LDA methods have been proposed to cope with such a situation (a minimal shrinkage-style sketch follows this entry).
arXiv Detail & Related papers (2024-01-31T11:37:14Z)
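Since the entry above only names regularized LDA, a hedged sketch of the
standard shrinkage variant may help; note the paper proposes a nonlinear
covariance estimator, whereas this stand-in uses the usual linear
shrinkage target, and `gamma` is an illustrative assumption.

```python
# Shrinkage-regularized LDA; a generic stand-in, not the paper's estimator.
import numpy as np

def regularized_lda_fit(X, y, gamma=0.1):
    """Fit LDA with Sigma_reg = (1-gamma)*Sigma + gamma*(tr(Sigma)/p)*I."""
    classes = np.unique(y)
    p = X.shape[1]
    means = {c: X[y == c].mean(axis=0) for c in classes}
    centered = np.vstack([X[y == c] - means[c] for c in classes])
    sigma = centered.T @ centered / (len(X) - len(classes))  # pooled covariance
    sigma_reg = (1 - gamma) * sigma + gamma * (np.trace(sigma) / p) * np.eye(p)
    prec = np.linalg.inv(sigma_reg)
    priors = {c: np.mean(y == c) for c in classes}
    return means, prec, priors

def regularized_lda_predict(X, means, prec, priors):
    """Pick the class with the highest linear discriminant score per row."""
    scores = np.column_stack([
        X @ prec @ m - 0.5 * m @ prec @ m + np.log(priors[c])
        for c, m in means.items()
    ])
    return np.array(list(means))[np.argmax(scores, axis=1)]
```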
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Decentralized Riemannian natural gradient methods with Kronecker-product approximations [11.263837420265594]
We present an efficient decentralized natural gradient descent (DRNGD) method for solving decentralized manifold optimization problems.
By performing the communications over the Kronecker factors, a high-quality approximation of the RFIM can be obtained at low cost.
arXiv Detail & Related papers (2023-03-16T19:36:31Z)
- DRSOM: A Dimension Reduced Second-Order Method [13.778619250890406]
Under a trust-like framework, our method preserves the convergence of the second-order method while using only information in a few directions.
Theoretically, we show that the method has local convergence and a global convergence rate of $O(\epsilon^{-3/2})$ to satisfy the first-order and second-order conditions.
arXiv Detail & Related papers (2022-07-30T13:05:01Z)
- Log-based Sparse Nonnegative Matrix Factorization for Data Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the proposed method.
arXiv Detail & Related papers (2022-04-22T11:38:10Z)
- A Novel Fast Exact Subproblem Solver for Stochastic Quasi-Newton Cubic Regularized Optimization [0.38233569758620045]
We describe an Adaptive Regularization using Cubics (ARC) method for large-scale unconstrained optimization.
We find that our new approach, ARCLQN, compares to modern optimizers with minimal tuning, a common pain point for second-order methods.
arXiv Detail & Related papers (2022-04-19T20:25:29Z)
- Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT); a minimal sketch-and-solve illustration follows this entry.
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
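As a rough illustration of the sketch-and-solve idea mentioned above (not
the paper's effective-dimension-adaptive algorithm), the snippet below
compresses an L2-regularized least-squares problem with a Gaussian
embedding before solving; the sketch size `m` and the function name are
assumptions.

```python
# Generic sketch-and-solve for ridge regression; illustrative only.
import numpy as np

def sketched_ridge(A, b, m, lam=1.0, seed=0):
    """Approximately solve min_x ||Ax - b||^2 + lam*||x||^2 via sketching."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian embedding, m << n
    SA, Sb = S @ A, S @ b
    # Regularized normal equations on the compressed problem.
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```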
- A Block Coordinate Descent-based Projected Gradient Algorithm for Orthogonal Non-negative Matrix Factorization [0.0]
This article utilizes the projected gradient (PG) method for the non-negative matrix factorization (NMF) problem.
We penalise the orthonormality constraints and apply the PG method via a block coordinate descent approach (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-03-23T13:24:43Z)
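The following minimal sketch is one plausible reading of the penalized
projected-gradient scheme described above, assuming a fixed step size and
a quadratic orthonormality penalty; it is illustrative, not the authors'
code.

```python
# Orthogonal NMF via penalized projected gradient; illustrative reading.
import numpy as np

def onmf_pg(X, r, rho=1.0, step=1e-3, iters=500, seed=0):
    """min_{W,H>=0} ||X - WH||_F^2 + rho*||W^T W - I||_F^2, block-wise PG."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W, H = rng.random((n, r)), rng.random((r, p))
    for _ in range(iters):
        grad_W = -2 * (X - W @ H) @ H.T + 4 * rho * W @ (W.T @ W - np.eye(r))
        W = np.maximum(W - step * grad_W, 0.0)  # gradient step, then project >= 0
        grad_H = -2 * W.T @ (X - W @ H)
        H = np.maximum(H - step * grad_H, 0.0)
    return W, H
```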
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all summaries) and is not responsible for any consequences of its use.