Riemannian Complex Matrix Convolution Network for PolSAR Image
Classification
- URL: http://arxiv.org/abs/2312.03378v1
- Date: Wed, 6 Dec 2023 09:33:33 GMT
- Title: Riemannian Complex Matrix Convolution Network for PolSAR Image
Classification
- Authors: Junfei Shi and Wei Wang and Haiyan Jin and Mengmeng Nie and Shanshan
Ji
- Abstract summary: Existing deep learning methods learn PolSAR data by converting the covariance matrix into a feature vector or complex-valued vector as the input.
To learn the geometric structure of the complex matrix, we propose a Riemannian complex matrix convolution network for PolSAR image classification.
Experiments are conducted on three sets of real PolSAR data with different bands and sensors.
- Score: 6.958028708925819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning methods have achieved superior performance for
Polarimetric Synthetic Aperture Radar (PolSAR) image classification. Existing
deep learning methods learn PolSAR data by converting the covariance matrix
into a feature vector or complex-valued vector as the input. However, all these
methods cannot learn the structure of the complex matrix directly and destroy
the channel correlations. To learn the geometric structure of the complex matrix, we propose
a Riemannian complex matrix convolution network for PolSAR image classification
in Riemannian space for the first time, which directly utilizes the complex
matrix as the network input and defines the Riemannian operations to learn
complex matrix's features. The proposed Riemannian complex matrix convolution
network considers PolSAR complex matrix endowed in Riemannian manifold, and
defines a series of new Riemannian convolution, ReLU and LogEig operations in
Riemannian space, which breaks through the Euclidean constraint of conventional
networks. Then, a CNN module is appended to enhance contextual Riemannian
features. Besides, a fast kernel learning method is developed for the proposed
method to learn class-specific features and reduce the computation time
effectively. Experiments are conducted on three sets of real PolSAR data with
different bands and sensors. Experimental results demonstrate that the proposed
method achieves superior performance over state-of-the-art methods.
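The abstract names Riemannian convolution, ReLU and LogEig operations but gives no implementation details. As a hedged sketch only, the eigenvalue-based matrix logarithm commonly used as a LogEig layer in SPD/HPD networks maps a Hermitian positive definite covariance matrix into a flat (Euclidean) tangent space; the function name `logeig` and the example matrix below are illustrative, not taken from the paper:

```python
import numpy as np

def logeig(C, eps=1e-6):
    """LogEig-style mapping for a Hermitian positive definite matrix:
    log(C) = U diag(log s) U^H from the eigen-decomposition C = U diag(s) U^H.
    A small floor eps keeps eigenvalues strictly positive before the log."""
    s, U = np.linalg.eigh(C)              # s real, U unitary for Hermitian C
    s = np.maximum(s, eps)
    return (U * np.log(s)) @ U.conj().T   # scales column j of U by log(s_j)

# A 3x3 PolSAR-style Hermitian positive definite covariance: C = A A^H + I
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
C = A @ A.conj().T + np.eye(3)

L = logeig(C)
print(np.allclose(L, L.conj().T))   # True: the log of an HPD matrix is Hermitian
```

After this mapping, standard Euclidean layers (the CNN module mentioned above) can operate on the flattened output without violating the manifold geometry.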
Related papers
- Riemannian Complex Hermitian Positive Definite Convolution Network for Polarimetric SAR Image Classification [42.353289630062555]
Deep learning can learn semantic features in Euclidean space effectively for PolSAR images.
However, these methods need to convert the complex covariance matrix into a feature vector or complex-valued vector as the network input.
We propose a complex HPD unfolding network and a CV-3DCNN enhanced network to learn complex HPD matrices directly.
arXiv Detail & Related papers (2025-02-12T05:41:25Z)
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
This paper provides a comprehensive and unified understanding of the matrix logarithm and power from a Riemannian geometry perspective.
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
- FORML: A Riemannian Hessian-free Method for Meta-learning on Stiefel Manifolds [4.757859522106933]
This paper introduces a Hessian-free approach that uses a first-order approximation of derivatives on the Stiefel manifold.
Our method significantly reduces the computational load and memory footprint.
arXiv Detail & Related papers (2024-02-28T10:57:30Z)
- Recovering Simultaneously Structured Data via Non-Convex Iteratively Reweighted Least Squares [0.8702432681310401]
We propose a new algorithm for recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations.
We show that the IRLS method performs favorably in identifying such simultaneously structured data from linear observations.
arXiv Detail & Related papers (2023-06-08T06:35:47Z)
- Adaptive Log-Euclidean Metrics for SPD Matrix Learning [73.12655932115881]
We propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM)
The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks.
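For context on the Log-Euclidean Metric (LEM) that ALEMs extend: it measures the distance between two SPD matrices as the Frobenius norm of the difference of their matrix logarithms. A minimal sketch, with the function name `lem_distance` and the example matrices chosen for illustration:

```python
import numpy as np
from scipy.linalg import logm

def lem_distance(A, B):
    """Log-Euclidean distance between two SPD matrices:
    d(A, B) = || log(A) - log(B) ||_F."""
    return np.linalg.norm(logm(A) - logm(B), 'fro')

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 3.0]])
print(lem_distance(A, A))        # identical matrices -> 0.0
print(lem_distance(A, B) > 0)    # distinct SPD matrices -> True
```

Because the log maps SPD matrices to a vector space, the LEM turns means and distances on the manifold into ordinary Euclidean computations on the log-domain matrices.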
arXiv Detail & Related papers (2023-03-26T18:31:52Z)
- Classification of BCI-EEG based on augmented covariance matrix [0.0]
We propose a new framework based on the augmented covariance extracted from an autoregressive model to improve motor imagery classification.
We will test our approach on several datasets and several subjects using the MOABB framework.
arXiv Detail & Related papers (2023-02-09T09:04:25Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less well understood due to the lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
arXiv Detail & Related papers (2022-06-14T12:30:11Z)
- Riemannian Nearest-Regularized Subspace Classification for Polarimetric SAR images [0.0]
The proposed method can outperform state-of-the-art algorithms even when using fewer features.
A new Tikhonov regularization term is designed to reduce the differences within the same class.
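The abstract does not specify the new class-specific regularization term; as background only, the classical Tikhonov-regularized least-squares solution that such terms modify can be sketched as follows (all names and values illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, lam=0.1):
    """Classical Tikhonov-regularized least squares:
    x = argmin ||A x - b||^2 + lam ||x||^2
      = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 1.0], [1.0, 1.001]])   # nearly singular design matrix
b = np.array([2.0, 2.0])
x = tikhonov_solve(A, b, lam=0.01)
print(x)   # a stable solution despite the ill-conditioned A
```

The regularizer trades a small bias for a large reduction in variance, which is why class-specific variants can pull representations within a class closer together.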
arXiv Detail & Related papers (2022-01-02T11:21:59Z)
- FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs [53.710405006523274]
This work focuses on the representation learning question: how can we learn such features?
Under the assumption that the underlying (unknown) dynamics correspond to a low rank transition matrix, we show how the representation learning question is related to a particular non-linear matrix decomposition problem.
We develop FLAMBE, which engages in exploration and representation learning for provably efficient RL in low rank transition models.
arXiv Detail & Related papers (2020-06-18T19:11:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.