Spatial-Geometry Enhanced 3D Dynamic Snake Convolutional Neural Network for Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2504.04463v1
- Date: Sun, 06 Apr 2025 12:21:39 GMT
- Title: Spatial-Geometry Enhanced 3D Dynamic Snake Convolutional Neural Network for Hyperspectral Image Classification
- Authors: Guandong Li, Mengxia Ye
- Abstract summary: Deep neural networks face several challenges in hyperspectral image classification. These include complex and sparse ground object distributions, small clustered structures, and elongated multi-branch features. This paper proposes a Spatial-Geometry Enhanced 3D Dynamic Snake Network (SG-DSCNet) based on an improved 3D-DenseNet model.
- Score: 12.168520751389622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks face several challenges in hyperspectral image classification, including complex and sparse ground object distributions, small clustered structures, and elongated multi-branch features that often lead to missing detections. To better adapt to ground object distributions and achieve adaptive dynamic feature responses while skipping redundant information, this paper proposes a Spatial-Geometry Enhanced 3D Dynamic Snake Network (SG-DSCNet) based on an improved 3D-DenseNet model. The network employs Dynamic Snake Convolution (DSCConv), which introduces deformable offsets to enhance kernel flexibility through constrained self-learning, thereby improving regional perception of ground objects. Additionally, we propose a multi-view feature fusion strategy that generates multiple morphological kernel templates from DSCConv to observe target structures from different perspectives and achieve efficient feature fusion through summarizing key characteristics. This dynamic approach enables the model to focus more flexibly on critical spatial structures when processing different regions, rather than relying on fixed receptive fields of single static kernels. The DSC module enhances model representation capability through dynamic kernel aggregation without increasing network depth or width. Experimental results demonstrate superior performance on the IN, UP, and KSC datasets, outperforming mainstream hyperspectral classification methods.
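The core operation can be pictured as a convolution whose sampling path bends to follow elongated ground-object structures. Below is a minimal, illustrative PyTorch sketch of a snake-style convolution along one spatial axis: per-tap offsets are predicted from the input, bounded by tanh, and accumulated so the kernel deforms continuously rather than jumping freely. This is a simplified assumption-based sketch (2D spatial slices, a single axis, illustrative names such as SnakeConv and offset_net), not the paper's 3D DSCConv implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SnakeConv(nn.Module):
    """Convolve along the x-axis with cumulatively constrained y-offsets,
    so the kernel bends like a snake instead of staying on a straight line."""

    def __init__(self, in_ch, out_ch, kernel_size=9, max_step=1.0):
        super().__init__()
        assert kernel_size % 2 == 1
        self.k = kernel_size
        self.max_step = max_step                               # bound on each incremental offset (pixels)
        self.offset_net = nn.Conv2d(in_ch, kernel_size, 3, padding=1)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):                                      # x: (B, C, H, W)
        b, _, h, w = x.shape
        # Per-tap incremental offsets, bounded then accumulated -> continuous deformation.
        step = torch.tanh(self.offset_net(x)) * self.max_step  # (B, K, H, W)
        dy = torch.cumsum(step, dim=1)
        dy = dy - dy[:, self.k // 2 : self.k // 2 + 1]          # keep the centre tap in place

        ys, xs = torch.meshgrid(
            torch.arange(h, device=x.device, dtype=x.dtype),
            torch.arange(w, device=x.device, dtype=x.dtype),
            indexing="ij")
        out = 0.0
        half = self.k // 2
        for i in range(self.k):
            gx = (xs + (i - half)).expand(b, -1, -1)            # fixed integer step along x
            gy = ys + dy[:, i]                                  # learned, accumulated y offset
            grid = torch.stack([2 * gx / (w - 1) - 1,           # grid_sample expects (x, y) in [-1, 1]
                                2 * gy / (h - 1) - 1], dim=-1)  # (B, H, W, 2)
            sampled = F.grid_sample(x, grid, padding_mode="border", align_corners=True)
            # Apply the i-th kernel tap as a 1x1 projection of the deformed sample.
            out = out + F.conv2d(sampled, self.weight[:, :, i:i + 1, None])
        return out + self.bias.view(1, -1, 1, 1)


# Example: a 4-channel patch, e.g. one dimensionality-reduced slice of a hyperspectral cube.
if __name__ == "__main__":
    y = SnakeConv(in_ch=4, out_ch=16)(torch.randn(2, 4, 32, 32))
    print(y.shape)  # torch.Size([2, 16, 32, 32])
```

In the paper's setting, the same idea would presumably be extended to 3D kernels over the spectral dimension and combined with the multi-view fusion of several such morphological kernel templates.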
Related papers
- Dynamic 3D KAN Convolution with Adaptive Grid Optimization for Hyperspectral Image Classification [12.168520751389622]
KANet is an improved 3D-DenseNet model, consisting of 3D KAN Conv and an adaptive grid update mechanism.
KANet enhances model representation capability through a 3D dynamic expert convolution system without increasing network depth or width.
The proposed method demonstrates superior performance on IN, UP, and KSC datasets, outperforming mainstream hyperspectral image classification approaches.
arXiv Detail & Related papers (2025-04-21T14:57:48Z)
- Expert Kernel Generation Network Driven by Contextual Mapping for Hyperspectral Image Classification [12.168520751389622]
Deep neural networks face several challenges in hyperspectral image classification, including high-dimensional data, sparse distribution of ground objects, and spectral redundancy.
This paper proposes EKGNet based on an improved 3D-DenseNet model, consisting of a context-aware mapping network and a dynamic kernel generation module.
The proposed method demonstrates superior performance on IN, UP, and KSC datasets, outperforming mainstream hyperspectral image classification approaches.
arXiv Detail & Related papers (2025-04-17T16:00:06Z)
- Efficient Dynamic Attention 3D Convolution for Hyperspectral Image Classification [12.168520751389622]
This paper proposes a dynamic attention convolution design based on an improved 3D-DenseNet model. The design employs multiple parallel convolutional kernels instead of a single kernel and assigns dynamic attention weights to these parallel convolutions (see the sketch after this list). The proposed method demonstrates superior performance in both inference speed and accuracy, outperforming mainstream hyperspectral image classification methods on the IN, UP, and KSC datasets.
arXiv Detail & Related papers (2025-03-30T15:12:23Z)
- PVAFN: Point-Voxel Attention Fusion Network with Multi-Pooling Enhancing for 3D Object Detection [59.355022416218624]
The integration of point and voxel representations is becoming more common in LiDAR-based 3D object detection.
We propose a novel two-stage 3D object detector, called Point-Voxel Attention Fusion Network (PVAFN).
PVAFN uses a multi-pooling strategy to integrate both multi-scale and region-specific information effectively.
arXiv Detail & Related papers (2024-08-26T19:43:01Z)
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract relational priors from well-trained transformers on massive images.
Experiments on the PointDA-10 and the Sim-to-Real datasets verify that the proposed method consistently achieves the state-of-the-art performance of UDA for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z)
- Hybrid Convolutional and Attention Network for Hyperspectral Image Denoising [54.110544509099526]
Hyperspectral image (HSI) denoising is critical for the effective analysis and interpretation of hyperspectral data.
We propose a hybrid convolution and attention network (HCANet) to enhance HSI denoising.
Experimental results on mainstream HSI datasets demonstrate the rationality and effectiveness of the proposed HCANet.
arXiv Detail & Related papers (2024-03-15T07:18:43Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information in the content-conditioned range to help transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z)
- Spatial-Spectral Hyperspectral Classification based on Learnable 3D Group Convolution [18.644268589334217]
This paper proposes a learnable group convolution network (LGCNet) based on an improved 3D-DenseNet model and a lightweight model design.
The LGCNet module addresses the shortcomings of group convolution by introducing a dynamic learning method for input-channel and convolution-kernel grouping.
LGCNet has achieved progress in inference speed and accuracy, and outperforms mainstream hyperspectral image classification methods on the Indian Pines, Pavia University, and KSC datasets.
arXiv Detail & Related papers (2023-07-15T05:47:12Z)
- DGCNet: An Efficient 3D-Densenet based on Dynamic Group Convolution for Hyperspectral Remote Sensing Image Classification [22.025733502296035]
We introduce DGCNet, a lightweight model based on an improved 3D-DenseNet.
Multiple groups capture different and complementary visual and semantic features of the input images, allowing the convolutional neural network (CNN) to learn rich features.
The inference speed and accuracy have been improved, with outstanding performance on the IN, Pavia and KSC datasets.
arXiv Detail & Related papers (2023-07-13T10:19:48Z)
- HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters by utilizing the specifically designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-23T17:27:40Z)
- Anisotropic Multi-Scale Graph Convolutional Network for Dense Shape Correspondence [3.45989531033125]
This paper studies 3D dense shape correspondence, a key shape analysis application in computer vision and graphics.
We introduce a novel hybrid geometric deep learning-based model that learns geometrically meaningful and discretization-independent features.
The resulting correspondence maps show state-of-the-art performance on the benchmark datasets.
arXiv Detail & Related papers (2022-10-17T22:40:50Z)
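As noted in the "Efficient Dynamic Attention 3D Convolution" entry above, several of these works replace a single static kernel with a bank of parallel kernels weighted by input-dependent attention. A minimal sketch of that pattern for 3D inputs follows; class and parameter names (DynamicAttention3DConv, num_kernels, temperature) are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicAttention3DConv(nn.Module):
    """Aggregate a bank of parallel 3D kernels with input-conditioned attention
    weights, then convolve each sample with its own aggregated kernel."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4, temperature=4.0):
        super().__init__()
        self.temperature = temperature
        self.padding = kernel_size // 2
        # Bank of candidate kernels instead of one static kernel.
        self.kernels = nn.Parameter(torch.randn(
            num_kernels, out_ch, in_ch, kernel_size, kernel_size, kernel_size) * 0.02)
        # Lightweight gate: global context -> one attention weight per candidate kernel.
        hidden = max(in_ch // 2, 4)
        self.gate = nn.Sequential(nn.Linear(in_ch, hidden), nn.ReLU(),
                                  nn.Linear(hidden, num_kernels))

    def forward(self, x):                                   # x: (B, C, D, H, W)
        b = x.size(0)
        context = x.mean(dim=(2, 3, 4))                     # (B, C) global average pooling
        attn = F.softmax(self.gate(context) / self.temperature, dim=1)    # (B, K)
        # Per-sample kernel aggregation, then a grouped conv so every sample
        # in the batch is convolved with its own aggregated kernel.
        k = torch.einsum("bk,koidhw->boidhw", attn, self.kernels)         # (B, O, I, k, k, k)
        o, i = k.shape[1], k.shape[2]
        x = x.reshape(1, b * i, *x.shape[2:])
        k = k.reshape(b * o, i, *k.shape[3:])
        out = F.conv3d(x, k, padding=self.padding, groups=b)
        return out.reshape(b, o, *out.shape[2:])


# Example: (batch, feature channels, spectral depth, height, width).
if __name__ == "__main__":
    y = DynamicAttention3DConv(in_ch=8, out_ch=16)(torch.randn(2, 8, 20, 9, 9))
    print(y.shape)  # torch.Size([2, 16, 20, 9, 9])
```

With unit stride and kernel_size=3 the spectral-spatial size is preserved, so a block like this could, in principle, drop into a 3D-DenseNet-style stack without further changes.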