Nearest Neighbor-Based Contrastive Learning for Hyperspectral and LiDAR Data Classification
- URL: http://arxiv.org/abs/2301.03335v1
- Date: Mon, 9 Jan 2023 13:43:54 GMT
- Title: Nearest Neighbor-Based Contrastive Learning for Hyperspectral and LiDAR Data Classification
- Authors: Meng Wang, Feng Gao, Junyu Dong, Heng-Chao Li, Qian Du
- Abstract summary: We propose a Nearest Neighbor-based Contrastive Learning Network (NNCNet) to learn discriminative feature representations.
Specifically, we propose a nearest neighbor-based data augmentation scheme that exploits the enhanced semantic relationships among nearby regions.
In addition, we design a bilinear attention module to exploit the second-order and even high-order feature interactions between the HSI and LiDAR data.
- Score: 45.026868970899514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Joint hyperspectral image (HSI) and LiDAR data classification aims to
interpret ground objects at a more detailed and precise level. Although deep
learning methods have shown remarkable success in the multisource data
classification task, self-supervised learning has rarely been explored. Building
a robust self-supervised learning model for multisource data classification is
nontrivial, because the semantic similarities of neighboring regions are not
exploited in existing contrastive learning frameworks. Furthermore, the
heterogeneous gap induced by the inconsistent distributions of multisource data
impedes classification performance. To overcome these disadvantages, we propose
a Nearest Neighbor-based Contrastive Learning Network (NNCNet), which takes full
advantage of large amounts of unlabeled data to learn discriminative feature
representations. Specifically, we propose a nearest neighbor-based data
augmentation scheme that exploits the enhanced semantic relationships among
nearby regions, so that intermodal semantic alignments can be captured more
accurately. In addition, we design a bilinear attention module to exploit the
second-order and even high-order feature interactions between the HSI and LiDAR
data. Extensive experiments on four public datasets demonstrate the superiority
of NNCNet over state-of-the-art methods. The source code is available at
\url{https://github.com/summitgao/NNCNet}.
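As a rough illustration of the nearest neighbor-based contrastive scheme described above, the sketch below builds an InfoNCE-style loss whose positives are the nearest neighbors of the second view within a batch. It is a minimal sketch under assumed names and shapes, not the released NNCNet implementation (see the repository above for the actual code).

```python
# Illustrative sketch only: InfoNCE with nearest-neighbor positives,
# approximating the nearest neighbor-based scheme described in the abstract.
# Function names, shapes, and the temperature value are assumptions.
import torch
import torch.nn.functional as F

def nn_contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1):
    """z_a, z_b: (N, D) projected features of two views (e.g., HSI and LiDAR patches)."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)

    # Replace each second-view embedding by its nearest neighbor within the
    # second view (excluding itself), so positives come from semantically
    # similar nearby samples rather than only a sample's own augmentation.
    sim_bb = z_b @ z_b.t()                        # (N, N) cosine similarities
    sim_bb.fill_diagonal_(-float("inf"))          # never pick the sample itself
    nn_idx = sim_bb.argmax(dim=1)                 # index of each nearest neighbor
    positives = z_b[nn_idx]                       # (N, D) nearest-neighbor positives

    logits = z_a @ positives.t() / temperature    # diagonal entries are positives
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Usage with random features standing in for encoder outputs:
loss = nn_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
```

For the bilinear attention module, the following is likewise a hedged sketch of how second-order HSI-LiDAR feature interactions could be scored and fused; the projection sizes, token layout, and fusion step are assumptions for illustration only.

```python
# Illustrative sketch (not the released implementation): low-rank bilinear
# attention over pairwise HSI/LiDAR token interactions.
import torch
import torch.nn as nn

class BilinearAttention(nn.Module):
    def __init__(self, dim_hsi: int, dim_lidar: int, dim_hidden: int = 128):
        super().__init__()
        self.proj_hsi = nn.Linear(dim_hsi, dim_hidden)
        self.proj_lidar = nn.Linear(dim_lidar, dim_hidden)
        self.score = nn.Linear(dim_hidden, 1)

    def forward(self, hsi: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        """hsi: (B, N, dim_hsi) tokens; lidar: (B, M, dim_lidar) tokens."""
        h = self.proj_hsi(hsi)                    # (B, N, H)
        l = self.proj_lidar(lidar)                # (B, M, H)
        # Second-order interaction: elementwise product of every HSI/LiDAR token pair.
        joint = h.unsqueeze(2) * l.unsqueeze(1)   # (B, N, M, H)
        attn = torch.softmax(self.score(joint).squeeze(-1).flatten(1), dim=-1)
        attn = attn.view(joint.shape[0], joint.shape[1], joint.shape[2])  # (B, N, M)
        # Fuse: attention-weighted sum of the bilinear features.
        return torch.einsum("bnm,bnmh->bh", attn, joint)  # (B, H) fused representation

# Example with assumed 7x7 patches, 144 HSI bands, and 1 LiDAR channel:
fused = BilinearAttention(144, 1)(torch.randn(2, 49, 144), torch.randn(2, 49, 1))
```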
Related papers
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z)
- DNA: Denoised Neighborhood Aggregation for Fine-grained Category Discovery [25.836440772705505]
We propose a self-supervised framework that encodes semantic structures of data into the embedding space.
We retrieve k-nearest neighbors of a query as its positive keys to capture semantic similarities between data and then aggregate information from the neighbors to learn compact cluster representations.
Our method can retrieve more accurate neighbors (21.31% accuracy improvement) and outperform state-of-the-art models by a large margin.
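A minimal sketch, under assumed tensor names and a simple mean aggregator, of the two steps this entry mentions: retrieving the k-nearest neighbors of a query as positive keys, then aggregating information from those neighbors. It is illustrative only, not the DNA authors' implementation.

```python
# Hedged illustration of kNN positive-key retrieval and neighbor aggregation.
import torch
import torch.nn.functional as F

def knn_positive_keys(queries: torch.Tensor, bank: torch.Tensor, k: int = 5):
    """queries: (Q, D); bank: (B, D) memory of candidate keys. Returns (Q, k, D)."""
    q = F.normalize(queries, dim=1)
    b = F.normalize(bank, dim=1)
    sim = q @ b.t()                               # (Q, B) cosine similarities
    idx = sim.topk(k, dim=1).indices              # indices of the k nearest keys
    return bank[idx]                              # (Q, k, D) positive keys

def aggregate_neighbors(queries: torch.Tensor, keys: torch.Tensor) -> torch.Tensor:
    """Average each query with its retrieved neighbors (a simple stand-in aggregator)."""
    return torch.cat([queries.unsqueeze(1), keys], dim=1).mean(dim=1)  # (Q, D)

reps = aggregate_neighbors(torch.randn(4, 32),
                           knn_positive_keys(torch.randn(4, 32), torch.randn(100, 32)))
```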
arXiv Detail & Related papers (2023-10-16T07:43:30Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
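A hedged sketch of a memory bank-based Maximum Mean Discrepancy (MMD) loss with an RBF kernel, illustrating how the mismatch between current source-like features and stored target-specific features could be penalized; the kernel choice, bandwidth, and memory-bank contents are assumptions rather than details taken from the paper.

```python
# Illustrative biased MMD^2 estimate between a feature batch and a memory bank.
import torch

def rbf_kernel(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    d2 = torch.cdist(x, y).pow(2)                 # squared pairwise distances
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(source_like: torch.Tensor, memory_bank: torch.Tensor) -> torch.Tensor:
    """source_like: (N, D) current features; memory_bank: (M, D) stored target-specific features."""
    k_ss = rbf_kernel(source_like, source_like).mean()
    k_tt = rbf_kernel(memory_bank, memory_bank).mean()
    k_st = rbf_kernel(source_like, memory_bank).mean()
    return k_ss + k_tt - 2 * k_st                 # biased MMD^2 estimate

loss = mmd_loss(torch.randn(16, 256), torch.randn(64, 256))
```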
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Voxel-wise Adversarial Semi-supervised Learning for Medical Image Segmentation [4.489713477369384]
We introduce a novel adversarial learning-based semi-supervised segmentation method for medical image segmentation.
Our method embeds both local and global features from multiple hidden layers and learns context relations between multiple classes.
Our method outperforms current state-of-the-art semi-supervised learning approaches on image segmentation of the left atrium (single-class) and multiorgan (multiclass) datasets.
arXiv Detail & Related papers (2022-05-14T06:57:19Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of semi-supervised learning (SSL) and domain adaptation (DA).
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning assumes the incoming data are fully labeled, which might not be applicable in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- Learning Robust Representation for Clustering through Locality Preserving Variational Discriminative Network [16.259673823482665]
Variational Deep Embedding (VaDE) achieves great success in various clustering tasks.
However, VaDE suffers from two problems: 1) it is fragile to input noise; 2) it ignores the locality information between neighboring data points.
We propose a joint learning framework that improves VaDE with a robust embedding discriminator and a local structure constraint.
arXiv Detail & Related papers (2020-12-25T02:31:55Z)
- Contextual Diversity for Active Learning [9.546771465714876]
The requirement of large annotated datasets restricts the use of deep convolutional neural networks (CNNs) for many practical applications.
We introduce the notion of contextual diversity that captures the confusion associated with spatially co-occurring classes.
Our studies show clear advantages of using contextual diversity for active learning.
arXiv Detail & Related papers (2020-08-13T07:04:15Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.