FALSE: False Negative Samples Aware Contrastive Learning for Semantic
Segmentation of High-Resolution Remote Sensing Image
- URL: http://arxiv.org/abs/2211.07928v1
- Date: Tue, 15 Nov 2022 06:24:26 GMT
- Title: FALSE: False Negative Samples Aware Contrastive Learning for Semantic
Segmentation of High-Resolution Remote Sensing Image
- Authors: Zhaoyang Zhang, Xuying Wang, Xiaoming Mei, Chao Tao, Haifeng Li
- Abstract summary: We propose a False negAtive sampLes aware contraStive lEarning model (FALSE) for the semantic segmentation of high-resolution RSIs.
Since the SSCL pretraining is unsupervised, the lack of definable criteria for false negative samples (FNS) leads to theoretical undecidability.
We achieve coarse determination of FNS by the FNS self-determination strategy and achieve calibration of FNS by the FNS confidence calibration (FNCC) loss function.
- Score: 11.356381535900901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existing self-supervised contrastive learning (SSCL) of remote
sensing images (RSI) is built on constructing positive and negative sample
pairs. However, due to the richness of RSI ground objects and the complexity of
RSI contextual semantics, the same RSI patches exhibit a coexistence and
imbalance of positive and negative samples, which causes SSCL, when pushing
negative samples far away, to also push positive samples far away, and vice
versa. We call this the sample confounding issue (SCI). To solve this
problem, we propose a False negAtive sampLes aware contraStive lEarning model
(FALSE) for the semantic segmentation of high-resolution RSIs. Since SSCL
pretraining is unsupervised, the lack of a definable criterion for false
negative samples (FNS) makes their identification theoretically undecidable; we
therefore designed two steps to implement an approximate determination of FNS:
coarse determination of FNS and precise calibration of FNS. We achieve coarse
determination of FNS by the FNS
self-determination (FNSD) strategy and achieve calibration of FNS by the FNS
confidence calibration (FNCC) loss function. Experimental results on three RSI
semantic segmentation datasets demonstrate that FALSE effectively improves the
accuracy of the downstream RSI semantic segmentation task compared with three
baseline models representing three different types of SSCL. The mean
Intersection-over-Union is improved by 0.7% on average on the ISPRS Potsdam
dataset, by 12.28% on average on the CVPR DGLC dataset, and by 1.17% on average
on the Xiangtan dataset. This indicates that the SSCL model has the ability to
self-differentiate FNS and that FALSE effectively mitigates the SCI in
self-supervised contrastive learning. The
source code is available at https://github.com/GeoX-Lab/FALSE.
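For intuition, the sketch below shows one way a false-negative-aware contrastive objective can be written in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation (see the repository above for that): taking the top-k negatives most similar to the anchor as suspected FNS is a simple stand-in for the FNSD strategy, and the scalar down-weighting is a stand-in for the FNCC calibration. The function name and the parameters `tau`, `topk`, and `fns_weight` are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def fns_aware_info_nce(anchor, positive, negatives,
                       tau=0.1, topk=8, fns_weight=0.5):
    """Hedged sketch of a false-negative-aware InfoNCE loss.

    anchor:    (D,) embedding of the anchor view
    positive:  (D,) embedding of the positive view
    negatives: (N, D) embeddings of candidate negative samples
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (anchor @ positive) / tau    # similarity to the positive
    neg_logits = (negatives @ anchor) / tau  # (N,) similarities to negatives

    # Coarse determination (stand-in for FNSD): treat the top-k negatives
    # most similar to the anchor as suspected false negatives.
    k = min(topk, neg_logits.numel())
    fns_idx = neg_logits.topk(k).indices

    # Confidence calibration (stand-in for FNCC): down-weight suspected
    # false negatives in the denominator instead of removing them outright.
    weights = torch.ones_like(neg_logits)
    weights[fns_idx] = fns_weight

    denominator = pos_logit.exp() + (weights * neg_logits.exp()).sum()
    return -(pos_logit - denominator.log())

# Example call with random embeddings (D=128, N=256 negatives).
loss = fns_aware_info_nce(torch.randn(128), torch.randn(128),
                          torch.randn(256, 128))
```

Down-weighting rather than hard-discarding suspected FNS keeps the loss robust when the coarse determination is wrong, which matches the abstract's motivation for calibrating FNS confidence instead of relying on the coarse determination alone.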
Related papers
- Investigating the Semantic Robustness of CLIP-based Zero-Shot Anomaly Segmentation [2.722220619798093]
We investigate the performance of a zero-shot anomaly segmentation algorithm by perturbing test data using three semantic transformations.
We find that performance is consistently lowered on three CLIP backbones, regardless of model architecture or learning objective.
arXiv Detail & Related papers (2024-05-13T17:47:08Z)
- Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity [54.145730036889496]
This paper deals with federated learning (FL) in the presence of malicious Byzantine attacks and data heterogeneity.
A novel Robust Average Gradient Algorithm (RAGA) is proposed, which leverages the geometric median for aggregation and can freely select the round number for local updating.
arXiv Detail & Related papers (2024-03-20T08:15:08Z)
- SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets from clutter backgrounds.
With the development of Transformer, the scale of SIRST models is constantly increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- GraSS: Contrastive Learning with Gradient Guided Sampling Strategy for Remote Sensing Image Semantic Segmentation [14.750062497258147]
We propose contrastive learning with Gradient guided Sampling Strategy (GraSS) for RSI semantic segmentation.
GraSS consists of two stages: Instance Discrimination warm-up and Gradient guided Sampling contrastive training.
GraSS effectively enhances the performance of SSCL in high-resolution RSI semantic segmentation.
arXiv Detail & Related papers (2023-06-28T01:50:46Z)
- Triplet Loss-less Center Loss Sampling Strategies in Facial Expression Recognition Scenarios [5.672538282456803]
Deep neural networks (DNNs) accompanied by deep metric learning (DML) techniques boost the discriminative ability of the model in FER (facial expression recognition) applications.
We developed three strategies: fully-synthesized, semi-synthesized, and prediction-based negative sample selection strategies.
To achieve better results, we introduce a selective attention module that provides a combination of pixel-wise and element-wise attention coefficients.
arXiv Detail & Related papers (2023-02-08T15:03:36Z)
- Jensen-Shannon Divergence Based Novel Loss Functions for Bayesian Neural Networks [2.4554686192257424]
We formulate a novel loss function for BNNs based on a new modification to the generalized Jensen-Shannon (JS) divergence, which is bounded.
We find that the JS divergence-based variational inference is intractable, and hence employ a constrained optimization framework to formulate these losses (the standard bounded JS divergence is recalled after this list for reference).
Our theoretical analysis and empirical experiments on multiple regression and classification data sets suggest that the proposed losses perform better than the KL divergence-based loss, especially when the data sets are noisy or biased.
arXiv Detail & Related papers (2022-09-23T01:47:09Z)
- A Hybrid Deep Learning Model-based Remaining Useful Life Estimation for Reed Relay with Degradation Pattern Clustering [12.631122036403864]
The reed relay serves as a fundamental component of functional testing, which closely relates to the successful quality inspection of electronics.
To provide accurate remaining useful life (RUL) estimation for reed relay, a hybrid deep learning network with degradation pattern clustering is proposed.
arXiv Detail & Related papers (2022-09-14T05:45:46Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers [71.08167292329028]
We propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch.
OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers.
It achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
arXiv Detail & Related papers (2021-05-28T23:57:15Z)
- Semi-supervised Contrastive Learning with Similarity Co-calibration [72.38187308270135]
We propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL).
SsCL combines the well-known contrastive loss in self-supervised learning with the cross entropy loss in semi-supervised learning.
We show that SsCL produces more discriminative representations and is beneficial to few-shot learning.
arXiv Detail & Related papers (2021-05-16T09:13:56Z)
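For reference, as noted in the Jensen-Shannon entry above: the standard JS divergence that those BNN losses generalize is symmetric and, unlike the KL divergence, bounded. Only the textbook definition is recalled here; the paper's specific modification to the generalized JS divergence is not reproduced.

```latex
\mathrm{JS}(p \,\|\, q)
  = \tfrac{1}{2}\,\mathrm{KL}(p \,\|\, m) + \tfrac{1}{2}\,\mathrm{KL}(q \,\|\, m),
\qquad m = \tfrac{1}{2}(p + q),
\qquad 0 \le \mathrm{JS}(p \,\|\, q) \le \ln 2.
```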