Class Prior-Free Positive-Unlabeled Learning with Taylor Variational
Loss for Hyperspectral Remote Sensing Imagery
- URL: http://arxiv.org/abs/2308.15081v1
- Date: Tue, 29 Aug 2023 07:29:30 GMT
- Title: Class Prior-Free Positive-Unlabeled Learning with Taylor Variational
Loss for Hyperspectral Remote Sensing Imagery
- Authors: Hengwei Zhao, Xinyu Wang, Jingtao Li, Yanfei Zhong
- Abstract summary: Positive-unlabeled learning (PU learning) in hyperspectral remote sensing imagery (HSI) aims to learn a binary classifier from positive and unlabeled data.
In this paper, a Taylor variational loss is proposed for HSI PU learning, which reduces the weight of the gradient of the unlabeled data.
Experiments on 7 benchmark datasets (21 tasks in total) validate the effectiveness of the proposed method.
- Score: 12.54504113062557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Positive-unlabeled learning (PU learning) in hyperspectral remote sensing
imagery (HSI) aims to learn a binary classifier from positive and unlabeled
data and has broad prospects in various Earth vision applications. However,
when PU learning is applied to HSI with only limited labeled data, the
unlabeled data may dominate the optimization process, causing the neural
network to overfit the unlabeled data. In this paper, a Taylor variational
loss is proposed for HSI PU learning, which reduces the weight of the gradient
of the unlabeled data via a Taylor series expansion, enabling the network to
find a balance between overfitting and underfitting. In addition, a
self-calibrated optimization strategy is designed to stabilize the training
process. Experiments on 7 benchmark datasets (21 tasks in total) validate the
effectiveness of the proposed method. Code is at:
https://github.com/Hengwei-Zhao96/T-HOneCls.
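A minimal sketch of the idea described in the abstract is given below, assuming a standard sigmoid-output PU setup: labelled positive pixels keep the exact log-likelihood term, while the log term on the unlabeled pixels is replaced by a truncated Taylor series so that their gradient contribution stays bounded. The function names (taylor_log1m, taylor_pu_loss), the truncation order, and the way the two terms are combined are illustrative assumptions, not the authors' implementation; the exact loss and the self-calibrated optimization strategy are in the linked T-HOneCls repository.

    import torch

    def taylor_log1m(p, order=2):
        # Truncated Taylor series for log(1 - p):
        #   log(1 - p) ~= -(p + p^2/2 + ... + p^K/K).
        # The derivative of the truncated sum is 1 + p + ... + p^(K-1) <= K,
        # so confidently-positive unlabeled pixels cannot contribute
        # unbounded gradients, unlike the exact -log(1 - p) term.
        p = p.clamp(min=1e-7, max=1.0 - 1e-7)
        out = torch.zeros_like(p)
        for k in range(1, order + 1):
            out = out - p.pow(k) / k
        return out

    def taylor_pu_loss(scores, labels, order=2):
        # Hypothetical PU objective for one batch of HSI pixels:
        # labelled positives (labels == 1) keep the exact log-likelihood,
        # unlabeled pixels (labels == 0) use the Taylor-truncated surrogate.
        # Assumes both groups are non-empty in the batch.
        p = torch.sigmoid(scores)
        pos, unl = labels == 1, labels == 0
        loss_pos = -torch.log(p[pos].clamp(min=1e-7)).mean()
        loss_unl = -taylor_log1m(p[unl], order=order).mean()
        return loss_pos + loss_unl

    # Example: 1024 pixels, 32 labelled positives, the rest unlabeled.
    scores = torch.randn(1024, requires_grad=True)
    labels = torch.zeros(1024)
    labels[:32] = 1.0
    taylor_pu_loss(scores, labels).backward()

Because the derivative of the truncated series is bounded by the expansion order, the unlabeled term can no longer dominate the gradient, which is the balance between overfitting and underfitting the abstract refers to; note also that no class-prior estimate appears anywhere in this sketch, consistent with the class prior-free setting of the paper.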
Related papers
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art long-tailed semi-supervised learning (LTSSL) approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
- Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning [71.9954600831939]
Positive-Unlabeled (PU) learning is vital in many real-world scenarios, but its application to graph data remains under-explored.
We unveil that a critical challenge for PU learning on graphs lies in edge heterophily, which directly violates the irreducibility assumption for class-prior estimation.
In response to this challenge, we introduce a new method named Graph PU Learning with Label Propagation Loss (GPL).
arXiv Detail & Related papers (2024-05-30T10:30:44Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method outperforms multiple baselines by clear margins across a broad range of noise levels and scales well.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection [1.6249267147413524]
Semi-supervised object detection (SSOD) has made significant progress with the development of pseudo-label-based end-to-end methods.
Many methods face challenges due to class imbalance, which hinders the effectiveness of the pseudo-label generator.
In this paper, we examine the root causes of low-quality pseudo-labels and present novel learning mechanisms to improve the label generation quality.
arXiv Detail & Related papers (2023-06-04T06:01:53Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it attractive to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we propose to incorporate semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method can pseudo-label highly confident predictions, suppressing potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- Relieving the Plateau: Active Semi-Supervised Learning for a Better Landscape [2.3046646540823916]
Semi-supervised learning (SSL) leverages unlabeled data that are more accessible than their labeled counterparts.
Active learning (AL) selects unlabeled instances to be annotated by a human-in-the-loop in hopes of better performance with less labeled data.
We propose convergence rate control (CRC), an AL algorithm that selects unlabeled data to improve the problem conditioning upon inclusion in the labeled set.
arXiv Detail & Related papers (2021-04-08T06:03:59Z)