Texture-guided Saliency Distilling for Unsupervised Salient Object
Detection
- URL: http://arxiv.org/abs/2207.05921v3
- Date: Tue, 9 May 2023 04:21:48 GMT
- Title: Texture-guided Saliency Distilling for Unsupervised Salient Object
Detection
- Authors: Huajun Zhou and Bo Qiao and Lingxiao Yang and Jianhuang Lai and
Xiaohua Xie
- Abstract summary: We propose a novel USOD method to mine rich and accurate saliency knowledge from both easy and hard samples.
Our method achieves state-of-the-art USOD performance on RGB, RGB-D, RGB-T, and video SOD benchmarks.
- Score: 67.10779270290305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning-based Unsupervised Salient Object Detection (USOD) mainly
relies on noisy saliency pseudo-labels generated by traditional handcrafted
methods or pre-trained networks. To cope with the noisy-label problem, one
class of methods focuses only on easy samples with reliable labels, but
ignores the valuable knowledge in hard samples. In this paper, we propose a
novel USOD method to mine rich and accurate saliency knowledge from both easy
and hard samples. First, we propose a Confidence-aware Saliency Distilling
(CSD) strategy that scores samples conditioned on their confidence, guiding
the model to progressively distill saliency knowledge from easy samples to
hard samples. Second, we propose a Boundary-aware Texture Matching
(BTM) strategy to refine the boundaries of noisy labels by matching the
textures around the predicted boundary. Extensive experiments on RGB, RGB-D,
RGB-T, and video SOD benchmarks prove that our method achieves state-of-the-art
USOD performance.
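The abstract describes CSD only at a high level. Below is a minimal PyTorch-style sketch of the progressive, confidence-weighted distillation idea; the `csd_loss` name, the per-pixel confidence defined as the distance of the pseudo-label from 0.5, and the linear threshold schedule are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def csd_loss(pred: torch.Tensor, pseudo_label: torch.Tensor, progress: float) -> torch.Tensor:
    """Confidence-aware, progressively weighted distillation loss (illustrative sketch).

    pred:         sigmoid saliency prediction in (0, 1), shape (B, 1, H, W)
    pseudo_label: noisy saliency pseudo-label in [0, 1], shape (B, 1, H, W)
    progress:     training progress in [0, 1]; small values focus the loss on
                  easy (high-confidence) pixels, larger values gradually admit
                  harder (low-confidence) pixels.
    """
    # Per-pixel confidence: distance of the pseudo-label from 0.5, scaled to [0, 1].
    # Pixels near 0 or 1 are "easy"; pixels near 0.5 are "hard" / ambiguous.
    confidence = (pseudo_label - 0.5).abs() * 2.0

    # Progressive threshold: start by trusting only very confident pixels,
    # then lower the bar as training proceeds (a linear schedule is assumed here).
    threshold = 1.0 - progress
    weight = (confidence >= threshold).float() * confidence

    # Binarize the pseudo-label as the distillation target and weight the
    # per-pixel BCE by the confidence mask computed above.
    target = (pseudo_label > 0.5).float()
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    return (weight * bce).sum() / weight.sum().clamp(min=1.0)
```

In this sketch, `progress` would be swept from 0 to 1 over training, so that only confidently labeled pixels contribute early on while harder, more ambiguous pixels are gradually admitted; the boundary refinement performed by BTM is not sketched here.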
Related papers
- Learning with Instance-Dependent Noisy Labels by Anchor Hallucination and Hard Sample Label Correction [12.317154103998433]
Traditional Noisy-Label Learning (NLL) methods categorize training data into clean and noisy sets based on the loss distribution of training samples.
Our approach explicitly distinguishes between clean vs. noisy and easy vs. hard samples.
Corrected hard samples, along with the easy samples, are used as labeled data in subsequent semi-supervised training.
arXiv Detail & Related papers (2024-07-10T03:00:14Z)
- Mitigating Noisy Supervision Using Synthetic Samples with Soft Labels [13.314778587751588]
Noisy labels are ubiquitous in real-world datasets, especially in the large-scale ones derived from crowdsourcing and web searching.
It is challenging to train deep neural networks with noisy datasets since the networks are prone to overfitting the noisy labels during training.
We propose a framework that trains the model with new synthetic samples to mitigate the impact of noisy labels.
arXiv Detail & Related papers (2024-06-22T04:49:39Z)
- Exploiting Low-confidence Pseudo-labels for Source-free Object Detection [54.98300313452037]
Source-free object detection (SFOD) aims to adapt a source-trained detector to an unlabeled target domain without access to the labeled source data.
Current SFOD methods utilize a threshold-based pseudo-label approach in the adaptation phase.
We propose a new approach to take full advantage of pseudo-labels by introducing high and low confidence thresholds.
arXiv Detail & Related papers (2023-10-19T12:59:55Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
- Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning [104.00026716576546]
We propose to learn saliency from synthetic but clean labels, which naturally has higher pixel-labeling quality without the effort of manual annotations.
We show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets.
arXiv Detail & Related papers (2022-02-26T16:03:55Z)
- Hard Sample Aware Noise Robust Learning for Histopathology Image Classification [4.75542005200538]
We introduce a novel hard sample aware noise robust learning method for histopathology image classification.
To distinguish the informative hard samples from the harmful noisy ones, we build an easy/hard/noisy (EHN) detection model.
We propose a noise suppressing and hard enhancing (NSHE) scheme to train the noise robust model.
arXiv Detail & Related papers (2021-12-05T11:07:55Z)
- Sample Prior Guided Robust Model Learning to Suppress Noisy Labels [8.119439844514973]
We propose PGDF, a novel framework that learns a deep model to suppress noise by generating prior knowledge about the samples.
Our framework retains more of the informative hard clean samples in the cleanly labeled set.
We evaluate our method using synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world datasets WebVision and Clothing1M.
arXiv Detail & Related papers (2021-12-02T13:09:12Z)
- Boosting Semi-Supervised Face Recognition with Noise Robustness [54.342992887966616]
This paper presents an effective solution to semi-supervised face recognition that is robust to the label noise introduced by auto-labelling.
We develop a semi-supervised face recognition solution, named Noise Robust Learning-Labelling (NRoLL), which is based on the robust training ability empowered by GN.
arXiv Detail & Related papers (2021-05-10T14:43:11Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.