Contrastive Semi-supervised Learning for Underwater Image Restoration via Reliable Bank
- URL: http://arxiv.org/abs/2303.09101v4
- Date: Tue, 4 Apr 2023 08:19:34 GMT
- Title: Contrastive Semi-supervised Learning for Underwater Image Restoration via Reliable Bank
- Authors: Shirui Huang, Keyan Wang, Huan Liu, Jun Chen and Yunsong Li
- Abstract summary: We propose a Semi-supervised Underwater Image Restoration (Semi-UIR) framework to incorporate the unlabeled data into network training.
We first introduce a reliable bank to store the "best-ever" outputs as pseudo ground truth.
Experimental results on both full-reference and no-reference underwater benchmarks demonstrate that our algorithm clearly outperforms SOTA methods.
- Score: 38.46437948000374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the remarkable achievement of recent underwater image restoration
techniques, the lack of labeled data has become a major hurdle for further
progress. In this work, we propose a mean-teacher based Semi-supervised
Underwater Image Restoration (Semi-UIR) framework to incorporate the unlabeled
data into network training. However, the naive mean-teacher method suffers from
two main problems: (1) The consistency loss used in training might become
ineffective when the teacher's prediction is wrong. (2) Using the L1 distance may
cause the network to overfit to wrong labels, resulting in confirmation bias. To
address the above problems, we first introduce a reliable bank to store the
"best-ever" outputs as pseudo ground truth. To assess the quality of outputs,
we conduct an empirical analysis based on the monotonicity property to select
the most trustworthy NR-IQA method. In addition, to mitigate the confirmation bias
problem, we incorporate contrastive regularization to prevent overfitting
on wrong labels. Experimental results on both full-reference and no-reference
underwater benchmarks demonstrate that our algorithm clearly outperforms
SOTA methods both quantitatively and qualitatively. Code has been released at
https://github.com/Huang-ShiRui/Semi-UIR.
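As a rough illustration of the reliable-bank idea described above, here is a minimal Python sketch assuming a standard mean-teacher setup. It is a sketch of the mechanism, not the paper's implementation: `ReliableBank`, `ema_update`, `unlabeled_step`, and `nr_iqa` are hypothetical names, and `nr_iqa` stands in for whichever no-reference IQA metric the paper's monotonicity analysis selects.

```python
# Minimal sketch (not the paper's implementation) of the reliable-bank
# mechanism described in the abstract. All names are illustrative, and
# `nr_iqa` stands in for whichever no-reference IQA metric the paper's
# monotonicity analysis selects.
import torch
import torch.nn.functional as F


class ReliableBank:
    """Stores the 'best-ever' teacher output per unlabeled image as pseudo GT."""

    def __init__(self):
        self.best_score = {}  # image id -> best NR-IQA score seen so far
        self.pseudo_gt = {}   # image id -> corresponding restored image

    def update(self, img_id, output, score):
        # Replace the stored pseudo GT only when the new output scores higher,
        # so a wrong teacher prediction cannot overwrite a better earlier one.
        if score > self.best_score.get(img_id, float("-inf")):
            self.best_score[img_id] = score
            self.pseudo_gt[img_id] = output.detach().clone()


def ema_update(teacher, student, alpha=0.999):
    # Standard mean-teacher update: the teacher's weights track an
    # exponential moving average of the student's weights.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)


def unlabeled_step(student, teacher, bank, images, ids, nr_iqa):
    """One training step on an unlabeled batch, under the assumptions above."""
    with torch.no_grad():
        teacher_out = teacher(images)
        for i, img_id in enumerate(ids):
            bank.update(img_id, teacher_out[i], nr_iqa(teacher_out[i]))
    pseudo = torch.stack([bank.pseudo_gt[i] for i in ids])
    # Supervise the student with the reliable-bank pseudo GT rather than the
    # raw (possibly wrong) teacher prediction; the paper additionally applies
    # contrastive regularization (omitted here) to counter confirmation bias.
    return F.l1_loss(student(images), pseudo)
```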
Related papers
- Cross-Domain Underwater Image Enhancement Guided by No-Reference Image Quality Assessment: A Transfer Learning Approach [5.324625330944038]
Single underwater image enhancement (UIE) is a challenging problem whose progress is hindered by two major issues.
The labels in underwater reference datasets are pseudo labels; relying on these pseudo ground truths in supervised learning leads to domain discrepancy.
We propose Trans-UIE, a transfer learning-based UIE model that captures the fundamental paradigms of UIE through pretraining.
arXiv Detail & Related papers (2025-03-23T04:40:07Z)
- Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training [81.3781338418574]
We propose relevance-aware contrastive learning.
We consistently improve the SOTA unsupervised Contriever model on the BEIR and open-domain QA retrieval benchmarks.
Our method can not only beat BM25 after further pre-training on the target corpus but also serves as a good few-shot learner.
arXiv Detail & Related papers (2023-06-05T18:20:27Z)
- Cross Modal Distillation for Flood Extent Mapping [0.41562334038629595]
We explore ML techniques that improve the flood detection module of an operational early flood warning system.
Our method exploits an unlabelled dataset of paired multi-spectral and Synthetic Aperture Radar (SAR) imagery.
arXiv Detail & Related papers (2023-02-16T09:57:08Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training (see the weighting sketch after this list).
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Improving Localization for Semi-Supervised Object Detection [3.5493798890908104]
We introduce an additional classification task for bounding box localization to improve the filtering of predicted bounding boxes.
Our experiments show that our IL-net increases SSOD performance by 1.14% AP in the limited-annotation regime.
arXiv Detail & Related papers (2022-06-21T08:39:38Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it appealing to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise [6.303101074386922]
Robust Label Refurbishment (Robust LR) is a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels.
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
arXiv Detail & Related papers (2021-12-06T12:10:17Z)
- Unbiased Teacher for Semi-Supervised Object Detection [50.0087227400306]
We revisit Semi-Supervised Object Detection (SS-OD) and identify the pseudo-labeling bias issue in SS-OD.
We introduce Unbiased Teacher, a simple yet effective approach that jointly trains a student and a gradually progressing teacher in a mutually beneficial manner.
arXiv Detail & Related papers (2021-02-18T17:02:57Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks by matching the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
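Several of the related papers above, notably SoftMatch and Unbiased Teacher, revolve around weighting or filtering pseudo-labels by prediction confidence. Below is a minimal Python sketch of soft confidence weighting in the spirit of SoftMatch's truncated-Gaussian weighting; the function and parameter names are illustrative, and details such as how mu and sigma are estimated differ in the actual paper.

```python
# Illustrative sketch of soft confidence weighting for pseudo-labels, in the
# spirit of SoftMatch's truncated-Gaussian weighting. mu/sigma would normally
# be running estimates over the unlabeled data; names here are hypothetical.
import torch


def soft_weight(probs, mu, sigma, lambda_max=1.0):
    """Return a per-sample weight in [0, lambda_max] from (N, C) probabilities.

    Confident samples (max prob >= mu) keep full weight; less confident ones
    are down-weighted smoothly instead of being hard-thresholded away.
    """
    conf = probs.max(dim=1).values
    gauss = torch.exp(-((conf - mu) ** 2) / (2 * sigma ** 2))
    return lambda_max * torch.where(conf >= mu, torch.ones_like(conf), gauss)


# Usage: weight per-sample unsupervised losses by pseudo-label confidence.
probs = torch.softmax(torch.randn(4, 10), dim=1)
weights = soft_weight(probs, mu=0.6, sigma=0.1)
per_sample_loss = torch.rand(4)  # stand-in for per-sample CE losses
weighted_loss = (weights * per_sample_loss).mean()
```

The smooth weighting keeps low-confidence samples in play at reduced weight, which is how SoftMatch trades off pseudo-label quantity against quality instead of discarding samples with a hard threshold.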