Curricular Contrastive Regularization for Physics-aware Single Image
Dehazing
- URL: http://arxiv.org/abs/2303.14218v2
- Date: Thu, 1 Jun 2023 13:36:30 GMT
- Title: Curricular Contrastive Regularization for Physics-aware Single Image
Dehazing
- Authors: Yu Zheng, Jiahui Zhan, Shengfeng He, Junyu Dong, and Yong Du
- Abstract summary: We propose a novel curricular contrastive regularization targeted at a consensual contrastive space as opposed to a non-consensual one.
Our negatives, which provide better lower-bound constraints, can be assembled from 1) the hazy image, and 2) corresponding restorations by other existing methods.
With the unit, as well as curricular contrastive regularization, we establish our dehazing network, named C2PNet.
- Score: 56.392696439577165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Considering the ill-posed nature, contrastive regularization has been
developed for single image dehazing, introducing the information from negative
images as a lower bound. However, the contrastive samples are non-consensual, as
the negatives are usually represented distantly from the clear (i.e., positive)
image, leaving the solution space still under-constricted. Moreover, the
interpretability of deep dehazing models is underexplored towards the physics
of the hazing process. In this paper, we propose a novel curricular contrastive
regularization targeted at a consensual contrastive space as opposed to a
non-consensual one. Our negatives, which provide better lower-bound
constraints, can be assembled from 1) the hazy image, and 2) corresponding
restorations by other existing methods. Further, due to the different
similarities between the embeddings of the clear image and negatives, the
learning difficulty of the multiple components is intrinsically imbalanced. To
tackle this issue, we customize a curriculum learning strategy to reweight the
importance of different negatives. In addition, to improve the interpretability
in the feature space, we build a physics-aware dual-branch unit according to
the atmospheric scattering model. With the unit, as well as curricular
contrastive regularization, we establish our dehazing network, named C2PNet.
Extensive experiments demonstrate that our C2PNet significantly outperforms
state-of-the-art methods, with extreme PSNR boosts of 3.94dB and 1.50dB,
respectively, on SOTS-indoor and SOTS-outdoor datasets.
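The physics-aware unit and the contrastive term above can be illustrated with a minimal sketch. The atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)), relates a hazy image I to its clear counterpart J via transmission t and atmospheric light A; the contrastive regularization pulls a restoration toward the clear (positive) image while pushing it away from reweighted negatives. Note this is a toy illustration under stated assumptions (scalar transmission, pixel-space L1 in place of a learned feature distance, hand-picked curriculum weights), not the authors' C2PNet implementation:

```python
def synthesize_haze(clear, t, airlight):
    """Atmospheric scattering model: I(x) = J(x) * t(x) + A * (1 - t(x)),
    applied per pixel. `clear` is a flat list of intensities in [0, 1];
    a scalar transmission t is assumed for simplicity."""
    return [j * t + airlight * (1.0 - t) for j in clear]

def l1(a, b):
    """Mean absolute difference between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def contrastive_reg(anchor, positive, negatives, weights):
    """Contrastive lower-bound term: the numerator pulls the restoration
    (anchor) toward the clear image (positive); the weighted denominator
    pushes it away from each negative. The weights stand in for the
    curriculum reweighting of negatives of different difficulty."""
    num = l1(anchor, positive)
    den = sum(w * l1(anchor, n) for w, n in zip(weights, negatives))
    return num / (den + 1e-8)

# Toy example: a 4-pixel clear patch and its synthetic hazy version.
clear = [0.2, 0.8, 0.5, 0.4]
hazy = synthesize_haze(clear, t=0.6, airlight=0.9)

# An imperfect restoration, plus two negatives as in the paper's recipe:
# the hazy input itself and a weaker restoration by "another method".
restoration = [0.5 * c + 0.5 * h for c, h in zip(clear, hazy)]
weak_restoration = [0.2 * c + 0.8 * h for c, h in zip(clear, hazy)]
negatives = [hazy, weak_restoration]
weights = [0.25, 0.75]  # the harder (closer-to-positive) negative weighted higher

loss = contrastive_reg(restoration, clear, negatives, weights)
```

A smaller `loss` indicates a restoration that is simultaneously close to the clear image and well separated from the negatives, which is the lower-bound constraint the consensual negatives are meant to tighten.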
Related papers
- When hard negative sampling meets supervised contrastive learning [17.173114048398947]
We introduce a new supervised contrastive learning objective, SCHaNe, which incorporates hard negative sampling during the fine-tuning phase.
SCHaNe outperforms the strong baseline BEiT-3 in Top-1 accuracy across various benchmarks.
Our proposed objective sets a new state-of-the-art for base models on ImageNet-1k, achieving an 86.14% accuracy.
arXiv Detail & Related papers (2023-08-28T20:30:10Z)
- Deep Intra-Image Contrastive Learning for Weakly Supervised One-Step
Person Search [98.2559247611821]
We present a novel deep intra-image contrastive learning using a Siamese network.
Our method achieves a state-of-the-art performance among weakly supervised one-step person search approaches.
arXiv Detail & Related papers (2023-02-09T12:45:20Z)
- CbwLoss: Constrained Bidirectional Weighted Loss for Self-supervised
Learning of Depth and Pose [13.581694284209885]
Photometric differences are used to train neural networks for estimating depth and camera pose from unlabeled monocular videos.
In this paper, we handle moving objects and occlusions by exploiting the differences between the flow fields and depth structures generated by affine transformation and view synthesis.
We mitigate the effect of textureless regions on model optimization by measuring differences between features with more semantic and contextual information without adding networks.
arXiv Detail & Related papers (2022-12-12T12:18:24Z)
- An Embedding-Dynamic Approach to Self-supervised Learning [8.714677279673738]
We treat the embeddings of images as point particles and consider model optimization as a dynamic process on this system of particles.
Our dynamic model combines an attractive force for similar images, a locally dispersive force to avoid local collapse, and a global dispersive force to achieve a globally-homogeneous distribution of particles.
arXiv Detail & Related papers (2022-07-07T19:56:20Z)
- Robust Single Image Dehazing Based on Consistent and Contrast-Assisted
Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z)
- Unpaired Deep Image Dehazing Using Contrastive Disentanglement Learning [36.24651058888557]
We present an effective unpaired learning based image dehazing network from an unpaired set of clear and hazy images.
We show that our method performs favorably against existing state-of-the-art unpaired dehazing approaches.
arXiv Detail & Related papers (2022-03-15T06:45:03Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage the invertible network to enhance low-light image in forward process and degrade the normal-light one inversely with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Semantically Contrastive Learning for Low-light Image Enhancement [48.71522073014808]
Low-light image enhancement (LLE) remains challenging due to the prevailing low contrast and weak visibility of single RGB images.
We propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE)
Our method surpasses state-of-the-art LLE models over six independent cross-scene datasets.
arXiv Detail & Related papers (2021-12-13T07:08:33Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation
Learning [108.999497144296]
Recently advanced unsupervised learning approaches use the siamese-like framework to compare two "views" from the same image for learning representations.
This work aims to involve the distance concept on label space in the unsupervised learning and let the model be aware of the soft degree of similarity between positive or negative pairs.
Despite its conceptual simplicity, we show empirically that with the solution -- Unsupervised image mixtures (Un-Mix), we can learn subtler, more robust and generalized representations from the transformed input and corresponding new label space.
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.