Source-Free Domain Adaptation for Real-world Image Dehazing
- URL: http://arxiv.org/abs/2207.06644v1
- Date: Thu, 14 Jul 2022 03:37:25 GMT
- Title: Source-Free Domain Adaptation for Real-world Image Dehazing
- Authors: Hu Yu, Jie Huang, Yajing Liu, Qi Zhu, Man Zhou, Feng Zhao
- Abstract summary: We present a novel Source-Free Unsupervised Domain Adaptation (SFUDA) image dehazing paradigm.
We devise the Domain Representation Normalization (DRN) module to make the representation of real hazy domain features match that of the synthetic domain.
With our plug-and-play DRN module, unlabeled real hazy images can adapt existing well-trained source networks.
- Score: 10.26945164141663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based image dehazing methods trained on synthetic datasets
have achieved remarkable performance but suffer from dramatic performance
degradation on real hazy images due to domain shift. Although certain Domain
Adaptation (DA) dehazing methods have been presented, they inevitably require
access to the source dataset to reduce the gap between the source synthetic and
target real domains. To address these issues, we present a novel Source-Free
Unsupervised Domain Adaptation (SFUDA) image dehazing paradigm, in which only a
well-trained source model and an unlabeled target real hazy dataset are
available. Specifically, we devise the Domain Representation Normalization
(DRN) module to make the representation of real hazy domain features match that
of the synthetic domain to bridge the gaps. With our plug-and-play DRN module,
unlabeled real hazy images can adapt existing well-trained source networks.
In addition, unsupervised losses, consisting of frequency losses and physical
prior losses, are applied to guide the learning of the DRN module. The frequency
losses provide structure and style constraints, while the prior loss exploits
the inherent statistical properties of haze-free images. Equipped with our DRN
module and unsupervised loss, existing source dehazing models are able to
dehaze unlabeled real hazy images. Extensive experiments on multiple baselines
demonstrate the validity and superiority of our method visually and
quantitatively.
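The abstract names the building blocks of the method (a plug-and-play DRN module guided by frequency and physical prior losses) but not their exact formulations. Below is a minimal PyTorch sketch of one plausible interpretation, assuming DRN acts as an AdaIN-style re-normalization of target features toward stored source-domain statistics, the frequency loss compares FFT amplitude and phase between the dehazed output and the hazy input, and the physical prior is a dark channel term; the class and function names are illustrative placeholders, not the authors' code.

```python
# Illustrative sketch only; the actual DRN design and loss formulations in the
# paper may differ. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DRN(nn.Module):
    """Plug-and-play re-normalization of target features toward source statistics."""

    def __init__(self, channels: int):
        super().__init__()
        # Source-domain feature statistics, e.g. collected once from the frozen source model.
        self.register_buffer("src_mean", torch.zeros(1, channels, 1, 1))
        self.register_buffer("src_std", torch.ones(1, channels, 1, 1))
        # Lightweight learnable correction, trained with the unsupervised losses below.
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        mean = feat.mean(dim=(2, 3), keepdim=True)
        std = feat.std(dim=(2, 3), keepdim=True) + 1e-5
        feat = (feat - mean) / std                  # whiten target statistics
        feat = feat * self.src_std + self.src_mean  # re-color with source statistics
        return feat * self.gamma + self.beta


def frequency_loss(pred: torch.Tensor, hazy: torch.Tensor) -> torch.Tensor:
    """Fourier-domain constraint: phase for structure, amplitude for style."""
    pred_fft = torch.fft.fft2(pred, norm="ortho")
    hazy_fft = torch.fft.fft2(hazy, norm="ortho")
    phase_term = F.l1_loss(torch.angle(pred_fft), torch.angle(hazy_fft))
    amp_term = F.l1_loss(torch.abs(pred_fft), torch.abs(hazy_fft))
    return phase_term + 0.1 * amp_term


def dark_channel_prior_loss(pred: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Haze-free images tend to have a near-zero dark channel (He et al., 2009)."""
    # Local minimum per channel via min-pooling, then minimum across channels.
    local_min = -F.max_pool2d(-pred, kernel_size=patch, stride=1, padding=patch // 2)
    return local_min.min(dim=1).values.mean()
```

In this reading, the DRN block would be inserted after an early encoder stage of the frozen, well-trained source dehazing network, and only the DRN parameters would be optimized on unlabeled real hazy images using a weighted sum of `frequency_loss` and `dark_channel_prior_loss`.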
Related papers
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies such as channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- Robust Disaster Assessment from Aerial Imagery Using Text-to-Image Synthetic Data [66.49494950674402]
We leverage emerging text-to-image generative models in creating large-scale synthetic supervision for the task of damage assessment from aerial images.
We build an efficient and easily scalable pipeline to generate thousands of post-disaster images from low-resource domains.
We validate the strength of our proposed framework under a cross-geography domain transfer setting from xBD and SKAI images in both single-source and multi-source settings.
arXiv Detail & Related papers (2024-05-22T16:07:05Z)
- Source-free Domain Adaptive Object Detection in Remote Sensing Images [11.19538606490404]
We propose a source-free object detection (SFOD) setting for RS images.
It aims to perform target domain adaptation using only the source pre-trained model.
Our method does not require access to source domain RS images.
arXiv Detail & Related papers (2024-01-31T15:32:44Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Enhancing Visual Domain Adaptation with Source Preparation [5.287588907230967]
Existing Domain Adaptation techniques fail to consider the characteristics of the source domain itself.
We propose Source Preparation (SP), a method to mitigate source domain biases.
We show that SP enhances UDA across a range of visual domains, with improvements up to 40.64% in mIoU over baseline.
arXiv Detail & Related papers (2023-06-16T18:56:44Z)
- SF-FSDA: Source-Free Few-Shot Domain Adaptive Object Detection with Efficient Labeled Data Factory [94.11898696478683]
Domain adaptive object detection aims to leverage the knowledge learned from a labeled source domain to improve the performance on an unlabeled target domain.
We propose and investigate a more practical and challenging domain adaptive object detection problem under both source-free and few-shot conditions, named as SF-FSDA.
arXiv Detail & Related papers (2023-06-07T12:34:55Z)
- ReContrast: Domain-Specific Anomaly Detection via Contrastive Reconstruction [29.370142078092375]
Most advanced unsupervised anomaly detection (UAD) methods rely on modeling feature representations of frozen encoder networks pre-trained on large-scale datasets.
We propose a novel epistemic UAD method, namely ReContrast, which optimizes the entire network to reduce biases towards the pre-trained image domain.
We conduct experiments across two popular industrial defect detection benchmarks and three medical image UAD tasks, demonstrating its superiority over current state-of-the-art methods.
arXiv Detail & Related papers (2023-06-05T05:21:15Z)
- Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer [60.31021888394358]
Unsupervised Domain Adaptation (UDA) can effectively address domain gap issues in real-world image Super-Resolution (SR).
We propose a SOurce-free Domain Adaptation framework for image SR (SODA-SR) to address this issue, i.e., adapt a source-trained model to a target domain with only unlabeled target data.
arXiv Detail & Related papers (2023-03-31T03:14:44Z)
- From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to leverage unlabeled real data for boosting single-image dehazing.
arXiv Detail & Related papers (2021-08-06T04:00:28Z)
- Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on the pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z)