PHATNet: A Physics-guided Haze Transfer Network for Domain-adaptive Real-world Image Dehazing
- URL: http://arxiv.org/abs/2507.14826v1
- Date: Sun, 20 Jul 2025 05:26:30 GMT
- Title: PHATNet: A Physics-guided Haze Transfer Network for Domain-adaptive Real-world Image Dehazing
- Authors: Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chia-Wen Lin
- Abstract summary: Previous research has collected paired real-world hazy and haze-free images to improve dehazing models' performance in real-world scenarios. This issue motivates us to develop a flexible domain adaptation method to enhance dehazing performance during testing. We propose the Physics-guided Haze Transfer Network (PHATNet), which transfers haze patterns from unseen target domains to source-domain haze-free images.
- Score: 45.78830437593351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image dehazing aims to remove unwanted hazy artifacts in images. Although previous research has collected paired real-world hazy and haze-free images to improve dehazing models' performance in real-world scenarios, these models often experience significant performance drops when handling unseen real-world hazy images due to limited training data. This issue motivates us to develop a flexible domain adaptation method to enhance dehazing performance during testing. Observing that predicting haze patterns is generally easier than recovering clean content, we propose the Physics-guided Haze Transfer Network (PHATNet) which transfers haze patterns from unseen target domains to source-domain haze-free images, creating domain-specific fine-tuning sets to update dehazing models for effective domain adaptation. Additionally, we introduce a Haze-Transfer-Consistency loss and a Content-Leakage Loss to enhance PHATNet's disentanglement ability. Experimental results demonstrate that PHATNet significantly boosts state-of-the-art dehazing models on benchmark real-world image dehazing datasets.
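The "physics-guided" label in PHATNet and PANet refers to the standard atmospheric scattering (Koschmieder) model widely used in dehazing work, which composes a hazy image from a clean image, a transmission map, and a global atmospheric light. A minimal sketch of haze synthesis under that model follows; the variable names and toy inputs are illustrative, not taken from the paper:

```python
import numpy as np

def synthesize_haze(clean: np.ndarray, transmission: np.ndarray, airlight: float) -> np.ndarray:
    """Standard atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x))
    where J is the clean image, t the per-pixel transmission in [0, 1],
    and A the global atmospheric light."""
    return clean * transmission + airlight * (1.0 - transmission)

# Toy example: uniform haze (t = 0.6, A = 0.9) over a gradient "image".
J = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # clean image in [0, 1]
t = np.full_like(J, 0.6)                     # transmission map
A = 0.9                                      # atmospheric light
I = synthesize_haze(J, t, A)                 # hazy image, values in [0.36, 0.96]
```

Methods in this list differ mainly in how they estimate or transfer t and A for real-world haze, rather than in the compositing equation itself.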
Related papers
- Learning Unpaired Image Dehazing with Physics-based Rehazy Generation [50.37414006427923]
Overfitting to synthetic training pairs remains a critical challenge in image dehazing. We propose a novel training strategy for unpaired image dehazing, termed Rehazy, to improve both dehazing performance and training stability.
arXiv Detail & Related papers (2025-06-15T12:12:28Z)
- Learning Hazing to Dehazing: Towards Realistic Haze Generation for Real-World Image Dehazing [59.43187521828543]
We introduce a novel hazing-dehazing pipeline consisting of a Realistic Hazy Image Generation framework (HazeGen) and a Diffusion-based Dehazing framework (DiffDehaze). HazeGen harnesses robust generative diffusion priors of real-world hazy images embedded in a pre-trained text-to-image diffusion model. By employing specialized hybrid training and blended sampling strategies, HazeGen produces realistic and diverse hazy images as high-quality training data for DiffDehaze.
arXiv Detail & Related papers (2025-03-25T01:55:39Z)
- Exploiting Diffusion Prior for Real-World Image Dehazing with Unpaired Training [11.902218695900217]
Unpaired training is one of the most effective paradigms for real scene dehazing by learning from hazy and clear images. Inspired by the strong generative capabilities of diffusion models in producing both hazy and clear images, we exploit diffusion prior for real-world image dehazing. We introduce a new perspective for adequately leveraging the representation ability of diffusion models by removing degradation in image and text modalities.
arXiv Detail & Related papers (2025-03-19T09:13:06Z)
- HazeCLIP: Towards Language Guided Real-World Image Dehazing [62.4454483961341]
Existing methods have achieved remarkable performance in image dehazing, particularly on synthetic datasets. This paper introduces HazeCLIP, a language-guided adaptation framework designed to enhance the real-world performance of pre-trained dehazing networks.
arXiv Detail & Related papers (2024-07-18T17:18:25Z)
- PANet: A Physics-guided Parametric Augmentation Net for Image Dehazing by Hazing [33.39324790342096]
A huge domain gap between synthetic and real-world haze images degrades dehazing performance in practical settings.
We propose a Physics-guided Parametric Augmentation Network (PANet) that generates photo-realistic hazy and clean training pairs.
Our experimental results demonstrate that PANet can augment diverse realistic hazy images to enrich existing hazy image benchmarks.
arXiv Detail & Related papers (2024-04-14T14:24:13Z)
- Source-Free Domain Adaptation for Real-world Image Dehazing [10.26945164141663]
We present a novel Source-Free Unsupervised Domain Adaptation (SFUDA) image dehazing paradigm.
We devise the Domain Representation Normalization (DRN) module to make the representation of real hazy domain features match that of the synthetic domain.
With our plug-and-play DRN module, existing well-trained source networks can be adapted using unlabeled real hazy images.
arXiv Detail & Related papers (2022-07-14T03:37:25Z)
- Mutual Learning for Domain Adaptation: Self-distillation Image Dehazing Network with Sample-cycle [7.452382358080454]
We propose a mutual learning dehazing framework for domain adaptation.
Specifically, we first devise two siamese networks: a teacher network in the synthetic domain and a student network in the real domain.
We show that the framework outperforms state-of-the-art dehazing techniques in terms of subjective and objective evaluation.
arXiv Detail & Related papers (2022-03-17T16:32:14Z)
- Domain Adaptation for Image Dehazing [72.15994735131835]
Most existing methods train a dehazing model on synthetic hazy images, which often fails to generalize to real hazy images due to domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
arXiv Detail & Related papers (2020-05-10T13:54:56Z)
- FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Network with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.