Level-aware Haze Image Synthesis by Self-Supervised Content-Style Disentanglement
- URL: http://arxiv.org/abs/2103.06501v1
- Date: Thu, 11 Mar 2021 06:53:18 GMT
- Title: Level-aware Haze Image Synthesis by Self-Supervised Content-Style Disentanglement
- Authors: Chi Zhang, Zihang Lin, Liheng Xu, Zongliang Li, Le Wang, Yuehu Liu,
Gaofeng Meng, Li Li, and Nanning Zheng
- Abstract summary: The key procedure of haze image translation through adversarial training lies in the disentanglement between the feature involved only in haze synthesis, i.e. the style feature, and the feature representing the invariant semantic content, i.e. the content feature.
- Score: 56.99803235546565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The key procedure of haze image translation through adversarial training lies
in the disentanglement between the feature involved only in haze synthesis,
i.e. the style feature, and the feature representing the invariant semantic content,
i.e. the content feature. Previous methods separate out the content feature by
using it to classify haze images during training. In this paper, however, we
identify the incompleteness of the content-style disentanglement achieved by
such a technical routine: the flawed style feature remains entangled with
content information, which inevitably leads to ill-rendered haze images. To
address this, we propose a self-supervised style regression via stochastic
linear interpolation that reduces the content information in the style feature.
Ablative experiments demonstrate the completeness of the disentanglement and
its superiority in level-aware haze image synthesis. Moreover, the generated
haze data are applied to test the generalization of vehicle detectors. A
further study of the relationship between haze level and detection performance
shows that haze has an obvious impact on the generalization of vehicle
detectors, and that the degree of performance degradation is linearly
correlated with the haze level, which in turn validates the effectiveness of
the proposed method.
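The abstract does not spell out the regression loss, but one plausible reading of "self-supervised style regression via stochastic linear interpolation" can be sketched as follows. The dimensions, network shape, and loss pairing below are illustrative assumptions, not the paper's actual architecture: two style features are mixed with a random coefficient, and a small regressor is trained to recover that coefficient, which pressures the style code to encode only the (interpolable) haze level rather than content.

```python
import torch
import torch.nn as nn

# Hypothetical style-code size; the paper's actual dimensionality is not given here.
STYLE_DIM = 8

class StyleRegressor(nn.Module):
    """Predicts the interpolation coefficient alpha from a style feature."""
    def __init__(self, style_dim: int = STYLE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(style_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid())  # alpha is constrained to (0, 1)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s).squeeze(-1)

def style_regression_loss(s_a: torch.Tensor, s_b: torch.Tensor,
                          regressor: StyleRegressor) -> torch.Tensor:
    """Stochastic linear interpolation between two style features.

    The regressor must recover the per-sample mixing coefficient alpha,
    which is a self-supervised signal: it is only predictable when the
    style code varies smoothly with haze level, not with image content.
    """
    alpha = torch.rand(s_a.size(0))                        # one coefficient per sample
    s_mix = alpha.unsqueeze(1) * s_a + (1 - alpha).unsqueeze(1) * s_b
    return nn.functional.mse_loss(regressor(s_mix), alpha)
```

In a full pipeline this loss would be added to the adversarial and reconstruction objectives of the translation network; here it is shown in isolation only to make the interpolation-and-regress idea concrete.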
Related papers
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies such as channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- ODCR: Orthogonal Decoupling Contrastive Regularization for Unpaired Image Dehazing [2.5944091779488123]
Unpaired image dehazing (UID) holds significant research importance due to the challenges in acquiring haze/clear image pairs with identical backgrounds.
This paper proposes a novel method for UID named Orthogonal Decoupling Contrastive Regularization (ODCR).
arXiv Detail & Related papers (2024-04-27T08:13:13Z)
- PrimeComposer: Faster Progressively Combined Diffusion for Image Composition with Attention Steering [13.785484396436367]
We formulate image composition as a subject-based local editing task, solely focusing on foreground generation.
We propose PrimeComposer, a faster training-free diffuser that composites the images by well-designed attention steering across different noise levels.
Our method exhibits the fastest inference efficiency and extensive experiments demonstrate our superiority both qualitatively and quantitatively.
arXiv Detail & Related papers (2024-03-08T04:58:49Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, there has been inadequate exploration dedicated to unsupervised learning on diffusion-generated images.
We introduce customized solutions by fully exploiting the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- Single Stage Virtual Try-on via Deformable Attention Flows [51.70606454288168]
Virtual try-on aims to generate a photo-realistic fitting result given an in-shop garment and a reference person image.
We develop a novel Deformable Attention Flow (DAFlow) which applies the deformable attention scheme to multi-flow estimation.
Our proposed method achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-07-19T10:01:31Z)
- Improving the Latent Space of Image Style Transfer [24.37383949267162]
In some cases, the feature statistics from the pre-trained encoder may not be consistent with the visual style we perceived.
In such an inappropriate latent space, the objective function of the existing methods will be optimized in the wrong direction.
We propose two contrastive training schemes to get a refined encoder that is more suitable for this task.
arXiv Detail & Related papers (2022-05-24T15:13:01Z)
- Retrieval-based Spatially Adaptive Normalization for Semantic Image Synthesis [68.1281982092765]
We propose a novel normalization module, termed REtrieval-based Spatially AdaptIve normaLization (RESAIL).
RESAIL provides pixel level fine-grained guidance to the normalization architecture.
Experiments on several challenging datasets show that RESAIL performs favorably against state-of-the-art methods in terms of quantitative metrics, visual quality, and subjective evaluation.
arXiv Detail & Related papers (2022-04-06T14:21:39Z)
- A Framework using Contrastive Learning for Classification with Noisy Labels [1.2891210250935146]
We propose a framework using contrastive learning as a pre-training task to perform image classification in the presence of noisy labels.
Recent strategies such as pseudo-labeling, sample selection with Gaussian Mixture models, and weighted supervised contrastive learning have been combined into a fine-tuning phase following the pre-training.
arXiv Detail & Related papers (2021-04-19T18:51:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.