Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation
- URL: http://arxiv.org/abs/2307.08779v3
- Date: Sun, 5 Nov 2023 05:55:26 GMT
- Title: Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation
- Authors: Rundong Luo, Wenjing Wang, Wenhan Yang, Jiaying Liu
- Abstract summary: Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper tackles a more challenging scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers image-level translation and model-level adaptation under a unified framework.
- Score: 52.923298434948606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-light conditions not only hamper human visual experience but also degrade
the model's performance on downstream vision tasks. While existing works make
remarkable progress on day-night domain adaptation, they rely heavily on domain
knowledge derived from the task-specific nighttime dataset. This paper
tackles a more challenging scenario with broader applicability, i.e.,
zero-shot day-night domain adaptation, which eliminates reliance on any
nighttime data. Unlike prior zero-shot adaptation approaches emphasizing either
image-level translation or model-level adaptation, we propose a similarity
min-max paradigm that considers them under a unified framework. On the image
level, we darken images towards minimum feature similarity to enlarge the
domain gap. Then on the model level, we maximize the feature similarity between
the darkened images and their normal-light counterparts for better model
adaptation. To the best of our knowledge, this work is the first to jointly
optimize both aspects, resulting in a significant improvement in model
generalizability. Extensive experiments demonstrate our
method's effectiveness and broad applicability on various nighttime vision
tasks, including classification, semantic segmentation, visual place
recognition, and video action recognition. Code and pre-trained models are
available at https://red-fairy.github.io/ZeroShotDayNightDA-Webpage/.
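To make the alternating objective concrete, below is a minimal PyTorch-style sketch of the two stages described in the abstract. It assumes a learnable darkening module, a frozen feature extractor for the image-level stage, and a classification task for the model-level stage; all names, the cosine-similarity measure, and the loss weighting `alpha` are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def image_level_step(darkener, frozen_extractor, day_images, opt_darkener):
    # Image level (min): update the darkening module so that darkened images are
    # maximally dissimilar to their day counterparts in feature space, enlarging
    # the synthetic day-night domain gap. frozen_extractor is assumed to have
    # requires_grad disabled on its parameters.
    dark_images = darkener(day_images)
    with torch.no_grad():
        f_day = frozen_extractor(day_images).flatten(1)
    f_dark = frozen_extractor(dark_images).flatten(1)
    loss = F.cosine_similarity(f_day, f_dark, dim=1).mean()  # minimize similarity
    opt_darkener.zero_grad()
    loss.backward()
    opt_darkener.step()
    return dark_images.detach()

def model_level_step(backbone, head, day_images, dark_images, labels,
                     opt_model, alpha=1.0):
    # Model level (max): update the task model so that features of darkened and
    # day images align, while supervising the task head on the darkened inputs.
    f_day = backbone(day_images).flatten(1)
    f_dark = backbone(dark_images).flatten(1)
    sim = F.cosine_similarity(f_day, f_dark, dim=1).mean()
    task_loss = F.cross_entropy(head(f_dark), labels)
    loss = task_loss - alpha * sim                            # maximize similarity
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
    return loss.item()
```

In the paper, the same idea is applied to several downstream tasks (classification, segmentation, place recognition, action recognition), so the task loss and feature similarity above stand in for whichever objective the target task uses.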
Related papers
- Night-to-Day Translation via Illumination Degradation Disentanglement [51.77716565167767]
Night-to-Day translation aims to achieve day-like vision for nighttime scenes.
Processing night images with complex degradations remains a significant challenge under unpaired conditions.
We propose N2D3 to identify different degradation patterns in nighttime images.
arXiv Detail & Related papers (2024-11-21T08:51:32Z) - Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z) - Appearance Codes using Joint Embedding Learning of Multiple Modalities [0.0]
A major limitation of this technique is the need to re-train new appearance codes for every scene at inference.
We propose a framework that learns a joint embedding space for the appearance and structure of the scene by enforcing a contrastive loss constraint between different modalities.
We show that our approach achieves generations of similar quality without learning appearance codes for any unseen images at inference.
arXiv Detail & Related papers (2023-11-19T21:24:34Z) - Bilevel Fast Scene Adaptation for Low-Light Image Enhancement [50.639332885989255]
Enhancing images captured in low-light scenes is a challenging yet widely studied task in computer vision.
The main obstacle lies in modeling the distribution discrepancy across different scenes.
We introduce the bilevel paradigm to model the above latent correspondence.
A bilevel learning framework is constructed to endow the scene-irrelevant generality of the encoder towards diverse scenes.
arXiv Detail & Related papers (2023-06-02T08:16:21Z) - Cross-Domain Correlation Distillation for Unsupervised Domain Adaptation
in Nighttime Semantic Segmentation [17.874336775904272]
We propose a novel domain adaptation framework via cross-domain correlation distillation, called CCDistill.
We extract the content and style knowledge contained in the features and calculate the degree of inherent or illumination difference between two images.
Experiments on Dark Zurich and ACDC demonstrate that CCDistill achieves the state-of-the-art performance for nighttime semantic segmentation.
arXiv Detail & Related papers (2022-05-02T12:42:04Z) - Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic
Segmentation [83.97914777313136]
In autonomous driving, learning a segmentation model that can adapt to various environmental conditions is crucial.
In this paper, we study the problem of Domain Adaptive Nighttime Semantic Segmentation (DANSS), which aims to learn a discriminative nighttime model.
We propose a novel Bi-Mix framework for DANSS, which can contribute to both image translation and segmentation adaptation processes.
arXiv Detail & Related papers (2021-11-19T17:39:47Z) - PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency
Training [4.336877104987131]
Unsupervised domain adaptation is a promising technique for semantic segmentation.
We present a novel framework for unsupervised domain adaptation based on the notion of target-domain consistency training.
Our approach is simpler, easier to implement, and more memory-efficient during training.
arXiv Detail & Related papers (2021-05-17T19:36:28Z) - Co-Attention for Conditioned Image Matching [91.43244337264454]
We propose a new approach to determine correspondences between image pairs in the wild under large changes in illumination, viewpoint, context, and material.
While other approaches find correspondences between pairs of images by treating the images independently, we instead condition on both images to implicitly take account of the differences between them.
arXiv Detail & Related papers (2020-07-16T17:32:00Z)