Look at the Neighbor: Distortion-aware Unsupervised Domain Adaptation
for Panoramic Semantic Segmentation
- URL: http://arxiv.org/abs/2308.05493v1
- Date: Thu, 10 Aug 2023 10:47:12 GMT
- Title: Look at the Neighbor: Distortion-aware Unsupervised Domain Adaptation
for Panoramic Semantic Segmentation
- Authors: Xu Zheng, Tianbo Pan, Yunhao Luo, Lin Wang
- Abstract summary: The aim is to tackle the domain gaps caused by style disparities and by the distortion arising from the non-uniformly distributed pixels of equirectangular projection (ERP).
We propose a novel UDA framework that effectively addresses the distortion problem in panoramic semantic segmentation.
- Score: 5.352137021024213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efforts have recently been made to transfer knowledge from the
labeled pinhole image domain to the unlabeled panoramic image domain via
Unsupervised Domain Adaptation (UDA). The aim is to tackle the domain gaps
caused by style disparities and by the distortion arising from the
non-uniformly distributed pixels of equirectangular projection (ERP).
Previous works typically transfer knowledge based on geometric priors with
specially designed multi-branch network architectures. As a result, they
incur considerable computational costs, and their generalization ability is
severely hindered by the variation of distortion among pixels. In this
paper, we find that the neighborhood region of each ERP pixel introduces
less distortion. Motivated by this observation, we propose a novel UDA
framework that effectively addresses the distortion problem in panoramic
semantic segmentation while being simpler, easier to implement, and more
computationally efficient than prior methods. Specifically, we propose
distortion-aware attention (DA), which captures the neighboring pixel
distribution without using any geometric constraints. Moreover, we propose
a class-wise feature aggregation (CFA) module that iteratively updates the
feature representations with a memory bank, so that the feature similarity
between the two domains can be consistently optimized. Extensive
experiments show that our method achieves new state-of-the-art performance
while reducing the number of parameters by 80%.
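As a rough illustration of the two components named in the abstract, here is a minimal sketch that pairs attention restricted to each pixel's local window (exploiting the observation that ERP neighborhoods are less distorted) with a class-wise memory bank updated by an exponential moving average. The module names, window size, and update rule are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two ideas named in the abstract; the real DA/CFA
# design is not specified there, so every detail below is an assumption.
import torch
import torch.nn.functional as F
from torch import nn


class NeighborhoodAttention(nn.Module):
    """Self-attention restricted to non-overlapping local windows, so each pixel
    only attends to its neighborhood (no explicit geometric/ERP priors)."""

    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.window, self.heads = window, heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, H, W, C), H and W divisible by the window size
        B, H, W, C = x.shape
        w, h = self.window, self.heads
        # partition the feature map into w x w neighborhoods
        x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, w * w, C)                       # (B * windows, w*w, C)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(-1, w * w, h, C // h).transpose(1, 2)  # (B*windows, heads, w*w, C/h)
        k = k.view(-1, w * w, h, C // h).transpose(1, 2)
        v = v.view(-1, w * w, h, C // h).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / (C // h) ** 0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(-1, w * w, C)
        out = self.proj(out)
        out = out.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(B, H, W, C)


class ClassWiseMemoryBank:
    """One running (EMA) feature centroid per class, used to pull features from
    both domains toward a shared class-wise representation."""

    def __init__(self, num_classes: int, dim: int, momentum: float = 0.99):
        self.bank = torch.zeros(num_classes, dim)
        self.m = momentum

    @torch.no_grad()
    def update(self, feats, labels):  # feats: (N, C), labels: (N,) class ids
        self.bank = self.bank.to(feats.device)
        for c in labels.unique():
            centroid = feats[labels == c].mean(dim=0)
            self.bank[c] = self.m * self.bank[c] + (1 - self.m) * centroid

    def alignment_loss(self, feats, labels):  # pull features toward their class centroid
        return F.mse_loss(feats, self.bank.to(feats.device)[labels].detach())
```

In a UDA setting one would typically update such a bank with source (or confidently pseudo-labeled target) features and apply the alignment loss to both domains, so that the two feature distributions are drawn toward the same class centroids.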
Related papers
- Reducing Semantic Ambiguity In Domain Adaptive Semantic Segmentation Via Probabilistic Prototypical Pixel Contrast [7.092718945468069]
Domain adaptation aims to reduce the model degradation on the target domain caused by the domain shift between the source and target domains.
Probabilistic prototypical pixel contrast (PPPC) is a universal adaptation framework that models each pixel embedding as a probability distribution.
PPPC not only helps to address ambiguity at the pixel level, yielding discriminative representations, but also brings significant improvements in both synthetic-to-real and day-to-night adaptation tasks.
arXiv Detail & Related papers (2024-09-27T08:25:03Z)
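The PPPC entry above hinges on modeling each pixel embedding as a probability distribution rather than a point. The snippet below is a generic, hypothetical illustration of that idea (a diagonal-Gaussian head plus one common probabilistic similarity); it is not PPPC's actual formulation.

```python
# Generic illustration of "pixel embedding as a probability distribution";
# the Gaussian head and the similarity measure are assumptions, not PPPC itself.
import torch
from torch import nn


class ProbabilisticPixelHead(nn.Module):
    """Predicts a diagonal Gaussian (mean, log-variance) per pixel instead of a
    single point embedding."""

    def __init__(self, in_ch: int, emb_dim: int = 64):
        super().__init__()
        self.mu = nn.Conv2d(in_ch, emb_dim, kernel_size=1)
        self.log_var = nn.Conv2d(in_ch, emb_dim, kernel_size=1)

    def forward(self, feats):  # feats: (B, in_ch, H, W) -> two (B, emb_dim, H, W) maps
        return self.mu(feats), self.log_var(feats)


def gaussian_similarity(mu1, var1, mu2, var2):
    """Similarity between two diagonal Gaussians with the embedding dimension
    last (e.g. (N, D)); var = log_var.exp(). Higher when the means are close
    relative to the combined uncertainty."""
    var = var1 + var2
    return -0.5 * (((mu1 - mu2) ** 2) / var + torch.log(var)).sum(dim=-1)
```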
- PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptative Semantic Segmentation [100.6343963798169]
Unsupervised Domain Adaptation (UDA) aims to enhance the generalization of the learned model to other domains.
We propose a unified pixel- and patch-wise self-supervised learning framework, called PiPa, for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2022-11-14T18:31:24Z)
- Pixel-by-Pixel Cross-Domain Alignment for Few-Shot Semantic Segmentation [16.950853152484203]
We consider the task of semantic segmentation in autonomous driving applications.
In this context, aligning the domains is made more challenging by the pixel-wise class imbalance.
We propose a novel framework called Pixel-By-Pixel Cross-Domain Alignment (PixDA).
arXiv Detail & Related papers (2021-10-22T08:27:17Z)
- Semantic Distribution-aware Contrastive Adaptation for Semantic Segmentation [50.621269117524925]
Domain adaptive semantic segmentation refers to making predictions on a certain target domain with only annotations of a specific source domain.
We present a semantic distribution-aware contrastive adaptation algorithm that enables pixel-wise representation alignment.
We evaluate SDCA on multiple benchmarks, achieving considerable improvements over existing algorithms.
arXiv Detail & Related papers (2021-05-11T13:21:25Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much a few labeled target samples can help address domain shifts.
To explore the full potential of these labeled target samples (referred to as landmarks), we incorporate a prototypical alignment (PA) module that calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
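For the prototypical alignment entry above, a class prototype is typically just the mean embedding of the labeled target samples (landmarks) of that class. Below is a minimal sketch under that assumption; the function names and the squared-error alignment loss are illustrative, not taken from the paper.

```python
# Minimal sketch: per-class prototypes from a few labeled target samples and a
# simple alignment loss. Assumed details, not the paper's exact formulation.
import torch


def class_prototypes(features, labels, num_classes):
    """features: (N, C) embeddings of labeled target samples, labels: (N,) class ids.
    Returns one prototype per class: the mean embedding of that class."""
    protos = torch.zeros(num_classes, features.shape[1], device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos


def prototype_alignment_loss(features, labels, protos):
    """Pull each embedding toward the prototype of its class."""
    return ((features - protos[labels]) ** 2).sum(dim=1).mean()
```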
- AINet: Association Implantation for Superpixel Segmentation [82.21559299694555]
We propose a novel Association Implantation (AI) module to enable the network to explicitly capture the relations between a pixel and its surrounding grids.
Our method not only achieves state-of-the-art performance but also maintains satisfactory inference efficiency.
arXiv Detail & Related papers (2021-01-26T10:40:13Z)
- Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation [169.82760468633236]
We propose to build the pixel-level cycle association between source and target pixel pairs.
Our method can be trained end-to-end in one stage and introduces no additional parameters.
arXiv Detail & Related papers (2020-10-31T00:11:36Z)
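For the pixel-level cycle association entry above, the idea can be pictured as a round trip: each source pixel is matched to its most similar target pixel, that target pixel is matched back to a source pixel, and the pair is kept only if the round trip lands on a pixel of the same class. The sketch below is an assumed, simplified version; the paper's exact similarity and selection rules may differ.

```python
# Simplified, assumed illustration of pixel-level cycle association.
import torch
import torch.nn.functional as F


def cycle_associate(src_feat, tgt_feat, src_labels):
    """src_feat: (Ns, C) and tgt_feat: (Nt, C) pixel features,
    src_labels: (Ns,) ground-truth classes of the source pixels."""
    src = F.normalize(src_feat, dim=1)
    tgt = F.normalize(tgt_feat, dim=1)
    sim = src @ tgt.t()                          # (Ns, Nt) cosine similarities
    s2t = sim.argmax(dim=1)                      # forward: source pixel -> best target pixel
    t2s = sim.argmax(dim=0)                      # backward: target pixel -> best source pixel
    back = t2s[s2t]                              # round trip: source -> target -> source
    consistent = src_labels[back] == src_labels  # cycle lands on the same class
    return s2t, consistent                       # associations + validity mask
```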
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
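The inpainting entry above relies on combining dilated convolutions to enlarge the receptive field. The toy block below shows that mechanism in isolation; the channel count, dilation rates, and fusion scheme are arbitrary choices, not the paper's architecture.

```python
# Toy block combining dilated convolutions with growing rates; all sizes are
# arbitrary and only illustrate how the receptive field is enlarged.
import torch
from torch import nn


class DilatedCombinationBlock(nn.Module):
    def __init__(self, ch: int = 64, rates=(1, 2, 4, 8)):
        super().__init__()
        # kernel 3 with padding == dilation keeps the spatial size unchanged
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, kernel_size=3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, kernel_size=1)

    def forward(self, x):  # x: (B, ch, H, W)
        # each branch sees a different context size; concatenation keeps them all
        outs = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(outs, dim=1)) + x  # residual connection
```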