From Contexts to Locality: Ultra-high Resolution Image Segmentation via
Locality-aware Contextual Correlation
- URL: http://arxiv.org/abs/2109.02580v1
- Date: Mon, 6 Sep 2021 16:26:05 GMT
- Title: From Contexts to Locality: Ultra-high Resolution Image Segmentation via
Locality-aware Contextual Correlation
- Authors: Qi Li, Weixiang Yang, Wenxi Liu, Yuanlong Yu, Shengfeng He
- Abstract summary: We innovate the widely used high-resolution image segmentation pipeline.
An ultra-high resolution image is partitioned into regular patches for local segmentation and then the local results are merged into a high-resolution semantic mask.
- Score: 43.70432772819461
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ultra-high resolution image segmentation has attracted increasing
interest in recent years due to its practical applications. In this paper, we
innovate the widely used high-resolution image segmentation pipeline, in which
an ultra-high resolution image is partitioned into regular patches for local
segmentation and the local results are then merged into a high-resolution
semantic mask. In particular, we introduce a novel segmentation model based on
locality-aware contextual correlation to process local patches, where the
relevance between a local patch and its various contexts is exploited jointly
and complementarily to handle semantic regions with large variations.
Additionally, we present a contextual semantics refinement network that
associates the local segmentation result with its contextual semantics, and is
thus able to reduce boundary artifacts and refine mask contours during the
generation of the final high-resolution mask. Furthermore, in comprehensive
experiments, we demonstrate that our model outperforms other state-of-the-art
methods on public benchmarks. Our code is available at
https://github.com/liqiokkk/FCtL.
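The partition-segment-merge pipeline described in the abstract can be sketched as follows. This is a minimal, generic illustration of the patch-based approach, not the paper's locality-aware model; `segment_patch` is a hypothetical placeholder (a simple intensity threshold) standing in for the actual segmentation network.

```python
import numpy as np

def segment_patch(patch):
    # Hypothetical local segmentation: binarize on mean channel intensity.
    # In the paper's pipeline this would be the locality-aware network.
    return (patch.mean(axis=-1) > 127).astype(np.uint8)

def segment_by_patches(image, patch_size=512):
    """Partition an ultra-high resolution image into regular patches,
    segment each patch locally, and merge the local results into a
    full-size semantic mask."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            patch = image[top:top + patch_size, left:left + patch_size]
            mask[top:top + patch.shape[0],
                 left:left + patch.shape[1]] = segment_patch(patch)
    return mask
```

Edge patches are simply smaller than `patch_size`; a real system would typically pad them and may also overlap patches to reduce seams, which is exactly the boundary-artifact problem the refinement network targets.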
Related papers
- Associating Spatially-Consistent Grouping with Text-supervised Semantic
Segmentation [117.36746226803993]
We introduce self-supervised spatially-consistent grouping with text-supervised semantic segmentation.
Considering the part-like grouped results, we further adapt a text-supervised model from image-level to region-level recognition.
Our method achieves 59.2% mIoU and 32.4% mIoU on Pascal VOC and Pascal Context benchmarks.
arXiv Detail & Related papers (2023-04-03T16:24:39Z)
- Unsupervised Domain Adaptation for Semantic Segmentation using One-shot
Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-12-07T18:16:17Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image
Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- Global Aggregation then Local Distribution for Scene Parsing [99.1095068574454]
We show that our approach can be modularized as an end-to-end trainable block and easily plugged into existing semantic segmentation networks.
Our approach allows us to build new state of the art on major semantic segmentation benchmarks including Cityscapes, ADE20K, Pascal Context, Camvid and COCO-stuff.
arXiv Detail & Related papers (2021-07-28T03:46:57Z)
- Attention Toward Neighbors: A Context Aware Framework for High
Resolution Image Segmentation [2.9210447295585724]
We propose a novel framework to segment a particular patch by incorporating contextual information from its neighboring patches.
This allows the segmentation network to see the target patch with a wider field of view without the need of larger feature maps.
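The neighbor-aware idea summarized above can be illustrated with a simple crop-and-re-crop sketch: the model receives an enlarged window around the target patch (so it sees neighboring context), and the prediction is cropped back to the patch region. `seg_fn` is a hypothetical stand-in for the actual context-aware segmentation network.

```python
import numpy as np

def seg_fn(window):
    # Hypothetical segmentation network: binarize on intensity.
    return (window > 100).astype(np.uint8)

def segment_with_context(image, top, left, patch=256, margin=64):
    """Segment the patch at (top, left) with a wider field of view:
    run the model on an enlarged window including neighboring context,
    then crop the prediction back to the target patch."""
    h, w = image.shape[:2]
    t0, l0 = max(0, top - margin), max(0, left - margin)
    t1 = min(h, top + patch + margin)
    l1 = min(w, left + patch + margin)
    out = seg_fn(image[t0:t1, l0:l1])
    return out[top - t0: top - t0 + patch, left - l0: left - l0 + patch]
```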
arXiv Detail & Related papers (2021-06-24T10:58:09Z)
- Spatially Consistent Representation Learning [12.120041613482558]
We propose a spatially consistent representation learning algorithm (SCRL) for multi-object and location-specific tasks.
We devise a novel self-supervised objective that tries to produce coherent spatial representations of a randomly cropped local region.
On various downstream localization tasks with benchmark datasets, the proposed SCRL shows significant performance improvements.
arXiv Detail & Related papers (2021-03-10T15:23:45Z)
- Affinity Space Adaptation for Semantic Segmentation Across Domains [57.31113934195595]
In this paper, we address the problem of unsupervised domain adaptation (UDA) in semantic segmentation.
Motivated by the fact that source and target domain have invariant semantic structures, we propose to exploit such invariance across domains.
We develop two affinity space adaptation strategies: affinity space cleaning and adversarial affinity space alignment.
arXiv Detail & Related papers (2020-09-26T10:28:11Z)
- Semantically Adaptive Image-to-image Translation for Domain Adaptation
of Semantic Segmentation [1.8275108630751844]
We address the problem of domain adaptation for semantic segmentation of street scenes.
Many state-of-the-art approaches focus on translating the source image while imposing that the result should be semantically consistent with the input.
We advocate that the image semantics can also be exploited to guide the translation algorithm.
arXiv Detail & Related papers (2020-09-02T16:16:50Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, except for frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.