Semantic Segmentation using Vision Transformers: A survey
- URL: http://arxiv.org/abs/2305.03273v1
- Date: Fri, 5 May 2023 04:11:00 GMT
- Title: Semantic Segmentation using Vision Transformers: A survey
- Authors: Hans Thisanke, Chamli Deshan, Kavindu Chamith, Sachith Seneviratne,
Rajith Vidanaarachchi, Damayanthi Herath
- Abstract summary: Convolutional neural networks (CNNs) and Vision Transformers (ViTs) provide the architectural models for semantic segmentation.
Although ViTs have proven successful in image classification, they cannot be directly applied to dense prediction tasks such as image segmentation and object detection.
This survey aims to review and compare the performances of ViT architectures designed for semantic segmentation using benchmarking datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation has a broad range of applications in a variety of
domains including land coverage analysis, autonomous driving, and medical image
analysis. Convolutional neural networks (CNNs) and Vision Transformers (ViTs)
provide the architectural models for semantic segmentation. Even though ViTs
have proven successful in image classification, they cannot be directly applied
to dense prediction tasks such as image segmentation and object detection,
since the plain ViT is not a general-purpose backbone due to its patch
partitioning scheme. In this survey, we discuss some of the different ViT
architectures that can be used for semantic segmentation and how their
evolution addressed the above-stated challenge. The rise of ViTs and their
strong performance have motivated the community to gradually replace
traditional convolutional neural networks in various computer vision tasks.
This survey aims to review and compare the performance of ViT architectures
designed for semantic segmentation on benchmark datasets. This will help the
community gain knowledge of the implementations carried out in semantic
segmentation and discover more efficient ViT-based methodologies.
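To make the patch-partitioning issue mentioned above concrete, the following PyTorch sketch (patch size, input resolution, and embedding dimension are illustrative assumptions, not values from the survey) shows how a plain ViT's patch embedding collapses an image into a single low-resolution token grid, which is why it cannot serve directly as a general-purpose backbone for dense prediction:

```python
import torch
import torch.nn as nn

# Illustrative, assumed values: 16x16 patches, 512x512 input, 768-dim tokens.
patch_size, embed_dim = 16, 768

# ViT-style patch embedding: a strided convolution that maps each
# non-overlapping 16x16 patch to one token.
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

image = torch.randn(1, 3, 512, 512)          # (B, C, H, W)
grid = patch_embed(image)                    # (1, 768, 32, 32)
tokens = grid.flatten(2).transpose(1, 2)     # (1, 1024, 768) token sequence

# After the transformer blocks, the only spatial map a plain ViT can expose is
# this single 32x32 grid at 1/16 of the input resolution; a segmentation head
# must upsample it, and there is no built-in multi-scale feature pyramid as in
# CNN backbones.
feature_map = tokens.transpose(1, 2).reshape(1, embed_dim, 32, 32)
print(feature_map.shape)  # torch.Size([1, 768, 32, 32])
```

Hierarchical and windowed ViT designs, several of which appear in the related papers below, recover multi-scale feature maps precisely to close this gap.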
Related papers
- GiT: Towards Generalist Vision Transformer through Universal Language Interface [94.33443158125186]
This paper proposes a simple yet effective framework, called GiT, that is simultaneously applicable to various vision tasks using only a vanilla ViT.
GiT is a multi-task visual model, jointly trained across five representative benchmarks without task-specific fine-tuning.
arXiv Detail & Related papers (2024-03-14T13:47:41Z)
- Transformer-Based Visual Segmentation: A Survey [118.01564082499948]
Visual segmentation seeks to partition images, video frames, or point clouds into multiple segments or groups.
Transformers are a type of neural network based on self-attention originally designed for natural language processing.
Transformers offer robust, unified, and even simpler solutions for various segmentation tasks.
arXiv Detail & Related papers (2023-04-19T17:59:02Z)
- SegViT: Semantic Segmentation with Plain Vision Transformers [91.50075506561598]
We explore the capability of plain Vision Transformers (ViTs) for semantic segmentation.
We propose the Attention-to-Mask (ATM) module, in which similarity maps between a set of learnable class tokens and the spatial feature maps are transferred to the segmentation masks (a minimal illustrative sketch follows the related-papers list).
Experiments show that the proposed SegViT using the ATM module outperforms its counterparts using the plain ViT backbone.
arXiv Detail & Related papers (2022-10-12T00:30:26Z)
- Vision Transformers: From Semantic Segmentation to Dense Prediction [139.15562023284187]
We explore the global context learning potentials of vision transformers (ViTs) for dense visual prediction.
Our motivation is that, by learning global context at a full receptive field layer by layer, ViTs may capture stronger long-range dependency information.
We formulate a family of Hierarchical Local-Global (HLG) Transformers, characterized by local attention within windows and global attention across windows in a pyramidal architecture.
arXiv Detail & Related papers (2022-07-19T15:49:35Z)
- Global Context Vision Transformers [78.5346173956383]
We propose global context vision transformer (GC ViT), a novel architecture that enhances parameter and compute utilization for computer vision.
We address the lack of inductive bias in ViTs and propose to leverage modified fused inverted residual blocks in our architecture.
Our proposed GC ViT achieves state-of-the-art results across image classification, object detection and semantic segmentation tasks.
arXiv Detail & Related papers (2022-06-20T18:42:44Z)
- A Unified and Biologically-Plausible Relational Graph Representation of Vision Transformers [11.857392812189872]
Vision transformer (ViT) and its variants have achieved remarkable successes in various visual tasks.
We propose a unified and biologically-plausible relational graph representation of ViT models.
Our work provides a novel unified and biologically-plausible paradigm for more interpretable and effective representation of ViT ANNs.
arXiv Detail & Related papers (2022-05-20T05:53:23Z)
- Smoothing Matters: Momentum Transformer for Domain Adaptive Semantic Segmentation [48.7190017311309]
We find that straightforwardly applying local ViTs in domain adaptive semantic segmentation does not bring the expected improvement.
These high-frequency components make the training of local ViTs very unsmooth and hurt their transferability.
In this paper, we introduce a low-pass filtering mechanism, momentum network, to smooth the learning dynamics of target domain features and pseudo labels.
arXiv Detail & Related papers (2022-03-15T15:20:30Z)
- A Comprehensive Study of Vision Transformers on Dense Prediction Tasks [10.013443811899466]
Convolutional Neural Networks (CNNs) have been the standard choice in vision tasks.
Recent studies have shown that Vision Transformers (VTs) achieve comparable performance in challenging tasks such as object detection and semantic segmentation.
This poses several questions about their generalizability, robustness, reliability, and texture bias when used to extract features for complex tasks.
arXiv Detail & Related papers (2022-01-21T13:18:16Z)
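As a minimal illustration of the Attention-to-Mask idea referenced in the SegViT entry above, the sketch below (shapes, the single-layer setup, and all variable names are assumptions for illustration, not the paper's exact implementation) computes similarity maps between learnable class tokens and ViT spatial tokens and reads them out directly as per-class soft masks:

```python
import torch
import torch.nn as nn

# Illustrative, assumed sizes: 21 classes, 256-dim tokens, a 32x32 token grid.
num_classes, dim, H, W = 21, 256, 32, 32

class_tokens = nn.Parameter(torch.randn(num_classes, dim))  # learnable queries
features = torch.randn(1, H * W, dim)                       # ViT spatial tokens

# Cross-attention-style similarity: one score per (class token, location).
queries = class_tokens.unsqueeze(0)                                  # (1, K, D)
sim = torch.einsum("bkd,bnd->bkn", queries, features) / dim ** 0.5   # (1, K, H*W)

# The similarity map itself becomes the prediction: a sigmoid (rather than a
# softmax over classes) turns each row into a soft mask at token resolution,
# which a segmentation head would then upsample to the image size.
masks = sim.sigmoid().reshape(1, num_classes, H, W)
print(masks.shape)  # torch.Size([1, 21, 32, 32])
```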