DepthFormer: Multimodal Positional Encodings and Cross-Input Attention
for Transformer-Based Segmentation Networks
- URL: http://arxiv.org/abs/2211.04188v2
- Date: Mon, 27 Mar 2023 12:54:49 GMT
- Title: DepthFormer: Multimodal Positional Encodings and Cross-Input Attention
for Transformer-Based Segmentation Networks
- Authors: Francesco Barbato, Giulia Rizzoli, Pietro Zanuttigh
- Abstract summary: We focus on transformer-based deep learning architectures, which have achieved state-of-the-art performance on the segmentation task.
We propose to employ depth information by embedding it in the positional encoding.
Our approach consistently improves performance on the Cityscapes benchmark.
- Score: 13.858051019755283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most approaches for semantic segmentation use only information from color
cameras to parse the scenes, yet recent advancements show that using depth data
can further improve performance. In this work, we focus on transformer-based
deep learning architectures, which have achieved state-of-the-art performance
on the segmentation task, and we propose to employ depth information by
embedding it in the positional encoding. Effectively, we extend the network to
multimodal data without adding any parameters, in a natural way that exploits
the strength of transformers' self-attention modules. We also investigate the
idea of performing cross-modality operations inside the attention module,
swapping the key inputs between the depth and color branches. Our approach
consistently improves performance on the Cityscapes benchmark.
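The abstract leaves the embedding mechanism implicit. As one plausible reading, a per-patch depth value can be passed through the same parameter-free sinusoidal mapping used for spatial positions and added to the patch embeddings. The PyTorch sketch below follows that assumption; every name in it (sinusoidal_encoding, the normalization of depth to [0, 1]) is ours, not the paper's confirmed design.

```python
import torch

def sinusoidal_encoding(values: torch.Tensor, dim: int) -> torch.Tensor:
    """Map scalar values (e.g. normalized per-patch depth) to a
    `dim`-dimensional sinusoidal code, with no learnable parameters."""
    # values: (B, N) in [0, 1]; output: (B, N, dim)
    half = dim // 2
    freqs = torch.exp(
        -torch.arange(half, dtype=torch.float32)
        * (torch.log(torch.tensor(10000.0)) / half)
    )
    angles = values.unsqueeze(-1) * freqs  # (B, N, half)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

# Hypothetical usage: fold per-patch depth into the positional encoding.
B, N, D = 2, 196, 256
patch_tokens = torch.randn(B, N, D)     # from the RGB patch embedding
patch_depth = torch.rand(B, N)          # mean depth per patch, normalized
pos_index = torch.arange(N, dtype=torch.float32).expand(B, N) / N
tokens = (patch_tokens
          + sinusoidal_encoding(pos_index, D)     # spatial position
          + sinusoidal_encoding(patch_depth, D))  # depth, parameter-free
```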
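The key-swapping idea can likewise be sketched generically: two attention branches process color and depth tokens, but each branch attends with the other modality's keys while keeping its own queries and values. Shapes, names, and the single-head layout below are illustrative assumptions, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """Scaled dot-product attention over token sequences of shape (B, N, D)."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

def cross_input_attention(rgb_tokens, depth_tokens, proj_rgb, proj_depth):
    """One attention step per branch, with the KEY inputs swapped between
    the color and depth branches (queries and values stay in-branch).
    `proj_*` are dicts of linear layers {"q", "k", "v"}; this layout is an
    illustrative assumption."""
    q_rgb, k_rgb, v_rgb = (proj_rgb[n](rgb_tokens) for n in ("q", "k", "v"))
    q_d, k_d, v_d = (proj_depth[n](depth_tokens) for n in ("q", "k", "v"))
    out_rgb = attention(q_rgb, k_d, v_rgb)   # color branch uses depth keys
    out_d = attention(q_d, k_rgb, v_d)       # depth branch uses color keys
    return out_rgb, out_d

# Hypothetical usage
D = 256
make = lambda: {n: torch.nn.Linear(D, D) for n in ("q", "k", "v")}
rgb, dep = torch.randn(2, 196, D), torch.randn(2, 196, D)
out_rgb, out_dep = cross_input_attention(rgb, dep, make(), make())
```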
Related papers
- SDformer: Efficient End-to-End Transformer for Depth Completion [5.864200786548098]
Depth completion aims to predict dense depth maps from sparse depth measurements provided by a depth sensor.
Currently, Convolutional Neural Network (CNN) based models are the most popular methods applied to depth completion tasks.
To overcome the drawbacks of CNNs, the authors present a more effective and powerful method: an adaptive, self-attention-based sequence-to-sequence model.
arXiv Detail & Related papers (2024-09-12T15:52:08Z)
- ParaTransCNN: Parallelized TransCNN Encoder for Medical Image Segmentation [7.955518153976858]
We propose an advanced 2D feature extraction method by combining the convolutional neural network and Transformer architectures.
Our method achieves better segmentation accuracy, especially on small organs.
arXiv Detail & Related papers (2024-01-27T05:58:36Z)
- Optimizing rgb-d semantic segmentation through multi-modal interaction and pooling attention [5.518612382697244]
Multi-modal Interaction and Pooling Attention Network (MIPANet) is designed to harness the interactive synergy between RGB and depth modalities.
We introduce a Pooling Attention Module (PAM) at various stages of the encoder.
This module amplifies the features extracted by the network and integrates its output into the decoder (a generic sketch of this pattern follows the list below).
arXiv Detail & Related papers (2023-11-19T12:25:59Z)
- Source-Free Domain Adaptation for RGB-D Semantic Segmentation with Vision Transformers [11.13182313760599]
We propose MISFIT: MultImodal Source-Free Information fusion Transformer, a depth-aware framework for source-free semantic segmentation.
Our framework, which is also the first approach using RGB-D vision transformers for source-free semantic segmentation, shows noticeable performance improvements.
arXiv Detail & Related papers (2023-05-23T17:20:47Z)
- Dual Swin-Transformer based Mutual Interactive Network for RGB-D Salient Object Detection [67.33924278729903]
In this work, we propose the Dual Swin-Transformer based Mutual Interactive Network (DTMINet).
We adopt the Swin-Transformer as the feature extractor for both the RGB and depth modalities to model long-range dependencies in visual inputs.
Comprehensive experiments on five standard RGB-D SOD benchmark datasets demonstrate the superiority of the proposed DTMINet method.
arXiv Detail & Related papers (2022-06-07T08:35:41Z)
- SeMask: Semantically Masked Transformers for Semantic Segmentation [10.15763397352378]
SeMask is a framework that incorporates semantic information into the encoder with the help of a semantic attention operation.
Our framework achieves a new state-of-the-art of 58.22% mIoU on the ADE20K dataset and improvements of over 3% in the mIoU metric on the Cityscapes dataset.
arXiv Detail & Related papers (2021-12-23T18:56:02Z)
- LAVT: Language-Aware Vision Transformer for Referring Image Segmentation [80.54244087314025]
We show that better cross-modal alignments can be achieved through the early fusion of linguistic and visual features in a vision Transformer encoder network.
Our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins.
arXiv Detail & Related papers (2021-12-04T04:53:35Z)
- Less is More: Pay Less Attention in Vision Transformers [61.05787583247392]
The Less attention vIsion Transformer (LIT) builds upon the fact that convolutions, fully-connected layers, and self-attentions have almost equivalent mathematical expressions for processing image patch sequences (see the sketch after this list).
The proposed LIT achieves promising performance on image recognition tasks, including image classification, object detection and instance segmentation.
arXiv Detail & Related papers (2021-05-29T05:26:07Z)
- Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on par with it on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z)
- Encoder Fusion Network with Co-Attention Embedding for Referring Image Segmentation [87.01669173673288]
We propose an encoder fusion network (EFN), which transforms the visual encoder into a multi-modal feature learning network.
A co-attention mechanism is embedded in the EFN to realize the parallel update of multi-modal features.
The experimental results on four benchmark datasets demonstrate that the proposed approach achieves state-of-the-art performance without any post-processing.
arXiv Detail & Related papers (2021-05-05T02:27:25Z)
- Beyond Single Stage Encoder-Decoder Networks: Deep Decoders for Semantic Image Segmentation [56.44853893149365]
Single encoder-decoder methodologies for semantic segmentation are reaching their peak in terms of segmentation quality and efficiency per number of layers.
We propose a new architecture based on a decoder which uses a set of shallow networks for capturing more information content.
To further improve the architecture, we introduce a weight function that re-balances classes, increasing the networks' attention to under-represented objects (see the sketch after this list).
arXiv Detail & Related papers (2020-07-19T18:44:34Z)
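The MIPANet entry above mentions a Pooling Attention Module that amplifies encoder features; its actual design is not described here. As a loose illustration of the general pooled-attention pattern only, the squeeze-and-excitation-style gate below pools a feature map, computes channel weights, and rescales the input. The class name and reduction factor are our assumptions, not MIPANet's actual PAM.

```python
import torch
import torch.nn as nn

class PooledAttentionGate(nn.Module):
    """Generic channel-attention gate: global-average-pool the feature map,
    squeeze/excite through a small MLP, and rescale the input channels.
    A common pattern, not MIPANet's actual module."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))           # (B, C) weights in (0, 1)
        return x * w.unsqueeze(-1).unsqueeze(-1)   # amplify/attenuate channels

# Hypothetical usage at one encoder stage
feat = torch.randn(2, 64, 32, 32)
gated = PooledAttentionGate(64)(feat)
```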
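The "Less is More" entry rests on the near-equivalence of convolutions, fully-connected layers, and self-attention on patch sequences. One concrete, checkable piece of that claim is that a 1x1 convolution over patch tokens laid out on a grid computes exactly the same function as a per-token fully-connected layer. The small numerical check below (grid size and names are ours) verifies it.

```python
import torch
import torch.nn as nn

B, N, D = 2, 16, 8          # batch, patches (4x4 grid), embedding dim
tokens = torch.randn(B, N, D)

fc = nn.Linear(D, D)
conv = nn.Conv2d(D, D, kernel_size=1)
# Share the weights so the comparison is exact.
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(D, D, 1, 1))
    conv.bias.copy_(fc.bias)

out_fc = fc(tokens)                                 # per-token linear map
grid = tokens.transpose(1, 2).reshape(B, D, 4, 4)   # tokens as a feature map
out_conv = conv(grid).reshape(B, D, N).transpose(1, 2)

print(torch.allclose(out_fc, out_conv, atol=1e-6))  # True
```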
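The deep-decoders entry mentions a weight function for re-balancing classes without giving its form. A standard stand-in is inverse-frequency class weighting in the cross-entropy loss, sketched below; the normalization around a mean weight of 1 is our choice, not the paper's.

```python
import torch
import torch.nn as nn

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Weight each class inversely to its pixel frequency, so rare
    (under-represented) classes contribute more to the loss."""
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
    freq = counts / counts.sum()
    weights = 1.0 / freq.clamp_min(1e-6)   # avoid division by zero
    return weights / weights.mean()        # normalize around 1

# Hypothetical usage with segmentation logits (B, C, H, W) and labels (B, H, W)
num_classes = 19                           # e.g. Cityscapes
logits = torch.randn(2, num_classes, 64, 64)
labels = torch.randint(0, num_classes, (2, 64, 64))
criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(labels, num_classes))
loss = criterion(logits, labels)
```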
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.