3D Vision with Transformers: A Survey
- URL: http://arxiv.org/abs/2208.04309v1
- Date: Mon, 8 Aug 2022 17:59:11 GMT
- Title: 3D Vision with Transformers: A Survey
- Authors: Jean Lahoud, Jiale Cao, Fahad Shahbaz Khan, Hisham Cholakkal, Rao
Muhammad Anwer, Salman Khan, Ming-Hsuan Yang
- Abstract summary: The success of the transformer architecture in natural language processing has triggered attention in the computer vision field.
We present a systematic and thorough review of more than 100 transformer methods for different 3D vision tasks.
We discuss transformer design in 3D vision, which allows it to process data with various 3D representations.
- Score: 114.86385193388439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The success of the transformer architecture in natural language processing
has recently triggered attention in the computer vision field. The transformer
has been used as a replacement for the widely used convolution operators, due
to its ability to learn long-range dependencies. This replacement was proven to
be successful in numerous tasks, in which several state-of-the-art methods rely
on transformers for better learning. In computer vision, the 3D field has also
witnessed an increase in employing the transformer for 3D convolutional neural
networks and multi-layer perceptron networks. Although a number of surveys have
focused on transformers in vision in general, 3D vision requires special
attention due to the difference in data representation and processing when
compared to 2D vision. In this work, we present a systematic and thorough
review of more than 100 transformer methods for different 3D vision tasks,
including classification, segmentation, detection, completion, pose estimation,
and others. We discuss transformer design in 3D vision, which allows it to
process data with various 3D representations. For each application, we
highlight key properties and contributions of proposed transformer-based
methods. To assess the competitiveness of these methods, we compare their
performance to common non-transformer methods on 12 3D benchmarks. We conclude
the survey by discussing different open directions and challenges for
transformers in 3D vision. In addition to the presented papers, we aim to
frequently update the latest relevant papers along with their corresponding
implementations at: https://github.com/lahoud/3d-vision-transformers.
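The abstract's central claim is that attention can replace convolution because each output feature attends to the entire input set rather than a local neighborhood, which suits unordered 3D representations such as point clouds. As a rough illustration of that mechanism (not any specific method from the survey), the following is a minimal single-head self-attention sketch over a set of point features; the array shapes and weight names are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(points, Wq, Wk, Wv):
    """Single-head self-attention over N point features.

    Unlike a 3D convolution, every output row is a weighted sum over
    ALL input points, so dependencies are not restricted to a local
    spatial neighborhood.
    """
    Q, K, V = points @ Wq, points @ Wk, points @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (N, N) pairwise affinities
    return softmax(scores, axis=-1) @ V       # (N, d_v) attended features

rng = np.random.default_rng(0)
N, d, d_k = 16, 8, 4                          # 16 points, 8-dim features
points = rng.normal(size=(N, d))
Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))
out = self_attention(points, Wq, Wk, Wv)
print(out.shape)                              # (16, 4)
```

Because the attention weights are computed from pairwise similarities over the whole set, the operation is permutation-equivariant, which is one reason transformer designs transfer naturally to point-cloud inputs.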
Related papers
- Efficient 3D Object Reconstruction using Visual Transformers [4.670344336401625]
We set out to use visual transformers in place of convolutions in 3D object reconstruction.
Using a transformer-based encoder and decoder to predict 3D structure from 2D images, we achieve accuracy similar or superior to the baseline approach.
arXiv Detail & Related papers (2023-02-16T18:33:25Z)
- Transformers in 3D Point Clouds: A Survey [27.784721081318935]
3D Transformer models have been proven to have a remarkable ability to model long-range dependencies.
This survey aims to provide a comprehensive overview of 3D Transformers designed for various tasks.
arXiv Detail & Related papers (2022-05-16T01:32:18Z)
- A Survey of Visual Transformers [30.082304742571598]
Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing.
Some pioneering works have recently been done on adapting Transformer architectures to Computer Vision (CV) fields.
We have provided a comprehensive review of over one hundred different visual Transformers for three fundamental CV tasks.
arXiv Detail & Related papers (2021-11-11T07:56:04Z)
- ViDT: An Efficient and Effective Fully Transformer-based Object Detector [97.71746903042968]
Detection transformers are the first fully end-to-end learning systems for object detection.
Vision transformers are the first fully transformer-based architecture for image classification.
In this paper, we integrate Vision and Detection Transformers (ViDT) to build an effective and efficient object detector.
arXiv Detail & Related papers (2021-10-08T06:32:05Z)
- TransCenter: Transformers with Dense Queries for Multiple-Object Tracking [87.75122600164167]
We argue that the standard representation -- bounding boxes -- is not adapted to learning transformers for multiple-object tracking.
We propose TransCenter, the first transformer-based architecture for tracking the centers of multiple targets.
arXiv Detail & Related papers (2021-03-28T14:49:36Z)
- Self-Supervised Multi-View Learning via Auto-Encoding 3D Transformations [61.870882736758624]
We propose a novel self-supervised paradigm to learn Multi-View Transformation Equivariant Representations (MV-TER).
Specifically, we perform a 3D transformation on a 3D object, and obtain multiple views before and after the transformation via projection.
Then, we self-train a representation to capture the intrinsic 3D object representation by decoding 3D transformation parameters from the fused feature representations of multiple views before and after the transformation.
arXiv Detail & Related papers (2021-03-01T06:24:17Z)
- Transformers in Vision: A Survey [101.07348618962111]
Transformers enable modeling long-range dependencies between input sequence elements and support parallel processing of sequences.
Transformers require minimal inductive biases for their design and are naturally suited as set-functions.
This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline.
arXiv Detail & Related papers (2021-01-04T18:57:24Z)
- A Survey on Visual Transformer [126.56860258176324]
Transformer is a type of deep neural network mainly based on the self-attention mechanism.
In this paper, we review these vision transformer models by categorizing them in different tasks and analyzing their advantages and disadvantages.
arXiv Detail & Related papers (2020-12-23T09:37:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.