PnP-DETR: Towards Efficient Visual Analysis with Transformers
- URL: http://arxiv.org/abs/2109.07036v2
- Date: Thu, 16 Sep 2021 02:55:03 GMT
- Title: PnP-DETR: Towards Efficient Visual Analysis with Transformers
- Authors: Tao Wang, Li Yuan, Yunpeng Chen, Jiashi Feng, Shuicheng Yan
- Abstract summary: Recently, DETR pioneered solving vision tasks with transformers by directly translating the image feature map into the object detection result.
Applied to the recent transformer-based image recognition model ViT, the approach shows a consistent efficiency gain.
- Score: 146.55679348493587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, DETR pioneered the solution of vision tasks with transformers: it
directly translates the image feature map into the object detection result.
Though effective, translating the full feature map can be costly due to
redundant computation on some areas, such as the background. In this work, we
encapsulate the idea of reducing spatial redundancy into a novel poll and pool
(PnP) sampling module, with which we build an end-to-end PnP-DETR architecture
that adaptively allocates its computation spatially to be more efficient.
Concretely, the PnP module abstracts the image feature map into fine foreground
object feature vectors and a small number of coarse background contextual
feature vectors. The transformer models information interaction within the
fine-coarse feature space and translates the features into the detection
result. Moreover, the PnP-augmented model can instantly achieve various desired
trade-offs between performance and computation with a single model by varying
the sampled feature length, without requiring the training of multiple models as in
existing methods. It thus offers greater flexibility for deployment in diverse
scenarios with varying computation constraints. We further validate the
generalizability of the PnP module on panoptic segmentation and the recent
transformer-based image recognition model ViT, and show a consistent efficiency
gain. We believe our method makes a step toward efficient visual analysis with
transformers, wherein spatial redundancy is commonly observed. Code will be
available at \url{https://github.com/twangnh/pnp-detr}.
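The poll-and-pool idea described in the abstract can be sketched in a few lines. The following is a minimal illustration only, assuming a flattened feature map of shape (B, HW, C) and a PyTorch-style interface; the class name PollPoolSampler, the scoring and pooling layers, and the poll_ratio / num_pool parameters are hypothetical stand-ins, not the authors' released implementation (see the repository linked above for that).

```python
import torch
import torch.nn as nn


class PollPoolSampler(nn.Module):
    """Illustrative sketch: abstract a flattened feature map into fine 'polled'
    foreground vectors plus a small number of coarse 'pooled' context vectors."""

    def __init__(self, dim: int, num_pool: int = 60):
        super().__init__()
        self.score = nn.Linear(dim, 1)                 # per-location informativeness score
        self.pool_weights = nn.Linear(dim, num_pool)   # aggregation weights for coarse vectors

    def forward(self, feat: torch.Tensor, poll_ratio: float = 0.33) -> torch.Tensor:
        # feat: (B, N, C), where N = H * W of the backbone feature map
        B, N, C = feat.shape
        k = max(1, int(N * poll_ratio))

        # Poll step: keep the top-k highest-scoring locations as fine features.
        scores = self.score(feat).squeeze(-1)                       # (B, N)
        topk = scores.topk(k, dim=1).indices                        # (B, k)
        fine = torch.gather(feat, 1, topk.unsqueeze(-1).expand(-1, -1, C))
        # Modulate by the (sigmoided) score so the scoring layer receives gradients.
        fine = fine * torch.gather(scores, 1, topk).sigmoid().unsqueeze(-1)

        # Pool step: compress the remaining (mostly background) locations
        # into num_pool coarse context vectors via learned soft aggregation.
        keep_rest = torch.ones_like(scores, dtype=torch.bool).scatter(1, topk, False)
        rest = feat[keep_rest].view(B, N - k, C)
        w = self.pool_weights(rest).softmax(dim=1)                  # (B, N-k, num_pool)
        coarse = torch.einsum('bnm,bnc->bmc', w, rest)              # (B, num_pool, C)

        # The transformer then runs on this much shorter fine + coarse sequence.
        return torch.cat([fine, coarse], dim=1)                     # (B, k + num_pool, C)
```

Because poll_ratio only enters at the top-k selection step, the same trained weights can be run with different ratios at inference time, which mirrors the single-model performance/computation trade-off claimed in the abstract.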
Related papers
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z) - CFPFormer: Feature-pyramid like Transformer Decoder for Segmentation and Detection [1.837431956557716]
Feature pyramids have been widely adopted in convolutional neural networks (CNNs) and transformers for tasks like medical image segmentation and object detection.
We propose a novel decoder block that integrates feature pyramids and transformers.
Our model achieves superior performance in detecting small objects compared to existing methods.
arXiv Detail & Related papers (2024-04-23T18:46:07Z) - Look-Around Before You Leap: High-Frequency Injected Transformer for Image Restoration [46.96362010335177]
In this paper, we propose HIT, a simple yet effective High-frequency Injected Transformer for image restoration.
Specifically, we design a window-wise injection module (WIM), which incorporates abundant high-frequency details into the feature map, to provide reliable references for restoring high-quality images.
In addition, we introduce a spatial enhancement unit (SEU) to preserve essential spatial relationships that may be lost due to the computations carried out across channel dimensions in the BIM.
arXiv Detail & Related papers (2024-03-30T08:05:00Z) - Contrastive Learning for Multi-Object Tracking with Transformers [79.61791059432558]
We show how DETR can be turned into a MOT model by employing an instance-level contrastive loss.
Our training scheme learns object appearances while preserving detection capabilities and with little overhead.
Its performance surpasses the previous state-of-the-art by +2.6 mMOTA on the challenging BDD100K dataset.
arXiv Detail & Related papers (2023-11-14T10:07:52Z) - Optimizing Vision Transformers for Medical Image Segmentation and Few-Shot Domain Adaptation [11.690799827071606]
We propose Convolutional Swin-Unet (CS-Unet) transformer blocks and optimise their settings with respect to patch embedding, projection, the feed-forward network, upsampling, and skip connections.
CS-Unet can be trained from scratch and inherits the superiority of convolutions in each feature process phase.
Experiments show that CS-Unet without pre-training surpasses other state-of-the-art counterparts by large margins on two medical CT and MRI datasets with fewer parameters.
arXiv Detail & Related papers (2022-10-14T19:18:52Z) - Time-Space Transformers for Video Panoptic Segmentation [3.2489082010225494]
We propose a solution that simultaneously predicts pixel-level semantic segmentation and clip-level instance segmentation.
Our network, named VPS-Transformer, combines a convolutional architecture for single-frame panoptic segmentation and a video module based on an instantiation of a pure Transformer block.
arXiv Detail & Related papers (2022-10-07T13:30:11Z) - Cross-receptive Focused Inference Network for Lightweight Image Super-Resolution [64.25751738088015]
Transformer-based methods have shown impressive performance in single image super-resolution (SISR) tasks.
However, the capability of Transformers to incorporate contextual information for dynamic feature extraction has been neglected.
We propose a lightweight Cross-receptive Focused Inference Network (CFIN) that consists of a cascade of CT Blocks mixed with CNN and Transformer.
arXiv Detail & Related papers (2022-07-06T16:32:29Z) - Visual Saliency Transformer [127.33678448761599]
We develop a novel unified model based on a pure transformer, Visual Saliency Transformer (VST), for both RGB and RGB-D salient object detection (SOD).
It takes image patches as inputs and leverages the transformer to propagate global contexts among image patches.
Experimental results show that our model outperforms existing state-of-the-art results on both RGB and RGB-D SOD benchmark datasets.
arXiv Detail & Related papers (2021-04-25T08:24:06Z) - End-to-End Trainable Multi-Instance Pose Estimation with Transformers [68.93512627479197]
We propose a new end-to-end trainable approach for multi-instance pose estimation by combining a convolutional neural network with a transformer.
Inspired by recent work on end-to-end trainable object detection with transformers, we use a transformer encoder-decoder architecture together with a bipartite matching scheme to directly regress the pose of all individuals in a given image.
Our model, called POse Estimation Transformer (POET), is trained using a novel set-based global loss that consists of a keypoint loss, a keypoint visibility loss, a center loss and a class loss.
arXiv Detail & Related papers (2021-03-22T18:19:22Z)