CNN-transformer mixed model for object detection
- URL: http://arxiv.org/abs/2212.06714v1
- Date: Tue, 13 Dec 2022 16:35:35 GMT
- Title: CNN-transformer mixed model for object detection
- Authors: Wenshuo Li
- Abstract summary: In this paper, I propose a convolutional module with a transformer.
It aims to improve the recognition accuracy of the model by fusing the detailed features extracted by CNN with the global features extracted by a transformer.
After 100 epochs of training on the Pascal VOC dataset, the accuracy reached 81%, which is 4.6 points higher than Faster R-CNN[4] with a ResNet-101[5] backbone.
- Score: 3.5897534810405403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection, one of the three main tasks of computer vision, has been
used in various applications. The main process is to use deep neural networks
to extract the features of an image and then use the features to identify the
class and location of an object. Therefore, the main direction to improve the
accuracy of object detection tasks is to improve the neural network to extract
features better. In this paper, I propose a convolutional module with a
transformer[1], which aims to improve the recognition accuracy of the model by
fusing the detailed features extracted by a CNN[2] with the global features
extracted by a transformer, and to significantly reduce the computational cost
of the transformer module by shrinking the feature map. The main execution
steps are convolutional downsampling to reduce the feature map size, then
self-attention computation and upsampling, and finally concatenation with the
initial input. In the experiments, after appending the block to the end of
YOLOv5n[3] and training for 300 epochs on the COCO dataset, the mAP improved by
1.7% over the baseline YOLOv5n, and the mAP curve showed no sign of
saturation, so there is still room for improvement. After 100 epochs of
training on the Pascal VOC dataset, the accuracy reached 81%, which is 4.6
points higher than Faster R-CNN[4] with a ResNet-101[5] backbone, while using
fewer than one-twentieth of its parameters.
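The abstract's execution steps (convolutional downsampling, self-attention, upsampling, concatenation with the input) can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: the function name `mixed_block`, the average-pooling stand-in for the learned strided convolution, the single attention head with identity projections, and the nearest-neighbour upsampling are all assumptions made for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixed_block(x, stride=2):
    """Hypothetical sketch of the described CNN-transformer block:
    downsample -> self-attention -> upsample -> concat with input.
    x: feature map of shape (C, H, W)."""
    c, h, w = x.shape
    # 1. Downsampling (stride-s average pooling standing in for a learned
    #    strided convolution). Shrinking the feature map cuts the attention
    #    cost from O((H*W)^2) to O((H*W / s^2)^2).
    hs, ws = h // stride, w // stride
    down = (x[:, :hs * stride, :ws * stride]
            .reshape(c, hs, stride, ws, stride)
            .mean(axis=(2, 4)))
    # 2. Self-attention over the (H/s)*(W/s) spatial tokens
    #    (single head, identity Q/K/V projections for brevity).
    tokens = down.reshape(c, hs * ws).T               # (N, C)
    attn = softmax(tokens @ tokens.T / np.sqrt(c))    # (N, N)
    out = (attn @ tokens).T.reshape(c, hs, ws)
    # 3. Nearest-neighbour upsampling back to the input resolution.
    up = out.repeat(stride, axis=1).repeat(stride, axis=2)[:, :h, :w]
    # 4. Concatenate with the initial input along the channel axis,
    #    fusing detailed CNN features with global attention features.
    return np.concatenate([x, up], axis=0)            # (2C, H, W)
```

With `stride=2`, attention runs over 16x fewer token pairs than it would at full resolution, which is the computational saving the abstract refers to; the final concatenation doubles the channel count, so a following layer would need to accept 2C input channels.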
Related papers
- LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection [0.0]
We focus on neural network architecture design choices for efficient object detection, guided by FLOP count.
We propose several optimizations to enhance the efficiency of YOLO-based models.
This paper contributes to a new scaling paradigm for object detection and YOLO-centric models called LeYOLO.
arXiv Detail & Related papers (2024-06-20T12:08:24Z) - Fostc3net: A Lightweight YOLOv5 Based On the Network Structure Optimization [11.969138981034247]
This paper presents an enhanced lightweight YOLOv5 technique customized for mobile devices.
The proposed model achieves a 1% increase in detection accuracy, a 13% reduction in FLOPs, and a 26% decrease in model parameters compared to the existing YOLOv5.
arXiv Detail & Related papers (2024-03-20T16:07:04Z) - SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised
Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets against cluttered backgrounds.
With the development of Transformer, the scale of SIRST models is constantly increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z) - Unsupervised convolutional neural network fusion approach for change
detection in remote sensing images [1.892026266421264]
We introduce a completely unsupervised shallow convolutional neural network (USCNN) fusion approach for change detection.
Our model has three features: the entire training process is conducted in an unsupervised manner, the network architecture is shallow, and the objective function is sparse.
Experimental results on four real remote sensing datasets indicate the feasibility and effectiveness of the proposed approach.
arXiv Detail & Related papers (2023-11-07T03:10:17Z) - Efficient Decoder-free Object Detection with Transformers [75.00499377197475]
Vision transformers (ViTs) are changing the landscape of object detection approaches.
We propose a decoder-free fully transformer-based (DFFT) object detector.
DFFT_SMALL achieves high efficiency in both training and inference stages.
arXiv Detail & Related papers (2022-06-14T13:22:19Z) - DETR++: Taming Your Multi-Scale Detection Transformer [22.522422934209807]
We introduce the Transformer-based detection method, i.e., DETR.
Due to the quadratic complexity of the self-attention mechanism in the Transformer, DETR is unable to incorporate multi-scale features.
We propose DETR++, a new architecture that improves detection results by 1.9% AP on MS COCO 2017, 11.5% AP on RICO icon detection, and 9.1% AP on RICO layout extraction.
arXiv Detail & Related papers (2022-06-07T02:38:31Z) - GradViT: Gradient Inversion of Vision Transformers [83.54779732309653]
We demonstrate the vulnerability of vision transformers (ViTs) to gradient-based inversion attacks.
We introduce a method, named GradViT, that optimizes random noise into natural-looking images.
We observe unprecedentedly high fidelity and closeness to the original (hidden) data.
arXiv Detail & Related papers (2022-03-22T17:06:07Z) - Container: Context Aggregation Network [83.12004501984043]
Recent findings show that a simple solution without any traditional convolutional or Transformer components can produce effective visual representations.
We present CONTAINER (CONText AggregatIon NEtwoRk), a general-purpose building block for multi-head context aggregation.
In contrast to Transformer-based methods that do not scale well to downstream tasks relying on larger input image resolutions, our efficient network, CONTAINER-LIGHT, can be employed in object detection and instance segmentation networks.
arXiv Detail & Related papers (2021-06-02T18:09:11Z) - FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose
Estimation with Decoupled Rotation Mechanism [49.89268018642999]
We propose a fast shape-based network (FS-Net) with efficient category-level feature extraction for 6D pose estimation.
The proposed method achieves state-of-the-art performance in both category- and instance-level 6D object pose estimation.
arXiv Detail & Related papers (2021-03-12T03:07:24Z) - Inception Convolution with Efficient Dilation Search [121.41030859447487]
Dilated convolution is an important variant of the standard convolutional neural network for controlling effective receptive fields and handling large scale variance of objects.
We propose a new variant of dilated convolution, namely inception (dilated) convolution, in which the convolutions have independent dilations across different axes, channels, and layers.
To fit the complex inception convolution to the data, we develop a simple yet effective dilation search algorithm (EDO) based on statistical optimization.
arXiv Detail & Related papers (2020-12-25T14:58:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.