Compressed Object Detection
- URL: http://arxiv.org/abs/2102.02896v1
- Date: Thu, 4 Feb 2021 21:32:56 GMT
- Title: Compressed Object Detection
- Authors: Gedeon Muhawenayo and Georgia Gkioxari
- Abstract summary: We extend pruning, a compression technique that discards unnecessary model connections, and weight sharing to the task of object detection.
We are able to compress a state-of-the-art object detection model by 30.0% without a loss in performance.
- Score: 15.893905488328283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning approaches have achieved unprecedented performance in visual
recognition tasks such as object detection and pose estimation. However,
state-of-the-art models have millions of parameters stored as floats, which
makes them computationally expensive and constrains their deployment on hardware
such as mobile phones and IoT nodes. The activations of deep neural networks
tend to be sparse, suggesting that these models are over-parametrized and contain
redundant neurons. Model compression techniques, such as pruning and
quantization, have recently shown promising results, reducing model complexity
with little loss in performance. In this work, we extend pruning, a compression
technique that discards unnecessary model connections, and weight sharing to the
task of object detection. With our approach, we are
able to compress a state-of-the-art object detection model by 30.0% without a
loss in performance. We also show that our compressed model can be easily
initialized with existing pre-trained weights, and thus is able to fully
utilize published state-of-the-art model zoos.
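The two techniques the abstract combines are standard building blocks of model compression: magnitude pruning zeroes the connections with the smallest absolute weights, and weight sharing replaces the surviving weights with a small codebook of shared values. The NumPy sketch below is a hedged illustration of both steps on a single weight matrix, not the authors' released code; the 30% sparsity level and the 32-entry codebook are illustrative assumptions rather than the paper's reported settings.

```python
# Minimal sketch of magnitude pruning followed by weight sharing.
# Illustrative only; not the implementation from "Compressed Object Detection".
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float = 0.3) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask


def share_weights(weights: np.ndarray, n_clusters: int = 32, n_iters: int = 20) -> np.ndarray:
    """Replace surviving (non-zero) weights with the nearest of a few shared centroid values."""
    nonzero = weights[weights != 0]
    if nonzero.size == 0:
        return weights
    # Initialise the codebook linearly between the smallest and largest surviving weight.
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(n_iters):  # a tiny 1-D k-means over the surviving weights
        assignment = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            members = nonzero[assignment == c]
            if members.size:
                centroids[c] = members.mean()
    assignment = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
    shared = weights.copy()
    shared[weights != 0] = centroids[assignment]
    return shared


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in for one layer of a detector
    pruned = magnitude_prune(w, sparsity=0.3)
    compressed = share_weights(pruned, n_clusters=32)
    print("zeroed fraction:", float(np.mean(compressed == 0)))
    print("distinct surviving values:", int(np.unique(compressed[compressed != 0]).size))
```

In a real detector the same two steps would be applied layer by layer, and storage savings come from keeping only the sparse mask, the small codebook, and per-weight codebook indices instead of full 32-bit floats.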
Related papers
- Comprehensive Study on Performance Evaluation and Optimization of Model Compression: Bridging Traditional Deep Learning and Large Language Models [0.0]
An increase in the number of connected devices around the world warrants compressed models that can be easily deployed on local devices with low compute capacity and limited power.
We implemented two compression techniques, quantization and pruning, on popular deep learning models used for image classification, object detection, language modeling, and generative modeling (a minimal quantization sketch follows this entry).
arXiv Detail & Related papers (2024-07-22T14:20:53Z) - Uncovering the Hidden Cost of Model Compression [43.62624133952414]
- Uncovering the Hidden Cost of Model Compression [43.62624133952414]
Visual Prompting has emerged as a pivotal method for transfer learning in computer vision.
Model compression detrimentally impacts the performance of visual prompting-based transfer.
However, negative effects on calibration are not present when models are compressed via quantization.
arXiv Detail & Related papers (2023-08-29T01:47:49Z) - Exploring the Effectiveness of Dataset Synthesis: An application of
Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to predict apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data slightly underperforms a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z) - STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model achieves comparable performance with far fewer trainable parameters, while remaining fast to train and run at inference time.
arXiv Detail & Related papers (2021-07-15T02:53:11Z) - Contemplating real-world object classification [53.10151901863263]
We reanalyze the ObjectNet dataset recently proposed by Barbu et al. containing objects in daily life situations.
We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement.
arXiv Detail & Related papers (2021-03-08T23:29:59Z) - Robustness in Compressed Neural Networks for Object Detection [2.9823962001574182]
The sensitivity of compressed models to different distortion types is nuanced.
Robustness to some corruption types is heavily affected by the compression method.
Data augmentation was confirmed to improve the models' robustness.
arXiv Detail & Related papers (2021-02-10T15:52:11Z) - Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z) - Spatial-Temporal Alignment Network for Action Recognition and Detection [80.19235282200697]
This paper studies how to introduce viewpoint-invariant feature representations that can help action recognition and detection.
We propose a novel Spatial-Temporal Alignment Network (STAN) that aims to learn geometric invariant representations for action recognition and action detection.
We test our STAN model extensively on AVA, Kinetics-400, AVA-Kinetics, Charades, and Charades-Ego datasets.
arXiv Detail & Related papers (2020-12-04T06:23:40Z) - Self-Supervised GAN Compression [32.21713098893454]
We show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods.
We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator.
We show that this framework maintains compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different pruning granularities.
arXiv Detail & Related papers (2020-07-03T04:18:54Z) - Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models (the general idea of pruning dynamically during training is sketched after this entry).
arXiv Detail & Related papers (2020-06-12T15:07:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.