Revisiting Efficient Object Detection Backbones from Zero-Shot Neural Architecture Search
- URL: http://arxiv.org/abs/2111.13336v1
- Date: Fri, 26 Nov 2021 07:18:52 GMT
- Title: Revisiting Efficient Object Detection Backbones from Zero-Shot Neural Architecture Search
- Authors: Zhenhong Sun, Ming Lin, Xiuyu Sun, Zhiyu Tan and Rong Jin
- Abstract summary: In object detection models, the detection backbone consumes more than half of the overall inference cost.
We propose a novel zero-shot NAS method to address this issue.
The proposed method, named ZenDet, automatically designs efficient detection backbones without training network parameters.
- Score: 34.88658308647129
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In object detection models, the detection backbone consumes more than half of the overall inference cost. Recent research attempts to reduce this cost by optimizing the backbone architecture with the help of Neural Architecture Search (NAS). However, existing NAS methods for object detection require hundreds to thousands of GPU hours of search time, making them impractical in fast-paced research and development. In this work, we propose a novel zero-shot NAS method to address this issue. The proposed method, named ZenDet, automatically designs efficient detection backbones without training network parameters, reducing the architecture design cost to nearly zero while delivering state-of-the-art (SOTA) performance. Under the hood, ZenDet maximizes the differential entropy of detection backbones, leading to a better feature extractor for object detection under the same computational budget. After merely one GPU day of fully automatic design, ZenDet delivers new SOTA detection backbones on multiple detection benchmark datasets with little human intervention. Compared to the ResNet-50 backbone, ZenDet is +2.0% better in mAP when using the same amount of FLOPs/parameters and is 1.54 times faster on an NVIDIA V100 at the same mAP. Code and pre-trained models will be released later.
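The abstract names the mechanism, maximizing the differential entropy of candidate backbones without training them, but not the search procedure built around it. Below is a minimal sketch of how a zero-shot evolutionary search of this kind typically works; `mutate`, `entropy_score`, and `flops` are hypothetical placeholders for the paper's actual components, not ZenDet's released implementation.

```python
import copy
import random

# Hypothetical zero-shot evolutionary loop in the spirit of ZenDet:
# candidates are mutated and ranked by a training-free entropy score
# while staying inside a FLOPs budget. `mutate`, `entropy_score`, and
# `flops` are placeholders, not the paper's actual components.
def zero_shot_search(seed_arch, mutate, entropy_score, flops,
                     flops_budget, iterations=1000, population_size=32):
    population = [seed_arch]
    for _ in range(iterations):
        parent = random.choice(population)
        child = mutate(copy.deepcopy(parent))
        if flops(child) > flops_budget:
            continue  # discard candidates over the computational budget
        population.append(child)
        # rank by the zero-shot score; no candidate is ever trained
        population.sort(key=entropy_score, reverse=True)
        population = population[:population_size]
    return population[0]
```

Because the score requires only forward passes and no training, each iteration is cheap, which is consistent with the near-zero design cost and the one-GPU-day figure the abstract reports.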
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
However, NAS suffers from a key bottleneck: numerous architectures must be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
- EAutoDet: Efficient Architecture Search for Object Detection [110.99532343155073]
The EAutoDet framework can discover practical backbone and FPN architectures for object detection in 1.4 GPU-days.
We propose a kernel reusing technique that shares the weights of candidate operations on one edge and consolidates them into one convolution; a rough sketch follows this entry.
In particular, the discovered architectures surpass state-of-the-art object detection NAS methods and achieve 40.1 mAP with 120 FPS and 49.2 mAP with 41.3 FPS on COCO test-dev set.
arXiv Detail & Related papers (2022-03-21T05:56:12Z)
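The kernel reusing technique is only named in the entry above. One plausible reading, sketched here in PyTorch, is that candidate convolutions of different kernel sizes on an edge share a single weight tensor, and the architecture weights mix the masked variants into one kernel so the edge executes a single convolution. `SharedKernelEdge` and all of its details are my assumptions, not EAutoDet's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedKernelEdge(nn.Module):
    """One supernet edge whose 3x3/5x5/7x7 candidates reuse one 7x7 kernel."""
    def __init__(self, in_ch, out_ch, max_k=7, candidate_ks=(3, 5, 7)):
        super().__init__()
        # one shared weight tensor instead of one per candidate operation
        self.weight = nn.Parameter(0.01 * torch.randn(out_ch, in_ch, max_k, max_k))
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ks)))  # arch params
        self.candidate_ks = candidate_ks
        self.max_k = max_k

    def forward(self, x):
        probs = F.softmax(self.alpha, dim=0)
        mixed = torch.zeros_like(self.weight)
        for p, k in zip(probs, self.candidate_ks):
            pad = (self.max_k - k) // 2
            mask = torch.zeros_like(self.weight)
            mask[:, :, pad:pad + k, pad:pad + k] = 1.0  # centre crop acts as smaller kernel
            mixed = mixed + p * self.weight * mask      # consolidate into one kernel
        return F.conv2d(x, mixed, padding=self.max_k // 2)
```

Under this scheme the edge costs one convolution per forward pass instead of one per candidate, which is the kind of saving that makes a 1.4 GPU-day search plausible.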
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- EEEA-Net: An Early Exit Evolutionary Neural Architecture Search [6.569256728493014]
We search for Convolutional Neural Network (CNN) architectures suitable for an on-device processor with limited computing resources.
We develop a new algorithm, Early Exit Population Initialisation (EE-PI), for the Evolutionary Algorithm (EA).
EEEA-Net achieved the lowest error rate among state-of-the-art NAS models, with 2.46% for CIFAR-10, 15.02% for CIFAR-100, and 23.8% for the ImageNet dataset.
arXiv Detail & Related papers (2021-08-13T10:23:19Z)
- Zen-NAS: A Zero-Shot NAS for High-Performance Deep Image Recognition [43.97052733871721]
A key component in Neural Architecture Search (NAS) is an accuracy predictor which asserts the accuracy of a queried architecture.
We propose to replace the accuracy predictor with a novel model-complexity index named Zen-score.
Instead of predicting model accuracy, Zen-score directly asserts the model complexity of a network without training its parameters; a simplified sketch follows this entry.
arXiv Detail & Related papers (2021-02-01T18:53:40Z)
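Only the name of the Zen-score appears above. As a rough intuition, a differential-entropy-style proxy can be read off an untrained network using random Gaussian inputs; the sketch below is a simplified stand-in, not the official implementation (which, as I understand it, also uses input perturbations and batch-normalization statistics).

```python
import torch

# Simplified Zen-score-like proxy (an assumption, not the official code):
# feed Gaussian noise through the *untrained* backbone and average the
# log standard deviation of the output feature map. For Gaussian-like
# features this tracks the differential entropy that Zen-NAS/ZenDet
# maximize, and it never trains a single parameter.
@torch.no_grad()
def entropy_proxy(backbone, input_shape=(8, 3, 224, 224), repeats=4):
    backbone.eval()
    scores = []
    for _ in range(repeats):
        x = torch.randn(input_shape)   # random Gaussian input batch
        feat = backbone(x)             # assumed to be a pre-pooling feature map
        scores.append(torch.log(feat.std() + 1e-8))
    return torch.stack(scores).mean().item()
```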
- Fast Neural Network Adaptation via Parameter Remapping and Architecture Search [35.61441231491448]
Deep neural networks achieve remarkable performance in many computer vision tasks.
Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone.
One major challenge, though, is that ImageNet pre-training of the search space representation incurs a huge computational cost.
In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network; a toy illustration of the remapping idea follows this entry.
arXiv Detail & Related papers (2020-01-08T13:45:15Z)
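The summary credits FNA's speed to parameter remapping without spelling it out. The toy function below illustrates one common remapping rule in the Net2Net spirit, seeding a target convolution from a seed convolution by aligning kernel centres; it is an assumption about the general idea, not FNA's actual remapping scheme.

```python
import torch

def remap_conv_weight(seed_w, target_shape):
    """Initialise a target conv kernel from a trained seed kernel.

    Assumes square kernels for brevity; centre-places the seed kernel
    when the target is larger, centre-crops it when the target is smaller.
    """
    out_c, in_c, kh, kw = target_shape
    so, si, sh, sw = seed_w.shape
    target = torch.zeros(target_shape)
    co, ci = min(out_c, so), min(in_c, si)  # copy the overlapping channels
    if kh >= sh:   # expand: place the seed kernel at the centre
        oh, ow = (kh - sh) // 2, (kw - sw) // 2
        target[:co, :ci, oh:oh + sh, ow:ow + sw] = seed_w[:co, :ci]
    else:          # shrink: crop the centre of the seed kernel
        oh, ow = (sh - kh) // 2, (sw - kw) // 2
        target[:co, :ci] = seed_w[:co, :ci, oh:oh + kh, ow:ow + kw]
    return target
```

Starting the adapted network from remapped weights rather than from scratch is what lets FNA skip the expensive ImageNet pre-training mentioned above.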
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. The search space is then dynamically pruned and its distribution is updated every few epochs (see the sketch after this entry).
With the proposed efficient network generation method, we directly obtain the optimal neural architectures on given constraints.
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
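The two-step loop described in this entry, sampling from a joint categorical distribution and then periodically pruning it, condenses into a few lines. The sketch below fills in scoring and pruning details with my own assumptions; `evaluate` is a placeholder for whatever fitness signal the search uses.

```python
import math
import random

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

# Illustrative dynamic-distribution-pruning loop (not the authors' code):
# each edge keeps a categorical distribution over candidate operations;
# sampled architectures update per-op scores, and the worst op on each
# edge is periodically removed from the search space.
def ddp_search(ops_per_edge, evaluate, epochs=30, samples=16, prune_every=5):
    stats = [{op: 0.0 for op in ops} for ops in ops_per_edge]
    for epoch in range(1, epochs + 1):
        for _ in range(samples):
            arch = []
            for edge in stats:  # sample one op per edge from the joint distribution
                ops = list(edge)
                weights = softmax([edge[o] for o in ops])
                arch.append(random.choices(ops, weights=weights)[0])
            score = evaluate(arch)
            for edge, op in zip(stats, arch):
                edge[op] = 0.9 * edge[op] + 0.1 * score  # moving-average update
        if epoch % prune_every == 0:
            for edge in stats:
                if len(edge) > 1:
                    edge.pop(min(edge, key=edge.get))    # prune the worst op
    return [max(edge, key=edge.get) for edge in stats]   # final architecture

# Usage sketch: ops_per_edge = [["conv3x3", "conv5x5", "skip"]] * 8
# best = ddp_search(ops_per_edge, evaluate=some_accuracy_proxy)
```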