EAutoDet: Efficient Architecture Search for Object Detection
- URL: http://arxiv.org/abs/2203.10747v1
- Date: Mon, 21 Mar 2022 05:56:12 GMT
- Title: EAutoDet: Efficient Architecture Search for Object Detection
- Authors: Xiaoxing Wang, Jiale Lin, Junchi Yan, Juanping Zhao, Xiaokang Yang
- Abstract summary: The EAutoDet framework can discover practical backbone and FPN architectures for object detection in 1.4 GPU-days.
We propose a kernel reusing technique by sharing the weights of candidate operations on one edge and consolidating them into one convolution.
In particular, the discovered architectures surpass state-of-the-art object detection NAS methods and achieve 40.1 mAP with 120 FPS and 49.2 mAP with 41.3 FPS on COCO test-dev set.
- Score: 110.99532343155073
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Training CNNs for detection is time-consuming due to large datasets and
complex network modules, making it hard to search architectures on detection
datasets directly; doing so usually incurs vast search costs (tens or even
hundreds of GPU-days). In contrast, this paper introduces an efficient
framework, named EAutoDet, that can discover practical backbone and FPN
architectures for object detection in 1.4 GPU-days. Specifically, we construct
a supernet for both the backbone and FPN modules and adopt a differentiable
search method. To reduce the GPU memory requirement and computational cost, we propose
a kernel reusing technique that shares the weights of candidate operations on
one edge and consolidates them into one convolution (see the sketch after the
abstract). A dynamic channel refinement strategy is also introduced to search
channel numbers. Extensive experiments show the significant efficacy and
efficiency of our method. In
particular, the discovered architectures surpass state-of-the-art object
detection NAS methods and achieve 40.1 mAP with 120 FPS and 49.2 mAP with 41.3
FPS on the COCO test-dev set. We also transfer the discovered architectures to
the rotation detection task, achieving 77.05 mAP$_{\text{50}}$ on the DOTA-v1.0
test set with 21.1M parameters.
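As a rough illustration of the kernel-reusing idea, here is a minimal PyTorch sketch; the class name, initialization, and candidate kernel sizes are our own assumptions, not the paper's implementation. Candidate kernel sizes on an edge share one weight tensor, with smaller kernels taken as centered slices of the largest, and the softmax-weighted candidates are merged so only a single convolution executes per edge.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelReusingConv(nn.Module):
    """Candidate kernel sizes on one edge share a single weight tensor:
    smaller kernels are centered slices of the largest one. The softmax-
    weighted candidates are merged into one kernel, so only a single
    convolution is executed per edge."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5)):
        super().__init__()
        self.kernel_sizes = sorted(kernel_sizes)
        self.k_max = self.kernel_sizes[-1]
        # One shared weight tensor for all candidate kernel sizes.
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, self.k_max, self.k_max) * 0.01)
        # One architecture parameter per candidate kernel size.
        self.alpha = nn.Parameter(torch.zeros(len(self.kernel_sizes)))

    def forward(self, x):
        probs = F.softmax(self.alpha, dim=0)
        merged = torch.zeros_like(self.weight)
        for p, k in zip(probs, self.kernel_sizes):
            pad = (self.k_max - k) // 2
            mask = torch.zeros_like(self.weight)
            mask[:, :, pad:pad + k, pad:pad + k] = 1.0
            merged = merged + p * self.weight * mask
        # One consolidated convolution instead of one conv per candidate.
        return F.conv2d(x, merged, padding=self.k_max // 2)
```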
Related papers
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs; a generic 1-bit convolution is sketched below.
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
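As context for the 1-bit CNNs mentioned above, here is a minimal sketch of a generic binary convolution with a straight-through estimator; it illustrates 1-bit weights and activations in general, not DCP-NAS's child-parent scheme, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv2d(nn.Module):
    """Generic 1-bit convolution: weights and activations are binarized to
    {-1, +1}; a straight-through estimator lets gradients pass unchanged."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.k = k

    @staticmethod
    def binarize(t):
        b = torch.sign(t)
        b = torch.where(b == 0, torch.ones_like(b), b)  # map sign(0) to +1
        # Forward: binary values; backward: identity (straight-through).
        return t + (b - t).detach()

    def forward(self, x):
        # A per-layer scale preserves the magnitude of the real weights.
        scale = self.weight.abs().mean()
        return F.conv2d(self.binarize(x), self.binarize(self.weight) * scale,
                        padding=self.k // 2)
```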
- Tech Report: One-stage Lightweight Object Detectors [0.38073142980733]
This work designs one-stage lightweight detectors that perform well in terms of both mAP and latency.
Starting from two baseline models, targeting GPU and CPU respectively, various operations are substituted for the main operations in the baselines' backbone networks.
arXiv Detail & Related papers (2022-10-31T09:02:37Z)
- Revisiting Efficient Object Detection Backbones from Zero-Shot Neural Architecture Search [34.88658308647129]
In object detection models, the detection backbone consumes more than half of the overall inference cost.
We propose a novel zero-shot NAS method to address this issue.
The proposed method, named ZenDet, automatically designs efficient detection backbones without training network parameters; a generic zero-cost scoring sketch follows this entry.
arXiv Detail & Related papers (2021-11-26T07:18:52Z)
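Zero-shot NAS scores candidate networks without training them. ZenDet's actual Zen-score is defined in its paper; as a stand-in, here is a minimal sketch of a well-known generic zero-cost proxy (gradient norm at initialization) that ranks untrained backbones. The function name and input shape are our own assumptions.

```python
import torch
import torch.nn as nn

def grad_norm_score(model: nn.Module, input_shape=(1, 3, 224, 224)) -> float:
    """Score an untrained backbone by the norm of its gradients at
    initialization: a generic zero-cost proxy, not ZenDet's Zen-score."""
    model.train()
    x = torch.randn(input_shape)
    out = model(x)
    out.sum().backward()  # one backward pass, no training
    score = 0.0
    for p in model.parameters():
        if p.grad is not None:
            score += p.grad.norm().item()
    model.zero_grad()
    return score

# Usage sketch: rank randomly sampled candidate backbones by this score and
# keep the best, all without training any network parameters.
```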
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- OPANAS: One-Shot Path Aggregation Network Architecture Search for Object Detection [82.04372532783931]
Recently, neural architecture search (NAS) has been exploited to design feature pyramid networks (FPNs).
We propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both searching efficiency and detection accuracy.
arXiv Detail & Related papers (2021-03-08T01:48:53Z)
- Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search [84.4140192638394]
Most differentiable neural architecture search methods construct a super-net for search and derive a target-net as its sub-graph for evaluation.
In this paper, we introduce EnTranNAS, which is composed of Engine-cells and Transit-cells.
Our method also saves considerable memory and computation cost, which speeds up the search process; the generic super-net/sub-graph pattern is sketched after this entry.
arXiv Detail & Related papers (2021-01-27T12:16:47Z)
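For reference, the super-net/sub-graph pattern that differentiable NAS methods build on can be sketched as follows. This is the generic DARTS-style mixed edge, not EnTranNAS's Engine-cell/Transit-cell design, and the names are illustrative.

```python
import torch
import torch.nn as nn

class MixedEdge(nn.Module):
    """One super-net edge: during search, the output is a softmax-weighted
    sum of all candidate ops; the derived target-net keeps only the op with
    the largest architecture weight (a sub-graph of the super-net)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def derive(self):
        # Discretize: the strongest candidate becomes the target-net op.
        return self.ops[int(self.alpha.argmax())]
```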
- Representation Sharing for Fast Object Detector Search and Beyond [38.18583590914755]
We propose Fast And Diverse (FAD) to better explore the optimal configuration of receptive fields and convolution types in the sub-networks of one-stage detectors.
FAD achieves prominent improvements on two types of one-stage detectors with various backbones.
arXiv Detail & Related papers (2020-07-23T15:39:44Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under given constraints; a sketch of the sampling-and-pruning loop follows this entry.
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
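A minimal NumPy sketch of a dynamic-distribution-pruning loop, under our own simplifying assumptions (the update rule, learning rate, and `evaluate` callback are illustrative, not the paper's): sample an architecture per step from per-edge categorical distributions, update the distributions from the observed reward, and periodically prune the weakest candidate op.

```python
import numpy as np

def dynamic_distribution_search(num_edges, num_ops, epochs, prune_every,
                                evaluate, lr=0.1):
    """Sample architectures from per-edge categorical distributions, update
    the distributions from observed rewards, and prune the least likely
    candidate op every few epochs."""
    probs = [np.ones(num_ops) / num_ops for _ in range(num_edges)]
    alive = [list(range(num_ops)) for _ in range(num_edges)]
    for epoch in range(epochs):
        # Sample one op index per edge from the joint categorical distribution.
        idx = [np.random.choice(len(a), p=p) for p, a in zip(probs, alive)]
        reward = evaluate([a[i] for a, i in zip(alive, idx)])
        # Exponentiated update: shift probability mass toward high reward.
        for e, i in enumerate(idx):
            probs[e][i] *= np.exp(lr * reward)
            probs[e] /= probs[e].sum()
        # Dynamically prune the search space every few epochs.
        if (epoch + 1) % prune_every == 0:
            for e in range(num_edges):
                if len(alive[e]) > 1:
                    worst = int(np.argmin(probs[e]))
                    del alive[e][worst]
                    probs[e] = np.delete(probs[e], worst)
                    probs[e] /= probs[e].sum()
    # The most probable surviving op on each edge forms the final architecture.
    return [a[int(np.argmax(p))] for p, a in zip(probs, alive)]
```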
This list is automatically generated from the titles and abstracts of the papers on this site.