An Empirical Study of Adder Neural Networks for Object Detection
- URL: http://arxiv.org/abs/2112.13608v1
- Date: Mon, 27 Dec 2021 11:03:13 GMT
- Title: An Empirical Study of Adder Neural Networks for Object Detection
- Authors: Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang
- Abstract summary: Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations.
We present an empirical study of AdderNets for object detection.
- Score: 67.64041181937624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adder neural networks (AdderNets) have shown impressive performance on image
classification with only addition operations, which are more energy efficient
than traditional convolutional neural networks built with multiplications.
Compared with classification, there is a strong demand on reducing the energy
consumption of modern object detectors via AdderNets for real-world
applications such as autonomous driving and face detection. In this paper, we
present an empirical study of AdderNets for object detection. We first reveal
that the batch normalization statistics in the pre-trained adder backbone
should not be frozen, owing to the relatively large feature variance of AdderNets.
Moreover, we insert more shortcut connections in the neck part and design a new
feature fusion architecture for avoiding the sparse features of adder layers.
We present extensive ablation studies to explore several design choices of
adder detectors. Comparisons with state-of-the-arts are conducted on COCO and
PASCAL VOC benchmarks. Specifically, the proposed Adder FCOS achieves a 37.8\%
AP on the COCO val set, demonstrating comparable performance to that of the
convolutional counterpart with about a $1.4\times$ energy reduction.
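The batch-normalization finding above can be sketched in plain Python. This is a minimal illustration with hypothetical class and parameter names, not the paper's implementation: it only shows why frozen running statistics, inherited from pre-training, cannot adapt to the larger feature variance of adder layers during detection fine-tuning.

```python
# Minimal sketch (pure Python, hypothetical names): frozen vs. unfrozen
# batch-norm running statistics under high-variance adder-style features.

class BatchNorm1dSketch:
    def __init__(self, momentum=0.1, eps=1e-5, freeze_stats=False):
        self.running_mean = 0.0
        self.running_var = 1.0   # statistics inherited from pre-training
        self.momentum = momentum
        self.eps = eps
        self.freeze_stats = freeze_stats

    def __call__(self, batch):
        mean = sum(batch) / len(batch)
        var = sum((x - mean) ** 2 for x in batch) / len(batch)
        if not self.freeze_stats:
            # keep updating running statistics (the paper's recommendation)
            self.running_mean += self.momentum * (mean - self.running_mean)
            self.running_var += self.momentum * (var - self.running_var)
        return [(x - mean) / (var + self.eps) ** 0.5 for x in batch]

# High-variance adder-style features: only the unfrozen BN adapts its stats.
features = [10.0, -10.0, 20.0, -20.0]
frozen = BatchNorm1dSketch(freeze_stats=True)
unfrozen = BatchNorm1dSketch()
frozen(features)
unfrozen(features)
print(frozen.running_var)    # stays at the pre-trained value 1.0
print(unfrozen.running_var)  # moves toward the larger adder-feature variance
```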
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a time window.
It requires only eleven features which do not rely on deep packet inspection and can be found in most NIDS datasets and easily obtained from conventional flow collectors.
The reported training accuracy exceeds 99% for the proposed method with as little as twenty neural network input features.
arXiv Detail & Related papers (2024-10-24T11:36:19Z) - Adder Neural Networks [75.54239599016535]
We present adder networks (AdderNets) to trade massive multiplications in deep neural networks for much cheaper additions.
In AdderNets, we take the $\ell_p$-norm distance between filters and input features as the output response.
We show that the proposed AdderNets can achieve 75.7% Top-1 accuracy and 92.3% Top-5 accuracy using ResNet-50 on the ImageNet dataset.
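The $\ell_p$-norm output response mentioned above can be sketched in a few lines. This is a minimal 1-D, single-filter illustration with $p=1$ (hypothetical function name), not the papers' implementation: instead of a multiply-accumulate correlation, each output is the negative $\ell_1$ distance between the filter and an input patch.

```python
# Minimal sketch (pure Python, 1-D, single filter): the AdderNet analogue of
# convolution replaces multiply-accumulate with a negative l1-norm distance.

def adder_response(signal, filt):
    """Negative l1 distance between the filter and each sliding patch."""
    k = len(filt)
    out = []
    for i in range(len(signal) - k + 1):
        patch = signal[i:i + k]
        # only subtractions, absolute values, and additions; no multiplications
        out.append(-sum(abs(x - w) for x, w in zip(patch, filt)))
    return out

signal = [1.0, 2.0, 3.0, 2.0, 1.0]
filt = [1.0, 2.0, 3.0]
print(adder_response(signal, filt))  # → [0.0, -3.0, -4.0]
```

The response peaks at 0 where the patch exactly matches the filter, which is why it can serve as a similarity measure in place of cross-correlation.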
arXiv Detail & Related papers (2021-05-29T04:02:51Z) - AdderNet and its Minimalist Hardware Design for Energy-Efficient
Artificial Intelligence [111.09105910265154]
We present a novel minimalist hardware architecture using the adder convolutional neural network (AdderNet).
The whole AdderNet can practically achieve a 16% enhancement in speed.
We conclude that AdderNet is able to surpass all the other competitors.
arXiv Detail & Related papers (2021-01-25T11:31:52Z) - Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection
in Neural Networks [3.125321230840342]
Adversarial examples are inputs that have been carefully perturbed to fool classifier networks, while appearing unchanged to humans.
We propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.
We show that our method improves state-of-the-art detector robustness against adversarial examples.
arXiv Detail & Related papers (2021-01-05T14:31:53Z) - AdderSR: Towards Energy Efficient Image Super-Resolution [127.61437479490047]
This paper studies the single image super-resolution problem using adder neural networks (AdderNet).
Compared with convolutional neural networks, AdderNet uses additions to calculate the output features, thus avoiding the massive energy consumption of conventional multiplications.
arXiv Detail & Related papers (2020-09-18T15:29:13Z) - Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z) - AdderNet: Do We Really Need Multiplications in Deep Learning? [159.174891462064]
We present adder networks (AdderNets) to trade massive multiplications in deep neural networks for much cheaper additions to reduce computation costs.
We develop a special back-propagation approach for AdderNets by investigating the full-precision gradient.
As a result, the proposed AdderNets can achieve 74.9% Top-1 accuracy and 91.7% Top-5 accuracy using ResNet-50 on the ImageNet dataset.
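The full-precision gradient mentioned above can be illustrated as follows. This is a hedged sketch with hypothetical function names, not the paper's code: the exact $\ell_1$ derivative with respect to a weight is only a sign, so AdderNets replace it with the full-precision difference, and clip the input-side gradient (the paper uses a HardTanh-style clip) to keep magnitudes bounded.

```python
# Minimal sketch (hypothetical names) of AdderNet's surrogate gradients for
# the l1 output response y = -|x - w|.

def sign_grad(x, w):
    # exact l1 derivative w.r.t. the weight: carries direction only
    return (x > w) - (x < w)

def full_precision_grad(x, w):
    # AdderNet's filter-side surrogate: carries direction and magnitude
    return x - w

def hardtanh(v, lo=-1.0, hi=1.0):
    # clipping applied to the input-side gradient to bound its magnitude
    return max(lo, min(hi, v))

x, w = 3.0, 0.5
print(sign_grad(x, w))                       # 1: direction only
print(full_precision_grad(x, w))             # 2.5: direction and magnitude
print(hardtanh(full_precision_grad(x, w)))   # 1.0: clipped gradient
```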
arXiv Detail & Related papers (2019-12-31T06:56:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.