EmotionNet Nano: An Efficient Deep Convolutional Neural Network Design
for Real-time Facial Expression Recognition
- URL: http://arxiv.org/abs/2006.15759v1
- Date: Mon, 29 Jun 2020 00:48:05 GMT
- Title: EmotionNet Nano: An Efficient Deep Convolutional Neural Network Design
for Real-time Facial Expression Recognition
- Authors: James Ren Hou Lee, Linda Wang, and Alexander Wong
- Abstract summary: This study proposes EmotionNet Nano, an efficient deep convolutional neural network created through a human-machine collaborative design strategy.
Two different variants of EmotionNet Nano are presented, each with a different trade-off between architectural and computational complexity and accuracy.
We demonstrate that the proposed EmotionNet Nano networks achieved real-time inference speeds (e.g. $>25$ FPS and $>70$ FPS at 15W and 30W, respectively) and high energy efficiency.
- Score: 75.74756992992147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While recent advances in deep learning have led to significant improvements
in facial expression classification (FEC), a major challenge that remains a
bottleneck for the widespread deployment of such systems is their high
architectural and computational complexities. This is especially challenging
given the operational requirements of various FEC applications, such as safety,
marketing, learning, and assistive living, where real-time operation on
low-cost embedded devices is desired. Motivated by this need for a compact, low
latency, yet accurate system capable of performing FEC in real-time on low-cost
embedded devices, this study proposes EmotionNet Nano, an efficient deep
convolutional neural network created through a human-machine collaborative
design strategy, where human experience is combined with machine meticulousness
and speed in order to craft a deep neural network design catered towards
real-time embedded usage. Two different variants of EmotionNet Nano are
presented, each with a different trade-off between architectural and
computational complexity and accuracy. Experimental results using the CK+
facial expression benchmark dataset show that the proposed EmotionNet
Nano networks achieve accuracies comparable to state-of-the-art FEC
networks, while requiring significantly fewer parameters (e.g., 23$\times$
fewer at a higher accuracy). Furthermore, we demonstrate that the proposed
EmotionNet Nano networks achieved real-time inference speeds (e.g. $>25$ FPS
and $>70$ FPS at 15W and 30W, respectively) and high energy efficiency (e.g.
$>1.7$ images/sec/watt at 15W) on an ARM embedded processor, thus further
illustrating the efficacy of EmotionNet Nano for deployment on embedded
devices.
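The reported throughput and energy-efficiency figures can be reproduced for any model with a simple timing loop. The Python sketch below is not taken from the paper; the input shape and the `model_predict` callable are placeholders, and the power value is assumed to be the device's configured budget (e.g. 15 W).

```python
import time
import numpy as np

def benchmark_fps(infer_fn, input_shape=(1, 48, 48, 1), n_warmup=10, n_runs=200):
    """Time an inference callable on random frames and return frames per second."""
    frames = [np.random.rand(*input_shape).astype(np.float32) for _ in range(n_runs)]
    for _ in range(n_warmup):                    # warm-up excludes one-time setup cost
        infer_fn(frames[0])
    start = time.perf_counter()
    for frame in frames:
        infer_fn(frame)
    return n_runs / (time.perf_counter() - start)

def images_per_sec_per_watt(fps, power_watts):
    """Energy efficiency at a fixed power budget, e.g. 25.8 FPS / 15 W ~= 1.72."""
    return fps / power_watts

# Usage with a placeholder model (any callable taking a single frame):
# fps = benchmark_fps(model_predict)
# print(images_per_sec_per_watt(fps, 15.0))
```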
Related papers
- Fast GraspNeXt: A Fast Self-Attention Neural Network Architecture for Multi-task Learning in Computer Vision Tasks for Robotic Grasping on the Edge [80.88063189896718]
High architectural and computational complexity can result in poor suitability for deployment on embedded devices.
Fast GraspNeXt is a fast self-attention neural network architecture tailored for embedded multi-task learning in computer vision tasks for robotic grasping.
arXiv Detail & Related papers (2023-04-21T18:07:14Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks [65.78061366594106]
We propose InstantNet to automatically generate and deploy instantaneously switchable-precision networks which operate at variable bit-widths.
In experiments, the proposed InstantNet consistently outperforms state-of-the-art designs; a generic bit-width-switching sketch follows this entry.
arXiv Detail & Related papers (2021-04-22T04:07:43Z)
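Relating to the InstantNet entry above, the sketch below is a generic illustration (not InstantNet's actual method) of evaluating one set of weights at a bit-width selected at run time, using uniform symmetric fake-quantization.

```python
import numpy as np

def fake_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Quantize w onto a signed uniform grid with `bits` bits, then dequantize."""
    qmax = 2 ** (bits - 1) - 1                            # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(w))) / qmax, 1e-12)   # guard against all-zero w
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

weights = np.random.randn(64, 64).astype(np.float32)
for bits in (8, 6, 4):                                    # switch precision without retraining
    error = float(np.max(np.abs(weights - fake_quantize(weights, bits))))
    print(f"{bits}-bit max weight error: {error:.4f}")
```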
- ExPAN(N)D: Exploring Posits for Efficient Artificial Neural Network Design in FPGA-based Systems [4.2612881037640085]
This paper analyzes and consolidates the efficacy of the Posit number representation scheme and the efficiency of fixed-point arithmetic implementations for ANNs.
We propose a novel Posit to fixed-point converter for enabling high-performance and energy-efficient hardware implementations for ANNs.
arXiv Detail & Related papers (2020-10-24T11:02:25Z)
- AttendNets: Tiny Deep Image Recognition Neural Networks for the Edge via Visual Attention Condensers [81.17461895644003]
We introduce AttendNets, low-precision, highly compact deep neural networks tailored for on-device image recognition.
AttendNets possess deep self-attention architectures based on visual attention condensers.
Results show AttendNets have significantly lower architectural and computational complexity when compared to several deep neural networks.
arXiv Detail & Related papers (2020-09-30T01:53:17Z)
- DepthNet Nano: A Highly Compact Self-Normalizing Neural Network for Monocular Depth Estimation [76.90627702089357]
DepthNet Nano is a compact deep neural network for monocular depth estimation designed using a human-machine collaborative design strategy.
The proposed DepthNet Nano possesses a highly efficient network architecture, while still achieving comparable performance with state-of-the-art networks.
arXiv Detail & Related papers (2020-04-17T00:41:35Z)
- Lightweight Residual Densely Connected Convolutional Neural Network [18.310331378001397]
Lightweight residual densely connected blocks are proposed to guarantee deep supervision, efficient gradient flow, and feature reuse in convolutional neural networks.
The proposed method reduces the cost of the training and inference processes without requiring any special hardware or software; a generic block sketch follows this entry.
arXiv Detail & Related papers (2020-01-02T17:15:32Z)
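The last entry describes lightweight residual densely connected blocks. The PyTorch sketch below is an assumed, generic combination of dense connectivity (each layer sees all earlier feature maps) with a residual shortcut, not the authors' exact block; channel counts are illustrative.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Generic block: dense feature concatenation fused by a 1x1 conv, plus a residual shortcut."""
    def __init__(self, channels: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth                                 # dense: each layer sees all earlier features
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        return x + self.fuse(torch.cat(features, dim=1))    # residual shortcut

# Quick shape check on a dummy 48x48 feature map.
block = ResidualDenseBlock(channels=32)
print(block(torch.randn(1, 32, 48, 48)).shape)              # torch.Size([1, 32, 48, 48])
```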