PhiNets: a scalable backbone for low-power AI at the edge
- URL: http://arxiv.org/abs/2110.00337v1
- Date: Fri, 1 Oct 2021 12:03:25 GMT
- Title: PhiNets: a scalable backbone for low-power AI at the edge
- Authors: Francesco Paissan, Alberto Ancilotto, and Elisabetta Farella
- Abstract summary: We present PhiNets, a new scalable backbone optimized for deep-learning-based image processing on resource-constrained platforms.
PhiNets are based on inverted residual blocks specifically designed to decouple the computational cost, working memory, and parameter memory.
We demonstrate our approach on a prototype node based on an STM32H743 microcontroller.
- Score: 2.7910505923792646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the Internet of Things era, where we see many interconnected and
heterogeneous mobile and fixed smart devices, distributing the intelligence
from the cloud to the edge has become a necessity. Due to limited computational
and communication capabilities, low memory and limited energy budget, bringing
artificial intelligence algorithms to peripheral devices, such as the end-nodes
of a sensor network, is a challenging task and requires the design of
innovative methods. In this work, we present PhiNets, a new scalable backbone
optimized for deep-learning-based image processing on resource-constrained
platforms. PhiNets are based on inverted residual blocks specifically designed
to decouple the computational cost, working memory, and parameter memory, thus
exploiting all the available resources. With a YoloV2 detection head and Simple
Online and Realtime Tracking, the proposed architecture achieves
state-of-the-art results in (i) detection on the COCO and VOC2012 benchmarks,
and (ii) tracking on the MOT15 benchmark. PhiNets reduce the parameter count by
87% to 93% with respect to previous state-of-the-art models (EfficientNetv1,
MobileNetv2) while achieving better performance at lower computational cost.
Moreover, we demonstrate our approach on a prototype node based on an STM32H743
microcontroller (MCU) with 2 MB of internal Flash and 1 MB of RAM, achieving
power requirements on the order of 10 mW. The code for PhiNets is publicly
available on GitHub.
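To picture how the three resources can be scaled separately, the sketch below shows a generic MobileNetV2-style inverted residual block in PyTorch with an expansion factor `t` and a width multiplier `alpha`. This is a minimal illustration under assumed hyperparameter names and block structure, not the authors' exact PhiNet implementation; consult the released code for the actual architecture.

```python
# Minimal sketch of an inverted residual block (MobileNetV2-style), assuming
# PyTorch. Hyperparameter names (t, alpha) are illustrative only and are not
# the exact PhiNet parameterisation.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, t=6, alpha=1.0):
        super().__init__()
        in_ch, out_ch = int(in_ch * alpha), int(out_ch * alpha)  # width scaling (parameter memory)
        hidden = in_ch * t                                        # expansion (working memory)
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),              # 1x1 expand
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),                 # 3x3 depthwise
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),             # 1x1 project
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_residual else y


# Example: with alpha = 0.5 the block expects 32 * 0.5 = 16 input channels.
block = InvertedResidual(32, 32, stride=1, t=6, alpha=0.5)
x = torch.randn(1, 16, 56, 56)
print(block(x).shape)  # torch.Size([1, 16, 56, 56])
```

The point of the sketch is that parameter memory is dominated by the 1x1 convolutions (governed mainly by the width multiplier), whereas peak working memory is dominated by the expanded feature map (governed by the expansion factor and input resolution), so the two budgets can be tuned independently to fit a given MCU.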
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - SpikeBottleNet: Spike-Driven Feature Compression Architecture for Edge-Cloud Co-Inference [0.86325068644655]
We propose SpikeBottleNet, a novel architecture for edge-cloud co-inference systems.
SpikeBottleNet integrates a spiking neuron model to significantly reduce energy consumption on edge devices.
arXiv Detail & Related papers (2024-10-11T09:59:21Z) - Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z) - HeteroEdge: Addressing Asymmetry in Heterogeneous Collaborative
Autonomous Systems [1.274065448486689]
We propose a self-adaptive optimization framework for a testbed comprising two Unmanned Ground Vehicles (UGVs) and two NVIDIA Jetson devices.
This framework efficiently manages multiple tasks (storage, processing, computation, transmission, inference) on heterogeneous nodes concurrently.
It involves compressing and masking input image frames, identifying similar frames, and profiling devices to obtain boundary conditions for optimization.
arXiv Detail & Related papers (2023-05-05T02:43:16Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete settings (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, the framework adapts well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Implementation of a Binary Neural Network on a Passive Array of Magnetic
Tunnel Junctions [2.917306244908168]
We leverage the low-power and the inherently binary operation of magnetic tunnel junctions (MTJs) to demonstrate neural network hardware inference based on passive arrays of MTJs.
We achieve software-equivalent accuracy of up to 95.3% with proper tuning of network parameters in 15 × 15 MTJ arrays having a range of device sizes.
arXiv Detail & Related papers (2021-12-16T19:11:29Z) - Neural network relief: a pruning algorithm based on neural activity [47.57448823030151]
We propose a simple importance-score metric that deactivates unimportant connections.
We achieve comparable performance for LeNet architectures on MNIST.
The algorithm is not designed to minimize FLOPs when considering current hardware and software implementations.
arXiv Detail & Related papers (2021-09-22T15:33:49Z) - Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks:
specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
arXiv Detail & Related papers (2021-06-07T11:37:03Z) - FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often involve a large number of parameters and heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z) - Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet
Implementation for Edge Motor-Imagery Brain-Machine Interfaces [16.381467082472515]
Motor-Imagery Brain-Machine Interfaces (MI-BMIs) promise direct and accessible communication between human brains and machines.
Deep learning models have emerged for classifying EEG signals.
These models often exceed the limitations of edge devices due to their memory and computational requirements.
arXiv Detail & Related papers (2020-04-24T12:29:03Z) - Lightweight Residual Densely Connected Convolutional Neural Network [18.310331378001397]
The lightweight residual densely connected blocks are proposed to guarantee the deep supervision, efficient gradient flow, and feature reuse abilities of the convolutional neural network.
The proposed method decreases the cost of training and inference processes without using any special hardware-software equipment.
arXiv Detail & Related papers (2020-01-02T17:15:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.