DeepPicarMicro: Applying TinyML to Autonomous Cyber Physical Systems
- URL: http://arxiv.org/abs/2208.11212v1
- Date: Tue, 23 Aug 2022 21:58:53 GMT
- Title: DeepPicarMicro: Applying TinyML to Autonomous Cyber Physical Systems
- Authors: Michael Bechtel, QiTao Weng, Heechul Yun
- Abstract summary: We present DeepPicarMicro, a small self-driving RC car testbed, which runs a convolutional neural network (CNN) on a Raspberry Pi Pico MCU.
We apply a state-of-the-art DNN optimization to successfully fit the well-known PilotNet CNN architecture.
We observe an interesting relationship between the accuracy, latency, and control performance of a system.
- Score: 2.2667044691227636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Running deep neural networks (DNNs) on tiny microcontroller units (MCUs) is
challenging due to their limited computing, memory, and storage
capacity. Fortunately, recent advances in both MCU hardware and machine
learning software frameworks make it possible to run fairly complex neural
networks on modern MCUs, resulting in a new field of study widely known as
TinyML. However, few studies have demonstrated the potential of TinyML
applications in cyber-physical systems (CPS). In this paper, we present
DeepPicarMicro, a small self-driving RC car testbed, which runs a convolutional
neural network (CNN) on a Raspberry Pi Pico MCU. We apply a state-of-the-art
DNN optimization to successfully fit the well-known PilotNet CNN architecture,
which was used to drive NVIDIA's real self-driving car, on the MCU. We apply a
state-of-the-art network architecture search (NAS) approach to find further
optimized networks that can effectively control the car in real-time in an
end-to-end manner. From an extensive systematic experimental evaluation study,
we observe an interesting relationship between the accuracy, latency, and
control performance of a system. From this, we propose a joint optimization
strategy that takes both the accuracy and latency of a model into account in
the network architecture search process for AI-enabled CPS.
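The abstract does not give the exact form of the joint objective; a minimal sketch of one plausible accuracy-latency scoring rule for ranking NAS candidates (the deadline, the weighting factor alpha, and the candidate names below are illustrative assumptions, not the authors' formulation) might look like:

```python
# Hypothetical sketch of a joint accuracy/latency score for ranking
# NAS candidates; the weighting scheme is an assumption, not the
# paper's actual formulation.

def joint_score(accuracy: float, latency_ms: float,
                deadline_ms: float = 50.0, alpha: float = 0.5) -> float:
    """Rank a candidate network by accuracy, penalized by latency."""
    if latency_ms > deadline_ms:
        return float("-inf")          # infeasible for real-time control
    # Reward accuracy, discount models that use up more of the deadline.
    return accuracy - alpha * (latency_ms / deadline_ms)

# Example: a slightly less accurate but much faster model can win.
candidates = [
    {"name": "net_a", "accuracy": 0.91, "latency_ms": 48.0},
    {"name": "net_b", "accuracy": 0.89, "latency_ms": 22.0},
]
best = max(candidates,
           key=lambda c: joint_score(c["accuracy"], c["latency_ms"]))
print(best["name"])   # -> net_b
```

Rejecting deadline-missing candidates outright reflects the real-time constraint: a model that cannot produce a steering command within the control period is useless regardless of its accuracy.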
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Efficient Neural Networks for Tiny Machine Learning: A Comprehensive Review [1.049712834719005]
This review provides an in-depth analysis of the advancements in efficient neural networks and the deployment of deep learning models on ultra-low power microcontrollers.
The core of the review centres on efficient neural networks for TinyML.
It covers techniques such as model compression, quantization, and low-rank factorization, which optimize neural network architectures for minimal resource utilization.
The paper then delves into the deployment of deep learning models on ultra-low power MCUs, addressing challenges such as limited computational capabilities and memory resources.
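As a concrete instance of the quantization techniques such a review covers, here is a minimal post-training int8 quantization sketch using the TensorFlow Lite converter; the toy model and random calibration data are placeholders:

```python
# Minimal post-training int8 quantization sketch (TensorFlow Lite).
# The toy model and random calibration data are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_data():
    # Calibration samples drive the choice of quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()   # flatbuffer ready for an MCU runtime
open("model_int8.tflite", "wb").write(tflite_model)
```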
arXiv Detail & Related papers (2023-11-20T16:20:13Z)
- LaneSNNs: Spiking Neural Networks for Lane Detection on the Loihi Neuromorphic Processor [12.47874622269824]
We present a new SNN-based approach, called LaneSNN, for detecting lane markings on streets using event-based camera input.
We implement and map the learned SNN models onto the Intel Loihi Neuromorphic Research Chip.
For the loss function, we develop a novel method based on a linear composition of weighted binary cross-entropy (WCE) and mean squared error (MSE) measures.
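The summary does not give the mixing weights; a hedged NumPy sketch of such a linear WCE-MSE composition (the class weight w_pos and mixing coefficient lam are assumptions, not the paper's reported values) could be:

```python
# Sketch of a loss that linearly combines weighted binary cross-entropy
# (WCE) and mean squared error (MSE); the class weight w_pos and the
# mixing coefficient lam are assumptions for illustration.
import numpy as np

def wce_mse_loss(y_true, y_pred, w_pos=5.0, lam=0.5, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # Weighted BCE: up-weight the rare positive (lane) pixels.
    wce = -np.mean(w_pos * y_true * np.log(y_pred)
                   + (1.0 - y_true) * np.log(1.0 - y_pred))
    mse = np.mean((y_true - y_pred) ** 2)
    return lam * wce + (1.0 - lam) * mse

y_true = np.array([0.0, 1.0, 0.0, 1.0])
y_pred = np.array([0.1, 0.8, 0.2, 0.6])
print(wce_mse_loss(y_true, y_pred))
```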
arXiv Detail & Related papers (2022-08-03T14:51:15Z)
- Energy-efficient Deployment of Deep Learning Applications on Cortex-M based Microcontrollers using Deep Compression [1.4050836886292872]
This paper investigates the efficient deployment of deep learning models on resource-constrained microcontrollers.
We present a methodology for the systematic exploration of different DNN pruning, quantization, and deployment strategies.
We show that these models can be compressed to below 10% of their original parameter count before their predictive quality degrades.
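One standard strategy in such an exploration is global magnitude pruning; a small illustrative sketch that keeps only the largest-magnitude 10% of weights (the layer sizes and keep ratio are made up for the demo):

```python
# Illustrative global magnitude pruning to ~90% sparsity (keeping
# <10% of parameters); thresholding weight magnitudes globally is one
# of the standard strategies such an exploration would cover.
import numpy as np

def prune_by_magnitude(weights, keep_ratio=0.10):
    """Zero out all but the largest-magnitude `keep_ratio` of weights."""
    flat = np.concatenate([w.ravel() for w in weights])
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.sort(np.abs(flat))[-k]  # k-th largest magnitude
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

layers = [np.random.randn(64, 32), np.random.randn(32, 10)]
pruned = prune_by_magnitude(layers, keep_ratio=0.10)
kept = sum(int(np.count_nonzero(w)) for w in pruned)
total = sum(w.size for w in layers)
print(f"kept {kept}/{total} weights ({kept / total:.1%})")
```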
arXiv Detail & Related papers (2022-05-20T10:55:42Z)
- Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
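The paper's framework itself is not reproduced here; as a hedged illustration of the general idea of a neural dynamics model inside an MPC loop, here is a toy random-shooting controller with an assumed (untrained) MLP standing in for the learned dynamics:

```python
# Toy random-shooting MPC with a neural dynamics model; the tiny MLP,
# cost, and horizon are illustrative assumptions, not the authors'
# architecture or solver.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)) * 0.1, np.zeros(3)

def dynamics(state, action):
    """Assumed learned model: next_state = state + MLP([state, action])."""
    x = np.concatenate([state, action])
    h = np.tanh(x @ W1 + b1)
    return state + h @ W2 + b2

def mpc_action(state, goal, horizon=10, samples=256):
    """Pick the first action of the lowest-cost sampled action sequence."""
    best_cost, best_action = np.inf, None
    for _ in range(samples):
        s, seq = state.copy(), rng.uniform(-1, 1, size=(horizon, 1))
        cost = 0.0
        for a in seq:
            s = dynamics(s, a)
            cost += np.sum((s - goal) ** 2)   # quadratic tracking cost
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

state, goal = np.zeros(3), np.array([1.0, 0.0, 0.0])
print(mpc_action(state, goal))
```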
arXiv Detail & Related papers (2022-03-15T09:38:15Z)
- MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning [72.80896338009579]
We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs.
We propose a generic patch-by-patch inference scheduling, which significantly cuts down the peak memory.
We automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2.
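A hedged single-channel sketch of why patch-by-patch execution lowers peak activation memory: only one overlapping tile of the input must be live at a time, and the tiled result matches the full-map convolution (the sizes here are illustrative, not MCUNetV2's schedule):

```python
# Illustrative patch-by-patch execution of one conv layer: processing
# the input in small spatial tiles keeps only one tile's activations
# live at a time, cutting peak memory versus the full feature map.
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D convolution (single channel) for illustration."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.random.rand(64, 64)
k = np.random.rand(3, 3)
full = conv2d_valid(x, k)            # peak: the whole 64x64 input live

# Patch-by-patch: overlap tiles by (kernel - 1) so outputs tile exactly.
patch, halo = 16, k.shape[0] - 1
tiled = np.empty_like(full)
for i in range(0, x.shape[0] - halo, patch):
    for j in range(0, x.shape[1] - halo, patch):
        tile = x[i:i + patch + halo, j:j + patch + halo]
        tiled[i:i + patch, j:j + patch] = conv2d_valid(tile, k)

print(np.allclose(full, tiled))  # True: same result, smaller live set
```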
arXiv Detail & Related papers (2021-10-28T17:58:45Z)
- TinyOL: TinyML with Online-Learning on Microcontrollers [7.172671995820974]
Tiny machine learning (TinyML) is committed to democratizing deep learning for all-pervasive microcontrollers (MCUs).
Current TinyML solutions are based on batch/offline settings and support only neural network inference on MCUs.
We propose a novel system called TinyOL (TinyML with Online-Learning), which enables incremental on-device training on streaming data.
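A minimal sketch of the online-learning idea, assuming the common TinyOL-style setup of a frozen feature extractor plus a single trainable output layer updated per streaming sample (the dimensions, learning rate, and synthetic stream are assumptions):

```python
# Sketch of incremental on-device learning: keep the pretrained
# feature extractor frozen and update only a final layer, one
# streaming sample at a time. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
w, b = np.zeros(8), 0.0            # the only trainable parameters
lr = 0.05

def frozen_features(raw):
    """Stand-in for the fixed on-device network's penultimate output."""
    return np.tanh(raw)

for step in range(1000):           # one update per arriving sample
    raw = rng.normal(size=8)
    target = float(raw.sum() > 0)  # synthetic label for the demo
    x = frozen_features(raw)
    pred = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid output
    err = pred - target
    w -= lr * err * x              # per-sample SGD update
    b -= lr * err

print(w, b)
```

Updating only the head keeps the per-sample cost and memory small enough for an MCU while still adapting to the data stream.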
arXiv Detail & Related papers (2021-03-15T11:39:41Z)
- Neural Architecture Search of SPD Manifold Networks [79.45110063435617]
We propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks.
We first introduce a geometrically rich and diverse SPD neural architecture search space for an efficient SPD cell design.
We exploit a differentiable NAS algorithm on our relaxed continuous search space for SPD neural architecture search.
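The SPD-manifold-specific operations are beyond a short sketch, but the differentiable relaxation the summary mentions can be illustrated generically (DARTS-style), with plain element-wise ops standing in for the paper's SPD candidate operations:

```python
# Generic differentiable-NAS relaxation: a discrete choice among
# candidate ops becomes a softmax-weighted mixture, so architecture
# parameters alpha can be optimized by gradient descent. Plain NumPy
# ops stand in for the paper's SPD-specific operations.
import numpy as np

ops = [
    lambda x: x,                 # identity / skip
    lambda x: np.maximum(x, 0),  # relu
    lambda x: np.tanh(x),        # tanh
]
alpha = np.zeros(len(ops))       # architecture logits (learnable)

def mixed_op(x, alpha):
    weights = np.exp(alpha) / np.exp(alpha).sum()   # softmax over ops
    return sum(w * op(x) for w, op in zip(weights, ops))

x = np.array([-1.0, 0.5, 2.0])
print(mixed_op(x, alpha))        # equal mixture before any training
# After the search, the op with the largest alpha is kept (discretization).
```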
arXiv Detail & Related papers (2020-10-27T18:08:57Z)
- MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers [18.662026553041937]
Machine learning on resource-constrained microcontrollers (MCUs) promises to drastically expand the application space of the Internet of Things (IoT).
TinyML presents severe technical challenges, as deep neural network inference demands a large compute and memory budget.
Neural architecture search (NAS) promises to help design accurate ML models that meet the tight MCU memory, latency, and energy constraints.
arXiv Detail & Related papers (2020-10-21T19:39:39Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach to reduce the search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.