Robustifying the Deployment of tinyML Models for Autonomous
mini-vehicles
- URL: http://arxiv.org/abs/2007.00302v2
- Date: Sat, 13 Feb 2021 20:38:02 GMT
- Title: Robustifying the Deployment of tinyML Models for Autonomous
mini-vehicles
- Authors: Miguel de Prado, Manuele Rusci, Romain Donze, Alessandro Capotondi,
Serge Monnerat, Luca Benini and Nuria Pazos
- Abstract summary: We propose a closed-loop learning flow for autonomous driving mini-vehicles that includes the target environment in-the-loop.
We leverage a family of tinyCNNs to control the mini-vehicle, which learn in the target environment by imitating a computer vision algorithm, i.e., the expert.
When running the family of CNNs, our solution outperforms any other implementation on the STM32L4 and k64f (Cortex-M4), reducing latency by over 13x and energy consumption by 92%.
- Score: 61.27933385742613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard-size autonomous navigation vehicles have rapidly improved thanks to
the breakthroughs of deep learning. However, scaling autonomous driving to
low-power systems deployed in dynamic environments poses several challenges
that prevent their adoption. To address them, we propose a closed-loop learning
flow for autonomous driving mini-vehicles that includes the target environment
in-the-loop. We leverage a family of compact and high-throughput tinyCNNs to
control the mini-vehicle, which learn in the target environment by imitating a
computer vision algorithm, i.e., the expert. Thus, the tinyCNNs, having access
only to an on-board fast-rate linear camera, gain robustness to lighting
conditions and improve over time. Further, we leverage GAP8, a parallel
ultra-low-power RISC-V SoC, to meet the inference requirements. When running
the family of CNNs, our GAP8 solution outperforms any other implementation on
the STM32L4 and NXP k64f (Cortex-M4), reducing latency by over 13x and energy
consumption by 92%.
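The closed-loop flow described above pairs a cheap on-board student with a vision expert that supplies labels directly in the target environment. A minimal sketch of that imitation loop, using a linear model as a stand-in for the tinyCNN and a synthetic expert (the linear student, the SGD rule, and all names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expert computer-vision algorithm that labels each frame
# with a steering command (the paper's expert is a classical vision pipeline).
TRUE_W = rng.normal(size=128)

def expert_steering(frame):
    return float(TRUE_W @ frame)

# Stand-in "tinyCNN": a linear student trained online by imitation.
student_w = np.zeros(128)

def imitation_step(frame, lr=0.002):
    """One closed-loop step: student predicts, expert labels, student updates."""
    global student_w
    pred = float(student_w @ frame)
    label = expert_steering(frame)       # expert supervises in the target env
    err = pred - label
    student_w -= lr * err * frame        # SGD on the squared imitation loss
    return err ** 2

# The imitation loss shrinks as the student converges toward the expert.
losses = [imitation_step(rng.normal(size=128)) for _ in range(5000)]
print(sum(losses[:100]) / 100, sum(losses[-100:]) / 100)
```

Because the expert keeps labeling whatever the environment produces, the student improves over time without any manually labeled dataset, which is the core of the closed-loop idea.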
Related papers
- Evaluating Robustness of Reinforcement Learning Algorithms for Autonomous Shipping [2.9109581496560044]
This paper examines the robustness of benchmark deep reinforcement learning (RL) algorithms, implemented for inland waterway transport (IWT) within an autonomous shipping simulator.
We show that a model-free approach can achieve an adequate policy in the simulator, successfully navigating port environments never encountered during training.
arXiv Detail & Related papers (2024-11-07T17:55:07Z)
- Training on the Fly: On-device Self-supervised Learning aboard Nano-drones within 20 mW [52.280742520586756]
Miniaturized cyber-physical systems (CPSes) powered by tiny machine learning (TinyML), such as nano-drones, are becoming an increasingly attractive technology.
Simple electronics make these CPSes inexpensive, but strongly limit the computational, memory, and sensing resources available on board.
We present a novel on-device fine-tuning approach that relies only on the limited ultra-low power resources available aboard nano-drones.
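On-device learning under tight memory and energy budgets is commonly made feasible by freezing the pretrained backbone and updating only a small head, so no backbone gradients or activations need to be stored. A hedged sketch of that general pattern (the shapes, synthetic data, and head-only SGD rule are illustrative, not this paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen pretrained backbone: it is never updated, which is what lets
# on-device training fit in a few kilobytes of working memory.
BACKBONE = rng.normal(size=(16, 64)) / 8.0

def features(x):
    return np.maximum(BACKBONE @ x, 0.0)   # frozen ReLU features

head = np.zeros(16)                         # the only trainable parameters

def finetune_step(x, y, lr=0.05):
    """Head-only SGD step: gradients exist only for the 16 head weights."""
    global head
    f = features(x)
    err = float(head @ f) - y
    head -= lr * err * f
    return err ** 2

# Synthetic target realizable by the head, to show the loss decreasing.
teacher = rng.normal(size=16)
losses = []
for _ in range(2000):
    x = rng.normal(size=64)
    losses.append(finetune_step(x, float(teacher @ features(x))))
print(sum(losses[:50]) / 50, sum(losses[-50:]) / 50)
```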
arXiv Detail & Related papers (2024-08-06T13:11:36Z)
- NaviSlim: Adaptive Context-Aware Navigation and Sensing via Dynamic Slimmable Networks [2.145904182587639]
NaviSlim is a new class of neural navigation models capable of adapting the amount of resources spent on computing and sensing.
NaviSlim is designed as a gated slimmable neural network architecture that, unlike existing slimmable networks, can dynamically select a slimming factor to autonomously scale model complexity.
We evaluate our NaviSlim models on scenarios of varying difficulty; on the test set, they dynamically reduced model complexity by 57-92% on average, with 61-80% sensor utilization.
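The slimmable-network idea above can be sketched in a few lines: one set of weights, evaluated at a runtime-chosen fraction (the slimming factor) of its channels, so compute scales down without swapping models. The two-layer shapes and the first-k-channels rule below are illustrative assumptions, not NaviSlim's actual gated architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# One weight set shared by all widths; a slimmed pass uses only a prefix
# of the hidden channels, cutting compute roughly in proportion.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(10, 64))

def slimmable_forward(x, slim=1.0):
    """Run the net using only the first `slim` fraction of hidden channels."""
    k = max(1, int(64 * slim))
    h = np.maximum(W1[:k] @ x, 0.0)      # only k hidden units computed
    return W2[:, :k] @ h

x = rng.normal(size=32)
full = slimmable_forward(x, slim=1.0)
slim = slimmable_forward(x, slim=0.25)   # ~4x less hidden-layer compute
print(full.shape, slim.shape)            # same (10,) interface at both widths
```

In NaviSlim a separate gating module picks `slim` from the navigation context; here the caller chooses it directly.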
arXiv Detail & Related papers (2024-05-16T01:18:52Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Penalty-Based Imitation Learning With Cross Semantics Generation Sensor Fusion for Autonomous Driving [1.2749527861829049]
In this paper, we propose a penalty-based imitation learning approach to integrate multiple modalities of information.
We observe an increase in the driving score of more than 12% compared to the state-of-the-art (SOTA) model, InterFuser.
Our model achieves this performance enhancement while achieving a 7-fold increase in inference speed and reducing the model size by approximately 30%.
arXiv Detail & Related papers (2023-03-21T14:29:52Z)
- Learned Risk Metric Maps for Kinodynamic Systems [54.49871675894546]
We present Learned Risk Metric Maps (LRMM) for real-time estimation of coherent risk metrics of high-dimensional dynamical systems.
LRMM models are simple to design and train, requiring only procedural generation of obstacle sets, state and control sampling, and supervised training of a function approximator.
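The three-step recipe in the abstract (procedural obstacle generation, state sampling, supervised fitting) can be sketched end to end. The inverse-clearance risk label and the random-cosine-feature approximator below are illustrative assumptions standing in for the paper's risk metric and network:

```python
import numpy as np

rng = np.random.default_rng(4)

# Step 1: procedurally generate an obstacle set in a 2-D state space.
obstacles = rng.uniform(-1.0, 1.0, size=(20, 2))

def risk_label(state):
    """Toy risk: grows as the state approaches the nearest obstacle."""
    clearance = np.linalg.norm(obstacles - state, axis=1).min()
    return 1.0 / (clearance + 0.1)

# Random-feature approximator (a stand-in for the paper's neural network).
PROJ = rng.normal(scale=4.0, size=(128, 2))
PHASE = rng.uniform(0.0, 2.0 * np.pi, size=128)

def phi(state):
    return np.cos(PROJ @ state + PHASE)

# Step 2: sample states and label them; step 3: supervised least-squares fit.
states = rng.uniform(-1.0, 1.0, size=(500, 2))
labels = np.array([risk_label(s) for s in states])
feats = np.array([phi(s) for s in states])
w, *_ = np.linalg.lstsq(feats, labels, rcond=None)

corr = np.corrcoef(feats @ w, labels)[0, 1]
print(corr)   # the learned map tracks the true risk on the sampled states
```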
arXiv Detail & Related papers (2023-02-28T17:51:43Z)
- LaneSNNs: Spiking Neural Networks for Lane Detection on the Loihi Neuromorphic Processor [12.47874622269824]
We present a new SNN-based approach, called LaneSNN, for detecting lane markings on streets from event-based camera input.
We implement and map the learned SNN models onto the Intel Loihi Neuromorphic Research Chip.
For the loss function, we develop a novel method based on the linear composition of weighted binary cross-entropy (WCE) and mean squared error (MSE) measures.
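A linear composition of WCE and MSE as described above can be sketched as follows. The positive-class weighting scheme and the mixing coefficient `alpha` are assumptions; the paper's exact formulation may differ:

```python
import numpy as np

def wce_mse_loss(pred, target, pos_weight=5.0, alpha=0.5, eps=1e-7):
    """Linear mix of weighted binary cross-entropy and mean squared error.

    pos_weight up-weights lane (positive) pixels; alpha balances the two
    terms. Both hyperparameters here are illustrative assumptions.
    """
    p = np.clip(pred, eps, 1.0 - eps)    # avoid log(0)
    wce = -np.mean(pos_weight * target * np.log(p)
                   + (1.0 - target) * np.log(1.0 - p))
    mse = np.mean((pred - target) ** 2)
    return alpha * wce + (1.0 - alpha) * mse

pred = np.array([0.9, 0.2, 0.7])
target = np.array([1.0, 0.0, 1.0])
print(wce_mse_loss(pred, target))
```

Up-weighting positives is a common remedy for lane detection, where lane pixels are a small minority of each frame.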
arXiv Detail & Related papers (2022-08-03T14:51:15Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Energy-efficient and Privacy-aware Social Distance Monitoring with Low-resolution Infrared Sensors and Adaptive Inference [4.158182639870093]
Low-resolution infrared (IR) sensors can be leveraged to implement privacy-preserving social distance monitoring solutions in indoor spaces.
We propose an energy-efficient adaptive inference solution consisting of a cascade of a simple wake-up trigger and an 8-bit quantized Convolutional Neural Network (CNN).
We show that, when processing the output of an 8x8 low-resolution IR sensor, we are able to reduce energy consumption by 37-57% with respect to a static CNN-based approach.
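The cascade saves energy because the cheap trigger runs on every frame while the costly CNN runs only when the trigger fires. A minimal sketch under assumptions (the background-difference trigger statistic, the threshold, and the stand-in CNN are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Cheap per-frame check: fire only if the frame differs from a stored
# background. This stands in for the paper's wake-up trigger.
def wake_up_trigger(frame, background, thresh=1.0):
    return np.abs(frame - background).mean() > thresh

def expensive_cnn(frame):
    return int(frame.sum() > 0)          # stand-in for the 8-bit quantized CNN

background = np.zeros((8, 8))            # 8x8 IR background frame
frames = [background + rng.normal(scale=0.1, size=(8, 8)) for _ in range(90)]
frames += [background + 3.0 for _ in range(10)]   # 10 "occupied" frames

cnn_calls = 0
for f in frames:
    if wake_up_trigger(f, background):
        cnn_calls += 1
        _ = expensive_cnn(f)

print(cnn_calls)   # CNN invoked for only the 10 occupied frames out of 100
```

The energy saving of such a cascade is roughly the CNN cost times the fraction of frames the trigger filters out, minus the (small) cost of running the trigger everywhere.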
arXiv Detail & Related papers (2022-04-22T07:07:38Z)
- A Software Architecture for Autonomous Vehicles: Team LRM-B Entry in the First CARLA Autonomous Driving Challenge [49.976633450740145]
This paper presents the architecture design for the navigation of an autonomous vehicle in a simulated urban environment.
Our architecture was designed to meet the requirements of the CARLA Autonomous Driving Challenge.
arXiv Detail & Related papers (2020-10-23T18:07:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.