Enabling energy efficient machine learning on a Ultra-Low-Power vision
sensor for IoT
- URL: http://arxiv.org/abs/2102.01340v1
- Date: Tue, 2 Feb 2021 06:39:36 GMT
- Authors: Francesco Paissan, Massimo Gottardi, Elisabetta Farella
- Abstract summary: This paper presents the development, analysis, and embedded implementation of a real-time detection, classification, and tracking pipeline. Inference takes 8 ms at a power consumption of 7.5 mW.
- Score: 3.136861161060886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Internet of Things (IoT) and smart city paradigm includes ubiquitous
technology to extract context information in order to return useful services to
users and citizens. An essential role in this scenario is often played by
computer vision applications, requiring the acquisition of images from specific
devices. The need for high-end cameras often penalizes this process, since such
cameras are power-hungry and demand substantial computational resources to
process their output.
Thus, the availability of novel low-power vision sensors, implementing advanced
features like in-hardware motion detection, is crucial for computer vision in
the IoT domain. Unfortunately, to be highly energy-efficient, these sensors
might worsen the perception performance (e.g., resolution, frame rate, color).
Therefore, domain-specific pipelines are usually developed in order to exploit
the full potential of these cameras. This paper presents the development,
analysis, and embedded implementation of a real-time detection, classification,
and tracking pipeline able to exploit the full potential of background-filtering
Smart Vision Sensors (SVS). Inference takes 8 ms at a power consumption of
7.5 mW.
Related papers
- Energy-Efficient & Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors [5.824962833043625]
Video-based computer vision applications typically suffer from high energy consumption due to reading and processing all pixels in a frame, regardless of their significance.
Previous works have attempted to reduce this energy by skipping input patches or pixels and using feedback from the end task to guide the skipping algorithm.
This paper presents a custom-designed CMOS image sensor (CIS) system that improves energy efficiency by selectively skipping uneventful regions or rows within a frame during the sensor's readout phase.
arXiv Detail & Related papers (2024-09-25T20:32:55Z)
- Event-based vision on FPGAs -- a survey [0.0]
Field-Programmable Gate Arrays (FPGAs) enable fast, even real-time, data processing with high energy efficiency.
This paper gives an overview of the most important works, where FPGAs have been used in different contexts to process event data.
It covers applications in the following areas: filtering, stereovision, optical flow, acceleration of AI-based algorithms for object classification, detection and tracking, and applications in robotics and inspection systems.
arXiv Detail & Related papers (2024-07-11T10:07:44Z)
- FAPNet: An Effective Frequency Adaptive Point-based Eye Tracker [0.6554326244334868]
Event cameras are an alternative to traditional cameras in the realm of eye tracking.
Existing event-based eye tracking networks neglect the pivotal sparse and fine-grained temporal information in events.
In this paper, we utilize Point Cloud as the event representation to harness the high temporal resolution and sparse characteristics of events in eye tracking tasks.
arXiv Detail & Related papers (2024-06-05T12:08:01Z)
- Fully Quantized Always-on Face Detector Considering Mobile Image Sensors [12.806584794505751]
Current face detectors do not fully meet the requirements for "intelligent" CMOS image sensors integrated with embedded DNNs.
In this study, we aim to bridge the gap by exploring extremely low-bit lightweight face detectors.
arXiv Detail & Related papers (2023-11-02T05:35:49Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Decisive Data using Multi-Modality Optical Sensors for Advanced Vehicular Systems [1.3315340349412819]
This paper focuses on various optical technologies for design and development of state-of-the-art out-cabin forward vision systems and in-cabin driver monitoring systems.
The optical sensors covered include Longwave Thermal Imaging (LWIR) cameras, Near Infrared (NIR) cameras, Neuromorphic/event cameras, Visible CMOS cameras, and Depth cameras.
arXiv Detail & Related papers (2023-07-25T16:03:47Z)
- DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- Meta-UDA: Unsupervised Domain Adaptive Thermal Object Detection using Meta-Learning [64.92447072894055]
Infrared (IR) cameras are robust under adverse illumination and lighting conditions.
We propose an algorithm-agnostic meta-learning framework to improve existing UDA methods.
We produce a state-of-the-art thermal detector for the KAIST and DSIAC datasets.
arXiv Detail & Related papers (2021-10-07T02:28:18Z)
- Learning, Computing, and Trustworthiness in Intelligent IoT Environments: Performance-Energy Tradeoffs [62.91362897985057]
An Intelligent IoT Environment (iIoTe) is comprised of heterogeneous devices that can collaboratively execute semi-autonomous IoT applications.
This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the tradeoff among resources, latency, privacy and energy consumption.
arXiv Detail & Related papers (2021-10-04T19:41:42Z)
- Energy Drain of the Object Detection Processing Pipeline for Mobile Devices: Analysis and Implications [77.00418462388525]
This paper presents the first detailed experimental study of a mobile augmented reality (AR) client's energy consumption and the detection latency of executing Convolutional Neural Networks (CNN) based object detection.
Our detailed measurements refine the energy analysis of mobile AR clients and reveal several interesting perspectives regarding the energy consumption of executing CNN-based object detection.
arXiv Detail & Related papers (2020-11-26T00:32:07Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences arising from its use.