Gesture Recognition for FMCW Radar on the Edge
- URL: http://arxiv.org/abs/2310.08876v2
- Date: Fri, 26 Jan 2024 04:17:09 GMT
- Title: Gesture Recognition for FMCW Radar on the Edge
- Authors: Maximilian Strobel, Stephan Schoenfeldt, Jonas Daugalas
- Abstract summary: We show that gestures can be characterized efficiently by a set of five features.
A recurrent neural network (RNN) based architecture exploits these features to jointly detect and classify five different gestures.
The proposed system recognizes gestures with an F1 score of 98.4% on our hold-out test dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a lightweight gesture recognition system based on 60
GHz frequency modulated continuous wave (FMCW) radar. We show that gestures can
be characterized efficiently by a set of five features, and propose a slim
radar processing algorithm to extract these features. In contrast to previous
approaches, we avoid heavy 2D processing, i.e., range-Doppler imaging, and
instead perform an early target detection; this allows us to port the system
to fully embedded platforms with tight constraints on memory, compute, and power
consumption. A recurrent neural network (RNN) based architecture exploits these
features to jointly detect and classify five different gestures. The proposed
system recognizes gestures with an F1 score of 98.4% on our hold-out test
dataset. It runs on an Arm Cortex-M4 microcontroller, requiring less than 280 kB
of flash memory and 120 kB of RAM while consuming 75 mW of power.
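As a rough illustration of the pipeline sketched in the abstract, the following Python snippet shows a per-frame feature extractor that runs a 1D range FFT and a simple early target detection (instead of building a full range-Doppler map), followed by a small GRU that jointly detects and classifies gestures from the resulting five-value feature sequence. The feature choice (range bin, radial velocity, azimuth, elevation, magnitude), the thresholds, and all layer sizes are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal, hypothetical sketch of the described pipeline (illustrative only):
# a 1D range FFT with early target detection replaces full range-Doppler
# imaging, and a small GRU jointly detects and classifies gestures from a
# five-value feature vector per frame.
import numpy as np
import torch
import torch.nn as nn

N_FEATURES = 5      # assumed: range bin, radial velocity, azimuth, elevation, magnitude
N_CLASSES = 5 + 1   # five gestures plus a "no gesture" background class

def extract_frame_features(frame_iq: np.ndarray, threshold: float = 1e-3) -> np.ndarray:
    """Toy feature extractor for one raw FMCW frame of shape
    (n_antennas, n_chirps, n_samples); returns a length-5 vector."""
    window = np.hanning(frame_iq.shape[-1])
    range_fft = np.fft.fft(frame_iq * window, axis=-1)
    profile = np.abs(range_fft).mean(axis=(0, 1))        # averaged range profile
    peak = int(np.argmax(profile))                       # early target detection
    if profile[peak] < threshold:                        # no target in this frame
        return np.zeros(N_FEATURES, dtype=np.float32)
    bin_series = range_fft[:, :, peak]                   # (n_antennas, n_chirps)
    # Doppler proxy: mean chirp-to-chirp phase progression at the target bin
    doppler = np.angle(np.mean(bin_series[:, 1:] * np.conj(bin_series[:, :-1])))
    # Angle proxies: phase differences between antenna pairs (placeholder pairing)
    azimuth = np.angle(np.mean(bin_series[0] * np.conj(bin_series[1])))
    elevation = np.angle(np.mean(bin_series[0] * np.conj(bin_series[2])))
    return np.array([peak, doppler, azimuth, elevation, profile[peak]], dtype=np.float32)

class GestureRNN(nn.Module):
    """GRU emitting a per-frame class distribution, so detection (background
    vs. gesture) and classification of the five gestures happen jointly."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, 5)
        out, _ = self.gru(x)
        return self.head(out)                             # (batch, time, 6)

# Example with simulated raw data: 32 frames, 3 RX antennas, 64 chirps, 128 samples
frames = np.random.randn(32, 3, 64, 128) + 1j * np.random.randn(32, 3, 64, 128)
feats = np.stack([extract_frame_features(f) for f in frames])
logits = GestureRNN()(torch.from_numpy(feats).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 32, 6])
```

Collapsing each frame to a handful of scalars before the recurrent layer keeps the per-frame working set small, which is consistent with the abstract's claim that the system fits the flash, RAM, and power budget of a Cortex-M4 class device.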
Related papers
- FERT: Real-Time Facial Expression Recognition with Short-Range FMCW Radar [0.0]
This study proposes a novel approach for real-time facial expression recognition utilizing short-range Frequency-Modulated Continuous-Wave (FMCW) radar equipped with one transmit (Tx) and three receive (Rx) antennas.
The proposed solution operates in real-time in a person-independent manner, which shows the potential use of low-cost FMCW radars for effective facial expression recognition in various applications.
arXiv Detail & Related papers (2024-11-18T14:48:06Z)
- Q-Segment: Segmenting Images In-Sensor for Vessel-Based Medical Diagnosis [13.018482089796159]
We present "Q-Segment", a quantized real-time segmentation algorithm, and conduct a comprehensive evaluation on a low-power edge vision platform with the Sony IMX500.
Q-Segment achieves an ultra-low in-sensor inference time of only 0.23 ms and a power consumption of only 72 mW.
This research contributes valuable insights into edge-based image segmentation, laying the foundation for efficient algorithms tailored to low-power environments.
arXiv Detail & Related papers (2023-12-15T15:01:41Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Ultra-low Power Deep Learning-based Monocular Relative Localization Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically-integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm-sized target nano-drone using only low-resolution monochrome images, at distances of up to 2 m.
arXiv Detail & Related papers (2023-03-03T14:14:08Z)
- Hand gesture recognition using 802.11ad mmWave sensor in the mobile device [2.5476515662939563]
We explore the feasibility of AI-assisted hand-gesture recognition using 802.11ad 60 GHz (mmWave) technology in smartphones.
We built a prototype system in which radar sensing and the communication waveform coexist via time-division duplexing (TDD).
It can gather sensing data and predict gestures within 100 milliseconds.
arXiv Detail & Related papers (2022-11-14T03:36:17Z)
- Cross-modal Learning of Graph Representations using Radar Point Cloud for Long-Range Gesture Recognition [6.9545038359818445]
We propose a novel architecture for a long-range (1m - 2m) gesture recognition solution.
We use a point cloud-based cross-learning approach from camera point cloud to 60-GHz FMCW radar point cloud.
In the experimental results section, we demonstrate our model's overall accuracy of 98.4% for five gestures and its generalization capability.
arXiv Detail & Related papers (2022-03-31T14:34:36Z)
- TinyRadarNN: Combining Spatial and Temporal Convolutional Neural Networks for Embedded Gesture Recognition with Short Range Radars [13.266626571886354]
This work proposes a low-power high-accuracy embedded hand-gesture recognition algorithm targeting battery-operated wearable devices.
A 2D Convolutional Neural Network (CNN) using range frequency Doppler features is combined with a Temporal Convolutional Neural Network (TCN) for time sequence prediction.
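For comparison with the feature-based RNN above, here is a minimal, hypothetical sketch of the CNN-plus-TCN combination described in this entry: a small 2D CNN embeds each range-Doppler frame, and a stack of dilated 1D convolutions models the sequence of embeddings. Layer sizes and the number of classes are placeholders, not TinyRadarNN's actual configuration.

```python
# Hypothetical sketch of a per-frame 2D CNN followed by a temporal
# convolutional network (TCN); sizes are placeholders, not TinyRadarNN's.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Embeds one range-Doppler map (1 x H x W) into a feature vector."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, embed_dim),
        )

    def forward(self, x):                      # x: (batch*time, 1, H, W)
        return self.net(x)

class TCNHead(nn.Module):
    """Dilated 1D convolutions over the sequence of frame embeddings."""
    def __init__(self, embed_dim: int = 32, n_classes: int = 8):
        super().__init__()
        layers = []
        for dilation in (1, 2, 4):             # growing temporal receptive field
            layers += [nn.Conv1d(embed_dim, embed_dim, kernel_size=3,
                                 padding=dilation, dilation=dilation), nn.ReLU()]
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, x):                      # x: (batch, time, embed_dim)
        y = self.tcn(x.transpose(1, 2)).transpose(1, 2)
        return self.head(y)                    # per-frame class logits

# Usage: 16 range-Doppler frames of size 32x32
frames = torch.randn(1, 16, 1, 32, 32)
b, t = frames.shape[:2]
emb = FrameEncoder()(frames.flatten(0, 1)).view(b, t, -1)
print(TCNHead()(emb).shape)                    # torch.Size([1, 16, 8])
```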
arXiv Detail & Related papers (2020-06-25T15:23:21Z)
- Near-chip Dynamic Vision Filtering for Low-Bandwidth Pedestrian Detection [99.94079901071163]
This paper presents a novel end-to-end system for pedestrian detection using Dynamic Vision Sensors (DVSs).
We target applications where multiple sensors transmit data to a local processing unit, which executes a detection algorithm.
Our detector is able to perform a detection every 450 ms, with an overall testing F1 score of 83%.
arXiv Detail & Related papers (2020-04-03T17:36:26Z)
- ASFD: Automatic and Scalable Face Detector [129.82350993748258]
We propose a novel Automatic and Scalable Face Detector (ASFD).
ASFD is based on a combination of neural architecture search techniques as well as a new loss design.
Our ASFD-D6 outperforms the prior strong competitors, and our lightweight ASFD-D0 runs at more than 120 FPS with Mobilenet for VGA-resolution images.
arXiv Detail & Related papers (2020-03-25T06:00:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.