Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras
- URL: http://arxiv.org/abs/2308.07802v1
- Date: Tue, 15 Aug 2023 14:27:46 GMT
- Title: Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras
- Authors: Paul Kielty, Cian Ryan, Mehdi Sefidgar Dilmaghani, Waseem Shariff, Joe Lemley, Peter Corcoran
- Abstract summary: This research provides a proof of concept to expand event-based DMS techniques to include seatbelt state detection.
In a binary classification task, the fastened/unfastened frames were identified with an F1 score of 0.989 and 0.944 on the simulated and real test sets respectively.
- Score: 0.932065750652415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic vision sensors, or event cameras, differ from conventional
cameras in that they do not capture images at a specified rate. Instead, they
asynchronously log local brightness changes at each pixel. As a result, event
cameras only record changes in a given scene, and do so with very high temporal
resolution, high dynamic range, and low power requirements. Recent research has
demonstrated how these characteristics make event cameras extremely practical
sensors in driver monitoring systems (DMS), enabling the tracking of high-speed
eye motion and blinks. This research provides a proof of concept to expand
event-based DMS techniques to include seatbelt state detection. Using an event
simulator, a dataset of 108,691 synthetic neuromorphic frames of car occupants
was generated from a near-infrared (NIR) dataset, and split into training,
validation, and test sets for a seatbelt state detection algorithm based on a
recurrent convolutional neural network (CNN). In addition, a smaller set of
real event data was collected and reserved for testing. In a binary
classification task, the fastened/unfastened frames were identified with an F1
score of 0.989 and 0.944 on the simulated and real test sets respectively. When
the problem extended to also classify the action of fastening/unfastening the
seatbelt, respective F1 scores of 0.964 and 0.846 were achieved.
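The abstract does not give the architecture in detail, so the following is only a minimal sketch of a recurrent CNN of the kind described: a small convolutional backbone extracts per-frame features and a GRU aggregates them over time before a two-class fastened/unfastened head. All layer sizes and the input resolution are illustrative assumptions, not the authors' configuration.
```python
# Minimal sketch of a recurrent CNN for seatbelt state classification on
# event frames. Layer sizes and the two-class head are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class SeatbeltRCNN(nn.Module):
    def __init__(self, num_classes: int = 2, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(              # per-frame spatial features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B*T, 32)
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)  # temporal aggregation
        self.head = nn.Linear(hidden, num_classes)       # fastened/unfastened

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, 1, H, W) sequence of single-channel event frames
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out[:, -1])  # classify from the last time step

logits = SeatbeltRCNN()(torch.randn(4, 10, 1, 96, 96))  # -> (4, 2)
```
The decision is read from the final time step here; the authors' choice of recurrence and readout may differ.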
Related papers
- BlinkTrack: Feature Tracking over 100 FPS via Events and Images [50.98675227695814]
We propose a novel framework, BlinkTrack, which integrates event data with RGB images for high-frequency feature tracking.
Our method extends the traditional Kalman filter into a learning-based framework, utilizing differentiable Kalman filters in both event and image branches (a minimal predict/update sketch follows this entry).
Experimental results indicate that BlinkTrack significantly outperforms existing event-based methods.
arXiv Detail & Related papers (2024-09-26T15:54:18Z)
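BlinkTrack's learned formulation is not reproduced here; as a generic illustration of a differentiable Kalman filter, the sketch below writes the standard predict/update step in PyTorch so that the noise covariances could in principle be learned end-to-end. The constant-velocity model and all values are assumptions.
```python
# Hedged sketch of a differentiable Kalman filter step for 2D feature
# tracking (constant-velocity model). Written in PyTorch so gradients flow
# through the update; values are illustrative, not BlinkTrack's.
import torch

def kalman_step(x, P, z, F, H, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the measurement z (e.g., a detected feature position).
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ torch.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (torch.eye(x.shape[0]) - K @ H) @ P_pred
    return x_new, P_new

# State [px, py, vx, vy]; measurements are positions only.
F = torch.eye(4); F[0, 2] = F[1, 3] = 1.0      # constant-velocity transition
H = torch.zeros(2, 4); H[0, 0] = H[1, 1] = 1.0
Q = 1e-2 * torch.eye(4)                        # process noise (could be learned)
R = 1e-1 * torch.eye(2)                        # measurement noise (could be learned)
x, P = torch.zeros(4, 1), torch.eye(4)
x, P = kalman_step(x, P, torch.tensor([[1.0], [2.0]]), F, H, Q, R)
```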
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations (a generic contrastive-loss sketch follows this entry).
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
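The exact form of $\mathcal{L}_{EC}$ is not given in the summary above; as a generic stand-in, the sketch below implements an InfoNCE-style contrastive loss over embeddings of two augmented views of the same event clip. The temperature and embedding size are illustrative.
```python
# Generic InfoNCE-style contrastive loss: matched rows of z1 and z2 are
# embeddings of two augmentations of the same clip (positives on the
# diagonal); all other pairs serve as negatives.
import torch
import torch.nn.functional as F

def event_contrastive_loss(z1, z2, temperature=0.1):
    # z1, z2: (B, D) clip embeddings from two augmented views.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature       # (B, B) cosine similarities
    targets = torch.arange(z1.shape[0])    # positives on the diagonal
    return F.cross_entropy(logits, targets)

loss = event_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```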
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture the brightness change of every pixel in an asynchronous manner.
Event streams are divided into grids in the x-y-t coordinates for both positive and negative polarity, producing a set of pillars as a 3D tensor representation (a minimal voxelization sketch follows this entry).
Long memory is encoded in the hidden state of adaptive convLSTMs while short memory is modeled by computing spatial-temporal correlation between event pillars.
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
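The summary above describes binning event streams into x-y-t grids per polarity; a minimal voxelization sketch, with illustrative grid sizes and no claim to match the paper's learnable representation, might look like this:
```python
# Bin raw events (x, y, t, polarity) into a dense (2, T, H, W) count
# tensor: one x-y-t grid per polarity. Grid sizes are illustrative.
import numpy as np

def events_to_pillars(x, y, t, p, H=64, W=64, T=8):
    # x, y: pixel coords; t: timestamps; p: polarity in {0, 1}.
    grid = np.zeros((2, T, H, W), dtype=np.float32)
    span = t.max() - t.min() + 1e-9
    tb = ((t - t.min()) / span * (T - 1)).astype(int)  # temporal bin index
    np.add.at(grid, (p, tb, y, x), 1.0)                # accumulate counts
    return grid

rng = np.random.default_rng(0)
n = 1000
grid = events_to_pillars(rng.integers(0, 64, n), rng.integers(0, 64, n),
                         np.sort(rng.random(n)), rng.integers(0, 2, n))
```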
- Real-Time Driver Monitoring Systems through Modality and View Analysis [28.18784311981388]
Driver distractions are known to be the dominant cause of road accidents.
State-of-the-art methods prioritize accuracy while ignoring latency.
We propose low-latency detection models that deliberately neglect the temporal relation between video frames (a minimal frame-wise sketch follows this entry).
arXiv Detail & Related papers (2022-10-17T21:22:41Z)
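The frame-wise idea above, classifying each frame independently to cut latency, can be sketched with any lightweight backbone; MobileNetV3-Small and the 10-class head below are illustrative stand-ins, not the authors' models.
```python
# Frame-wise sketch: classify each frame independently (no recurrent or
# temporal module), trading temporal context for per-frame latency.
import time
import torch
from torchvision.models import mobilenet_v3_small

model = mobilenet_v3_small(weights=None, num_classes=10).eval()
frame = torch.randn(1, 3, 224, 224)          # one video frame at a time

with torch.no_grad():
    start = time.perf_counter()
    pred = model(frame).argmax(1)            # per-frame prediction
    print(f"latency: {(time.perf_counter() - start) * 1e3:.1f} ms")
```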
- Traffic Sign Detection With Event Cameras and DCNN [0.0]
Event cameras (DVS) have been used in vision systems as an alternative or supplement to traditional cameras.
In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection.
arXiv Detail & Related papers (2022-07-27T08:01:54Z)
- Moving Object Detection for Event-based vision using Graph Spectral Clustering [6.354824287948164]
Moving object detection has been a central topic of discussion in computer vision for its wide range of applications.
We present an unsupervised Graph Spectral Clustering technique for Moving Object Detection in Event-based data (a minimal clustering sketch follows this entry).
We additionally show how the optimum number of moving objects can be automatically determined.
arXiv Detail & Related papers (2021-09-30T10:19:22Z)
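The paper's graph construction and automatic model selection are not reproduced here; as a minimal illustration of spectral clustering on event data, the sketch below clusters synthetic (x, y, t) event points with a k-nearest-neighbor affinity.
```python
# Spectral clustering on event points: each event is a node featurized
# by (x, y, t); a k-NN graph over the points is clustered spectrally.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Two synthetic "moving objects": separated clusters in (x, y, t) space.
a = rng.normal([10, 10, 0.2], 1.0, (200, 3))
b = rng.normal([40, 30, 0.7], 1.0, (200, 3))
events = np.vstack([a, b])

labels = SpectralClustering(
    n_clusters=2, affinity="nearest_neighbors", n_neighbors=10,
    random_state=0,
).fit_predict(events)       # one object id per event
```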
- Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per-pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- An Efficient Approach for Anomaly Detection in Traffic Videos [30.83924581439373]
We propose an efficient approach for a video anomaly detection system capable of running on edge devices.
The proposed approach comprises a pre-processing module that detects changes in the scene and removes the corrupted frames.
We also propose a sequential change detection algorithm that can quickly adapt to a new scene and detect changes in the similarity statistic (a generic CUSUM sketch follows this entry).
arXiv Detail & Related papers (2021-04-20T04:43:18Z)
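The paper's similarity statistic is not specified in the summary above; as a generic example of sequential change detection, the sketch below runs a one-sided CUSUM over a stream of per-frame statistics. Drift and threshold values are illustrative.
```python
# One-sided CUSUM: flags the first frame where the cumulative positive
# deviation of the statistic from a slow running baseline exceeds a
# threshold. Drift and threshold are illustrative tuning knobs.
import numpy as np

def cusum(stream, drift=0.5, threshold=5.0):
    s = 0.0
    baseline = stream[0]
    for i, v in enumerate(stream):
        baseline = 0.99 * baseline + 0.01 * v   # slow running mean
        s = max(0.0, s + (v - baseline) - drift)
        if s > threshold:
            return i        # change detected at frame i
    return None

stats = np.concatenate([np.zeros(100), 2.0 * np.ones(50)])  # step change
print(cusum(stats))  # detects shortly after frame 100
```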
- Real-Time Face & Eye Tracking and Blink Detection using Event Cameras [3.842206880015537]
Event cameras are emerging neuromorphic vision sensors that capture local light intensity changes at each pixel, generating a stream of asynchronous events.
Driver monitoring systems (DMS) are in-cabin safety systems designed to sense and understand a driver's physical and cognitive state.
This paper proposes a novel method to simultaneously detect and track faces and eyes for driver monitoring (a minimal event-accumulation sketch follows this entry).
arXiv Detail & Related papers (2020-10-16T10:02:41Z)
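A common preprocessing step behind event-based face and eye pipelines, and behind event-frame approaches like the seatbelt work above, is accumulating the asynchronous stream into fixed-interval frames. The sketch below uses polarity-signed counts; the window length and sensor resolution are assumptions, not the authors' pipeline.
```python
# Accumulate an asynchronous event stream into fixed-interval frames of
# polarity-signed counts, suitable as CNN input. Window length is an
# illustrative choice.
import numpy as np

def accumulate_events(x, y, t, p, H, W, window=10e-3):
    # x, y: pixel coords; t: sorted timestamps (s); p: polarity in {-1, +1}.
    frames = []
    t0, t_end = t[0], t[-1]
    while t0 < t_end:
        m = (t >= t0) & (t < t0 + window)
        frame = np.zeros((H, W), dtype=np.float32)
        np.add.at(frame, (y[m], x[m]), p[m].astype(np.float32))
        frames.append(frame)
        t0 += window
    return np.stack(frames)   # (N, H, W), one frame per time window

# e.g., frames = accumulate_events(x, y, t, p, H=260, W=346)  # DVS346-sized sensor
```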
This list is automatically generated from the titles and abstracts of the papers on this site.