Object Motion Sensitivity: A Bio-inspired Solution to the Ego-motion
Problem for Event-based Cameras
- URL: http://arxiv.org/abs/2303.14114v3
- Date: Fri, 14 Apr 2023 21:43:46 GMT
- Authors: Shay Snyder (1), Hunter Thompson (2), Md Abdullah-Al Kaiser (3),
Gregory Schwartz (4), Akhilesh Jaiswal (3), and Maryam Parsa (1) ((1) George
Mason University, (2) Georgia Institute of Technology, (3) University of
Southern California, (4) Northwestern University)
- Abstract summary: We highlight the capability of the second generation of neuromorphic image sensors, Integrated Retinal Functionality in CMOS Image Sensors (IRIS).
IRIS aims to mimic full retinal computations, from the photoreceptors to the output of the retina, for targeted feature extraction.
Our results show that OMS can accomplish standard computer vision tasks with similar efficiency to conventional RGB and DVS solutions but offers drastic bandwidth reduction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic (event-based) image sensors draw inspiration from the
human retina to create an electronic device that processes visual stimuli in a
way that closely resembles its biological counterpart. These sensors process
information significantly differently from traditional RGB sensors.
Specifically, the sensory information generated by event-based image sensors
is orders of magnitude sparser than that of RGB sensors. The first
generation of neuromorphic image sensors, Dynamic Vision Sensor (DVS), are
inspired by the computations confined to the photoreceptors and the first
retinal synapse. In this work, we highlight the capability of the second
generation of neuromorphic image sensors, Integrated Retinal Functionality in
CMOS Image Sensors (IRIS), which aims to mimic full retinal computations from
photoreceptors to output of the retina (retinal ganglion cells) for targeted
feature-extraction. The feature of choice in this work is Object Motion
Sensitivity (OMS) that is processed locally in the IRIS sensor. Our results
show that OMS can accomplish standard computer vision tasks with similar
efficiency to conventional RGB and DVS solutions but offers drastic bandwidth
reduction. This cuts the wireless and computing power budgets and opens up vast
opportunities in high-speed, robust, energy-efficient, and low-bandwidth
real-time decision making.
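The mechanism the abstract describes, suppressing events caused by ego-motion while keeping events caused by independently moving objects, can be sketched as a center-surround comparison on a binarized event frame: a pixel is flagged only when its local event density differs from the wider background density. This is an illustrative approximation of the OMS principle, not the IRIS in-pixel circuit; the window sizes, the threshold, and the `box_mean` helper are assumptions made for the sketch.

```python
import numpy as np

def box_mean(a, k):
    # Mean over a k x k window (k odd) using 2-D cumulative sums, edge-padded.
    p = k // 2
    ap = np.pad(a, p, mode="edge")
    c = np.cumsum(np.cumsum(ap, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/col so each window sum is a 4-term difference
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def oms_mask(events, center_k=3, surround_k=15, thresh=0.2):
    # events: H x W array of 0/1 event occurrences for one accumulation window.
    # Under pure ego-motion, events fire nearly everywhere, so the center and
    # surround densities match and the mask stays empty; a locally moving
    # object raises the center density above the background and is kept.
    center = box_mean(events, center_k)
    surround = box_mean(events, surround_k)
    return (center - surround) > thresh
```

On a frame that is all events (global ego-motion) the mask comes out empty, while a small isolated blob of events survives; the bandwidth saving the abstract reports comes from transmitting only the surviving pixels.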
Related papers
- Retina-inspired Object Motion Segmentation
Dynamic Vision Sensors (DVS) have emerged as a revolutionary technology with a high temporal resolution that far surpasses RGB cameras.
This paper introduces a bio-inspired computer vision method that dramatically reduces the number of parameters by a factor of 1000 compared to prior works.
arXiv Detail & Related papers (2024-08-18T12:28:26Z)
- Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation
Evetac is an event-based optical tactile sensor.
We develop touch processing algorithms to process its measurements online at 1000 Hz.
Evetac's output and the marker tracking provide meaningful features for learning data-driven slip detection and prediction models.
arXiv Detail & Related papers (2023-12-02T22:01:49Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors
Conventional image sensors digitize high-resolution images at fast frame rates, producing a large amount of data that needs to be transmitted off the sensor for further processing.
We develop an efficient recurrent neural network architecture, PixelRNN, that encodes spatio-temporal features on the sensor using purely binary operations.
PixelRNN reduces the amount of data to be transmitted off the sensor by a factor of 64x compared to conventional systems while offering competitive accuracy for hand gesture recognition and lip reading tasks.
arXiv Detail & Related papers (2023-04-11T18:16:47Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are now widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- DensePose From WiFi
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- Analyzing General-Purpose Deep-Learning Detection and Segmentation Models with Images from a Lidar as a Camera Sensor
This work explores the potential of general-purpose DL perception algorithms for processing image-like outputs of advanced lidar sensors.
Rather than processing the three-dimensional point cloud data, this is, to the best of our knowledge, the first work to focus on low-resolution images with a 360° field of view.
We show that with adequate preprocessing, general-purpose DL models can process these images, opening the door to their use under varied environmental conditions.
arXiv Detail & Related papers (2022-03-08T13:14:43Z)
- Camera-Based Physiological Sensing: Challenges and Future Directions
We identify four research challenges for the field of camera-based physiological sensing and the broader AI-driven healthcare community.
We believe solving these challenges will help deliver accurate, equitable and generalizable AI systems for healthcare.
arXiv Detail & Related papers (2021-10-26T02:30:18Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Learning Camera Miscalibration Detection
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor
Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.