Retina-inspired Object Motion Segmentation
- URL: http://arxiv.org/abs/2408.09454v1
- Date: Sun, 18 Aug 2024 12:28:26 GMT
- Title: Retina-inspired Object Motion Segmentation
- Authors: Victoria Clerico, Shay Snyder, Arya Lohia, Md Abdullah-Al Kaiser, Gregory Schwartz, Akhilesh Jaiswal, Maryam Parsa
- Abstract summary: Dynamic Vision Sensors (DVS) have emerged as a revolutionary technology with a high temporal resolution that far surpasses RGB cameras.
This paper introduces a bio-inspired computer vision method that dramatically reduces the number of parameters by a factor of 1000 compared to prior works.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic Vision Sensors (DVS) have emerged as a revolutionary technology with a high temporal resolution that far surpasses RGB cameras. DVS technology draws biological inspiration from photoreceptors and the initial retinal synapse. Our research showcases the potential of additional retinal functionalities to extract visual features. We provide a domain-agnostic and efficient algorithm for ego-motion compensation based on Object Motion Sensitivity (OMS), one of the multiple robust features computed within the mammalian retina. We develop a framework based on experimental neuroscience that translates OMS' biological circuitry to a low-overhead algorithm. OMS processes DVS data from dynamic scenes to perform pixel-wise object motion segmentation. Using a real and a synthetic dataset, we highlight OMS' ability to differentiate object motion from ego-motion, bypassing the need for deep networks. This paper introduces a bio-inspired computer vision method that dramatically reduces the number of parameters by a factor of 1000 compared to prior works. Our work paves the way for robust, high-speed, and low-bandwidth decision-making for in-sensor computations.
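The abstract describes OMS as a retina-inspired center-surround computation that separates object motion from ego-motion in DVS event data. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch in Python/NumPy of the general center-surround idea, assuming a binary event frame as input — a pixel is flagged as object motion when its local (center) event activity differs sufficiently from the activity of a wider surround, so that globally coherent ego-motion events cancel out. The window sizes and threshold are hypothetical choices for illustration.

```python
import numpy as np

def oms_segmentation(event_frame, center=1, surround=4, threshold=0.3):
    """Toy OMS-style segmentation (illustrative, not the paper's method).

    A pixel is flagged as object motion when the mean event activity in
    its small center window exceeds the mean activity of its larger
    surround window by more than `threshold`. Globally coherent activity
    (ego-motion) drives center and surround equally and is suppressed.
    """
    h, w = event_frame.shape
    pad = surround
    padded = np.pad(event_frame.astype(float), pad)  # zero-pad borders
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            cy, cx = y + pad, x + pad
            # mean activity in the small center window
            c = padded[cy - center:cy + center + 1,
                       cx - center:cx + center + 1].mean()
            # mean activity in the larger surround window
            s = padded[cy - surround:cy + surround + 1,
                       cx - surround:cx + surround + 1].mean()
            out[y, x] = (c - s) > threshold
    return out
```

On a frame where events fill the whole field (pure ego-motion), center and surround activity match and nothing is flagged; a localized event cluster raises the center above its surround and is segmented as object motion.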
Related papers
- Hardware-Algorithm Re-engineering of Retinal Circuit for Intelligent Object Motion Segmentation [0.0]
We focus on a fundamental visual feature within the mammalian retina, Object Motion Sensitivity (OMS).
We present novel CMOS circuits that implement OMS functionality inside image sensors.
We verify the functionality and re-configurability of the proposed CMOS circuit designs through Cadence simulations in 180nm technology.
arXiv Detail & Related papers (2024-07-31T20:35:11Z)
- TSOM: Small Object Motion Detection Neural Network Inspired by Avian Visual Circuit [4.640328175695991]
The Retina-OT-Rt visual circuit is highly sensitive to capturing the motion information of small objects from high altitudes.
We propose a novel tectum small object motion detection neural network (TSOM).
The TSOM is biologically interpretable and effective in extracting reliable small object motion features from complex high-altitude backgrounds.
arXiv Detail & Related papers (2024-04-01T01:49:08Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, the random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Modelling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network [1.9458156037869137]
We propose an image-computable model of human motion perception by bridging the gap between biological and computer vision models.
This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system.
In silico neurophysiology reveals that our model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning.
arXiv Detail & Related papers (2023-05-16T04:16:07Z)
- Object Motion Sensitivity: A Bio-inspired Solution to the Ego-motion Problem for Event-based Cameras [0.0]
We highlight the capability of the second generation of neuromorphic image sensors, Integrated Retinal Functionality in CMOS Image Sensors (IRIS).
IRIS aims to mimic full retinal computations from photoreceptors to output of the retina for targeted feature-extraction.
Our results show that OMS can accomplish standard computer vision tasks with similar efficiency to conventional RGB and DVS solutions but offers drastic bandwidth reduction.
arXiv Detail & Related papers (2023-03-24T16:22:06Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Neuromorphic Computing and Sensing in Space [69.34740063574921]
Neuromorphic computer chips are designed to mimic the architecture of a biological brain.
The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications.
arXiv Detail & Related papers (2022-12-10T07:46:29Z)
- Differentiable Frequency-based Disentanglement for Aerial Video Action Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z)
- Visual Odometry with Neuromorphic Resonator Networks [9.903137966539898]
Visual Odometry (VO) is a method to estimate self-motion of a mobile robot using visual sensors.
Neuromorphic hardware offers low-power solutions to many vision and AI problems.
We present a modular neuromorphic algorithm that achieves state-of-the-art performance on two-dimensional VO tasks.
arXiv Detail & Related papers (2022-09-05T14:57:03Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.