Complementary Visual Neuronal Systems Model for Collision Sensing
- URL: http://arxiv.org/abs/2006.06431v1
- Date: Thu, 11 Jun 2020 13:40:59 GMT
- Title: Complementary Visual Neuronal Systems Model for Collision Sensing
- Authors: Qinbing Fu and Shigang Yue
- Abstract summary: Inspired by insects' visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing.
Two categories of wide-field motion-sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively.
We introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD-2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L), specialising in fast collision perception.
- Score: 6.670414650224423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by insects' visual brains, this paper presents original modelling of
a complementary visual neuronal systems model for real-time and robust
collision sensing. Two categories of wide-field motion sensitive neurons, i.e.,
the lobula giant movement detectors (LGMDs) in locusts and the lobula plate
tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs
have specific selectivity to objects approaching in depth that threaten
collision, whilst the LPTCs are only sensitive to objects translating in
horizontal and vertical directions. Though each has been modelled and applied
in various visual scenes including robot scenarios, little has been done to
investigate their complementary functionality and selectivity when they
function together. To fill this gap, we introduce a hybrid model
combining two LGMDs (LGMD-1 and LGMD-2) with horizontally (rightward and
leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision
perception. With coordination and competition between the differently activated
neurons, the proximity feature of frontally approaching stimuli can be sharpened
considerably by suppressing responses to translating and receding motions. The
proposed method has been implemented in ground micro-mobile robots as embedded
systems. Multi-robot experiments have demonstrated the effectiveness and
robustness of the proposed model for frontal collision sensing, outperforming
previous single-type-neuron computation methods under translating interference.
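To make the coordination-and-competition scheme concrete, below is a minimal sketch of how per-frame activations of the four model neurons could be fused into a single collision alarm. The function names, weights and thresholds are illustrative assumptions, not the authors' implementation; the actual model operates on membrane potentials computed from image motion.

```python
from dataclasses import dataclass

@dataclass
class NeuronActivations:
    """Hypothetical per-frame neuron outputs, normalised to [0, 1]."""
    lgmd1: float   # broadly responsive to approaching edges
    lgmd2: float   # selectively responsive to approaching dark objects
    lptc_r: float  # rightward translation
    lptc_l: float  # leftward translation

def collision_alarm(act: NeuronActivations,
                    loom_threshold: float = 0.7,
                    translation_gain: float = 0.8) -> bool:
    """Illustrative coordination/competition rule (assumed, not from the paper).

    Looming evidence from the two LGMDs competes with translation evidence
    from the two LPTCs: strong horizontal translation suppresses the
    proximity signal so that lateral or receding motion does not trigger
    a false collision alarm.
    """
    looming = 0.5 * (act.lgmd1 + act.lgmd2)
    translation = max(act.lptc_r, act.lptc_l)
    proximity = looming - translation_gain * translation
    return proximity > loom_threshold

# Example: frontal approach with little lateral motion fires the alarm.
frame = NeuronActivations(lgmd1=0.9, lgmd2=0.85, lptc_r=0.1, lptc_l=0.05)
print(collision_alarm(frame))  # True
```

On an embedded robot such a rule would run once per camera frame, after the four neuron models have processed the latest image.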
Related papers
- DA-Flow: Dual Attention Normalizing Flow for Skeleton-based Video Anomaly Detection [52.74152717667157]
We propose a lightweight module called the Dual Attention Module (DAM) for capturing cross-dimension interaction relationships in spatio-temporal skeletal data.
It employs a frame attention mechanism to identify the most significant frames and a skeleton attention mechanism to capture broader relationships across fixed partitions with minimal parameters and FLOPs.
arXiv Detail & Related papers (2024-06-05T06:18:03Z) - Integrating GNN and Neural ODEs for Estimating Two-Body Interactions in Mixed-Species Collective Motion [0.0]
We present a novel deep learning framework to estimate the underlying equations of motion from observed trajectories.
Our framework integrates graph neural networks with neural differential equations, enabling effective prediction of two-body interactions.
arXiv Detail & Related papers (2024-05-26T09:47:17Z) - Neural-Logic Human-Object Interaction Detection [67.4993347702353]
We present LogicHOI, a new HOI detector that leverages neural-logic reasoning and Transformer to infer feasible interactions between entities.
Specifically, we modify the self-attention mechanism in the vanilla Transformer, enabling it to reason over the ⟨human, action, object⟩ triplet and constitute novel interactions.
We formulate these two properties in first-order logic and ground them into continuous space to constrain the learning process of our approach, leading to improved performance and zero-shot generalization capabilities.
arXiv Detail & Related papers (2023-11-16T11:47:53Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are being widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - OppLoD: the Opponency based Looming Detector, Model Extension of Looming
Sensitivity from LGMD to LPLC2 [8.055723903012511]
Looming detection plays an important role in insect collision prevention systems.
A critical visual motion cue, radial-opponent-motion (ROM), has long been neglected because it is easily confused with expansion.
Recent research on the discovery of LPLC2, a ROM-sensitive neuron in Drosophila, has revealed its ultra-selectivity because it only responds to stimuli with focal, outward movement.
In this paper, we investigate the potential to extend an image velocity-based looming detector, the lobula giant movement detector (LGMD), with ROM sensitivity (an illustrative sketch of this radial cue appears after this list).
arXiv Detail & Related papers (2023-02-10T03:53:12Z) - Drone Flocking Optimization using NSGA-II and Principal Component
Analysis [0.8495139954994114]
Individual agents in natural systems like flocks of birds or schools of fish display a remarkable ability to coordinate and communicate in local groups.
Emulating such natural systems with drone swarms to solve problems in defence, agriculture, industrial automation and humanitarian relief is an emerging technology.
Optimized flocking of drones in a confined environment with multiple conflicting objectives is proposed.
arXiv Detail & Related papers (2022-05-01T09:24:01Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work as expected.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Neural Monocular 3D Human Motion Capture with Physical Awareness [76.55971509794598]
We present a new trainable system for physically plausible markerless 3D human motion capture.
Unlike most neural methods for human motion capture, our approach is aware of physical and environmental constraints.
It produces smooth and physically principled 3D motions at an interactive frame rate in a wide variety of challenging scenes.
arXiv Detail & Related papers (2021-05-03T17:57:07Z) - Bidirectional Interaction between Visual and Motor Generative Models
using Predictive Coding and Active Inference [68.8204255655161]
We propose a neural architecture comprising a generative model for sensory prediction, and a distinct generative model for motor trajectories.
We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories.
arXiv Detail & Related papers (2021-04-19T09:41:31Z) - A Bioinspired Approach-Sensitive Neural Network for Collision Detection
in Cluttered and Dynamic Backgrounds [19.93930316898735]
Rapid, accurate and robust detection of looming objects in moving backgrounds is a significant and challenging problem for robotic visual systems.
Inspired by the neural circuit of elementary vision in the mammalian retina, this paper proposes a bioinspired approach-sensitive neural network (AS).
The proposed model not only detects collisions accurately and robustly in cluttered and dynamic backgrounds but also extracts additional collision information, such as position and direction, to guide rapid decision making.
arXiv Detail & Related papers (2021-03-01T09:16:18Z) - Enhancing LGMD's Looming Selectivity for UAV with Spatial-temporal
Distributed Presynaptic Connections [5.023891066282676]
In nature, flying insects with simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments.
As a flying insect's visual neuron, the LGMD is considered an ideal basis for building a UAV collision detection system.
Existing LGMD models cannot distinguish looming clearly from other visual cues such as complex background movements.
We propose a new model implementing distributed spatial-temporal synaptic interactions.
arXiv Detail & Related papers (2020-05-09T09:15:02Z)
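The OppLoD entry above hinges on separating focal outward (radial-opponent) motion from whole-field translation. The sketch below scores the mean outward radial component of a dense optical-flow field around a candidate focus; it is only an illustrative heuristic with assumed array shapes and names, not the LPLC2, LGMD or OppLoD model.

```python
import numpy as np

def outward_radial_score(flow: np.ndarray, focus: tuple[float, float]) -> float:
    """Mean outward radial component of a dense flow field around a focus.

    flow  : (H, W, 2) array of per-pixel motion vectors (dx, dy).
    focus : (x, y) candidate focus of expansion in pixel coordinates.

    Focal outward motion yields radial components that agree in sign, so the
    score is clearly positive; for uniform translation the radial components
    on opposite sides of the focus largely cancel and the score stays near zero.
    """
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rx, ry = xs - focus[0], ys - focus[1]
    norm = np.sqrt(rx**2 + ry**2) + 1e-6   # avoid division by zero at the focus
    radial = (flow[..., 0] * rx + flow[..., 1] * ry) / norm
    return float(radial.mean())

# Example: synthetic expansion about the image centre vs. pure translation.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
expansion = np.stack([(xs - w / 2) * 0.05, (ys - h / 2) * 0.05], axis=-1)
translation = np.full((h, w, 2), [1.0, 0.0])
print(outward_radial_score(expansion, (w / 2, h / 2)))    # clearly positive
print(outward_radial_score(translation, (w / 2, h / 2)))  # near zero
```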
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.