Modelling Drosophila Motion Vision Pathways for Decoding the Direction
of Translating Objects Against Cluttered Moving Backgrounds
- URL: http://arxiv.org/abs/2007.00886v1
- Date: Thu, 2 Jul 2020 05:15:31 GMT
- Title: Modelling Drosophila Motion Vision Pathways for Decoding the Direction
of Translating Objects Against Cluttered Moving Backgrounds
- Authors: Qinbing Fu and Shigang Yue
- Abstract summary: This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on physiological research.
The proposed visual system model features bio-plausible ON and OFF pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems.
Experiments have verified the effectiveness of the proposed neural system model, and demonstrated its responsive preference to faster-moving, higher-contrast and larger-size targets.
- Score: 6.670414650224423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decoding the direction of translating objects in front of cluttered moving
backgrounds, accurately and efficiently, is still a challenging problem. In
nature, lightweight and low-powered flying insects apply motion vision to
detect a moving target in highly variable environments during flight, which are
excellent paradigms to learn motion perception strategies. This paper
investigates the fruit fly \textit{Drosophila} motion vision pathways and
presents computational modelling based on cutting-edge physiological
research. The proposed visual system model features bio-plausible ON and OFF
pathways, wide-field horizontal-sensitive (HS) and vertical-sensitive (VS)
systems. The main contributions of this research are on two aspects: 1) the
proposed model articulates the forming of both direction-selective (DS) and
direction-opponent (DO) responses, revealed as principal features of motion
perception neural circuits, in a feed-forward manner; 2) it also shows robust
direction selectivity to translating objects in front of cluttered moving
backgrounds, via the modelling of spatiotemporal dynamics including combination
of motion pre-filtering mechanisms and ensembles of local correlators inside
both the ON and OFF pathways, which works effectively to suppress irrelevant
background motion or distractors, and to improve the dynamic response.
Accordingly, the direction of translating objects is decoded as global
responses of both the HS and VS systems with positive or negative output
indicating preferred-direction (PD) or null-direction (ND) translation. The
experiments have verified the effectiveness of the proposed neural system
model, and demonstrated its responsive preference to faster-moving,
higher-contrast and larger-size targets embedded in cluttered moving
backgrounds.
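The abstract describes a concrete processing pipeline: luminance change is split into ON and OFF channels, delayed and neighbouring signals are correlated inside each channel to form direction-selective local detectors, the two correlation arms are subtracted to give direction opponency, and the local outputs are pooled into wide-field HS and VS responses whose sign indicates preferred-direction (positive) or null-direction (negative) translation. The Python sketch below is only a minimal illustration of that general Reichardt-correlator-style scheme under assumptions made here; it is not the authors' model, it omits the paper's motion pre-filtering and tuned spatiotemporal dynamics, and all names and constants (temporal_lowpass, tau, the toy drifting-bar stimulus) are hypothetical.

```python
# Minimal, illustrative sketch (not the authors' implementation) of the ideas in the
# abstract: ON/OFF half-wave rectification, delay-and-correlate local motion detectors
# with direction opponency, and wide-field HS/VS sums whose sign encodes PD vs. ND motion.
# All parameter names and values (tau, dt, the toy stimulus) are assumptions.

import numpy as np


def temporal_lowpass(signal, tau=3.0, dt=1.0):
    """First-order low-pass filter along the time axis (serves as the 'delay' arm)."""
    alpha = dt / (tau + dt)
    out = np.zeros_like(signal)
    out[0] = signal[0]
    for t in range(1, signal.shape[0]):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out


def on_off_split(frames):
    """Split luminance change into ON (brightening) and OFF (darkening) channels."""
    diff = np.diff(frames, axis=0)      # temporal derivative, shape (T-1, H, W)
    on = np.maximum(diff, 0.0)          # half-wave rectification
    off = np.maximum(-diff, 0.0)
    return on, off


def correlator_do(channel, axis):
    """Local correlator with direction opponency along one spatial axis.

    Correlates a delayed signal with its spatially shifted neighbours in both
    directions and subtracts the two arms, giving a signed local motion estimate.
    """
    delayed = temporal_lowpass(channel)
    ahead = np.roll(channel, -1, axis=axis)    # neighbouring photoreceptor (+1)
    behind = np.roll(channel, 1, axis=axis)    # neighbouring photoreceptor (-1)
    pd = delayed * ahead                       # preferred-direction arm
    nd = delayed * behind                      # null-direction arm
    return pd - nd                             # direction-opponent output


def hs_vs_responses(frames):
    """Wide-field HS (horizontal) and VS (vertical) responses from ON + OFF pathways.

    Positive global output ~ preferred-direction translation, negative ~ null direction.
    """
    on, off = on_off_split(frames.astype(float))
    hs = correlator_do(on, axis=2) + correlator_do(off, axis=2)   # x axis
    vs = correlator_do(on, axis=1) + correlator_do(off, axis=1)   # y axis
    return hs.sum(axis=(1, 2)), vs.sum(axis=(1, 2))               # global sums per frame


if __name__ == "__main__":
    # Toy stimulus: a bright bar drifting rightward across a noisy background.
    T, H, W = 40, 32, 32
    rng = np.random.default_rng(0)
    frames = 0.1 * rng.random((T, H, W))
    for t in range(T):
        frames[t, :, t % W] += 1.0
    hs, vs = hs_vs_responses(frames)
    print("mean HS response:", hs.mean())   # expected > 0 for rightward motion
    print("mean VS response:", vs.mean())   # expected ~ 0 for purely horizontal motion
```

In this toy setting, a rightward-drifting bar should push the summed HS output positive while leaving VS near zero, mirroring the PD/ND sign convention stated in the abstract.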
Related papers
- Estimating Dynamic Flow Features in Groups of Tracked Objects [2.4344640336100936]
This work aims to extend gradient-based dynamical systems analyses to real-world applications characterized by complex, feature-rich image sequences with imperfect tracers.
The proposed approach is affordably implemented and enables advanced studies including the motion analysis of two distinct object classes in a single image sequence.
arXiv Detail & Related papers (2024-08-29T01:06:51Z)
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- GRA: Detecting Oriented Objects through Group-wise Rotating and Attention [64.21917568525764]
Group-wise Rotating and Attention (GRA) module is proposed to replace the convolution operations in backbone networks for oriented object detection.
GRA can adaptively capture fine-grained features of objects with diverse orientations, comprising two key components: Group-wise Rotating and Group-wise Attention.
GRA achieves a new state-of-the-art (SOTA) on the DOTA-v2.0 benchmark, while saving the parameters by nearly 50% compared to the previous SOTA method.
arXiv Detail & Related papers (2024-03-17T07:29:32Z)
- Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only scalar batchnorm parameters from partway into training matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- Adaptive Multi-source Predictor for Zero-shot Video Object Segmentation [68.56443382421878]
We propose a novel adaptive multi-source predictor for zero-shot video object segmentation (ZVOS).
In the static object predictor, the RGB source is converted to depth and static saliency sources, simultaneously.
Experiments show that the proposed model outperforms the state-of-the-art methods on three challenging ZVOS benchmarks.
arXiv Detail & Related papers (2023-03-18T10:19:29Z)
- Spatio-Temporal Feedback Control of Small Target Motion Detection Visual System [9.03311522244788]
This paper develops a visual system with spatio-temporal feedback to detect small target motion.
The proposed visual system is composed of two complementary spatial neuronal networks.
Experimental results demonstrate that the system is more competitive than existing methods in detecting small targets.
arXiv Detail & Related papers (2022-11-18T10:10:48Z)
- Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
arXiv Detail & Related papers (2022-06-29T18:47:05Z)
- A Bioinspired Approach-Sensitive Neural Network for Collision Detection in Cluttered and Dynamic Backgrounds [19.93930316898735]
Rapid, accurate and robust detection of looming objects in moving backgrounds is a significant and challenging problem for robotic visual systems.
Inspired by the neural circuit of elementary vision in the mammalian retina, this paper proposes a bioinspired approach-sensitive neural network (AS).
The proposed model is able to not only detect collision accurately and robustly in cluttered and dynamic backgrounds but also extract more collision information like position and direction, for guiding rapid decision making.
arXiv Detail & Related papers (2021-03-01T09:16:18Z)
- A Bioinspired Retinal Neural Network for Accurately Extracting Small-Target Motion Information in Cluttered Backgrounds [19.93930316898735]
This paper proposes a bioinspired neural network based on a new neuro-based motion filtering and multiform 2-D spatial filtering.
It can estimate motion direction accurately via only two signals and respond to small targets of different sizes and velocities.
It can also extract the information of motion direction and motion energy accurately and rapidly.
arXiv Detail & Related papers (2021-03-01T08:44:27Z)
- Drosophila-Inspired 3D Moving Object Detection Based on Point Clouds [22.850519892606716]
We have developed a motion detector based on the shallow visual neural pathway of Drosophila.
This detector is sensitive to the movement of objects and can well suppress background noise.
An improved 3D object detection network is then used to estimate the point clouds of each proposal and efficiently generate the 3D bounding boxes and the object categories.
arXiv Detail & Related papers (2020-05-06T10:04:23Z)