Dynamic Neural Field Modeling of Visual Contrast for Perceiving Incoherent Looming
- URL: http://arxiv.org/abs/2504.04551v1
- Date: Sun, 06 Apr 2025 17:04:14 GMT
- Title: Dynamic Neural Field Modeling of Visual Contrast for Perceiving Incoherent Looming
- Authors: Ziyan Qin, Qinbing Fu, Jigen Peng, Shigang Yue,
- Abstract summary: Amari's Dynamic Neural Field (DNF) framework provides a brain-inspired approach to modeling the average activation of neuronal groups. We extend DNF by incorporating the modeling of ON/OFF visual contrast, each governed by a dedicated DNF. We show that the proposed model effectively addresses incoherent looming detection challenges and significantly outperforms state-of-the-art locust-inspired models.
- Score: 7.885957968654851
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Amari's Dynamic Neural Field (DNF) framework provides a brain-inspired approach to modeling the average activation of neuronal groups. Leveraging a single field, DNF has become a promising foundation for low-energy looming perception modules in robotic applications. However, previous DNF methods face significant challenges in detecting incoherent or inconsistent looming features, conditions commonly encountered in real-world scenarios such as collision detection in rainy weather. Insights from the visual systems of fruit flies and locusts reveal that encoding ON/OFF visual contrast plays a critical role in enhancing looming selectivity. Additionally, a lateral excitation mechanism potentially refines the responses of loom-sensitive neurons to both coherent and incoherent stimuli. Together, these findings offer valuable guidance for improving looming perception models. Building on this biological evidence, we extend the previous single-field DNF framework by incorporating the modeling of ON/OFF visual contrast, each governed by a dedicated DNF. Lateral excitation within each ON/OFF-contrast field is formulated using a normalized Gaussian kernel, and their outputs are integrated in the Summation field to generate collision alerts. Experimental evaluations show that the proposed model effectively addresses incoherent looming detection challenges and significantly outperforms state-of-the-art locust-inspired models. It demonstrates robust performance across diverse stimuli, including synthetic rain effects, underscoring its potential for reliable looming perception in complex, noisy environments with inconsistent visual cues.
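To make the described architecture concrete, here is a minimal, hypothetical Python sketch of a two-field Amari-style DNF with ON/OFF contrast inputs, normalized Gaussian lateral excitation, and a summation stage. This is not the authors' implementation: the 1-D field, Euler discretization, parameter values, and alert threshold are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 1-D Gaussian kernel used for lateral excitation."""
    x = np.arange(size) - size // 2
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def on_off_contrast(prev_frame, frame):
    """Split the luminance change between frames into ON (brightening)
    and OFF (darkening) half-wave rectified channels."""
    diff = frame.astype(float) - prev_frame.astype(float)
    return np.maximum(diff, 0.0), np.maximum(-diff, 0.0)

def dnf_step(u, stimulus, kernel, h=-1.0, tau=10.0, dt=1.0):
    """One Euler step of an Amari field:
        tau * du/dt = -u + (K * f(u)) + S + h
    with a rectified-linear output f and Gaussian lateral kernel K."""
    f_u = np.maximum(u, 0.0)                          # field output (firing rate)
    lateral = np.convolve(f_u, kernel, mode="same")   # lateral excitation
    return u + dt * (-u + lateral + stimulus + h) / tau

def looming_alerts(frames, threshold=50.0):
    """Drive separate ON and OFF fields with contrast from a 1-D frame
    stream and flag frames where the summation field crosses a threshold."""
    kernel = gaussian_kernel(size=15, sigma=3.0)
    width = frames.shape[1]
    u_on, u_off = np.zeros(width), np.zeros(width)
    alerts = []
    for t in range(1, len(frames)):
        s_on, s_off = on_off_contrast(frames[t - 1], frames[t])
        u_on = dnf_step(u_on, s_on, kernel)
        u_off = dnf_step(u_off, s_off, kernel)
        summation = np.maximum(u_on, 0.0) + np.maximum(u_off, 0.0)
        alerts.append(bool(summation.sum() > threshold))
    return alerts

if __name__ == "__main__":
    # Toy "looming" stimulus: a bright bar expanding across a 1-D retina.
    T, W = 40, 128
    frames = np.zeros((T, W))
    for t in range(T):
        half = 2 + t
        frames[t, W // 2 - half: W // 2 + half] = 255.0
    print(looming_alerts(frames))
```

In the actual model the fields would operate over the two-dimensional image plane with preprocessing of the visual input; the toy above only illustrates the coupling between the ON/OFF contrast fields, the Gaussian lateral excitation, and the summation stage that emits collision alerts.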
Related papers
- Motion-Enhanced Nonlocal Similarity Implicit Neural Representation for Infrared Dim and Small Target Detection [9.459649691992377]
Infrared dim and small target detection presents a significant challenge due to dynamic multi-frame scenarios and weak target signatures.
Traditional low-rank plus sparse models often fail to capture dynamic backgrounds and global spatial-temporal correlations.
We propose a novel motion-enhanced nonlocal similarity implicit neural representation framework to address these challenges.
arXiv Detail & Related papers (2025-04-22T07:42:00Z) - Multi-Modality Driven LoRA for Adverse Condition Depth Estimation [61.525312117638116]
We propose Multi-Modality Driven LoRA (MMD-LoRA) for Adverse Condition Depth Estimation. It consists of two core components: Prompt Driven Domain Alignment (PDDA) and Visual-Text Consistent Contrastive Learning (VTCCL). It achieves state-of-the-art performance on the nuScenes and Oxford RobotCar datasets.
arXiv Detail & Related papers (2024-12-28T14:23:58Z) - A Tunable Despeckling Neural Network Stabilized via Diffusion Equation [15.996302571895045]
Adversarial attacks can be used as a criterion for judging the adaptability of neural networks to real data. We propose a tunable, regularized neural network framework that unrolls a shallow denoising neural network block and a diffusion regularity block into a single network for end-to-end training.
arXiv Detail & Related papers (2024-11-24T17:08:43Z) - Free Energy Projective Simulation (FEPS): Active inference with interpretability [40.11095094521714]
The free energy principle (FEP) and active inference (AIF) have achieved many successes.
Recent work has focused on improving such agents' performance in complex environments by incorporating the latest machine learning techniques.
We introduce Free Energy Projective Simulation (FEPS) to model agents in an interpretable way without deep neural networks.
arXiv Detail & Related papers (2024-11-22T15:01:44Z) - Attraction-Repulsion Swarming: A Generalized Framework of t-SNE via Force Normalization and Tunable Interactions [2.3020018305241337]
Attraction-Repulsion Swarming (ARS) is a framework based on viewing the t-distributed stochastic neighbor embedding (t-SNE) visualization technique as a swarm of interacting agents driven by attraction and repulsion forces.
ARS also includes the ability to separately tune the attraction and repulsion kernels, which gives the user control over the tightness within clusters and the spacing between them in the visualization.
arXiv Detail & Related papers (2024-11-15T22:42:11Z) - Integrated Dynamic Phenological Feature for Remote Sensing Image Land Cover Change Detection [5.109855690325439]
We introduce the InPhea model, which integrates phenological features into a remote sensing image CD framework.
A constrainer with four constraint modules and a multi-stage contrastive learning approach is employed to aid in the model's understanding of phenological characteristics.
Experiments on the HRSCD, SECD, and PSCD-Wuhan datasets reveal that InPhea outperforms other models.
arXiv Detail & Related papers (2024-08-08T01:07:28Z) - KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior performance within comparable computational time, and achieves state-of-the-art view synthesis performance with thorough training.
arXiv Detail & Related papers (2024-07-18T05:48:24Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model to 4D representations encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z) - IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis [90.03590032170169]
We present intrinsic neural radiance fields, dubbed IntrinsicNeRF, which introduce intrinsic decomposition into the NeRF-based neural rendering method.
Our experiments and editing samples on both object-specific/room-scale scenes and synthetic/real-world data demonstrate that we can obtain consistent intrinsic decomposition results.
arXiv Detail & Related papers (2022-10-02T22:45:11Z) - FG-UAP: Feature-Gathering Universal Adversarial Perturbation [15.99512720802142]
We propose to generate Universal Adversarial Perturbation (UAP) by attacking the layer where Neural Collapse (NC) happens.
Because of NC, the proposed attack gathers the features of natural images toward its surroundings, and is hence called Feature-Gathering UAP (FG-UAP).
We evaluate the effectiveness of the proposed algorithm through extensive experiments, including untargeted and targeted universal attacks, attacks under limited datasets, and transfer-based black-box attacks.
arXiv Detail & Related papers (2022-09-27T02:03:42Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)