Active Neural Mapping
- URL: http://arxiv.org/abs/2308.16246v1
- Date: Wed, 30 Aug 2023 18:07:30 GMT
- Title: Active Neural Mapping
- Authors: Zike Yan, Haoxiang Yang, Hongbin Zha
- Abstract summary: We address the problem of active mapping with a continually-learned neural scene representation, namely Active Neural Mapping.
We present for the first time an active mapping system with a coordinate-based implicit neural representation for online scene reconstruction.
- Score: 20.242598287146578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of active mapping with a continually-learned neural
scene representation, namely Active Neural Mapping. The key lies in actively
finding the target space to be explored with efficient agent movement, thus
minimizing the map uncertainty on-the-fly within a previously unseen
environment. In this paper, we examine the weight space of the
continually-learned neural field, and show empirically that the neural
variability, the prediction robustness against random weight perturbation, can
be directly utilized to measure the instant uncertainty of the neural map.
Together with the continuous geometric information inherent in the neural map,
the agent can be guided to find a traversable path to gradually gain knowledge
of the environment. We present for the first time an active mapping system with
a coordinate-based implicit neural representation for online scene
reconstruction. Experiments in the visually realistic Gibson and Matterport3D
environments demonstrate the efficacy of the proposed method.
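The uncertainty measure described in the abstract, prediction variance under random weight perturbation, can be illustrated with a toy sketch. The two-layer MLP, its sizes, and the perturbation scale below are hypothetical stand-ins for the paper's coordinate-based implicit neural field, not its actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP standing in for the coordinate-based implicit neural field.
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def field(x, w1, w2):
    """Predict a scalar map value (e.g. occupancy/SDF) at coordinates x."""
    return np.tanh(x @ w1 + b1) @ w2 + b2

def neural_variability(x, n_perturb=16, sigma=0.01):
    """Variance of predictions under small random weight perturbations:
    high variance at x is read as high instant map uncertainty there."""
    preds = []
    for _ in range(n_perturb):
        p1 = W1 + sigma * rng.normal(size=W1.shape)
        p2 = W2 + sigma * rng.normal(size=W2.shape)
        preds.append(field(x, p1, p2))
    return np.var(np.stack(preds), axis=0)

x = rng.uniform(-1.0, 1.0, size=(5, 2))  # 5 query coordinates
u = neural_variability(x)                # per-coordinate uncertainty scores
```

In the paper's setting, high-variability regions would be candidate exploration targets; here the variance is simply a per-coordinate uncertainty score.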
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
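The claim that Hebbian learning can extract the principal subspace of neural activities can be illustrated with Oja's rule, a classic single-neuron Hebbian learner. This is a generic sketch, not the paper's lateral-connection method; the synthetic data and learning rate are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic activity with one dominant direction; Oja's rule should find it.
direction = np.array([0.8, 0.6])  # unit vector
X = 0.1 * rng.normal(size=(2000, 2)) + np.outer(rng.normal(size=2000), direction)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.01
for x in X:
    y = w @ x                  # Hebbian post-synaptic response
    w += lr * y * (x - y * w)  # Oja's rule: Hebbian growth plus weight decay

# |w . direction| approaches 1 as w aligns with the principal direction.
alignment = abs(w @ direction)
```

The decay term `-lr * y**2 * w` keeps the weight norm bounded, which is what turns plain Hebbian growth into principal-component extraction.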
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Finding Concept Representations in Neural Networks with Self-Organizing Maps [2.817412580574242]
We show how self-organizing maps can be used to inspect how activation of layers of neural networks correspond to neural representations of abstract concepts.
We show that, among the measures tested, the relative entropy of the activation map for a concept is a suitable candidate and can be used as part of a methodology to identify and locate the neural representation of a concept.
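As a rough illustration of this measure, the relative entropy of a concept's activation map can be computed against a uniform baseline over map units. This is a hedged sketch: the function name and the choice of a uniform reference distribution are assumptions, not the paper's exact definition:

```python
import numpy as np

def relative_entropy(activation_map, eps=1e-12):
    """KL divergence between the normalized activation map for a concept
    and the uniform distribution over map units. Higher values mean the
    concept activates a localized region of the self-organizing map."""
    p = activation_map.ravel().astype(float)
    p = p / p.sum()
    q = np.full_like(p, 1.0 / p.size)  # uniform baseline
    return float(np.sum(p * (np.log(p + eps) - np.log(q))))

# A concept concentrated on one unit vs. one spread over the whole map.
localized = np.zeros((8, 8)); localized[2, 3] = 1.0
diffuse = np.ones((8, 8))
```

A localized activation map scores near log(64) here, while a perfectly diffuse one scores near zero, which matches the intuition of "identifying and locating" a concept's representation.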
arXiv Detail & Related papers (2023-12-10T12:10:34Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- An Inter-observer consistent deep adversarial training for visual scanpath prediction [66.46953851227454]
We propose an inter-observer consistent adversarial training approach for scanpath prediction through a lightweight deep neural network.
We show the competitiveness of our approach in regard to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-14T13:22:29Z)
- Neural Activation Patterns (NAPs): Visual Explainability of Learned Concepts [8.562628320010035]
We present a method that takes into account the entire activation distribution.
By extracting similar activation profiles within the high-dimensional activation space of a neural network layer, we find groups of inputs that are treated similarly.
These input groups represent neural activation patterns (NAPs) and can be used to visualize and interpret learned layer concepts.
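The grouping step can be sketched with a plain k-means over per-input activation vectors. This is a minimal stand-in for however the paper actually extracts similar activation profiles; all names and the toy data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def cluster_activations(acts, k=2, iters=10):
    """Minimal k-means over per-input activation vectors.
    Farthest-point initialization keeps the sketch deterministic."""
    centers = [acts[0]]
    while len(centers) < k:
        d = np.min([((acts - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(acts[np.argmax(d)])
    centers = np.stack(centers)
    labels = np.zeros(len(acts), dtype=int)
    for _ in range(iters):
        dists = ((acts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = acts[labels == j].mean(axis=0)
    return labels

# Toy "layer activations": two synthetic groups of inputs the layer
# (hypothetically) treats similarly; each row is one input's profile.
acts = np.vstack([rng.normal(0.0, 0.1, size=(20, 16)),
                  rng.normal(3.0, 0.1, size=(20, 16))])
naps = cluster_activations(acts, k=2)
# Inputs sharing a cluster share an activation profile -> one NAP.
```

Each resulting cluster plays the role of one NAP: the inputs it contains can then be inspected together to interpret what the layer responds to.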
arXiv Detail & Related papers (2022-06-20T09:05:57Z)
- Emergent organization of receptive fields in networks of excitatory and inhibitory neurons [3.674863913115431]
Motivated by a leaky integrate-and-fire model of neural waves, we propose an activation model that is more typical of artificial neural networks.
Experiments with a synthetic model of somatosensory input are used to investigate how the network dynamics may affect plasticity of neuronal maps under changes to the inputs.
arXiv Detail & Related papers (2022-05-26T20:43:14Z)
- Learnable latent embeddings for joint behavioral and neural analysis [3.6062449190184136]
We show that CEBRA can be used for the mapping of space, uncovering complex kinematic features, and rapid, high-accuracy decoding of natural movies from visual cortex.
We validate its accuracy and demonstrate its utility for both calcium and electrophysiology datasets, across sensory and motor tasks, and in simple or complex behaviors across species.
arXiv Detail & Related papers (2022-04-01T19:19:33Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Can the brain use waves to solve planning problems? [62.997667081978825]
We present a neural network model which can solve such tasks.
The model is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus.
arXiv Detail & Related papers (2021-10-11T11:07:05Z)
- Deep Cross-Subject Mapping of Neural Activity [33.25686697879346]
We show that a neural decoder trained on neural activity signals of one subject can be used to robustly decode the motor intentions of a different subject.
The findings reported in this paper are an important step towards the development of cross-subject brain-computer interfaces.
arXiv Detail & Related papers (2020-07-13T14:35:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.