On the Road with 16 Neurons: Mental Imagery with Bio-inspired Deep Neural Networks
- URL: http://arxiv.org/abs/2003.08745v1
- Date: Mon, 9 Mar 2020 16:46:29 GMT
- Title: On the Road with 16 Neurons: Mental Imagery with Bio-inspired Deep Neural Networks
- Authors: Alice Plebe and Mauro Da Lio
- Abstract summary: We propose a strategy for visual prediction in the context of autonomous driving.
We take inspiration from two theoretical ideas about the human mind and its neural organization.
We learn compact representations that use as few as 16 neural units for each of the two basic driving concepts we consider.
- Score: 4.888591558726117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a strategy for visual prediction in the context of
autonomous driving. Humans, when not distracted or drunk, are still the best
drivers you can currently find. For this reason we take inspiration from two
theoretical ideas about the human mind and its neural organization. The first
idea concerns how the brain uses a hierarchical structure of neuron ensembles
to extract abstract concepts from visual experience and code them into compact
representations. The second idea suggests that these neural perceptual
representations are not neutral but functional to the prediction of the future
state of affairs in the environment. Similarly, the prediction mechanism is not
neutral but oriented to the current planning of a future action. We identify
within the deep learning framework two artificial counterparts of the
aforementioned neurocognitive theories. We find a correspondence between the
first theoretical idea and the architecture of convolutional autoencoders,
while we translate the second theory into a training procedure that learns
compact representations which are not neutral but oriented to driving tasks,
from two distinct perspectives. From a static perspective, we force groups of
neural units in the compact representations to distinctly represent specific
concepts crucial to the driving task. From a dynamic perspective, we encourage
the compact representations to be predictive of how the current road scenario
will change in the future. We successfully learn compact representations that
use as few as 16 neural units for each of the two basic driving concepts we
consider: car and lane. We demonstrate the effectiveness of our proposed
perceptual representations on the SYNTHIA dataset. Our source code is available at
https://github.com/3lis/rnn_vae
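The architecture described in the abstract, a convolutional autoencoder whose bottleneck is partitioned into concept-specific groups of 16 units, plus a recurrent step that makes the code predictive, can be illustrated roughly as follows. This is a minimal sketch assuming 64x64 RGB input and arbitrary layer sizes; only the 16-units-per-concept split and the car/lane concepts come from the paper (see the linked repository for the authors' actual implementation).

```python
import torch
import torch.nn as nn

class ConceptAutoencoder(nn.Module):
    """Toy autoencoder whose bottleneck is split into two 16-unit
    groups, one per driving concept (car, lane), as in the abstract.
    Layer sizes and input resolution are illustrative assumptions."""

    def __init__(self, concept_units=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 2 * concept_units),  # 16 units each: car, lane
        )
        # Static perspective: one decoder per concept reconstructs a
        # concept-specific mask from its own 16-unit group.
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Linear(concept_units, 64 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            ) for _ in range(2)
        ])
        # Dynamic perspective: predict the next latent code from the current one.
        self.predictor = nn.GRUCell(2 * concept_units, 2 * concept_units)

    def forward(self, frame, prev_latent=None):
        z = self.encoder(frame)                       # [B, 32]
        z_car, z_lane = z.chunk(2, dim=1)             # 16 units per concept
        masks = [dec(part) for dec, part in zip(self.decoders, (z_car, z_lane))]
        z_next = self.predictor(z, prev_latent)       # predicted future code
        return masks, z, z_next
```

Training would plausibly combine per-concept reconstruction losses on the decoded masks (the static perspective) with a loss tying z_next to the encoding of the following frame (the dynamic perspective).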
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe these empirical results show the importance of our assumptions at the most basic level of neuronal representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
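The "dynamical alternative to threshold units" in the entry above is concrete enough to sketch with the classic Kuramoto phase update; the coupling form and Euler integration below are textbook Kuramoto, not necessarily the paper's exact formulation.

```python
import math
import torch

def kuramoto_step(theta, omega, coupling, dt=0.1):
    """One Euler step of Kuramoto phase dynamics: each unit is an
    oscillator with phase theta and natural frequency omega, pulled
    toward its neighbours through a pairwise coupling matrix K:
        d(theta_i)/dt = omega_i + sum_j K_ij * sin(theta_j - theta_i)"""
    phase_diff = theta.unsqueeze(0) - theta.unsqueeze(1)  # [N, N]: theta_j - theta_i
    dtheta = omega + (coupling * torch.sin(phase_diff)).sum(dim=1)
    return theta + dt * dtheta

# Toy usage: 8 oscillators with random frequencies, uniform coupling.
theta = torch.rand(8) * 2 * math.pi
omega = torch.randn(8)
K = torch.full((8, 8), 0.5)
for _ in range(100):
    theta = kuramoto_step(theta, omega, K)  # phases gradually synchronize
```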
- How does the primate brain combine generative and discriminative computations in vision? [4.691670689443386]
Two contrasting conceptions of the inference process have each been influential in research on biological vision and machine vision.
On one conception, vision inverts a generative model by interrogating the evidence, a process often thought to involve top-down prediction of sensory data.
We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
arXiv Detail & Related papers (2024-01-11T16:07:58Z)
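The generative conception above can be caricatured as inference by optimization: instead of a discriminative feedforward mapping from image to interpretation, a latent cause is adjusted until a generative decoder's top-down prediction matches the evidence. A toy sketch under that reading (the decoder, latent size, and step count are placeholders):

```python
import torch

def infer_by_synthesis(decoder, image, z_dim=16, steps=50, lr=0.1):
    """Analysis-by-synthesis caricature: search latent space for the
    code whose decoded 'prediction' best explains the evidence,
    rather than computing the code in a single feedforward pass."""
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        err = ((decoder(z) - image) ** 2).mean()  # top-down prediction error
        err.backward()
        opt.step()
    return z.detach()
```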
- Implementing engrams from a machine learning perspective: matching for prediction [0.0]
We propose how we might design a computer system to implement engrams using neural networks.
Building on autoencoders, we propose latent neural spaces as indexes for storing and retrieving information in a compressed format.
We consider how different states in latent neural spaces corresponding to different types of sensory input could be linked by synchronous activation.
arXiv Detail & Related papers (2023-03-01T10:05:40Z)
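The proposal of latent neural spaces as indexes suggests a simple content-addressable memory: encode experiences into latent codes, store them, and recall by matching a cue's code against the store. A minimal sketch, assuming a pre-trained encoder/decoder pair and plain nearest-neighbour matching:

```python
import torch

class LatentMemory:
    """Store experiences as latent codes; retrieve by matching a cue's
    code to the nearest stored code, then decode it back. A crude
    stand-in for the 'matching for prediction' idea above."""

    def __init__(self, encoder, decoder):
        self.encoder, self.decoder = encoder, decoder
        self.store = []                      # list of latent codes

    def memorize(self, x):
        with torch.no_grad():
            self.store.append(self.encoder(x))

    def recall(self, cue):
        with torch.no_grad():
            q = self.encoder(cue)
            # Euclidean nearest neighbour over the stored codes.
            dists = torch.stack([torch.dist(q, z) for z in self.store])
            return self.decoder(self.store[int(dists.argmin())])
```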
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias for modeling the character traits of agents and hence improve mindreading ability.
arXiv Detail & Related papers (2022-04-17T11:21:18Z)
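The multiplicative fast-weights modulation is the most concrete mechanism in the entry above: a trait vector inferred from past trajectories rescales the prediction network's weights per actor. A rough sketch in which the dimensions, the GRU trait encoder, and the sigmoid gain are all assumptions:

```python
import torch
import torch.nn as nn

class TraitModulatedPredictor(nn.Module):
    """Predict an actor's next state, with the prediction layer's
    weights multiplicatively modulated by a per-actor trait vector
    (a fast-weights scheme; dimensions are illustrative)."""

    def __init__(self, state_dim=8, trait_dim=8, hidden=32):
        super().__init__()
        self.trait_net = nn.GRU(state_dim, trait_dim, batch_first=True)
        self.to_gain = nn.Linear(trait_dim, hidden)
        self.w = nn.Parameter(torch.randn(hidden, state_dim) * 0.1)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, past_traj, current_state):
        # Infer a latent trait vector from the actor's past trajectory.
        _, trait = self.trait_net(past_traj)                  # [1, B, trait_dim]
        gain = torch.sigmoid(self.to_gain(trait.squeeze(0)))  # [B, hidden]
        # Fast weights: per-actor multiplicative rescaling of the slow weights.
        w_fast = self.w.unsqueeze(0) * gain.unsqueeze(2)      # [B, hidden, state_dim]
        h = torch.relu(torch.bmm(w_fast, current_state.unsqueeze(2)).squeeze(2))
        return self.head(h)                                   # [B, state_dim]
```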
- What does it mean to represent? Mental representations as falsifiable memory patterns [8.430851504111585]
We argue that causal and teleological approaches fail to provide a satisfactory account of representation.
We sketch an alternative according to which representations correspond to inferred latent structures in the world.
These structures are assumed to have certain properties objectively, which allows for planning, prediction, and detection of unexpected events.
arXiv Detail & Related papers (2022-03-06T12:52:42Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
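A common way to let a network emit discrete yet trainable symbols, plausibly in the spirit of the discrete representation above, is Gumbel-softmax sampling over a small vocabulary; the vocabulary size and message length below are arbitrary choices, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteSpeaker(nn.Module):
    """Map an observation to a short sequence of discrete tokens via
    Gumbel-softmax, a standard trick in emergent-language setups."""

    def __init__(self, obs_dim=16, vocab=10, msg_len=4):
        super().__init__()
        self.proj = nn.Linear(obs_dim, msg_len * vocab)
        self.msg_len, self.vocab = msg_len, vocab

    def forward(self, obs, tau=1.0):
        logits = self.proj(obs).view(-1, self.msg_len, self.vocab)
        # hard=True emits one-hot tokens in the forward pass while
        # gradients flow through the soft sample (straight-through).
        return F.gumbel_softmax(logits, tau=tau, hard=True)
```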
- NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks [18.62960153659548]
NeuroCartography is an interactive system that summarizes and visualizes concepts learned by neural networks.
It automatically discovers and groups neurons that detect the same concepts.
It describes how such neuron groups interact to form higher-level concepts and the subsequent predictions.
arXiv Detail & Related papers (2021-08-29T22:43:52Z)
- On 1/n neural representation and robustness [13.491651740693705]
We show that imposing the experimentally observed 1/n spectral structure on artificial neural networks makes them more robust to adversarial attacks.
Our findings complement the existing theory relating wide neural networks to kernel methods.
arXiv Detail & Related papers (2020-12-08T20:34:49Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
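The procedure above can be approximated by searching logical compositions of concept masks for the one whose support best overlaps a neuron's activation mask, scored by intersection-over-union. A simplified sketch limited to pairwise AND/OR/AND-NOT compositions, assuming the concept masks are given:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    return (a & b).sum() / max((a | b).sum(), 1)

def best_composition(neuron_mask, concepts):
    """Find the single concept or pairwise logical composition of
    named concept masks that best matches where the neuron fires,
    in the spirit of compositional neuron explanations."""
    # Baseline: best single concept.
    best = max(((name, iou(neuron_mask, m)) for name, m in concepts.items()),
               key=lambda kv: kv[1])
    for n1, m1 in concepts.items():
        for n2, m2 in concepts.items():
            if n1 >= n2:
                continue  # each unordered pair once
            for label, mask in ((f"{n1} AND {n2}", m1 & m2),
                                (f"{n1} OR {n2}", m1 | m2),
                                (f"{n1} AND NOT {n2}", m1 & ~m2)):
                score = iou(neuron_mask, mask)
                if score > best[1]:
                    best = (label, score)
    return best  # (formula, IoU)
```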
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.