A Framework for Learning Invariant Physical Relations in Multimodal
Sensory Processing
- URL: http://arxiv.org/abs/2006.16607v1
- Date: Tue, 30 Jun 2020 08:42:48 GMT
- Title: A Framework for Learning Invariant Physical Relations in Multimodal
Sensory Processing
- Authors: Du Xiaorui, Yavuzhan Erdem, Immanuel Schweizer, Cristian Axenie
- Abstract summary: We design a novel neural network architecture capable of learning, in an unsupervised manner, relations among sensory cues.
We describe the core system functionality when learning arbitrary non-linear relations in low-dimensional sensory data.
We demonstrate this through a real-world learning problem, where, from standard RGB camera frames, the network learns the relations between physical quantities.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Perceptual learning enables humans to recognize and represent stimuli
invariant to various transformations and build a consistent representation of
the self and physical world. Such representations preserve the invariant
physical relations among the multiple perceived sensory cues. This work is an
attempt to exploit these principles in an engineered system. We design a novel
neural network architecture capable of learning, in an unsupervised manner,
relations among multiple sensory cues. The system combines computational
principles, such as competition, cooperation, and correlation, in a neurally
plausible computational substrate. It achieves that through a parallel and
distributed processing architecture in which the relations among the multiple
sensory quantities are extracted from time-sequenced data. We describe the core
system functionality when learning arbitrary non-linear relations in
low-dimensional sensory data. Here, an initial benefit arises from the fact
that such a network can be engineered in a relatively straightforward way,
without prior information about the sensors and their interactions. Moreover,
by alleviating the need for tedious modelling and parametrization, the network
converges to a consistent description of any arbitrary high-dimensional
multisensory setup. We demonstrate this through a real-world learning problem,
where, from standard RGB camera frames, the network learns the relations
between physical quantities such as light intensity, spatial gradient, and
optical flow, describing a visual scene. Overall, the benefits of such a
framework lie in the capability to learn non-linear pairwise relations among
sensory streams in an architecture that is stable under noise and missing
sensor input.
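The architecture itself is not given in this abstract, but the core idea of extracting a non-linear pairwise relation from correlated, time-sequenced sensory streams can be sketched with a simple co-activation map between two binned streams. Everything below is an assumption for illustration: the binning scheme, the soft-competition normalization, the population-vector decoding, and the toy relation b = a^2 standing in for a real sensory law.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two sensory streams related by an unknown non-linear law (toy: b = a**2).
a = rng.uniform(-1.0, 1.0, size=5000)
b = a ** 2

# Discretize each stream into a population code (bin count is an assumption).
n_bins = 40
a_edges = np.linspace(-1, 1, n_bins + 1)[1:-1]
b_edges = np.linspace(0, 1, n_bins + 1)[1:-1]
a_idx = np.digitize(a, a_edges)
b_idx = np.digitize(b, b_edges)

# Correlation: co-active bins strengthen their link (Hebbian-style counts).
W = np.zeros((n_bins, n_bins))
for i, j in zip(a_idx, b_idx):
    W[i, j] += 1.0
# Competition: normalize each row so the b-bins compete for credit.
W /= W.sum(axis=1, keepdims=True) + 1e-12

# Decode: for a new a, read the expected b off the learned relation.
b_centers = np.linspace(0, 1, n_bins + 1)
b_centers = 0.5 * (b_centers[:-1] + b_centers[1:])

def infer_b(a_val):
    i = np.digitize([a_val], a_edges)[0]
    return float(W[i] @ b_centers)  # population-vector readout
```

With enough samples, `infer_b` recovers the hidden relation to within the bin resolution, without any prior model of how the two streams are coupled.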
Related papers
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
arXiv Detail & Related papers (2022-06-09T16:24:01Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Discrete-Valued Neural Communication [85.3675647398994]
We show that restricting the transmitted information among components to discrete representations is a beneficial bottleneck.
Even though individuals have different understandings of what a "cat" is based on their specific experiences, the shared discrete token makes it possible for communication among individuals to be unimpeded by individual differences in internal representation.
We extend the quantization mechanism from the Vector-Quantized Variational Autoencoder to multi-headed discretization with shared codebooks and use it for discrete-valued neural communication.
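The discretization step this summary describes, nearest-neighbour quantization against a codebook shared across multiple heads, can be sketched as follows. The codebook size, dimension, and head count below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared codebook: L discrete codes of dimension d (sizes are assumptions).
L, d, heads = 16, 8, 4
codebook = rng.normal(size=(L, d))

def discretize(h):
    """Split a vector into `heads` segments and snap each segment to its
    nearest codebook entry, so the message becomes `heads` discrete tokens."""
    segs = h.reshape(heads, d)
    # Squared distance from every segment to every code.
    dists = ((segs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)            # one discrete token per head
    return codebook[idx].reshape(-1), idx

h = rng.normal(size=heads * d)
q, tokens = discretize(h)
```

Downstream components receive only `q` (or the integer `tokens`), which is the information bottleneck the summary refers to; in a trained VQ system the codebook would itself be learned, which is omitted here.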
arXiv Detail & Related papers (2021-07-06T03:09:25Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
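The scheme this summary describes, learnable parameters on the edges of a complete graph of computation nodes, can be sketched as a forward pass over a complete DAG with a gate per edge. The node count, the sigmoid gating, and the tanh node transformation are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Complete DAG over n computation nodes; each edge (i -> j) with i < j
# carries a learnable scalar gate (sigmoid keeps its magnitude in (0, 1)).
n = 5
edge_logits = rng.normal(size=(n, n))  # trainable connection parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    outs = [x]
    for j in range(1, n):
        gates = sigmoid(edge_logits[:j, j])       # weights of incoming edges
        agg = sum(g * o for g, o in zip(gates, outs))
        outs.append(np.tanh(agg))                 # node's own transformation
    return outs[-1]

y = forward(np.ones(3))
```

Because the gates enter the forward pass as smooth multipliers, the connectivity pattern can be trained by gradient descent alongside the node weights, which is the "differentiable manner" the summary mentions.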
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Operationally meaningful representations of physical systems in neural networks [4.192302677744796]
We present a neural network architecture based on the notion that agents dealing with different aspects of a physical system should be able to communicate relevant information as efficiently as possible to one another.
This produces representations that separate different parameters which are useful for making statements about the physical system in different experimental settings.
arXiv Detail & Related papers (2020-01-02T19:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.