An Insect-Inspired Randomly, Weighted Neural Network with Random Fourier Features For Neuro-Symbolic Relational Learning
- URL: http://arxiv.org/abs/2109.06663v1
- Date: Sat, 11 Sep 2021 22:45:08 GMT
- Title: An Insect-Inspired Randomly, Weighted Neural Network with Random Fourier Features For Neuro-Symbolic Relational Learning
- Authors: Jinyung Hong, Theodore P. Pavlic
- Abstract summary: We propose a Randomly Weighted Feature Network (RWFN) that incorporates randomly drawn, untrained weights in an encoder, paired with an adapted linear model as a trained decoder.
Because of this special representation, RWFNs can effectively learn the degree of relationship among inputs by training only a linear decoder model.
We demonstrate that compared to LTNs, RWFNs can achieve better or similar performance for both object classification and detection of the part-of relations between objects in SII tasks.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Insects, such as fruit flies and honey bees, can solve simple associative
learning tasks and learn abstract concepts such as "sameness" and "difference",
which is viewed as a higher-order cognitive function and typically thought to
depend on top-down neocortical processing. Empirical research with fruit flies
strongly supports that a randomized representational architecture is used in
olfactory processing in insect brains. Based on these results, we propose a
Randomly Weighted Feature Network (RWFN) that incorporates randomly drawn,
untrained weights in an encoder, paired with an adapted linear model as a
trained decoder.
The randomized projections between input neurons and higher-order processing
centers in the insect brain are mimicked in RWFN by a single-hidden-layer neural
network that specially structures latent representations in the hidden layer
using random Fourier features that better represent complex relationships
between inputs using kernel approximation. Because of this special
representation, RWFNs can effectively learn the degree of relationship among
inputs by training only a linear decoder model. We compare the performance of
RWFNs to Logic Tensor Networks (LTNs) for Semantic Image Interpretation (SII)
tasks that have been used
as a representative example of how LTNs utilize reasoning over first-order
logic to surpass the performance of solely data-driven methods. We demonstrate
that compared to LTNs, RWFNs can achieve better or similar performance for both
object classification and detection of the part-of relations between objects in
SII tasks while using far fewer learnable parameters (1:62 ratio) and a
faster learning process (1:2 ratio of running speed). Furthermore, we show that
because the randomized weights do not depend on the data, several decoders can
share a single randomized encoder, giving RWFNs a unique economy of spatial
scale for simultaneous classification tasks.
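
The pipeline described in the abstract is compact enough to sketch directly. Below is a minimal, illustrative Python implementation (not the authors' code): a frozen random-Fourier-feature encoder approximating an RBF kernel, with closed-form ridge regression standing in for the adapted linear decoder, and two decoders sharing one fixed encoder to illustrate the economy of spatial scale noted above. All names, targets, and hyperparameters here are assumptions for illustration.

```python
import numpy as np

class RandomFourierEncoder:
    """Fixed, untrained encoder: random Fourier features approximating
    an RBF kernel (Rahimi & Recht). Weights are drawn once and frozen."""
    def __init__(self, in_dim, n_features=512, bandwidth=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection directions and phases; never updated.
        self.W = rng.normal(0.0, 1.0 / bandwidth, size=(in_dim, n_features))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.scale = np.sqrt(2.0 / n_features)

    def transform(self, X):
        # z(x) = sqrt(2/D) * cos(x @ W + b); E[z(x)·z(y)] ≈ RBF kernel k(x, y)
        return self.scale * np.cos(X @ self.W + self.b)

def fit_linear_decoder(Z, y, ridge=1e-3):
    """Only the decoder is trained: closed-form ridge regression."""
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + ridge * np.eye(d), Z.T @ y)

# Because the encoder is data-independent, several decoders can share
# one fixed encoder (e.g., one decoder per predicate or class).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                  # toy inputs
y_cls = (X[:, 0] * X[:, 1] > 0).astype(float)  # toy "relation" label
y_reg = np.sin(X[:, 0]) + 0.1 * X[:, 2]        # a second toy target

enc = RandomFourierEncoder(in_dim=8)
Z = enc.transform(X)                # shared, fixed representation
w_cls = fit_linear_decoder(Z, y_cls)
w_reg = fit_linear_decoder(Z, y_reg)
print("train acc:", ((Z @ w_cls > 0.5) == y_cls).mean())
print("train mse:", ((Z @ w_reg - y_reg) ** 2).mean())
```

Since the projection weights never receive gradients, only one small decoder vector per task is learned, which is the source of the parameter and training-speed savings the abstract reports.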
Related papers
- Expressivity of Neural Networks with Random Weights and Learned Biases [44.02417750529102]
Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can be learned by tuning only small subsets of parameters.
We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only.
Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights.
arXiv Detail & Related papers (2024-07-01T04:25:49Z)
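
A toy illustration of the bias-only training result summarized in the entry above (a hedged sketch with assumed hyperparameters, not the paper's code): freeze every weight matrix of a small PyTorch MLP at its random initialization and optimize only the bias vectors.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

# Freeze every weight matrix at its random initialization;
# leave only the bias vectors trainable.
for name, p in net.named_parameters():
    p.requires_grad = name.endswith("bias")

X = torch.randn(256, 8)
y = (X[:, :1] * X[:, 1:2] > 0).float()  # toy target

opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```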
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections.
Our results indicate that this tailored sparsity can mitigate overfitting and improve test performance with small amounts of data.
arXiv Detail & Related papers (2023-12-10T09:06:16Z)
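
The saliency-based sparsification in the entry above can be sketched generically (the paper's exact criterion may differ): score each connection by the first-order saliency |w · dL/dw| and zero out the lowest-scoring fraction.

```python
import torch
import torch.nn as nn

def prune_by_saliency(layer: nn.Linear, X, y, loss_fn, frac=0.5):
    """Zero out the fraction `frac` of connections with the lowest
    first-order saliency |w * dL/dw| (a generic criterion; papers vary)."""
    layer.zero_grad()
    loss_fn(layer(X), y).backward()
    saliency = (layer.weight * layer.weight.grad).abs()
    k = int(frac * saliency.numel())
    threshold = saliency.flatten().kthvalue(k).values
    mask = (saliency > threshold).float()
    with torch.no_grad():
        layer.weight *= mask       # superfluous connections removed
    return mask                    # reusable to keep pruned weights at zero

torch.manual_seed(0)
layer = nn.Linear(16, 4)
X, y = torch.randn(128, 16), torch.randn(128, 4)
mask = prune_by_saliency(layer, X, y, nn.MSELoss())
print("sparsity:", 1.0 - mask.mean().item())
```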
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
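
The permutation symmetry studied in the entry above is easy to verify numerically: permuting the hidden units of a feedforward network (rows of the first weight matrix and bias, matching columns of the second) leaves its input-output function unchanged. A minimal check:

```python
import numpy as np

# Hidden neurons of an MLP have no inherent order: permuting them
# (rows of W1/b1, matching columns of W2) leaves the function unchanged.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 8)), rng.normal(size=32)
W2, b2 = rng.normal(size=(4, 32)), rng.normal(size=4)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

perm = rng.permutation(32)
x = rng.normal(size=8)
out = mlp(x, W1, b1, W2, b2)
out_perm = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(np.allclose(out, out_perm))  # True: same function, permuted weights
```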
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify
their inputs using lower-order input statistics, and exploit higher-order
statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Representing Prior Knowledge Using Randomly, Weighted Feature Networks for Visual Relationship Detection [2.28438857884398]
We build on the single-hidden-layer Randomly Weighted Feature Network (RWFN) introduced by Hong and Pavlic.
In this paper, we use RWFNs to perform Visual Relationship Detection (VRD) tasks.
arXiv Detail & Related papers (2021-11-20T21:56:45Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art inductive logic programming (ILP) systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed [27.38015169185521]
We show theoretically that two-layer neural networks (2LNN) with only a few hidden neurons can beat the performance of kernel learning.
We show how over-parametrising the neural network leads to faster convergence, but does not improve its final performance.
arXiv Detail & Related papers (2021-02-23T15:10:15Z)
- Randomly Weighted, Untrained Neural Tensor Networks Achieve Greater Relational Expressiveness [3.5408022972081694]
We propose Randomly Weighted Tensor Networks (RWTNs), which incorporate randomly drawn, untrained tensors into a network with a trained decoder network.
We show that RWTNs meet or surpass the performance of traditionally trained Logic Tensor Networks (LTNs) for Semantic Image Interpretation (SII) tasks.
We demonstrate that RWTNs can achieve similar performance as LTNs for object classification while using fewer parameters for learning.
arXiv Detail & Related papers (2020-06-01T19:36:29Z)
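
The RWTN construction in the last entry mirrors the RWFN sketch given after the abstract: a fixed, randomly drawn tensor of bilinear slices scores pairs of inputs, and only a linear decoder is trained. A minimal, illustrative sketch (assumed dimensions and targets, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 64                       # input dim, number of random tensor slices

# Fixed, untrained bilinear tensor: slice T[j] scores pairwise
# interactions x^T T[j] y, as in a neural tensor network, but random.
T = rng.normal(size=(k, d, d))

def encode(X, Y):
    # Feature j for a pair (x, y) is tanh(x^T T[j] y); T stays frozen.
    return np.tanh(np.einsum('nd,kde,ne->nk', X, T, Y))

# Only a linear decoder is trained on top (ridge regression here).
X, Y = rng.normal(size=(300, d)), rng.normal(size=(300, d))
labels = (np.sum(X * Y, axis=1) > 0).astype(float)  # toy relation
Z = encode(X, Y)
w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(k), Z.T @ labels)
print("train acc:", ((Z @ w > 0.5) == labels).mean())
```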