Randomly Weighted, Untrained Neural Tensor Networks Achieve Greater
Relational Expressiveness
- URL: http://arxiv.org/abs/2006.12392v2
- Date: Sat, 3 Oct 2020 06:14:09 GMT
- Title: Randomly Weighted, Untrained Neural Tensor Networks Achieve Greater
Relational Expressiveness
- Authors: Jinyung Hong, Theodore P. Pavlic
- Abstract summary: We propose Randomly Weighted Tensor Networks (RWTNs), which incorporate randomly drawn, untrained tensors into an NTN encoder network with a trained decoder network.
We show that RWTNs meet or surpass the performance of traditionally trained LTNs for Semantic Image Interpretation (SII) tasks.
We demonstrate that RWTNs can achieve similar performance to LTNs for object classification while using fewer parameters for learning.
- Score: 3.5408022972081694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Tensor Networks (NTNs), which are structured to encode the degree of
relationship among pairs of entities, are used in Logic Tensor Networks (LTNs)
to facilitate Statistical Relational Learning (SRL) in first-order logic. In
this paper, we propose Randomly Weighted Tensor Networks (RWTNs), which
incorporate randomly drawn, untrained tensors into an NTN encoder network with
a trained decoder network. We show that RWTNs meet or surpass the performance
of traditionally trained LTNs for Semantic Image Interpretation (SII) tasks
that have been used as a representative example of how LTNs utilize reasoning
over first-order logic to exceed the performance of solely data-driven methods.
We demonstrate that RWTNs outperform LTNs for the detection of the relevant
part-of relations between objects, and we show that RWTNs can achieve similar
performance as LTNs for object classification while using fewer parameters for
learning. Furthermore, we demonstrate that because the randomized weights do
not depend on the data, several decoder networks can share a single NTN, giving
RWTNs a unique economy of spatial scale for simultaneous classification tasks.
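As a rough illustration of the architecture described above, the following PyTorch sketch (not the authors' released code; the dimensions, tanh activation, and single linear decoder are assumptions) draws an NTN-style bilinear tensor layer at random, freezes it, and trains only a small decoder on top:

```python
# Hypothetical sketch of an RWTN: a frozen, randomly drawn NTN-style encoder
# paired with a trained decoder. Shapes and layer choices are illustrative only.
import torch
import torch.nn as nn

class RandomTensorEncoder(nn.Module):
    """NTN-style bilinear layer: tanh(e1^T W_k e2 + V [e1; e2] + b) per slice k.
    All weights are randomly drawn once and never trained."""
    def __init__(self, entity_dim: int, k: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, entity_dim, entity_dim), requires_grad=False)
        self.V = nn.Parameter(torch.randn(k, 2 * entity_dim), requires_grad=False)
        self.b = nn.Parameter(torch.randn(k), requires_grad=False)

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        bilinear = torch.einsum("bi,kij,bj->bk", e1, self.W, e2)  # e1^T W_k e2 for each slice
        linear = torch.cat([e1, e2], dim=-1) @ self.V.t()
        return torch.tanh(bilinear + linear + self.b)

class RWTN(nn.Module):
    """Frozen random encoder + trained decoder giving a relation score in [0, 1]."""
    def __init__(self, entity_dim: int = 16, k: int = 32):
        super().__init__()
        self.encoder = RandomTensorEncoder(entity_dim, k)
        self.decoder = nn.Linear(k, 1)  # only these parameters are learned

    def forward(self, e1, e2):
        return torch.sigmoid(self.decoder(self.encoder(e1, e2)))

# Toy usage: optimize the decoder only.
model = RWTN()
opt = torch.optim.Adam(model.decoder.parameters(), lr=1e-2)
e1, e2 = torch.randn(8, 16), torch.randn(8, 16)
target = torch.rand(8, 1)  # placeholder "degree of relationship" labels
loss = nn.functional.binary_cross_entropy(model(e1, e2), target)
loss.backward()
opt.step()
```

Because the frozen encoder does not depend on the training data, several independently trained decoders could share one `RandomTensorEncoder` instance, which is the economy of spatial scale the abstract refers to.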
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
Our findings show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - Deep Neural Networks via Complex Network Theory: a Perspective [3.1023851130450684]
Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures.
In this work, we extend the existing CNT metrics with measures that sample from the DNNs' training distribution, shifting from a purely topological analysis to one that connects with the interpretability of deep learning.
arXiv Detail & Related papers (2024-04-17T08:42:42Z) - LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks
with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z) - Interpretable Neural Networks with Random Constructive Algorithm [3.1200894334384954]
This paper introduces an Interpretable Neural Network (INN) incorporating spatial information to tackle the opaque parameterization process of randomly weighted neural networks.
It devises a geometric relationship strategy using a pool of candidate nodes and established relationships to select node parameters conducive to network convergence.
arXiv Detail & Related papers (2023-07-01T01:07:20Z) - On Feature Learning in Neural Networks with Global Convergence
Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF).
We show that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF.
We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart.
arXiv Detail & Related papers (2022-04-22T15:56:43Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Representing Prior Knowledge Using Randomly, Weighted Feature Networks
for Visual Relationship Detection [2.28438857884398]
Building on the single-hidden-layer Randomly Weighted Feature Network (RWFN) introduced by Hong and Pavlic, this paper applies RWFNs to Visual Relationship Detection (VRD) tasks.
arXiv Detail & Related papers (2021-11-20T21:56:45Z) - An Insect-Inspired Randomly, Weighted Neural Network with Random Fourier
Features For Neuro-Symbolic Relational Learning [2.28438857884398]
We propose a Randomly Weighted Feature Network that incorporates randomly drawn, untrained weights in an encoder that uses an adapted linear model as a decoder.
Because of this special representation, RWFNs can effectively learn the degree of relationship among inputs by training only a linear decoder model (a minimal sketch of this encoder-decoder split appears after this list).
We demonstrate that compared to LTNs, RWFNs can achieve better or similar performance for both object classification and detection of the part-of relations between objects in SII tasks.
arXiv Detail & Related papers (2021-09-11T22:45:08Z) - Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
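The RWFN entries above (including the insect-inspired variant with random Fourier features) describe the same recipe of an untrained random encoder paired with a trained linear decoder. Below is a minimal, hypothetical NumPy sketch of that recipe; the feature dimensions, RBF-style feature map, and ridge-regression readout are chosen for illustration rather than taken from the papers:

```python
# Hypothetical illustration of the RWFN idea: a random Fourier feature encoder
# whose weights are never trained, plus a linear decoder fit in closed form.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, n = 10, 256, 500

# Encoder: projection and phases drawn once at random and kept fixed.
W = rng.normal(scale=1.0, size=(d_in, d_feat))
b = rng.uniform(0, 2 * np.pi, size=d_feat)
encode = lambda X: np.sqrt(2.0 / d_feat) * np.cos(X @ W + b)

# Toy regression data.
X = rng.normal(size=(n, d_in))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Decoder: ridge regression on the random features -- the only training step.
Phi = encode(X)
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d_feat), Phi.T @ y)

print("train MSE:", np.mean((Phi @ w - y) ** 2))
```

Only the readout weights `w` are fit; the random projection `W` and phases `b` stay fixed, mirroring the untrained-encoder / trained-decoder split these papers describe.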
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.