Representing Prior Knowledge Using Randomly, Weighted Feature Networks for Visual Relationship Detection
- URL: http://arxiv.org/abs/2111.10686v1
- Date: Sat, 20 Nov 2021 21:56:45 GMT
- Title: Representing Prior Knowledge Using Randomly, Weighted Feature Networks for Visual Relationship Detection
- Authors: Jinyung Hong, Theodore P. Pavlic
- Abstract summary: The single-hidden-layer Randomly Weighted Feature Network (RWFN) was introduced by Hong and Pavlic.
In this paper, we use RWFNs to perform Visual Relationship Detection (VRD) tasks.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The single-hidden-layer Randomly Weighted Feature Network (RWFN) introduced
by Hong and Pavlic (2021) was developed as an alternative to neural tensor
network approaches for relational learning tasks. Its relatively small
footprint combined with the use of two randomized input projections -- an
insect-brain-inspired input representation and random Fourier features -- allow
it to achieve rich expressiveness for relational learning with relatively low
training cost. In particular, when Hong and Pavlic compared RWFN to Logic
Tensor Networks (LTNs) for Semantic Image Interpretation (SII) tasks to extract
structured semantic descriptions from images, they showed that the RWFN
integration of the two hidden, randomized representations better captures
relationships among inputs with a faster training process even though it uses
far fewer learnable parameters. In this paper, we use RWFNs to perform Visual
Relationship Detection (VRD) tasks, which are more challenging SII tasks. A
zero-shot learning approach is used with RWFN that can exploit similarities
with other seen relationships and background knowledge -- expressed with
logical constraints between subjects, relations, and objects -- to achieve the
ability to predict triples that do not appear in the training set. The
experiments on the Visual Relationship Dataset to compare the performance
between RWFNs and LTNs, one of the leading Statistical Relational Learning
frameworks, show that RWFNs outperform LTNs for the predicate-detection task
while using far fewer adaptable parameters (a 1:56 ratio). Furthermore,
background knowledge represented by RWFNs can be used to alleviate the
incompleteness of training sets even though the space complexity of RWFNs is
much smaller than that of LTNs (a 1:27 ratio).
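The abstract above hinges on a concrete pattern: two fixed random projections (a sparse, insect-brain-inspired encoding and random Fourier features) whose outputs are concatenated and fed to the only trained component, a linear decoder. Below is a minimal NumPy sketch of that pattern; the function names, layer sizes, ridge-regression decoder, and toy relational task are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def insect_projection(X, n_hidden=512, fan_in=8):
    """Fixed sparse random projection loosely modeled on the insect
    mushroom body: each hidden unit reads only a few input dimensions.
    (Illustrative; the paper's exact encoding may differ.)"""
    d = X.shape[1]
    W = np.zeros((d, n_hidden))
    for j in range(n_hidden):
        idx = rng.choice(d, size=min(fan_in, d), replace=False)
        W[idx, j] = rng.normal(size=idx.size)
    return np.maximum(X @ W, 0.0)  # sparse nonlinear code, never trained

def random_fourier_features(X, n_features=512, sigma=1.0):
    """Standard random Fourier features approximating an RBF kernel."""
    d = X.shape[1]
    Omega = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

def fit_linear_decoder(H, y, reg=1e-3):
    """Only the decoder is trained; here via closed-form ridge regression."""
    A = H.T @ H + reg * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y)

# Toy usage: score feature vectors for a synthetic binary relation.
X = rng.normal(size=(200, 16))             # e.g., concatenated pair features
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # synthetic relational label
H = np.hstack([insect_projection(X), random_fourier_features(X)])
w = fit_linear_decoder(H, y)
print("train MSE:", np.mean((H @ w - y) ** 2))
```

Because both hidden representations stay fixed, training reduces to a single linear solve, consistent with the low training cost the abstract claims.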
Related papers
- Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations [45.974930902038494]
We propose a novel neuro-symbolic architecture, Neural Reasoning Networks (NRN), that is scalable and generates logically sound textual explanations for its predictions.
A training algorithm (R-NRN) learns the weights of the network as usual using gradient-descent optimization with backpropagation, but also learns the network structure itself using a bandit-based optimization.
R-NRN explanations are shorter than the compared approaches while producing more accurate feature importance scores.
arXiv Detail & Related papers (2024-10-10T14:27:12Z)
- Implicit Neural Representations with Fourier Kolmogorov-Arnold Networks [4.499833362998488]
Implicit neural representations (INRs) use neural networks to provide continuous and resolution-independent representations of complex signals.
The proposed FKAN utilizes learnable activation functions modeled as Fourier series in the first layer to effectively control and learn the task-specific frequency components.
Experimental results show that our proposed FKAN model outperforms three state-of-the-art baseline schemes.
arXiv Detail & Related papers (2024-09-14T05:53:33Z)
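As a rough illustration of the FKAN mechanism summarized above, the sketch below parameterizes an activation function as a truncated Fourier series with learnable coefficients. The class name, harmonic count, and initialization are assumptions for illustration; the actual FKAN layer may be parameterized differently.

```python
import numpy as np

rng = np.random.default_rng(0)

class FourierActivation:
    """Learnable activation phi(z) = sum_k a_k sin(k z) + b_k cos(k z),
    a minimal stand-in for a Fourier-series first-layer activation."""
    def __init__(self, n_harmonics=4):
        self.k = np.arange(1, n_harmonics + 1)            # frequencies 1..K
        self.a = rng.normal(scale=0.1, size=n_harmonics)  # learnable coeffs
        self.b = rng.normal(scale=0.1, size=n_harmonics)  # learnable coeffs

    def __call__(self, z):
        kz = z[..., None] * self.k  # broadcast inputs against harmonics
        return np.sin(kz) @ self.a + np.cos(kz) @ self.b

phi = FourierActivation()
z = np.linspace(-np.pi, np.pi, 5)
print(phi(z))  # in training, a_k and b_k would be updated by backprop
```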
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method of Learning Resampling (termed LeRF) which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
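To make the LeRF summary above concrete, here is a toy version of spatially varying resampling: each query point gets its own Gaussian kernel width, standing in for the kernel shapes that LeRF predicts with a neural network (the predictor itself is omitted). All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_resample(img, ys, xs, sigmas, radius=2):
    """Sample img at continuous coords (ys, xs); each query uses its own
    Gaussian width sigma, mimicking per-pixel resampling functions."""
    H, W = img.shape
    out = np.empty(len(ys))
    for i, (y, x, s) in enumerate(zip(ys, xs, sigmas)):
        y0, x0 = int(round(y)), int(round(x))
        acc = wsum = 0.0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y0 + dy, x0 + dx
                if 0 <= yy < H and 0 <= xx < W:
                    w = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * s * s))
                    acc += w * img[yy, xx]
                    wsum += w
        out[i] = acc / wsum
    return out

img = rng.random((32, 32))
ys, xs = np.array([10.3, 20.7]), np.array([5.5, 16.2])
sigmas = np.array([0.6, 1.4])  # in LeRF, shapes would come from a network
print(gaussian_resample(img, ys, xs, sigmas))
```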
- Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections.
Our results indicate that this tailored sparsity can mitigate the overfitting problem and improve test performance with a small amount of data.
arXiv Detail & Related papers (2023-12-10T09:06:16Z)
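The sparsification step described in the entry above can be pictured as scoring each connection and dropping the least salient ones. The sketch below uses a simple |weight| times mean-|activation| saliency proxy; the paper's actual criterion and pruning schedule are not given here, so treat every detail as an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency_prune(W, X, sparsity=0.8):
    """Zero out the lowest-saliency connections of a linear layer,
    where saliency = |w_ij| * mean absolute input on dimension i."""
    sal = np.abs(W) * np.abs(X).mean(axis=0, keepdims=True).T
    mask = sal >= np.quantile(sal, sparsity)
    return W * mask, mask

W = rng.normal(size=(16, 4))    # a shared layer of a dual-task model (toy)
X = rng.normal(size=(100, 16))  # activations from MI/ME trials (toy)
W_sparse, mask = saliency_prune(W, X, sparsity=0.8)
print("kept connections:", mask.sum(), "of", mask.size)
```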
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based subnetworks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes two orders of magnitude faster with up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
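A loose way to picture the NID idea above: keep a dictionary of coordinate-based basis functions and represent each signal by coefficients over them. The sketch below uses fixed random sinusoids and a least-squares fit in place of NID's trained subnetworks and mixture-of-expert gating, so it illustrates only the dictionary-plus-coefficients structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def coordinate_basis(coords, n_experts=64):
    """A dictionary of fixed coordinate-based 'experts': random sinusoids
    of (y, x). A crude stand-in for NID's learned subnetworks."""
    Omega = rng.normal(size=(coords.shape[1], n_experts))
    b = rng.uniform(0.0, 2 * np.pi, size=n_experts)
    return np.sin(coords @ Omega + b)

H, W = 16, 16
ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=1)        # (H*W, 2)
img = np.sin(6 * coords[:, 0]) * np.cos(4 * coords[:, 1])  # toy signal
Phi = coordinate_basis(coords)                             # (H*W, 64)
alpha, *_ = np.linalg.lstsq(Phi, img, rcond=None)          # per-image code
print("reconstruction MSE:", np.mean((Phi @ alpha - img) ** 2))
```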
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
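For context on the comparison above, the standard interval-bound step for an explicit layer looks like the sketch below: split the weights by sign to push an input box through x -> relu(Wx + b). This is textbook interval bound propagation, not the paper's reachability method for implicit layers, which handles fixed-point equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def interval_linear_relu(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> relu(W @ x + b) using
    sign-split weights (the basic IBP step)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)

W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
lo, hi = x - 0.1, x + 0.1  # L-infinity ball around a nominal input
out_lo, out_hi = interval_linear_relu(W, b, lo, hi)
y = np.maximum(W @ x + b, 0.0)
assert np.all(out_lo <= y) and np.all(y <= out_hi)  # bounds contain output
print(out_lo, out_hi)
```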
- An Insect-Inspired Randomly, Weighted Neural Network with Random Fourier Features For Neuro-Symbolic Relational Learning [2.28438857884398]
We propose a Randomly Weighted Feature Network (RWFN) that incorporates randomly drawn, untrained weights in an encoder and uses an adapted linear model as a decoder.
Because of this special representation, RWFNs can effectively learn the degree of relationship among inputs by training only a linear decoder model.
We demonstrate that compared to LTNs, RWFNs can achieve better or similar performance for both object classification and detection of the part-of relations between objects in SII tasks.
arXiv Detail & Related papers (2021-09-11T22:45:08Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Randomly Weighted, Untrained Neural Tensor Networks Achieve Greater Relational Expressiveness [3.5408022972081694]
We propose Randomly Weighted Tensor Networks (RWTNs), which incorporate randomly drawn, untrained tensors into a network with a trained decoder network.
We show that RWTNs meet or surpass the performance of traditionally trained LTNs for Semantic Image Interpretation (SII) tasks.
We demonstrate that RWTNs can achieve performance similar to LTNs for object classification while using fewer parameters for learning.
arXiv Detail & Related papers (2020-06-01T19:36:29Z)
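Finally, the RWTN idea in the entry above can be sketched like the RWFN example earlier, but with fixed bilinear tensor slices as the random encoder. Slice count, sizes, and the tanh nonlinearity are illustrative assumptions; only a downstream linear decoder would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_tensor_features(e1, e2, n_slices=32):
    """Untrained bilinear tensor features: slice T_k scores each pair
    (e1, e2); the tensor stays fixed, so only a decoder needs training."""
    d = e1.shape[1]
    T = rng.normal(size=(n_slices, d, d))  # random, never trained
    return np.tanh(np.einsum("nd,kde,ne->nk", e1, T, e2))

e1 = rng.normal(size=(10, 8))  # embeddings of first objects (toy)
e2 = rng.normal(size=(10, 8))  # embeddings of second objects (toy)
H = random_tensor_features(e1, e2)
print(H.shape)  # (10, 32): pair features ready for a linear decoder
```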