Feature Extraction Functions for Neural Logic Rule Learning
- URL: http://arxiv.org/abs/2008.06326v4
- Date: Sun, 11 Apr 2021 06:15:17 GMT
- Title: Feature Extraction Functions for Neural Logic Rule Learning
- Authors: Shashank Gupta, Antonio Robles-Kelly and Mohamed Reda Bouadjenek
- Abstract summary: We propose functions for integrating human knowledge abstracted as logic rules into the predictive behavior of a neural network.
Unlike other existing neural logic approaches, the programmatic nature of these functions implies that they do not require any kind of special mathematical encoding.
- Score: 4.181432858358386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining symbolic human knowledge with neural networks provides a rule-based ante-hoc explanation of the output. In this paper, we propose feature extraction functions for integrating human knowledge, abstracted as logic rules, into the predictive behavior of a neural network. These functions are embodied as programming functions, which represent the applicable domain knowledge as a set of logical instructions and provide a modified distribution of independent features on the input data. Unlike other existing neural logic approaches, the programmatic nature of these functions means they require no special mathematical encoding, which makes our method general and flexible. We illustrate the performance of our approach on sentiment classification and compare our results to those obtained using two baselines.
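To make the idea concrete, here is a minimal sketch of what one such programmatic feature extraction function might look like, assuming a token-level sentiment task and the well-known "A-but-B" rule (in a sentence of the form "A but B", the sentiment tends to follow B); the rule choice and all names are illustrative, not taken from the paper:

```python
# A minimal sketch, assuming a token-level sentiment task. The "A-but-B"
# rule and every name below are illustrative guesses, not the paper's code.
from typing import List

def but_rule_features(tokens: List[str]) -> List[float]:
    """Encode the rule "in 'A but B', sentiment follows B" as per-token
    weights that modify the distribution of input features."""
    weights = [1.0] * len(tokens)
    if "but" in tokens:
        pivot = tokens.index("but")
        for i in range(len(tokens)):
            # Down-weight the concessive clause, emphasize what follows it.
            weights[i] = 0.5 if i < pivot else 1.5
        weights[pivot] = 0.0  # the connective itself carries no sentiment
    return weights

tokens = "the plot was dull but the acting was superb".split()
print(list(zip(tokens, but_rule_features(tokens))))
```

The reweighted features would then be fed to the network, letting the rule shape predictions without any special mathematical encoding of the logic.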
Related papers
- Compositional learning of functions in humans and machines [23.583544271543033]
We develop a function learning paradigm to explore the capacity of humans and neural network models in learning and reasoning with compositional functions.
Our findings indicate that humans can make zero-shot generalizations on novel visual function compositions across interaction conditions.
A comparison with a neural network model on the same task reveals that, through the meta-learning for compositionality (MLC) approach, a standard sequence-to-sequence Transformer can mimic human generalization patterns in composing functions.
arXiv Detail & Related papers (2024-03-18T19:22:53Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory [64.06519549649495]
We provide the definition of what we call functionally equivalent features.
These features produce equivalent output under certain transformations.
We propose an efficient algorithm named Iterative Feature Merging (a toy illustration of the merging intuition is sketched below).
arXiv Detail & Related papers (2023-10-10T16:27:12Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, enabling logic-induced network training (a minimal illustration is sketched below).
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
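A minimal sketch of the general recipe behind such fuzzy relaxations, assuming common operator choices (product t-norm, Reichenbach implication) that are not necessarily the ones LOGICSEG uses:

```python
# Grounding a logical rule on predicted probabilities so that its truth
# value is differentiable. Operator choices here are common defaults
# (product t-norm family), assumed for illustration.
def f_and(a, b):      # fuzzy conjunction: product t-norm
    return a * b

def f_or(a, b):       # fuzzy disjunction: probabilistic sum
    return a + b - a * b

def f_implies(a, b):  # fuzzy implication: Reichenbach form
    return 1.0 - a + a * b

# Rule "cat(x) -> animal(x)", grounded on one pixel's class probabilities.
p_cat, p_animal = 0.9, 0.6
logic_loss = 1.0 - f_implies(p_cat, p_animal)  # penalize rule violations
print(round(logic_loss, 3))  # 0.36
```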
- Neural Feature Learning in Function Space [5.807950618412389]
We present a novel framework for learning system design with neural feature extractors.
We introduce the feature geometry, which unifies statistical dependence and feature representations in a function space equipped with inner products.
We propose a nesting technique, which provides systematic algorithm designs for learning the optimal features from data samples.
arXiv Detail & Related papers (2023-09-18T20:39:12Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Points of non-linearity of functions generated by random neural networks [0.0]
We consider functions from the real numbers to the real numbers computed by a neural network with one hidden layer of arbitrary width and ReLU activations.
We compute the expected distribution of the points of non-linearity (an empirical sketch follows below).
arXiv Detail & Related papers (2023-04-19T17:40:19Z)
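An empirical companion to that computation, under the assumption of i.i.d. standard-normal weights and biases (the paper's exact setting may differ): each hidden unit of f(x) = sum_i v_i * ReLU(w_i * x + b_i) is non-linear only at x = -b_i / w_i, and a ratio of independent standard normals is standard Cauchy, so the median of |x| over the kinks should be about 1:

```python
# Sample kink locations x = -b/w for a wide random ReLU network and
# check the Cauchy prediction (median |kink| = 1). Gaussian weights are
# an assumption made for this illustration.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100_000)   # hidden-layer weights
b = rng.normal(size=100_000)   # hidden-layer biases
kinks = -b / w                 # points of non-linearity, one per unit
print(round(float(np.median(np.abs(kinks))), 2))  # ~1.0
```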
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order (a numeric check of this symmetry is sketched below).
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
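A quick numeric check of that symmetry on a generic MLP (not the paper's neural functional architecture): permuting hidden neurons, together with the matching rows and columns of the surrounding weight matrices, leaves the function unchanged:

```python
# Relabeling hidden neurons consistently does not change an MLP's output,
# which is the permutation symmetry that neural functionals must respect.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2 = rng.normal(size=(2, 5))

def mlp(x, W1, b1, W2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

perm = rng.permutation(5)  # arbitrary relabeling of the 5 hidden units
x = rng.normal(size=3)
print(np.allclose(mlp(x, W1, b1, W2),
                  mlp(x, W1[perm], b1[perm], W2[:, perm])))  # True
```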
- A Theoretical Analysis on Feature Learning in Neural Networks: Emergence from Inputs and Advantage over Fixed Features [18.321479102352875]
An important characteristic of neural networks is their ability to learn representations of the input data with effective features for prediction.
We consider learning problems motivated by practical data, where the labels are determined by a set of class relevant patterns and the inputs are generated from these.
We prove that neural networks trained by gradient descent can succeed on these problems.
arXiv Detail & Related papers (2022-06-03T17:49:38Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.