Improving Neural-based Classification with Logical Background Knowledge
- URL: http://arxiv.org/abs/2402.13019v1
- Date: Tue, 20 Feb 2024 14:01:26 GMT
- Title: Improving Neural-based Classification with Logical Background Knowledge
- Authors: Arthur Ledaguenel, Céline Hudelot, Mostepha Khouadjia
- Abstract summary: We propose a new formalism for supervised multi-label classification with propositional background knowledge.
We introduce a new neurosymbolic technique called semantic conditioning at inference.
We discuss its theoretical and practical advantages over two other popular neurosymbolic techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neurosymbolic AI is a growing field of research aiming to combine
the learning capabilities of neural networks with the reasoning abilities of symbolic
systems. This hybridization can take many shapes. In this paper, we propose a
new formalism for supervised multi-label classification with propositional
background knowledge. We introduce a new neurosymbolic technique called
semantic conditioning at inference, which only constrains the system during
inference while leaving the training unaffected. We discuss its theoretical and
practical advantages over two other popular neurosymbolic techniques: semantic
conditioning and semantic regularization. We develop a new multi-scale
methodology to evaluate how the benefits of a neurosymbolic technique evolve
with the scale of the network. We then evaluate experimentally and compare the
benefits of all three techniques across model scales on several datasets. Our
results demonstrate that semantic conditioning at inference can be used to
build more accurate neural-based systems with fewer resources while
guaranteeing the semantic consistency of outputs.
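As an illustration of the techniques named above, the sketch below gives a minimal brute-force rendering of semantic conditioning at inference and of the semantic regularization loss, assuming the usual independent-Bernoulli model over labels (an assumption of this sketch, not a detail stated in the abstract). The function names and the toy mutual-exclusion constraint are hypothetical, and the 2^n enumeration is for illustration only; practical systems compile the propositional knowledge into tractable circuits.

    # Hedged sketch: semantic conditioning at inference and semantic
    # regularization, under an assumed independent-Bernoulli output model.
    import math
    from itertools import product
    from typing import Callable, Sequence, Tuple

    Assignment = Tuple[bool, ...]

    def condition_at_inference(
        probs: Sequence[float],
        is_consistent: Callable[[Assignment], bool],
    ) -> Assignment:
        """Most probable label assignment that satisfies the background knowledge."""
        best, best_score = None, -1.0
        for assignment in product([False, True], repeat=len(probs)):
            if not is_consistent(assignment):
                continue  # inference discards outputs violating the knowledge
            score = 1.0
            for p, bit in zip(probs, assignment):
                score *= p if bit else (1.0 - p)
            if score > best_score:
                best, best_score = assignment, score
        return best

    def semantic_loss(
        probs: Sequence[float],
        is_consistent: Callable[[Assignment], bool],
    ) -> float:
        """-log P(constraints hold): the training-time penalty of semantic regularization."""
        mass = 0.0
        for assignment in product([False, True], repeat=len(probs)):
            if is_consistent(assignment):
                p = 1.0
                for pi, bit in zip(probs, assignment):
                    p *= pi if bit else (1.0 - pi)
                mass += p
        return -math.log(mass)

    # Toy background knowledge: labels 0 and 1 are mutually exclusive.
    probs = [0.7, 0.6, 0.2]
    exclusive = lambda a: not (a[0] and a[1])
    print(condition_at_inference(probs, exclusive))   # (True, False, False)
    print(round(semantic_loss(probs, exclusive), 3))  # shrinks as P(consistent) grows

Note how conditioning at inference leaves training untouched and guarantees consistent outputs by construction, whereas semantic regularization only nudges the network toward consistency during training.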
Related papers
- Complexity of Probabilistic Reasoning for Neurosymbolic Classification Techniques [6.775534755081169]
We introduce a formalism for informed supervised classification tasks and techniques.
We then build upon this formalism to define three abstract neurosymbolic techniques based on probabilistic reasoning.
arXiv Detail & Related papers (2024-04-12T11:31:37Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - An Interpretable Neuron Embedding for Static Knowledge Distillation [7.644253344815002]
We propose a new interpretable neural network method, by embedding neurons into the semantic space.
The proposed semantic vector externalizes latent knowledge as static knowledge, which is easy to exploit.
Empirical experiments of visualization show that semantic vectors describe neuron activation semantics well.
arXiv Detail & Related papers (2022-11-14T03:26:10Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess calcium fluorescence signals and to train and evaluate deep learning models on them, together with a procedure to interpret the resulting models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z) - Dynamic Neural Diversification: Path to Computationally Sustainable
Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z) - Can you tell? SSNet -- a Sagittal Stratum-inspired Neural Network
Framework for Sentiment Analysis [1.0312968200748118]
We propose a neural network architecture that combines predictions of different models on the same text to construct robust, accurate and computationally efficient classifiers for sentiment analysis.
In particular, we propose a systematic new approach to combining multiple predictions based on a dedicated neural network, develop a mathematical analysis of it, and report state-of-the-art experimental results.
arXiv Detail & Related papers (2020-06-23T12:55:02Z) - Equilibrium Propagation for Complete Directed Neural Networks [0.0]
The most successful learning algorithm for artificial neural networks, backpropagation, is considered biologically implausible.
We contribute to the topic of biologically plausible neuronal learning by building upon and extending the equilibrium propagation learning framework.
arXiv Detail & Related papers (2020-06-15T22:12:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.