PROTOtypical Logic Tensor Networks (PROTO-LTN) for Zero Shot Learning
- URL: http://arxiv.org/abs/2207.00433v1
- Date: Sun, 26 Jun 2022 18:34:07 GMT
- Title: PROTOtypical Logic Tensor Networks (PROTO-LTN) for Zero Shot Learning
- Authors: Simone Martone, Francesco Manigrasso, Fabrizio Lamberti, Lia Morra
- Abstract summary: Logic Tensor Networks (LTNs) are neuro-symbolic systems based on a differentiable, first-order logic grounded into a deep neural network.
We focus here on the subsumption or \texttt{isOfClass} predicate, which is fundamental to encode most semantic image interpretation tasks.
We propose a common \texttt{isOfClass} predicate, whose level of truth is a function of the distance between an object embedding and the corresponding class prototype.
- Score: 2.236663830879273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic image interpretation can vastly benefit from approaches that combine
sub-symbolic distributed representation learning with the capability to reason
at a higher level of abstraction. Logic Tensor Networks (LTNs) are a class of
neuro-symbolic systems based on a differentiable, first-order logic grounded
into a deep neural network. LTNs replace the classical concept of training set
with a knowledge base of fuzzy logical axioms. By defining a set of
differentiable operators to approximate the role of connectives, predicates,
functions and quantifiers, a loss function is automatically specified so that
LTNs can learn to satisfy the knowledge base. We focus here on the subsumption
or \texttt{isOfClass} predicate, which is fundamental to encode most semantic
image interpretation tasks. Unlike conventional LTNs, which rely on a separate
predicate for each class (e.g., dog, cat), each with its own set of learnable
weights, we propose a common \texttt{isOfClass} predicate, whose level of truth
is a function of the distance between an object embedding and the corresponding
class prototype. The PROTOtypical Logic Tensor Networks (PROTO-LTN) extend the
current formulation by grounding abstract concepts as parametrized class
prototypes in a high-dimensional embedding space, while reducing the number of
parameters required to ground the knowledge base. We show how this architecture
can be effectively trained in the few and zero-shot learning scenarios.
Experiments on Generalized Zero Shot Learning benchmarks validate the proposed
implementation as a competitive alternative to traditional embedding-based
approaches. The proposed formulation opens up new opportunities in zero shot
learning settings, as the LTN formalism allows background knowledge to be
integrated in the form of logical axioms to compensate for the lack of
labelled examples.
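
The distance-based grounding described in the abstract lends itself to a compact implementation. The sketch below is illustrative only and is not the authors' code: it assumes a Euclidean distance, an exponential mapping from distance to fuzzy truth, and a mean aggregator standing in for the universal quantifier; all names and numerical choices are assumptions made for this example.

    # Hedged sketch: a single shared isOfClass predicate with one learnable
    # prototype per class. Truth decreases with the distance between the
    # object embedding and the prototype of the queried class.
    import torch
    import torch.nn as nn

    class IsOfClass(nn.Module):
        def __init__(self, num_classes: int, embed_dim: int):
            super().__init__()
            # One prototype per class; the predicate itself carries no
            # per-class weight matrices, keeping the grounding compact.
            self.prototypes = nn.Parameter(torch.randn(num_classes, embed_dim))

        def forward(self, embeddings: torch.Tensor, class_ids: torch.Tensor) -> torch.Tensor:
            protos = self.prototypes[class_ids]             # (batch, embed_dim)
            dist = torch.norm(embeddings - protos, dim=-1)  # Euclidean distance
            return torch.exp(-dist)                         # fuzzy truth in (0, 1]

    def satisfaction_loss(truth: torch.Tensor) -> torch.Tensor:
        # Mean aggregator approximating "forall x: isOfClass(x, label(x))";
        # maximizing satisfaction of the axiom means minimizing 1 - mean truth.
        return 1.0 - truth.mean()

    # Example usage with random tensors (shapes only, not real features):
    pred = IsOfClass(num_classes=50, embed_dim=128)
    truth = pred(torch.randn(16, 128), torch.randint(0, 50, (16,)))
    loss = satisfaction_loss(truth)

Sharing one predicate across all classes is also what makes the zero-shot setting natural: covering an additional class only requires a prototype in the embedding space, not a new set of predicate weights.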
Related papers
- Semantic Objective Functions: A distribution-aware method for adding logical constraints in deep learning [4.854297874710511]
Constrained Learning and Knowledge Distillation techniques have shown promising results.
We propose a loss-based method that embeds knowledge, i.e., enforces logical constraints, into a machine learning model.
We evaluate our method on a variety of learning tasks, including classification tasks with logic constraints.
arXiv Detail & Related papers (2024-05-03T19:21:47Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Fuzzy Logic Visual Network (FLVN): A neuro-symbolic approach for visual
features matching [6.128849673587451]
We present the Fuzzy Logic Visual Network (FLVN) that formulates the task of learning a visual-semantic embedding space within a neuro-symbolic LTN framework.
FLVN incorporates prior knowledge in the form of class hierarchies (classes and macro-classes) along with robust high-level inductive biases.
It achieves competitive performance to recent ZSL methods with less computational overhead.
arXiv Detail & Related papers (2023-07-29T16:21:40Z) - Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - DeepPSL: End-to-end perception and reasoning with applications to zero
shot learning [1.1124588036301817]
We produce an end-to-end trainable system that integrates reasoning and perception.
DeepPSL is a variant of Probabilistic Soft Logic (PSL).
We evaluate DeepPSL on a zero shot learning problem in image classification.
arXiv Detail & Related papers (2021-09-28T12:30:33Z) - Faster-LTN: a neuro-symbolic, end-to-end object detection architecture [6.262658726461965]
We propose Faster-LTN, an object detector composed of a convolutional backbone and an LTN.
This architecture is trained by optimizing a grounded theory which combines labelled examples with prior knowledge.
Experimental comparisons show competitive performance with respect to the traditional Faster R-CNN architecture.
arXiv Detail & Related papers (2021-07-05T09:09:20Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL)
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)