Scallop: A Language for Neurosymbolic Programming
- URL: http://arxiv.org/abs/2304.04812v1
- Date: Mon, 10 Apr 2023 18:46:53 GMT
- Title: Scallop: A Language for Neurosymbolic Programming
- Authors: Ziyang Li, Jiani Huang, Mayur Naik
- Abstract summary: Scallop is a language that combines the benefits of deep learning and logical reasoning.
It is capable of expressing algorithmic reasoning in diverse and challenging AI tasks.
It provides a succinct interface for machine learning programmers to integrate logical domain knowledge.
- Score: 14.148819428748597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Scallop, a language which combines the benefits of deep learning
and logical reasoning. Scallop enables users to write a wide range of
neurosymbolic applications and train them in a data- and compute-efficient
manner. It achieves these goals through three key features: 1) a flexible
symbolic representation that is based on the relational data model; 2) a
declarative logic programming language that is based on Datalog and supports
recursion, aggregation, and negation; and 3) a framework for automatic and
efficient differentiable reasoning that is based on the theory of provenance
semirings. We evaluate Scallop on a suite of eight neurosymbolic applications
from the literature. Our evaluation demonstrates that Scallop is capable of
expressing algorithmic reasoning in diverse and challenging AI tasks, provides
a succinct interface for machine learning programmers to integrate logical
domain knowledge, and yields solutions that are comparable or superior to
state-of-the-art models in terms of accuracy. Furthermore, Scallop's solutions
outperform these models in aspects such as runtime and data efficiency,
interpretability, and generalizability.
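To make these three features concrete, here is a minimal sketch of the workflow using scallopy, the Python binding described in the paper; the relation names and probabilities are illustrative, and exact API details may vary across versions:

```python
# A minimal sketch of a Scallop program driven from Python via scallopy.
# It shows the three features from the abstract: relational facts as the
# symbolic representation, a Datalog-style rule, and probabilistic output
# computed through a provenance semiring.
import scallopy

# "minmaxprob" is one of Scallop's probabilistic provenance semirings.
ctx = scallopy.ScallopContext(provenance="minmaxprob")

# Symbolic inputs are relations; the tags are probabilities that would
# normally come from a neural classifier's softmax output.
ctx.add_relation("digit_1", int)
ctx.add_relation("digit_2", int)
ctx.add_facts("digit_1", [(0.9, (7,)), (0.1, (1,))])
ctx.add_facts("digit_2", [(0.8, (2,)), (0.2, (3,))])

# A declarative rule computing the sum of the two digits.
ctx.add_rule("sum_2(a + b) = digit_1(a) and digit_2(b)")

ctx.run()
# Each output tuple carries a probability derived by the semiring,
# e.g. (0.8, (9,)) for digit_1 = 7 and digit_2 = 2.
print(list(ctx.relation("sum_2")))
```

For end-to-end training, the paper instead uses differentiable provenance structures (e.g. top-k proofs), so that output probabilities are differentiable with respect to the input probabilities produced by the neural network.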
Related papers
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
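A hedged illustration of this modular split, with the LLM's translation step hard-coded (in LINC it is produced few-shot by a language model) and NLTK's resolution prover standing in for an off-the-shelf first-order logic prover:

```python
# Sketch of a LINC-style pipeline: natural language is translated into
# first-order logic, and a symbolic prover decides entailment.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read_expr = Expression.fromstring

# The kind of output an LLM translator might produce for:
# "All men are mortal. Socrates is a man. Therefore, Socrates is mortal."
premises = [
    read_expr("all x.(man(x) -> mortal(x))"),
    read_expr("man(socrates)"),
]
conclusion = read_expr("mortal(socrates)")

# The prover's verdict is deterministic and auditable, unlike a direct
# LLM answer.
print(ResolutionProver().prove(conclusion, premises))  # True
```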
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
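As a generic illustration of this relaxation idea (not LOGICSEG's exact operators): under the product t-norm, logical connectives become arithmetic on truth degrees in [0, 1], so a symbolic rule turns into a differentiable penalty on network predictions:

```python
# Fuzzy relaxation of logical connectives (product t-norm variant).
def t_and(a, b): return a * b            # a AND b
def t_or(a, b): return a + b - a * b     # a OR b
def t_not(a): return 1.0 - a             # NOT a

def implies(a, b):
    # a -> b is rewritten as (NOT a) OR b
    return t_or(t_not(a), b)

# Predicted class probabilities for one pixel/region (illustrative):
p_dog, p_animal = 0.9, 0.6
truth = implies(p_dog, p_animal)  # degree to which "dog -> animal" holds
loss = 1.0 - truth                # logic-induced training signal
print(truth, loss)                # 0.64 0.36
```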
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Meta-Reasoning: Semantics-Symbol Deconstruction for Large Language Models [34.22393697176282]
We propose Meta-Reasoning to broaden symbolic methods' applicability and adaptability in the real world.
This method empowers LLMs to deconstruct reasoning-independent semantic information into generic symbolic representations.
We conduct extensive experiments on more than ten datasets encompassing conventional reasoning tasks like arithmetic, symbolic, and logical reasoning, and the more complex interactive reasoning tasks like theory-of-mind reasoning.
arXiv Detail & Related papers (2023-06-30T17:38:10Z)
- ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering [70.6359636116848]
We propose a new large-scale dataset, ConvFinQA, to study the chain of numerical reasoning in conversational question answering.
Our dataset poses a great challenge for modeling long-range, complex numerical reasoning paths in real-world conversations.
arXiv Detail & Related papers (2022-10-07T23:48:50Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming [3.759936323189418]
We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming.
arXiv Detail & Related papers (2022-02-07T10:16:53Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
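For instance, a universally quantified formula can be compiled into a smooth satisfaction score that doubles as a training objective. The operator choices below (sigmoid predicate, p-mean-error aggregator) are illustrative options from the LTN literature, not the only ones the formalism supports:

```python
import math

# A trainable predicate P(x): any differentiable model with output in [0, 1].
def predicate(w, x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x + w[1])))

# Smooth universal quantifier: the p-mean-error aggregator approaches
# min(truths) as p grows while remaining differentiable.
def forall(truths, p=2):
    n = len(truths)
    return 1.0 - (sum((1.0 - t) ** p for t in truths) / n) ** (1.0 / p)

data = [0.5, 1.0, 2.0, 3.0]   # grounding of the variable x
w = [1.2, -0.3]               # predicate parameters (would be learned)
sat = forall([predicate(w, x) for x in data])
print(sat)  # satisfaction of "forall x. P(x)"; 1 - sat serves as a loss
```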
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
- Learning Generalized Relational Heuristic Networks for Model-Agnostic Planning [29.714818991696088]
This paper develops a new approach for learning generalized heuristics in the absence of symbolic action models.
It uses an abstract state representation to facilitate data efficient, generalizable learning.
arXiv Detail & Related papers (2020-07-10T06:08:28Z)
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework that allows jointly training the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)