SMT + ILP
- URL: http://arxiv.org/abs/2001.05208v1
- Date: Wed, 15 Jan 2020 10:09:21 GMT
- Title: SMT + ILP
- Authors: Vaishak Belle
- Abstract summary: We motivate a reconsideration of inductive declarative programming by leveraging satisfiability modulo theory technology.
- Score: 12.47276164048813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inductive logic programming (ILP) has been a deeply influential paradigm in
AI, enjoying decades of research on its theory and implementations. As a
natural descendent of the fields of logic programming and machine learning, it
admits the incorporation of background knowledge, which can be very useful in
domains where prior knowledge from experts is available and can lead to a more
data-efficient learning regime. Be that as it may, the limitation to Horn
clauses composed over Boolean variables is a very serious one. Many phenomena
occurring in the real-world are best characterized using continuous entities,
and more generally, mixtures of discrete and continuous entities. In this
position paper, we motivate a reconsideration of inductive declarative
programming by leveraging satisfiability modulo theory technology.
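To make the Boolean limitation concrete, here is a minimal sketch (ours, not from the paper) of the classical propositional Horn-clause setting that standard ILP systems learn in: forward chaining derives the least model from Boolean facts, but an atom involving a continuous quantity has no place in this representation. The rule encoding `(body atoms, head atom)` and the example atoms are illustrative assumptions.

```python
# Minimal sketch of Boolean Horn-clause reasoning (illustrative, not the
# paper's method). A rule is assumed to be (frozenset of body atoms, head).

def forward_chain(rules, facts):
    """Compute the least model: repeatedly fire rules whose bodies hold."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Hypothetical example rules over purely Boolean atoms.
rules = [
    (frozenset({"penguin"}), "bird"),
    (frozenset({"bird"}), "has_wings"),
]
print(forward_chain(rules, {"penguin"}))
# An atom such as "wingspan > 0.3" cannot be expressed here: deciding it
# requires arithmetic theory reasoning, which is what SMT solvers provide.
```

The point of the sketch is the gap it exposes: extending the hypothesis language to constraints over reals (or mixed discrete/continuous variables) moves the satisfiability check from propositional logic into a background theory, which is precisely where SMT technology enters.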
Related papers
- Differentiable Inductive Logic Programming for Fraud Detection [3.0846824529023382]
This work investigates the applicability of Differentiable Inductive Logic Programming (DILP) as an explainable AI approach to Fraud Detection.
While it offers no significant processing advantage over more traditional methods such as Decision Trees, or more recent ones like Deep Symbolic Classification, it still gives comparable results.
We showcase its limitations and points for improvement, as well as potential use cases where it can be far more useful than traditional methods.
arXiv Detail & Related papers (2024-10-29T10:43:06Z) - TransBox: EL++-closed Ontology Embedding [14.850996103983187]
We develop an effective EL++-closed embedding method that can handle many-to-one, one-to-many and many-to-many relations.
Our experiments demonstrate that TransBox achieves state-of-the-art performance across various real-world datasets for predicting complex axioms.
arXiv Detail & Related papers (2024-10-18T16:17:10Z) - Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
We augment the arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT achieves substantial improvements (an average gain of +5.5% MRR) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z) - Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess the performance of the models.
arXiv Detail & Related papers (2023-10-02T01:00:50Z) - When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas, and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Towards Invertible Semantic-Preserving Embeddings of Logical Formulae [1.0152838128195467]
Learning and optimising logic requirements and rules has always been an important problem in Artificial Intelligence.
Current methods are able to construct effective semantic-preserving embeddings via kernel methods, but the map they define is not invertible.
In this work we address this problem, learning how to invert such an embedding leveraging deep architectures based on the Graph Variational Autoencoder framework.
arXiv Detail & Related papers (2023-05-03T10:49:01Z) - Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Similar to Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging.
Similar to KGs, a promising approach is to learn embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying DL.
We propose a novel ontology embedding method named Box2EL for the DL EL++, which represents both concepts and roles as boxes.
arXiv Detail & Related papers (2023-01-26T14:13:37Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z) - Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains [12.47276164048813]
Tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence.
We report on results that challenge the view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.
arXiv Detail & Related papers (2020-06-15T15:29:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.