Neuro-Symbolic Temporal Point Processes
- URL: http://arxiv.org/abs/2406.03914v1
- Date: Thu, 6 Jun 2024 09:52:56 GMT
- Title: Neuro-Symbolic Temporal Point Processes
- Authors: Yang Yang, Chao Yang, Boyang Li, Yinghao Fu, Shuang Li
- Abstract summary: We introduce a neural-symbolic rule induction framework within the temporal point process model.
The negative log-likelihood is the loss that guides the learning, where the explanatory logic rules and their weights are learned end-to-end.
Our approach showcases notable efficiency and accuracy across synthetic and real datasets.
- Score: 13.72758658973969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our goal is to $\textit{efficiently}$ discover a compact set of temporal logic rules to explain irregular events of interest. We introduce a neural-symbolic rule induction framework within the temporal point process model. The negative log-likelihood is the loss that guides the learning, where the explanatory logic rules and their weights are learned end-to-end in a $\textit{differentiable}$ way. Specifically, predicates and logic rules are represented as $\textit{vector embeddings}$, where the predicate embeddings are fixed and the rule embeddings are trained via gradient descent to obtain the most appropriate compositional representations of the predicate embeddings. To make the rule learning process more efficient and flexible, we adopt a $\textit{sequential covering algorithm}$, which progressively adds rules to the model and removes the event sequences that have been explained until all event sequences have been covered. All the found rules will be fed back to the models for a final rule embedding and weight refinement. Our approach showcases notable efficiency and accuracy across synthetic and real datasets, surpassing state-of-the-art baselines by a wide margin in terms of efficiency.
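The abstract's core loop lends itself to a compact illustration. Below is a minimal PyTorch sketch with loud assumptions: rule-firing features are precomputed on a uniform time grid (the paper instead learns rule embeddings composed from fixed predicate embeddings), and the softplus link, the Riemann compensator, and the coverage cutoff are illustrative choices rather than the authors' implementation.

```python
import torch

def tpp_nll(weights, feats, events, horizon):
    """Per-sequence negative log-likelihood of a rule-weighted point process.

    feats:  (S, T, R) rule-feature values on a uniform grid of T cells
    events: (S, T)    1.0 in cells where an event of interest occurred
    """
    lam = torch.nn.functional.softplus(feats @ weights)        # intensity (S, T)
    log_lik = (events * torch.log(lam + 1e-8)).sum(dim=1)      # sum of log lambda(t_i)
    compensator = lam.sum(dim=1) * (horizon / feats.shape[1])  # Riemann approx of the integral
    return compensator - log_lik

def fit_weights(feats, events, horizon, steps=200, lr=0.05):
    """End-to-end differentiable learning of rule weights by gradient descent."""
    w = torch.zeros(feats.shape[-1], requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        tpp_nll(w, feats, events, horizon).mean().backward()
        opt.step()
    return w.detach()

def sequential_covering(feats, events, horizon, nll_cutoff=2.0):
    """Greedily add the rule that most lowers the NLL, then drop the
    event sequences the current rule set already explains."""
    S, _, R = feats.shape
    chosen, active = [], torch.ones(S, dtype=torch.bool)
    while active.any() and len(chosen) < R:
        best = None
        for r in set(range(R)) - set(chosen):
            cols = chosen + [r]
            w = fit_weights(feats[active][:, :, cols], events[active], horizon)
            score = tpp_nll(w, feats[active][:, :, cols],
                            events[active], horizon).mean().item()
            if best is None or score < best[0]:
                best = (score, r)
        chosen.append(best[1])
        w = fit_weights(feats[:, :, chosen], events, horizon)
        active &= tpp_nll(w, feats[:, :, chosen], events, horizon) > nll_cutoff
    # feed all found rules back for a final joint weight refinement
    return chosen, fit_weights(feats[:, :, chosen], events, horizon)
```

The final joint refit of all chosen rules mirrors the abstract's closing refinement step.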
Related papers
- Symbolic Working Memory Enhances Language Models for Complex Rule Application [87.34281749422756]
Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning.
We propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application.
Our framework iteratively performs symbolic rule grounding and LLM-based rule implementation.
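A hypothetical sketch of that iteration, under assumed data structures: facts live in a symbolic working memory as (predicate, entity) pairs, each pass grounds a rule's premises against the memory, and a stubbed `llm_apply` callable stands in for the LLM-based rule implementation step. None of these names come from the paper.

```python
def apply_rules(rules, facts, llm_apply, max_steps=10):
    memory = set(facts)                        # external symbolic working memory
    for _ in range(max_steps):
        fired = False
        for premises, conclusion in rules:
            for x in {e for (_, e) in memory}:
                grounded = [(p, x) for p in premises]
                new_fact = (conclusion, x)
                # symbolic rule grounding: every premise holds for entity x
                if all(g in memory for g in grounded) and new_fact not in memory:
                    memory.add(llm_apply(grounded, new_fact))  # LLM implements the rule
                    fired = True
        if not fired:                          # fixpoint: no rule produced a new fact
            break
    return memory
```

For example, `apply_rules([(["bird"], "can_fly")], {("bird", "tweety")}, lambda g, c: c)` adds `("can_fly", "tweety")` to the memory.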
arXiv Detail & Related papers (2024-08-24T19:11:54Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
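As a rough illustration of the ranking step only (the LLM-based generator is stubbed out, and the body/head rule format is an assumption), a candidate rule head(X, Y) <- body(X, Y) can be scored by its confidence over the KG's existing triples:

```python
def rule_confidence(body_rel, head_rel, triples):
    body = {(h, t) for (h, r, t) in triples if r == body_rel}
    head = {(h, t) for (h, r, t) in triples if r == head_rel}
    return len(body & head) / len(body) if body else 0.0  # support / body matches

def rank_rules(candidates, triples):
    """candidates: [(body_rel, head_rel), ...], e.g. proposed by an LLM."""
    return sorted(candidates, key=lambda c: rule_confidence(*c, triples), reverse=True)
```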
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- Reinforcement Logic Rule Learning for Temporal Point Processes [17.535382791003176]
We propose a framework that can incrementally expand the explanatory temporal logic rule set to explain the occurrence of temporal events.
The proposed algorithm alternates between a master problem, where the current rule set weights are updated, and a subproblem, where a new rule is searched for and added so as to best increase the likelihood.
We evaluate our methods on both synthetic and real healthcare datasets, obtaining promising results.
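A schematic of that alternation, with `fit_weights` and `log_likelihood` as assumed callables rather than anything from the paper:

```python
def learn_rule_set(candidates, fit_weights, log_likelihood, tol=1e-3):
    rules = []
    while True:
        weights = fit_weights(rules)                    # master problem: refit weights
        base = log_likelihood(rules, weights)
        gains = {r: log_likelihood(rules + [r], fit_weights(rules + [r])) - base
                 for r in candidates if r not in rules}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] < tol:           # no candidate helps enough
            return rules, weights
        rules.append(best)                              # subproblem: include one new rule
```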
arXiv Detail & Related papers (2023-08-11T12:05:32Z)
- Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning [71.05093203007357]
We propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph.
A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.
Our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods.
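A toy rendering of the representation, with every detail assumed: each entity gets one probabilistic feature per relation, here a soft "exists a neighbor via r" computed as 1 - prod(1 - p) from per-edge probabilities. The real LERP learns these logical functions over the neighboring subgraph; this only shows the vector's shape.

```python
import math

def lerp_vector(entity, edges, relations):
    """edges: {(head, rel, tail): probability}; one soft-exists feature per relation."""
    feats = []
    for r in relations:
        probs = [p for (h, rel, _), p in edges.items() if h == entity and rel == r]
        feats.append(1.0 - math.prod(1.0 - p for p in probs))  # soft existential
    return feats
```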
arXiv Detail & Related papers (2023-05-22T05:59:22Z)
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (which stands for Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
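A TransE-style stand-in, not the paper's actual scoring function, for the idea of placing entities, relations, and logic rules in one embedding space: a rule body1 ∧ body2 -> head is scored by how closely the composed body relations, shifted by the rule's own embedding, match the head relation.

```python
import torch

dim, n_ent, n_rel, n_rule = 64, 100, 10, 5
ent = torch.nn.Embedding(n_ent, dim)     # entities ...
rel = torch.nn.Embedding(n_rel, dim)     # ... relations ...
rule = torch.nn.Embedding(n_rule, dim)   # ... and logic rules share one space

def triple_score(h, r, t):
    return -(ent.weight[h] + rel.weight[r] - ent.weight[t]).norm(p=2)

def rule_score(body_ids, head_id, rule_id):
    body = rel.weight[body_ids].sum(dim=0)          # compose the rule body
    return -(body + rule.weight[rule_id] - rel.weight[head_id]).norm(p=2)
```

Joint training would optimize both scores so that triples and rules constrain the same embeddings.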
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- Differentiable Rule Induction with Learned Relational Features [9.193818627108572]
The paper introduces the Rule Network (RRN), a neural architecture that learns predicates representing linear relationships among attributes, together with the rules that use them.
On benchmark tasks we show that these predicates are simple enough to retain interpretability, yet they improve prediction accuracy and yield rule sets that are more concise than those of state-of-the-art rule induction algorithms.
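A minimal sketch of that two-layer idea, with the architecture details assumed: a first layer learns predicates as soft linear thresholds over the attributes, and a second layer conjoins them into rules with a differentiable AND (a masked product of predicate activations).

```python
import torch

class TinyRuleNet(torch.nn.Module):
    def __init__(self, n_attrs, n_preds, n_rules):
        super().__init__()
        self.predicates = torch.nn.Linear(n_attrs, n_preds)   # learned linear predicates
        self.rule_mask = torch.nn.Parameter(torch.rand(n_rules, n_preds))

    def forward(self, x):
        p = torch.sigmoid(self.predicates(x))          # (B, n_preds) soft truth values
        m = torch.sigmoid(self.rule_mask)              # soft predicate selection per rule
        rules = torch.exp(torch.log(p + 1e-8) @ m.t()) # soft AND: prod of p_i^{m_i}
        return rules.max(dim=1).values                 # predict with the best-firing rule
```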
arXiv Detail & Related papers (2022-01-17T16:46:50Z)
- LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking [62.634516517844496]
We propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules with the performance of neural learning.
Even though constrained to using rules, LNN-EL performs competitively against state-of-the-art black-box neural approaches.
arXiv Detail & Related papers (2021-06-17T20:22:45Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- Increasing the Inference and Learning Speed of Tsetlin Machines with Clause Indexing [9.440900386313215]
The Tsetlin Machine (TM) is a machine learning algorithm founded on the classical Tsetlin Automaton (TA) and game theory.
We report up to 15 times faster classification and three times faster learning on MNIST and Fashion-MNIST image classification, and IMDb sentiment analysis.
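The title's clause indexing admits a small illustration (data layout assumed; this mirrors only the indexing idea, not the TM implementation): map each literal to the clauses containing it, so a mismatched input bit falsifies all affected clauses in one lookup instead of a full scan.

```python
from collections import defaultdict

def build_index(clauses):
    """clauses: list of sets of literals, e.g. {('x3', True), ('x7', False)}."""
    index = defaultdict(set)
    for cid, clause in enumerate(clauses):
        for literal in clause:
            index[literal].add(cid)
    return index

def firing_clauses(clauses, index, x):
    alive = set(range(len(clauses)))
    for var, value in x.items():
        alive -= index[(var, not value)]   # clauses demanding the opposite value die
    return alive                           # clauses whose literals are all satisfied
```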
arXiv Detail & Related papers (2020-04-07T08:16:07Z)
- Towards Learning Instantiated Logical Rules from Knowledge Graphs [20.251630903853016]
We present GPFL, a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs.
GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates, which are acyclic abstract rules.
We reveal the presence of overfitting rules, their impact on the predictive performance, and the effectiveness of a simple validation method filtering out overfitting rules.
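A toy version of that two-stage mechanism, with the rule formats assumed (GPFL's actual template language is richer): stage one generalizes sampled paths into relation-only templates, stage two grounds a template's tail with a constant from a supporting path to produce an instantiated rule.

```python
from collections import Counter

def generalize(path):
    """path: [(h, r1, t1), (t1, r2, t2)] -> abstract template (r1, r2)."""
    return tuple(r for (_, r, _) in path)

def mine_templates(paths, min_support=2):
    counts = Counter(generalize(p) for p in paths)
    return [t for t, c in counts.items() if c >= min_support]

def instantiate(template, path):
    """Specialize a template by grounding its tail entity from a path."""
    return (template, ('tail', path[-1][2]))
```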
arXiv Detail & Related papers (2020-03-13T00:32:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.