Discovering Useful Compact Sets of Sequential Rules in a Long Sequence
- URL: http://arxiv.org/abs/2109.07519v1
- Date: Wed, 15 Sep 2021 18:25:18 GMT
- Title: Discovering Useful Compact Sets of Sequential Rules in a Long Sequence
- Authors: Erwan Bourrand, Luis Galárraga, Esther Galbrun, Elisa Fromont and Alexandre Termier
- Abstract summary: COSSU is an algorithm to mine small and meaningful sets of sequential rules.
We show that COSSU can successfully retrieve relevant sets of closed sequential rules from a long sequence.
- Score: 57.684967309375274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We are interested in understanding the underlying generation process for long
sequences of symbolic events. To do so, we propose COSSU, an algorithm to mine
small and meaningful sets of sequential rules. The rules are selected using an
MDL-inspired criterion that favors compactness and relies on a novel rule-based
encoding scheme for sequences. Our evaluation shows that COSSU can successfully
retrieve relevant sets of closed sequential rules from a long sequence. Such
rules constitute an interpretable model that exhibits competitive accuracy for
the tasks of next-element prediction and classification.
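To make the selection criterion concrete, here is a toy sketch of MDL-style rule selection. This is not the actual COSSU algorithm: the rule format, the per-symbol bit costs, and the greedy search order are all simplifying assumptions. A candidate rule is admitted only if it reduces the total description length L(rules) + L(sequence | rules).

```python
# Toy sketch of MDL-style rule selection (NOT the actual COSSU algorithm).
# Assumptions: rules are (antecedent_tuple, consequent_symbol) pairs,
# a raw symbol costs 8 bits, and a symbol implied by a rule costs 1 bit.

def rule_cost(rule):
    """Bits to encode the rule itself."""
    antecedent, consequent = rule
    return (len(antecedent) + 1) * 8

def encode_cost(sequence, rules):
    """Bits to encode the sequence given the rules: positions whose symbol
    is predicted by a matching rule are cheap, the rest pay full price."""
    covered = set()
    for antecedent, consequent in rules:
        k = len(antecedent)
        for i in range(len(sequence) - k):
            if tuple(sequence[i:i + k]) == antecedent and sequence[i + k] == consequent:
                covered.add(i + k)  # this symbol is implied by the rule
    return len(covered) * 1 + (len(sequence) - len(covered)) * 8

def select_rules(sequence, candidates):
    """Greedy selection: admit a candidate only if the total description
    length L(rules) + L(sequence | rules) decreases."""
    selected = []
    best = encode_cost(sequence, selected)
    for rule in sorted(candidates, key=rule_cost):
        rules = selected + [rule]
        total = sum(map(rule_cost, rules)) + encode_cost(sequence, rules)
        if total < best:
            selected = rules
            best = total
    return selected

seq = list("abcabcabcxyabcab")
print(select_rules(seq, [(("a", "b"), "c"), (("x",), "z")]))
# prints [(('a', 'b'), 'c')]
```

The second candidate never fires on the sequence, so encoding it would only add model bits; the greedy pass therefore rejects it.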
Related papers
- Neuro-Symbolic Rule Lists [31.085257698392354]
NeuRules is an end-to-end trainable model that unifies discretization, rule learning, and rule order into a single framework.
We show that NeuRules consistently outperforms neuro-symbolic methods, effectively learning simple and complex rules, as well as their order, across a wide range of datasets.
arXiv Detail & Related papers (2024-11-10T11:10:36Z) - Symbolic Working Memory Enhances Language Models for Complex Rule Application [87.34281749422756]
Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning.
We propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application.
Our framework iteratively performs symbolic rule grounding and LLM-based rule implementation.
arXiv Detail & Related papers (2024-08-24T19:11:54Z) - ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z) - Reinforcement Logic Rule Learning for Temporal Point Processes [17.535382791003176]
We propose a framework that can incrementally expand the explanatory temporal logic rule set to explain the occurrence of temporal events.
The proposed algorithm alternates between a master problem, where the weights of the current rule set are updated, and a subproblem, where a new rule is searched for and added so as to best increase the likelihood.
We evaluate our methods on both synthetic and real healthcare datasets, obtaining promising results.
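The alternation described above can be sketched generically. This is a hypothetical illustration, not the paper's method: rules here are plain boolean predicates over toy integer events, and the master problem is a logistic fit by gradient ascent rather than a temporal point process likelihood.

```python
import math

def fit_weights(X, y, steps=300, lr=0.5):
    """Master problem: re-fit the weights of the active rules by gradient
    ascent on a logistic log-likelihood."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def log_likelihood(X, y, w):
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def features(events, rules):
    return [[1.0 if r(e) else 0.0 for r in rules] for e in events]

def expand_rule_set(events, y, candidates, tol=1e-3):
    """Subproblem: scan the remaining candidate rules and admit the one
    that raises the fitted likelihood most; stop when no candidate
    improves it by more than tol."""
    active = [lambda e: True]            # intercept "rule"
    remaining = list(candidates)
    X = features(events, active)
    best_ll = log_likelihood(X, y, fit_weights(X, y))
    while remaining:
        scores = []
        for r in remaining:
            Xr = features(events, active + [r])
            scores.append(log_likelihood(Xr, y, fit_weights(Xr, y)))
        k = max(range(len(scores)), key=scores.__getitem__)
        if scores[k] - best_ll <= tol:
            break
        active.append(remaining.pop(k))
        best_ll = scores[k]
    return active[1:]                    # drop the intercept

# Toy data: the event of interest fires on even integers.
is_even = lambda e: e % 2 == 0
gt_four = lambda e: e > 4
events = list(range(10))
labels = [1 if e % 2 == 0 else 0 for e in events]
chosen = expand_rule_set(events, labels, [gt_four, is_even])
print(chosen[0] is is_even)  # the informative rule is picked first
```

The incremental structure mirrors the abstract: the rule set only grows while each addition demonstrably increases the likelihood.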
arXiv Detail & Related papers (2023-08-11T12:05:32Z) - Towards Correlated Sequential Rules [4.743965372344134]
High-utility sequential rule mining (HUSRM) is designed to explore the confidence or probability of predicting the occurrence of consequent sequential patterns.
The existing HUSRM algorithm extracts all eligible rules but neglects the correlation between the generated sequential rules.
We propose a novel algorithm called correlated high-utility sequential rule miner (CoUSR) to integrate the concept of correlation into HUSRM.
arXiv Detail & Related papers (2022-10-27T17:27:23Z) - Towards Target Sequential Rules [52.4562332499155]
We propose an efficient algorithm, called targeted sequential rule mining (TaSRM).
Experiments show that TaSRM and its variants achieve better performance than the existing baseline algorithm.
arXiv Detail & Related papers (2022-06-09T18:59:54Z) - Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching [58.39970828272366]
Subgraph matching algorithms enumerate all embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, for the first time, we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques to generate high-quality matching orders for subgraph matching algorithms.
arXiv Detail & Related papers (2022-01-25T00:10:03Z) - US-Rule: Discovering Utility-driven Sequential Rules [52.68017415747925]
We propose a faster algorithm, called US-Rule, to efficiently mine high-utility sequential rules.
Four tighter upper bounds (LEEU, REEU, LERSU, RERSU) and their corresponding pruning strategies are proposed.
US-Rule can achieve better performance in terms of execution time, memory consumption and scalability.
arXiv Detail & Related papers (2021-11-29T23:38:28Z) - Interpretable and Fair Boolean Rule Sets via Column Generation [18.08486863429421]
An integer program is formulated to optimally trade classification accuracy for rule simplicity.
We consider the fairness setting and extend the formulation to include explicit constraints on two different measures of classification parity.
Compared to other fair and interpretable classifiers, our method is able to find rule sets that meet stricter notions of fairness with a modest trade-off in accuracy.
arXiv Detail & Related papers (2021-11-16T13:40:28Z) - Better Short than Greedy: Interpretable Models through Optimal Rule Boosting [10.938624307941197]
Rule ensembles are designed to provide a useful trade-off between predictive accuracy and model interpretability.
We present a novel approach aiming to fit rule ensembles of maximal predictive power for a given ensemble size.
arXiv Detail & Related papers (2021-01-21T01:03:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.