Machine Learning with Probabilistic Law Discovery: A Concise Introduction
- URL: http://arxiv.org/abs/2212.11901v1
- Date: Thu, 22 Dec 2022 17:40:13 GMT
- Title: Machine Learning with Probabilistic Law Discovery: A Concise Introduction
- Authors: Alexander Demin and Denis Ponomaryov
- Abstract summary: Probabilistic Law Discovery (PLD) is a logic-based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic Law Discovery (PLD) is a logic-based Machine Learning method,
which implements a variant of probabilistic rule learning. In several aspects,
PLD is close to Decision Tree/Random Forest methods, but it differs
significantly in how relevant rules are defined. The learning procedure of PLD
solves the optimization problem related to the search for rules (called
probabilistic laws), which have a minimal length and relatively high
probability. At inference, ensembles of these rules are used for prediction.
Probabilistic laws are human-readable and PLD based models are transparent and
inherently interpretable. Applications of PLD include
classification/clusterization/regression tasks, as well as time series
analysis/anomaly detection and adaptive (robotic) control. In this paper, we
outline the main principles of PLD, highlight its benefits and limitations and
provide some application guidelines.
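
The abstract describes the PLD learning loop only at a high level. The toy Python sketch below illustrates that idea under stated assumptions: it enumerates short conjunctive premises, keeps those whose conditional class probability is high and which cannot be shortened without losing probability (a stand-in for "probabilistic laws" of minimal length), and predicts with a probability-weighted vote over the firing rules. The function names (find_probabilistic_laws, predict), the thresholds, and the voting scheme are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal, illustrative sketch of probabilistic rule learning in the spirit of PLD.
# Assumptions (not from the paper): exhaustive search up to max_len, a fixed
# probability/support threshold, and probability-weighted voting at inference.
from itertools import combinations
from collections import defaultdict

def rule_probability(data, premise, target_class):
    """Return P(class == target_class | premise holds) and the premise's support."""
    covered = [y for x, y in data if all(x.get(a) == v for a, v in premise)]
    if not covered:
        return 0.0, 0
    return sum(y == target_class for y in covered) / len(covered), len(covered)

def find_probabilistic_laws(data, classes, max_len=2, min_prob=0.8, min_support=2):
    """Exhaustively search conjunctive rules of length <= max_len (toy scale only)."""
    conditions = sorted({(a, v) for x, _ in data for a, v in x.items()})
    laws = []
    for target in classes:
        for k in range(1, max_len + 1):
            for premise in combinations(conditions, k):
                prob, support = rule_probability(data, premise, target)
                if prob < min_prob or support < min_support:
                    continue
                # Keep only "minimal" premises: drop the rule if some proper
                # sub-premise already achieves at least the same probability.
                if any(rule_probability(data, sub, target)[0] >= prob
                       for r in range(1, k)
                       for sub in combinations(premise, r)):
                    continue
                laws.append((premise, target, prob))
    return laws

def predict(laws, x):
    """Ensemble prediction: probability-weighted vote of all rules that fire on x."""
    votes = defaultdict(float)
    for premise, target, prob in laws:
        if all(x.get(a) == v for a, v in premise):
            votes[target] += prob
    return max(votes, key=votes.get) if votes else None

# Toy dataset: attribute dictionaries paired with class labels.
data = [({"colour": "red", "shape": "round"}, "apple"),
        ({"colour": "red", "shape": "round"}, "apple"),
        ({"colour": "yellow", "shape": "long"}, "banana"),
        ({"colour": "yellow", "shape": "long"}, "banana"),
        ({"colour": "red", "shape": "long"}, "banana")]

laws = find_probabilistic_laws(data, classes={"apple", "banana"})
print(predict(laws, {"colour": "red", "shape": "round"}))  # expected: "apple"
```

In the actual method the search for probabilistic laws is posed as an optimization problem over rule length and probability rather than the brute-force enumeration with fixed thresholds used in this sketch.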
Related papers
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- Probabilistic Truly Unordered Rule Sets [4.169915659794567]
We propose TURS, for Truly Unordered Rule Sets.
We exploit the probabilistic properties of our rule sets, with the intuition of only allowing rules to overlap if they have similar probabilistic outputs.
We benchmark against a wide range of rule-based methods and demonstrate that our method learns rule sets that have lower model complexity and highly competitive predictive performance.
arXiv Detail & Related papers (2024-01-18T12:03:19Z)
- Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement.
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
arXiv Detail & Related papers (2023-10-12T17:51:10Z)
- Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning [71.05093203007357]
We propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph.
A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.
Our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods.
arXiv Detail & Related papers (2023-05-22T05:59:22Z)
- Truly Unordered Probabilistic Rule Sets for Multi-class Classification [0.0]
We propose TURS, for Truly Unordered Rule Sets.
We first formalise the problem of learning truly unordered rule sets.
We then develop a two-phase algorithm that learns rule sets by carefully growing rules.
arXiv Detail & Related papers (2022-06-17T14:34:35Z)
- Efficient Learning of Interpretable Classification Rules [34.27987659227838]
This paper contributes an interpretable learning framework, IMLI, based on maximum satisfiability (MaxSAT) for classification rules expressible in propositional logic.
In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability.
arXiv Detail & Related papers (2022-05-14T00:36:38Z)
- RuleBert: Teaching Soft Rules to Pre-trained Language Models [21.69870624809201]
We introduce a classification task where, given facts and soft rules, the PLM should return a prediction with a probability for a given hypothesis.
We propose a revised loss function that enables the PLM to learn how to predict precise probabilities for the task.
Our evaluation results show that the resulting fine-tuned models achieve very high performance, even on logical rules that were unseen at training.
arXiv Detail & Related papers (2021-09-24T16:19:25Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.