Differentiable Inductive Logic Programming for Fraud Detection
- URL: http://arxiv.org/abs/2410.21928v1
- Date: Tue, 29 Oct 2024 10:43:06 GMT
- Title: Differentiable Inductive Logic Programming for Fraud Detection
- Authors: Boris Wolfson, Erman Acar
- Abstract summary: This work investigates the applicability of Differentiable Inductive Logic Programming (DILP) as an explainable AI approach to Fraud Detection.
While it offers no significant advantage over more traditional methods such as Decision Trees, or more recent ones like Deep Symbolic Classification, it achieves comparable results.
We discuss its limitations and avenues for improvement, as well as potential use cases where it can be considerably more useful than traditional methods.
- Score: 3.0846824529023382
- Abstract: Current trends in Machine Learning prefer explainability even when it comes at the cost of performance. Explainable AI methods are therefore particularly important in the field of Fraud Detection. This work investigates the applicability of Differentiable Inductive Logic Programming (DILP) as an explainable AI approach to Fraud Detection. Although the scalability of DILP is a well-known issue, we show that with some data curation, such as cleaning and converting tabular and numerical data into the format of background-fact statements that DILP expects, it becomes much more applicable. While it offers no significant advantage over more traditional methods such as Decision Trees, or more recent ones like Deep Symbolic Classification, it achieves comparable results. We discuss its limitations and avenues for improvement, as well as potential use cases, such as recursive rule learning, where it can be considerably more useful than traditional methods.
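As a rough illustration of the data-curation step described above (not code from the paper), the sketch below shows one way tabular transaction records might be discretized into the kind of symbolic background facts a DILP system consumes. The predicate names, thresholds, and fields are hypothetical.

```python
# Hypothetical sketch of the data-curation step the abstract describes:
# turning tabular/numerical transaction records into ground background
# facts a DILP system can consume. Predicate names, thresholds, and
# fields are illustrative, not taken from the paper.

def to_background_facts(transactions):
    """Yield ground atoms (as strings) for each transaction row."""
    for i, row in enumerate(transactions):
        tx = f"tx{i}"
        # Discretize the numeric amount into a few symbolic predicates,
        # since DILP expects background facts rather than raw floats.
        if row["amount"] > 10_000:
            yield f"high_amount({tx})"
        elif row["amount"] > 1_000:
            yield f"medium_amount({tx})"
        else:
            yield f"low_amount({tx})"
        # Categorical and boolean fields map to facts directly.
        yield f"country({tx},{row['country']})"
        if row["is_night"]:
            yield f"night_transaction({tx})"

transactions = [
    {"amount": 15_000, "country": "nl", "is_night": True},
    {"amount": 250, "country": "de", "is_night": False},
]
for fact in to_background_facts(transactions):
    print(fact)
```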
Related papers
- Learning Rules Explaining Interactive Theorem Proving Tactic Prediction [5.229806149125529]
We represent the problem as an Inductive Logic Programming (ILP) task.
Using the ILP representation we enriched the feature space by encoding additional, computationally expensive properties.
We use this enriched feature space to learn rules explaining when a tactic is applicable to a given proof state.
arXiv Detail & Related papers (2024-11-02T09:18:33Z)
- Differentiable Logic Programming for Distant Supervision [4.820391833117535]
We introduce a new method for integrating neural networks with logic programming in Neural-Symbolic AI (NeSy).
Unlike prior methods, our approach does not depend on symbolic solvers for reasoning about missing labels.
This method facilitates more efficient learning under distant supervision.
arXiv Detail & Related papers (2024-08-22T17:55:52Z)
- Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning [53.241569810013836]
We propose a new framework based on large language models (LLMs) and decision tree reasoning (OCTree).
Our key idea is to leverage LLMs' reasoning capabilities to find good feature generation rules without manually specifying the search space.
Our empirical results demonstrate that this simple framework consistently enhances the performance of various prediction models.
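For illustration only, the following sketch mimics the propose-and-validate loop such a framework implies: a stand-in `propose_rule` function plays the role of the LLM, and a decision tree scores the generated feature. The rule, synthetic data, and loop structure are assumptions, not the paper's implementation.

```python
# Illustration only: a propose-and-validate feature-generation loop in the
# spirit of the abstract. propose_rule stands in for the LLM call; the
# rule, synthetic data, and scoring are assumptions, not the paper's code.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def propose_rule(feedback=None):
    # Placeholder for the LLM: return a feature-generation rule as an
    # executable function (in the real framework, conditioned on feedback).
    return lambda X: (X[:, 0] + X[:, 1]).reshape(-1, 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # diagonal boundary: hard for shallow trees

baseline = cross_val_score(DecisionTreeClassifier(max_depth=2), X, y, cv=5).mean()
X_aug = np.hstack([X, propose_rule()(X)])
augmented = cross_val_score(DecisionTreeClassifier(max_depth=2), X_aug, y, cv=5).mean()
print(f"baseline={baseline:.3f}  with generated feature={augmented:.3f}")
```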
arXiv Detail & Related papers (2024-06-12T08:31:34Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Rethinking Negative Pairs in Code Search [56.23857828689406]
We propose a simple yet effective Soft-InfoNCE loss that inserts weight terms into InfoNCE.
We analyze the effects of Soft-InfoNCE on controlling the distribution of learnt code representations and on deducing a more precise mutual information estimation.
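To make the idea concrete, here is a minimal NumPy sketch of an InfoNCE loss with per-negative weight terms. How the weights are derived (e.g., to down-weight suspected false negatives) is the paper's contribution and is only stubbed with illustrative values here.

```python
# Minimal NumPy sketch of an InfoNCE loss with per-negative weight terms,
# in the spirit of Soft-InfoNCE. The weighting scheme itself is the
# paper's contribution; illustrative values stand in for it here.
import numpy as np

def soft_info_nce(pos_sim, neg_sims, weights, tau=0.07):
    """pos_sim: similarity of the positive pair (scalar).
    neg_sims, weights: similarities and weights of the negative pairs."""
    pos = np.exp(pos_sim / tau)
    neg = np.sum(weights * np.exp(neg_sims / tau))
    return -np.log(pos / (pos + neg))

neg_sims = np.array([0.5, 0.3, 0.7])
uniform = np.ones_like(neg_sims)   # recovers plain InfoNCE
soft = np.array([1.0, 1.0, 0.2])   # down-weight a suspected false negative
print(soft_info_nce(0.8, neg_sims, uniform))
print(soft_info_nce(0.8, neg_sims, soft))
```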
arXiv Detail & Related papers (2023-10-12T06:32:42Z)
- Interpretable Anomaly Detection via Discrete Optimization [1.7150329136228712]
We propose a framework for learning inherently interpretable anomaly detectors from sequential data.
We show that this problem is computationally hard and develop two learning algorithms based on constraint optimization.
Using a prototype implementation, we demonstrate that our approach shows promising results in terms of accuracy and F1 score.
arXiv Detail & Related papers (2023-03-24T16:19:15Z)
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
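The following is a generic gradient-based sketch of the refinement idea: find a corrected prediction close to the original that better satisfies a rule a -> b, encoded as a soft penalty. The paper's refinement functions differ; this is an assumed stand-in.

```python
# Generic gradient sketch of refining a prediction toward background
# knowledge (an assumed stand-in, not the paper's refinement functions):
# find q close to the original prediction p while reducing violation of
# the rule a -> b, encoded as the soft penalty lam * max(0, q_a - q_b)^2.
import numpy as np

def refine(p, lr=0.05, steps=300, lam=5.0):
    q = p.copy()
    for _ in range(steps):
        # Gradient of ||q - p||^2 + lam * max(0, q_a - q_b)^2
        grad = 2.0 * (q - p)
        v = max(0.0, q[0] - q[1])  # current rule violation
        grad[0] += 2.0 * lam * v
        grad[1] -= 2.0 * lam * v
        q = np.clip(q - lr * grad, 0.0, 1.0)
    return q

p = np.array([0.9, 0.2])  # network believes a but not b: violates a -> b
print(refine(p))          # q_a moves down, q_b up, shrinking the violation
```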
arXiv Detail & Related papers (2022-06-10T10:17:59Z)
- Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning [80.20302993614594]
We provide a statistical analysis to overcome drawbacks of Laplacian regularization.
We unveil a large body of spectral filtering methods that exhibit desirable behaviors.
We provide realistic computational guidelines in order to make our method usable with large amounts of data.
arXiv Detail & Related papers (2020-09-09T14:28:54Z)
- Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inferences on such Knowledge Base with linear programming.
It identifies decisive features that are responsible for a classification as explanations and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method.
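As a hedged illustration of inference over a probabilistic knowledge base via linear programming (in the style of Nilsson's probabilistic logic, not necessarily the paper's exact construction), the sketch below bounds P(b) given constraints on P(a) and P(a -> b).

```python
# Hedged sketch of probabilistic inference via linear programming, in the
# style of Nilsson's probabilistic logic rather than the paper's exact
# construction: bound P(b) over the four truth-value "worlds" of (a, b),
# given P(a) = 0.7 and P(a -> b) >= 0.9 as knowledge-base constraints.
from scipy.optimize import linprog

# Worlds in order: (a, b) = (0,0), (0,1), (1,0), (1,1)
sat_a      = [0, 0, 1, 1]
sat_b      = [0, 1, 0, 1]
sat_a_to_b = [1, 1, 0, 1]          # a -> b fails only in world (1, 0)

A_eq = [[1, 1, 1, 1], sat_a]       # probabilities sum to 1; P(a) is fixed
b_eq = [1.0, 0.7]
A_ub = [[-x for x in sat_a_to_b]]  # encodes P(a -> b) >= 0.9
b_ub = [-0.9]
bounds = [(0, 1)] * 4

lo = linprog(c=sat_b, A_eq=A_eq, b_eq=b_eq, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
hi = linprog(c=[-x for x in sat_b], A_eq=A_eq, b_eq=b_eq,
             A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(f"P(b) lies in [{lo.fun:.2f}, {-hi.fun:.2f}]")  # -> [0.60, 1.00]
```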
arXiv Detail & Related papers (2020-05-05T11:39:23Z)
- SMT + ILP [12.47276164048813]
In this position paper, we motivate a reconsideration of inductive declarative programming by leveraging satisfiability modulo theory technology.
arXiv Detail & Related papers (2020-01-15T10:09:21Z)