ESC-Rules: Explainable, Semantically Constrained Rule Sets
- URL: http://arxiv.org/abs/2208.12523v1
- Date: Fri, 26 Aug 2022 09:29:30 GMT
- Title: ESC-Rules: Explainable, Semantically Constrained Rule Sets
- Authors: Martin Glauer, Robert West, Susan Michie, Janna Hastings
- Abstract summary: We describe a novel approach to explainable prediction of a continuous variable based on learning fuzzy weighted rules.
Our model trains a set of weighted rules to maximise prediction accuracy and minimise an ontology-based 'semantic loss' function.
This system fuses quantitative sub-symbolic learning with symbolic learning and constraints based on domain knowledge.
- Score: 11.160515561004619
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a novel approach to explainable prediction of a continuous
variable based on learning fuzzy weighted rules. Our model trains a set of
weighted rules to maximise prediction accuracy and minimise an ontology-based
'semantic loss' function including user-specified constraints on the rules that
should be learned in order to maximise the explainability of the resulting rule
set from a user perspective. This system fuses quantitative sub-symbolic
learning with symbolic learning and constraints based on domain knowledge. We
illustrate our system on a case study in predicting the outcomes of behavioural
interventions for smoking cessation, and show that it outperforms other
interpretable approaches, achieving performance close to that of a deep
learning model, while offering transparent explainability that is an essential
requirement for decision-makers in the health domain.
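The abstract describes training weighted fuzzy rules against a combined objective: prediction loss plus an ontology-based semantic loss. The following is a minimal, hypothetical sketch of that idea; the function names, the weighted-average aggregation, and the violation-mask form of the semantic penalty are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def predict(memberships, weights):
    """Weighted-average prediction from fuzzy rule memberships.

    memberships: (n_samples, n_rules) fuzzy truth values in [0, 1].
    weights:     (n_rules,) learned rule weights.
    """
    activation = memberships * weights  # per-rule contribution
    return activation.sum(axis=1) / (memberships.sum(axis=1) + 1e-9)

def total_loss(y_true, y_pred, weights, violation_mask, lam=0.1):
    """MSE prediction loss plus a semantic penalty on the weights of
    rules flagged (e.g. by an ontology check) as violating a
    user-specified constraint."""
    mse = np.mean((y_true - y_pred) ** 2)
    semantic = np.sum(np.abs(weights) * violation_mask)
    return mse + lam * semantic
```

Minimising `total_loss` drives the weights of constraint-violating rules toward zero, which is one way a semantic loss can steer the learned rule set toward explainability.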
Related papers
- Rule By Example: Harnessing Logical Rules for Explainable Hate Speech
Detection [13.772240348963303]
Rule By Example (RBE) is a novel contrastive learning approach for learning from logical rules for the task of textual content moderation.
RBE is capable of providing rule-grounded predictions, allowing for more explainable and customizable predictions compared to typical deep learning-based approaches.
arXiv Detail & Related papers (2023-07-24T16:55:37Z) - On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
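The two strategies contrasted above can be sketched in a few lines; this is a hypothetical illustration of the general pattern (penalising violations during training versus masking disallowed labels at prediction time), not the paper's implementation, and all names here are invented.

```python
import numpy as np

def regularized_loss(task_loss, violation, rho=1.0):
    """(a) Regularization with constraints: add a penalty proportional
    to the degree of constraint violation to the training loss."""
    return task_loss + rho * violation

def constrained_inference(scores, allowed):
    """(b) Constrained inference: at prediction time, select the
    highest-scoring label among those the constraint permits."""
    masked = np.where(allowed, scores, -np.inf)
    return int(np.argmax(masked))
```

In (b), even a model whose raw top score violates the constraint yields a valid prediction, which is the sense in which correcting a violation can "turn the violation into an advantage".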
arXiv Detail & Related papers (2023-07-08T03:39:22Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Machine Learning with Probabilistic Law Discovery: A Concise
Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z) - Multicriteria interpretability driven Deep Learning [0.0]
Deep Learning methods are renowned for their performance, yet their lack of interpretability prevents their use in high-stakes contexts.
Recent methods address this problem with post-hoc interpretability techniques that reverse-engineer the model's inner workings.
We propose a multicriteria, model-agnostic technique that controls the feature effects on the model's outcome by injecting knowledge into the objective function.
arXiv Detail & Related papers (2021-11-28T09:41:13Z) - Pre-emptive learning-to-defer for sequential medical decision-making
under uncertainty [35.077494648756876]
We propose SLTD ('Sequential Learning-to-Defer') as a framework for learning to defer pre-emptively to an expert in sequential decision-making settings.
SLTD measures the likelihood that deferring now, rather than later, will improve value, based on the underlying uncertainty in the dynamics.
arXiv Detail & Related papers (2021-09-13T20:43:10Z) - Improving the compromise between accuracy, interpretability and
personalization of rule-based machine learning in medical problems [0.08594140167290096]
We introduce a new component to predict if a given rule will be correct or not for a particular patient, which introduces personalization into the procedure.
Validation results on three public clinical datasets show that it also increases the predictive performance of the selected set of rules.
arXiv Detail & Related papers (2021-06-15T01:19:04Z) - Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectory in crowds.
We learn interpretable rule-based intents, and then utilise the expressiveness of neural networks to model scene-specific residuals.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z) - Leveraging Unlabeled Data for Entity-Relation Extraction through
Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs semantic loss which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z) - DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
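The DEUP decomposition described above reduces to a simple subtraction once the two estimates are available. A minimal, hypothetical sketch, assuming the predicted generalization error and the aleatoric estimate are supplied by separately trained estimators:

```python
def epistemic_uncertainty(predicted_error, aleatoric_estimate):
    """Estimate epistemic uncertainty as predicted generalization error
    minus estimated aleatoric (irreducible) uncertainty, clipped at
    zero since uncertainty cannot be negative."""
    return max(predicted_error - aleatoric_estimate, 0.0)
```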
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.