Explainability-Driven Quality Assessment for Rule-Based Systems
- URL: http://arxiv.org/abs/2502.01253v1
- Date: Mon, 03 Feb 2025 11:26:09 GMT
- Title: Explainability-Driven Quality Assessment for Rule-Based Systems
- Authors: Oshani Seneviratne, Brendan Capuzzo, William Van Woensel
- Abstract summary: This paper introduces an explanation framework designed to enhance the quality of rules in knowledge-based reasoning systems.
It generates explanations of rule inferences and leverages human interpretation to refine rules.
Its practicality is demonstrated through a use case in finance.
- Abstract: This paper introduces an explanation framework designed to enhance the quality of rules in knowledge-based reasoning systems based on dataset-driven insights. The traditional method for rule induction from data typically requires labor-intensive labeling and data-driven learning. This framework provides an alternative and instead allows for the data-driven refinement of existing rules: it generates explanations of rule inferences and leverages human interpretation to refine rules. It leverages four complementary explanation types: trace-based, contextual, contrastive, and counterfactual, providing diverse perspectives for debugging, validating, and ultimately refining rules. By embedding explainability into the reasoning architecture, the framework enables knowledge engineers to address inconsistencies, optimize thresholds, and ensure fairness, transparency, and interpretability in decision-making processes. Its practicality is demonstrated through a use case in finance.
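To make the four explanation types concrete, the following is a minimal sketch (not the paper's implementation) of a single finance-style rule with trace-based, contextual, contrastive, and counterfactual explanations attached to its inference. The rule, its thresholds, and the field names (`credit_score`, `debt_ratio`) are illustrative assumptions:

```python
def loan_rule(app):
    # Illustrative rule: approve when both conditions hold (hypothetical thresholds).
    return app["credit_score"] >= 650 and app["debt_ratio"] <= 0.40

def explanations(app):
    approved = loan_rule(app)
    conditions = [("credit_score >= 650", app["credit_score"] >= 650),
                  ("debt_ratio <= 0.40", app["debt_ratio"] <= 0.40)]
    failed = [name for name, ok in conditions if not ok]
    exps = {}
    # Trace-based: which conditions were evaluated and how the outcome followed.
    exps["trace"] = (f"credit_score={app['credit_score']}, "
                     f"debt_ratio={app['debt_ratio']} -> "
                     f"{'approve' if approved else 'deny'}")
    # Contextual: the policy background behind the thresholds.
    exps["contextual"] = ("Thresholds encode lender policy on "
                          "creditworthiness and leverage.")
    # Contrastive: why this outcome rather than the alternative one.
    exps["contrastive"] = ("All conditions held." if approved
                           else "Denied rather than approved because: "
                                + ", ".join(failed))
    # Counterfactual: minimal input change that would flip the decision.
    changes = []
    if app["credit_score"] < 650:
        changes.append(f"raise credit_score to 650 (currently {app['credit_score']})")
    if app["debt_ratio"] > 0.40:
        changes.append(f"lower debt_ratio to 0.40 (currently {app['debt_ratio']})")
    exps["counterfactual"] = "; ".join(changes) if changes else "No change needed."
    return exps
```

A knowledge engineer reviewing, say, `explanations({"credit_score": 620, "debt_ratio": 0.35})` would see from the contrastive and counterfactual entries that the credit-score threshold alone caused the denial, which is the kind of signal the framework uses to debug or adjust rule thresholds.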
Related papers
- On the Loss of Context-awareness in General Instruction Fine-tuning [101.03941308894191]
We investigate the loss of context awareness after supervised fine-tuning.
We find that the performance decline is associated with a bias toward different roles learned during conversational instruction fine-tuning.
We propose a metric to identify context-dependent examples from general instruction fine-tuning datasets.
arXiv Detail & Related papers (2024-11-05T00:16:01Z) - A Model-oriented Reasoning Framework for Privacy Analysis of Complex Systems [2.001711587270359]
This paper proposes a reasoning framework for privacy properties of systems and their environments.
It can capture any knowledge leaks on different logical levels to answer the question: which entity can learn what?
arXiv Detail & Related papers (2024-05-14T06:52:56Z) - A Note on an Inferentialist Approach to Resource Semantics [48.65926948745294]
'Inferentialism' is the view that meaning is given in terms of inferential behaviour.
This paper shows how 'inferentialism' enables a versatile and expressive framework for resource semantics.
arXiv Detail & Related papers (2023-07-24T16:55:37Z) - Rule By Example: Harnessing Logical Rules for Explainable Hate Speech Detection [13.772240348963303]
Rule By Example (RBE) is a novel contrastive learning approach that learns from logical rules for the task of textual content moderation.
RBE is capable of providing rule-grounded predictions, allowing for more explainable and customizable predictions compared to typical deep learning-based approaches.
arXiv Detail & Related papers (2023-07-24T16:55:37Z) - Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z) - Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - Integrating Prior Knowledge in Post-hoc Explanations [3.6066164404432883]
Post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model.
We propose to define a cost function that explicitly integrates prior knowledge into the interpretability objectives.
We propose a new interpretability method called Knowledge Integration in Counterfactual Explanation (KICE) to optimize it.
arXiv Detail & Related papers (2022-04-25T13:09:53Z) - Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability in the field of process outcome prediction through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, for selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z) - Rule-based Evolutionary Bayesian Learning [0.802904964931021]
We extend the rule-based Bayesian Regression methodology with grammatical evolution.
Our motivation is that grammatical evolution can potentially detect valuable patterns in the data, carrying information equivalent to that of expert knowledge.
We illustrate the use of the rule-based Bayesian Evolutionary learning technique by applying it to synthetic as well as real data, and examine the results in terms of point predictions and associated uncertainty.
arXiv Detail & Related papers (2022-02-28T13:24:00Z) - Rule Generation for Classification: Scalability, Interpretability, and Fairness [0.0]
We propose a new rule-based optimization method for classification with constraints.
We address interpretability and fairness by assigning cost coefficients to the rules and introducing additional constraints.
The proposed method exhibits a good compromise between local interpretability and fairness on the one hand, and accuracy on the other.
arXiv Detail & Related papers (2021-04-21T20:31:28Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.