Improving the compromise between accuracy, interpretability and
personalization of rule-based machine learning in medical problems
- URL: http://arxiv.org/abs/2106.07827v1
- Date: Tue, 15 Jun 2021 01:19:04 GMT
- Title: Improving the compromise between accuracy, interpretability and
personalization of rule-based machine learning in medical problems
- Authors: Francisco Valente, Simao Paredes, Jorge Henriques
- Abstract summary: We introduce a new component to predict whether a given rule will be correct for a particular patient, which introduces personalization into the procedure.
Validation results using three public clinical datasets show that it also increases the predictive performance of the selected set of rules.
- Score: 0.08594140167290096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the key challenges when developing a predictive model is the
capability to describe the domain knowledge and the cause-effect relationships
in a simple way. Decision rules are a useful and important methodology in this
context, justifying their application in several areas, in particular in
clinical practice. Several machine-learning classifiers have exploited the
advantageous properties of decision rules to build intelligent prediction
models, namely decision trees and ensembles of trees (ETs). However, such
methodologies usually suffer from a trade-off between interpretability and
predictive performance. Some procedures consider a simplification of ETs, using
heuristic approaches to select an optimal reduced set of decision rules. In
this paper, we introduce a novel step into those methodologies. We create a new
component to predict whether a given rule will be correct for a particular
patient, which introduces personalization into the procedure. Furthermore,
validation results using three public clinical datasets show that this
component also increases the predictive performance of the selected set of
rules, improving the aforementioned trade-off.
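The abstract's personalization step can be sketched end-to-end on synthetic data. Everything below is an invented illustration, not the authors' implementation: the two rules are hand-written stand-ins for rules mined from a tree ensemble, and the per-rule correctness predictor is a simple nearest-centroid model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "patients": three features and a binary outcome. Which feature
# drives the outcome depends on a context feature x2, so each rule below is
# right for some patients and wrong for others.
X = rng.normal(size=(600, 3))
y = np.where(X[:, 2] > 0, X[:, 0] > 0, X[:, 1] > 0).astype(int)

# Hand-crafted decision rules (stand-ins for rules from a tree ensemble):
# each maps a condition on the features to a predicted class.
rules = [
    (lambda x: x[0] > 0.0, 1),  # rule A: reliable when x2 > 0
    (lambda x: x[1] > 0.0, 1),  # rule B: reliable when x2 <= 0
]

def rule_predict(rule, x):
    cond, cls = rule
    return cls if cond(x) else 1 - cls

# Personalization component: for each rule, learn to predict whether that
# rule will be CORRECT for a given patient. Here: centroids of patients on
# whom the rule was right vs. wrong.
def fit_correctness_model(rule, X, y):
    correct = np.array([rule_predict(rule, x) == t for x, t in zip(X, y)])
    return X[~correct].mean(axis=0), X[correct].mean(axis=0)

def rule_trusted(model, x):
    c_wrong, c_right = model
    return np.linalg.norm(x - c_right) < np.linalg.norm(x - c_wrong)

models = [fit_correctness_model(r, X, y) for r in rules]

# Personalized prediction: majority vote over the rules trusted for this
# patient, falling back to the first rule if no rule is trusted.
def predict(x):
    votes = [rule_predict(r, x)
             for r, m in zip(rules, models) if rule_trusted(m, x)]
    if not votes:
        return rule_predict(rules[0], x)
    return int(np.mean(votes) >= 0.5)

acc = float(np.mean([predict(x) == t for x, t in zip(X, y)]))
```

In the paper's setting the correctness predictor would be trained on real clinical features; the toy context feature x2 merely makes each rule reliable for only part of the population, which is exactly the situation the personalization component is meant to exploit.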
Related papers
- Systematic Characterization of the Effectiveness of Alignment in Large Language Models for Categorical Decisions [0.0]
This paper applies a systematic methodology for evaluating preference alignment in large language models (LLMs) for categorical decision-making, using medical triage as the domain.
It also measures how effectively an alignment procedure will change the alignment of a specific model.
The results reveal significant variability in alignment effectiveness across models and alignment approaches.
arXiv Detail & Related papers (2024-09-18T19:03:04Z)
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Assisting clinical practice with fuzzy probabilistic decision trees [2.0999441362198907]
We propose FPT, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice.
We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a user-friendly interface specifically designed for this purpose.
arXiv Detail & Related papers (2023-04-16T14:05:16Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Clinical outcome prediction under hypothetical interventions -- a representation learning framework for counterfactual reasoning [31.97813934144506]
We introduce a new representation learning framework, which considers the provision of counterfactual explanations as an embedded property of the risk model.
Our results suggest that our proposed framework has the potential to help researchers and clinicians improve personalised care.
arXiv Detail & Related papers (2022-05-15T09:41:16Z)
- Predictive machine learning for prescriptive applications: a coupled training-validating approach [77.34726150561087]
We propose a new method for training predictive machine learning models for prescriptive applications.
This approach is based on tweaking the validation step in the standard training-validating-testing scheme.
Several experiments with synthetic data demonstrate promising results in reducing the prescription costs in both deterministic and real models.
arXiv Detail & Related papers (2021-10-22T15:03:20Z)
- Fair Conformal Predictors for Applications in Medical Imaging [4.236384785644418]
Conformal methods can complement deep learning models by providing a clinically intuitive way of expressing model uncertainty.
We conduct experiments with mammographic breast density and dermatology photography datasets to demonstrate the utility of conformal predictions.
We find that conformal predictors can be used to equalize coverage with respect to patient demographics such as race and skin tone.
arXiv Detail & Related papers (2021-09-09T16:31:10Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- (Un)fairness in Post-operative Complication Prediction Models [20.16366948502659]
We consider a real-life example of risk estimation before surgery and investigate the potential for bias or unfairness of a variety of algorithms.
Our approach creates transparent documentation of potential bias so that the users can apply the model carefully.
arXiv Detail & Related papers (2020-11-03T22:11:19Z)
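The "Fair Conformal Predictors" entry above builds on conformal prediction. As background, here is a minimal split-conformal classification sketch on synthetic data; it illustrates only the generic technique, not that paper's datasets, model, or fairness analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 3-class probabilistic classifier (fixed random linear model).
W = rng.normal(size=(4, 3))

def model_probs(X):
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sample(n):
    X = rng.normal(size=(n, 4))
    p = model_probs(X)
    # Draw labels from the model's own probabilities, so the classifier is
    # well specified for this toy data by construction.
    y = np.array([rng.choice(3, p=pi) for pi in p])
    return X, y

alpha = 0.1                                   # target 90% coverage
X_cal, y_cal = sample(300)
X_test, y_test = sample(300)

# Split conformal: nonconformity score = 1 - probability of the true class,
# computed on a held-out calibration set.
scores = 1.0 - model_probs(X_cal)[np.arange(len(y_cal)), y_cal]
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha))) - 1   # conformal quantile index
qhat = np.sort(scores)[min(k, n - 1)]

# Prediction set for each test point: every class whose score is <= qhat.
p_test = model_probs(X_test)
sets = (1.0 - p_test) <= qhat                 # boolean (n_test, 3) matrix

# Marginal coverage: how often the true label falls inside the set.
coverage = float(sets[np.arange(len(y_test)), y_test].mean())
```

By exchangeability of calibration and test points, the expected coverage is at least 1 - alpha; equalizing such coverage across demographic groups is the fairness angle the listed paper pursues.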
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.