A Novel Tropical Geometry-based Interpretable Machine Learning Method:
Application in Prognosis of Advanced Heart Failure
- URL: http://arxiv.org/abs/2112.05071v1
- Date: Thu, 9 Dec 2021 17:53:12 GMT
- Title: A Novel Tropical Geometry-based Interpretable Machine Learning Method:
Application in Prognosis of Advanced Heart Failure
- Authors: Heming Yao, Harm Derksen, Jessica R. Golbus, Justin Zhang, Keith D.
Aaronson, Jonathan Gryak, and Kayvan Najarian
- Abstract summary: A model's interpretability is essential to many practical applications such as clinical decision support systems.
A novel interpretable machine learning method is presented that models the relationship between input variables and responses as human-understandable rules.
- Score: 4.159216572695661
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A model's interpretability is essential to many practical applications such
as clinical decision support systems. In this paper, a novel interpretable
machine learning method is presented that models the relationship between
input variables and responses as human-understandable rules. The method is
built by applying tropical geometry to fuzzy inference systems, wherein
variable encoding functions and salient rules can be discovered by supervised
learning. Experiments using synthetic datasets were conducted to investigate
the performance and capacity of the proposed algorithm in classification and
rule discovery. Furthermore, the proposed method was applied to a clinical
task: identifying heart failure patients who would benefit from advanced
therapies such as heart transplantation or durable mechanical circulatory
support. Experimental results show that the proposed network achieved strong
performance on the classification tasks. In addition to learning
human-understandable rules from the dataset, existing fuzzy domain knowledge
can be easily transferred into the network and used to facilitate model
training. Our results show that the proposed model, together with its ability
to learn existing domain knowledge, can significantly improve model
generalizability. The
characteristics of the proposed network make it promising in applications
requiring model reliability and justification.
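For intuition, the coupling of fuzzy inference with tropical geometry can be illustrated in the max-plus semiring, where a fuzzy AND corresponds to addition of log-memberships (a tropical monomial) and rule aggregation corresponds to a max (a tropical sum). The sketch below is a minimal, hypothetical illustration of that correspondence, not the authors' trainable network; the membership functions, rules, and constants are all invented for the example.

```python
# Hypothetical sketch (not the authors' code): a tropical-style fuzzy
# inference pass. In the max-plus semiring, "multiplication" is + and
# "addition" is max, so a rule "IF x1 is A AND x2 is B" becomes a
# tropical monomial over log-membership values.
import numpy as np

def triangular_membership(x, center, width):
    """Piecewise-linear (triangular) membership in [0, 1]."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def rule_strength(log_memberships, rule_weights):
    """Tropical monomial: weighted sum of log-memberships (AND = +)."""
    return np.sum(rule_weights * log_memberships)

def infer(x, centers, widths, rules):
    # Encode each input with its membership function, take logs.
    mu = triangular_membership(x, centers, widths)
    log_mu = np.log(np.clip(mu, 1e-6, 1.0))
    # Tropical "sum" over rules: the strongest rule wins (OR = max).
    return max(rule_strength(log_mu, w) for w in rules)

x = np.array([0.3, 0.8])
centers, widths = np.array([0.5, 0.5]), np.array([0.5, 0.5])
rules = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
print(infer(x, centers, widths, rules))
```

In the paper's setting, the encoding functions and rule weights would be discovered by supervised learning rather than fixed by hand as above.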
Related papers
- Deep Latent Variable Modeling of Physiological Signals [0.8702432681310401]
We explore high-dimensional problems related to physiological monitoring using latent variable models.
First, we present a novel deep state-space model to generate electrical waveforms of the heart using optically obtained signals as inputs.
Second, we present a brain signal modeling scheme that combines the strengths of probabilistic graphical models and deep adversarial learning.
Third, we propose a framework for the joint modeling of physiological measures and behavior.
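As a rough illustration of the first contribution only, the following is a minimal deep state-space generator under assumed shapes: a recurrent latent state driven by an optical input (a stand-in for a PPG signal) with a linear emission producing one waveform sample per step. The architecture, dimensions, and names (DeepSSM, trans, emit) are illustrative, not the paper's model.

```python
# Hedged sketch of a generic deep state-space generator: a latent state
# evolves through a learned transition; a decoder emits the waveform.
import torch
import torch.nn as nn

class DeepSSM(nn.Module):
    def __init__(self, z_dim=16, u_dim=1):
        super().__init__()
        self.trans = nn.GRUCell(u_dim, z_dim)   # state transition
        self.emit = nn.Linear(z_dim, 1)         # waveform emission
    def forward(self, u):                       # u: (T, B, u_dim)
        z = torch.zeros(u.size(1), self.trans.hidden_size)
        out = []
        for t in range(u.size(0)):
            z = self.trans(u[t], z)
            out.append(self.emit(z))
        return torch.stack(out)                 # (T, B, 1)

model = DeepSSM()
ppg = torch.randn(256, 4, 1)                    # toy optical input
print(model(ppg).shape)
```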
arXiv Detail & Related papers (2024-05-29T17:07:33Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
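One standard spectral recipe for this kind of estimate (not necessarily the paper's exact construction) reads the number of classes off the largest eigengap of a graph Laplacian built over the embeddings; a hedged sketch:

```python
# Eigengap heuristic: the count of near-zero Laplacian eigenvalues
# approximates the number of well-separated clusters/classes.
import numpy as np

def estimate_num_classes(X, sigma=1.0, k_max=10):
    # Gaussian affinity graph over embeddings X (n x d).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(1))
    L = D - W                              # unnormalized graph Laplacian
    evals = np.sort(np.linalg.eigvalsh(L))[:k_max]
    gaps = np.diff(evals)                  # eigengap heuristic
    return int(np.argmax(gaps)) + 1

X = np.vstack([np.random.default_rng(5).normal(m, 0.1, size=(30, 2))
               for m in (0.0, 3.0, 6.0)])  # three synthetic blobs
print(estimate_num_classes(X, sigma=0.5))  # ideally prints 3
```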
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
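The underlying reduction can be sketched as follows: forward-Euler discretization turns a linear ODE into a set of linear equality constraints that a generic LP solver can satisfy. This only illustrates the classical reduction the summary alludes to; NeuRLP itself is a learned, relaxed solver, and all constants below are assumptions.

```python
# Hedged sketch: solving dx/dt = a*x cast as a linear program by
# encoding each Euler step as an equality constraint.
import numpy as np
from scipy.optimize import linprog

a, h, T, x0 = -0.5, 0.1, 50, 1.0
n = T + 1
A_eq = np.zeros((n, n))
b_eq = np.zeros(n)
A_eq[0, 0] = 1.0
b_eq[0] = x0                                # initial condition
for t in range(T):                          # Euler step as equality
    A_eq[t + 1, t + 1] = 1.0
    A_eq[t + 1, t] = -(1.0 + h * a)
res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(None, None))
print(res.x[-1], np.exp(a * h * T))         # Euler LP solution vs exact
```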
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Interpretable Meta-Learning of Physical Systems [4.343110120255532]
Recent meta-learning methods rely on black-box neural networks, resulting in high computational costs and limited interpretability.
We argue that multi-environment generalization can be achieved using a simpler learning model, with an affine structure with respect to the learning task.
We demonstrate the competitive generalization performance and the low computational cost of our method by comparing it to state-of-the-art algorithms on physical systems.
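A minimal reading of the affine claim: if environments share nonlinear features and differ only in a linear head, adaptation reduces to per-environment least squares. The sketch below assumes a fixed, hand-written feature map (in the paper, the shared structure is learned):

```python
# Hedged sketch of affine multi-environment adaptation: shared
# features phi(x), closed-form linear head per environment.
import numpy as np

def phi(X):
    # Hypothetical shared feature map (learned in the paper).
    return np.column_stack([np.ones(len(X)), X, X ** 2])

def adapt(X, y):
    # Per-environment head: ordinary least squares on shared features.
    return np.linalg.lstsq(phi(X), y, rcond=None)[0]

rng = np.random.default_rng(0)
for a in (1.0, 2.5):                      # two "environments"
    X = rng.uniform(-1, 1, 100)
    y = a * X ** 2 + 0.01 * rng.normal(size=100)
    print(np.round(adapt(X, y), 2))       # head recovers task parameter
```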
arXiv Detail & Related papers (2023-12-01T10:18:50Z)
- Nonparametric Additive Value Functions: Interpretable Reinforcement Learning with an Application to Surgical Recovery [8.890206493793878]
We propose a nonparametric additive model for estimating interpretable value functions in reinforcement learning.
We validate the proposed approach with a simulation study and, in an application to spine disease, uncover recovery recommendations that are in line with related clinical knowledge.
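For intuition, an additive value function decomposes as V(s) = sum_j f_j(s_j), so each fitted component can be inspected one feature at a time. A hedged sketch with a small per-feature polynomial basis (the paper uses nonparametric components in an RL setting rather than plain regression):

```python
# Hedged sketch of an additive value model: per-feature basis
# expansion plus OLS, yielding one inspectable curve per feature.
import numpy as np

def basis(s):                      # per-feature basis: (s_j, s_j^2)
    return np.column_stack([s, s ** 2])

def fit_additive(S, returns):
    B = np.column_stack([basis(S[:, j]) for j in range(S.shape[1])])
    coef = np.linalg.lstsq(B, returns, rcond=None)[0]
    return coef.reshape(S.shape[1], -1)   # rows: feature, cols: basis

rng = np.random.default_rng(1)
S = rng.normal(size=(500, 3))
returns = 2 * S[:, 0] - S[:, 1] ** 2 + 0.1 * rng.normal(size=500)
print(np.round(fit_additive(S, returns), 2))
```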
arXiv Detail & Related papers (2023-08-25T02:05:51Z)
- An interpretable deep learning method for bearing fault diagnosis [12.069344716912843]
We utilize a convolutional neural network (CNN) with Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations to form an interpretable Deep Learning (DL) method for classifying bearing faults.
During the model evaluation process, the proposed approach retrieves prediction basis samples from the health library according to the similarity of the feature importance.
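The Grad-CAM step itself is standard: weight each feature map by the spatially pooled gradient of the class score and rectify the weighted sum. A minimal 1-D sketch (bearing vibration data is typically 1-D; the toy model and hooked layer are assumptions, not the paper's architecture):

```python
# Hedged Grad-CAM sketch on a tiny 1-D CNN.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 8, 9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 4),
)
x = torch.randn(1, 1, 1024)

acts = {}
def hook(mod, inp, out):
    out.retain_grad()                     # keep grad of feature map
    acts["a"] = out
model[1].register_forward_hook(hook)      # hook the ReLU feature map

logits = model(x)
logits[0, logits.argmax()].backward()     # gradient of predicted class

A, dA = acts["a"], acts["a"].grad         # (1, 8, 1024) acts / grads
weights = dA.mean(dim=2, keepdim=True)    # pool gradient over time
cam = torch.relu((weights * A).sum(dim=1)).squeeze()
print(cam.shape)                          # per-time-step importance
```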
arXiv Detail & Related papers (2023-08-20T15:22:08Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
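A loose, hedged analogue of this pipeline, with a decision tree standing in for a real ILP system such as those FF-NSL wraps: a neural stage maps raw inputs to symbolic facts, and a symbolic stage induces a readable hypothesis over them. Everything below (data, facts, names) is invented for illustration.

```python
# Loose analogue only: decision tree substitutes for the ILP learner.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 8))                   # "unstructured" inputs
sym = (X[:, :2] > 0).astype(int)                # latent symbolic facts
y = sym[:, 0] & sym[:, 1]                       # target rule: f0 AND f1

# Neural stage: predict each symbolic fact from raw data.
nets = [MLPClassifier(max_iter=500).fit(X, sym[:, j]) for j in range(2)]
facts = np.column_stack([n.predict(X) for n in nets])

# Symbolic stage: induce a readable hypothesis from predicted facts.
tree = DecisionTreeClassifier(max_depth=2).fit(facts, y)
print(export_text(tree, feature_names=["f0", "f1"]))
```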
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
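In spirit, Fisher-based selection scores a candidate batch by how well its accumulated outer products cover the pool's Fisher information. A simplified, hypothetical sketch using last-layer gradient embeddings and a greedy trace criterion (BAIT's actual objective and optimization differ in detail):

```python
# Simplified Fisher-style selection: greedily grow a batch that
# minimizes tr(M_B^{-1} F), with M_B accumulating g_i g_i^T.
import numpy as np

def select(G, k, lam=1e-2):
    n, d = G.shape
    F = G.T @ G / n                       # pool Fisher approximation
    M = lam * np.eye(d)
    chosen = []
    for _ in range(k):
        scores = []
        for i in range(n):
            if i in chosen:
                scores.append(np.inf)
                continue
            Mi = M + np.outer(G[i], G[i])
            scores.append(np.trace(np.linalg.solve(Mi, F)))
        i_best = int(np.argmin(scores))
        chosen.append(i_best)
        M += np.outer(G[i_best], G[i_best])
    return chosen

G = np.random.default_rng(2).normal(size=(200, 16))  # toy embeddings
print(select(G, 5))
```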
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Identifying Learning Rules From Neural Network Observables [26.96375335939315]
We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes.
Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities, may provide a good basis on which to identify learning rules.
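A toy version of the idea: summarize each training run's weight trajectory with a few aggregate statistics and train an off-the-shelf classifier to separate rule classes. The two synthetic "rules" and the statistic set below are illustrative only:

```python
# Separate synthetic learning rules by aggregate weight statistics.
import numpy as np
from sklearn.linear_model import LogisticRegression

def trajectory_features(W):
    """W: (T, n) weight trajectory -> a few aggregate statistics."""
    dW = np.diff(W, axis=0)
    return [np.linalg.norm(W[-1] - W[0]), dW.std(), np.abs(W).mean()]

rng = np.random.default_rng(3)
runs, labels = [], []
# Two toy "rules": small steady updates vs. large sparse ones.
for rule in (0, 1):
    for _ in range(20):
        steps = rng.normal(size=(100, 50)) * (0.01 if rule == 0 else 0.1)
        if rule == 1:
            steps *= rng.random((100, 50)) < 0.1   # sparse updates
        runs.append(trajectory_features(np.cumsum(steps, axis=0)))
        labels.append(rule)
clf = LogisticRegression().fit(runs, labels)
print(clf.score(runs, labels))
```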
arXiv Detail & Related papers (2020-10-22T14:36:54Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
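For context, the simplest (first-order) mean-field closure evolves node infection probabilities with an ODE on the graph; the paper's Mori-Zwanzig treatment makes this evolution exact by adding memory terms, which the naive sketch below omits:

```python
# Naive mean-field sketch: node infection probabilities p(t) under an
# SI-style ODE on a small graph, integrated with forward Euler.
import numpy as np

def simulate(A, p0, beta=0.3, h=0.01, T=500):
    p = p0.copy()
    for _ in range(T):
        p += h * (1.0 - p) * beta * (A @ p)   # Euler step
    return p

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
p0 = np.array([0.9, 0.0, 0.0])                # seed node 0
print(np.round(simulate(A, p0), 3))
```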
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
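For reference, the pair-based STDP rule that the feature-detection experiments re-examine can be written as two exponential windows over the pre/post spike time difference (the constants below are typical textbook values, not the paper's):

```python
# Classic pair-based STDP window (not the proposed multi-spike rules).
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """dt = t_post - t_pre (ms): potentiate if pre precedes post."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp(np.array([-30.0, -5.0, 5.0, 30.0])))
```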
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.