Enhancing Visual Interpretability and Explainability in Functional Survival Trees and Forests
- URL: http://arxiv.org/abs/2504.18498v1
- Date: Fri, 25 Apr 2025 17:11:10 GMT
- Title: Enhancing Visual Interpretability and Explainability in Functional Survival Trees and Forests
- Authors: Giuseppe Loffredo, Elvira Romano, Fabrizio Maturo
- Abstract summary: This study investigates two key survival models: the Functional Survival Tree (FST) and the Functional Random Survival Forest (FRSF). It introduces novel methods and tools to enhance the interpretability of FST models and improve the explainability of FRSF ensembles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Functional survival models are key tools for analyzing time-to-event data with complex predictors, such as functional or high-dimensional inputs. Despite their predictive strength, these models often lack interpretability, which limits their value in practical decision-making and risk analysis. This study investigates two key survival models: the Functional Survival Tree (FST) and the Functional Random Survival Forest (FRSF). It introduces novel methods and tools to enhance the interpretability of FST models and improve the explainability of FRSF ensembles. Using both real and simulated datasets, the results demonstrate that the proposed approaches yield efficient, easy-to-understand decision trees that accurately capture the underlying decision-making processes of the model ensemble.
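To make the abstract concrete, the sketch below mimics the general pipeline it describes: functional predictors are reduced to functional principal component (FPC) scores, which are then used to grow a survival tree. It is only an illustrative approximation, not the authors' implementation; scikit-survival's SurvivalTree stands in for the functional survival tree, and all data and variable names (curves, times, events) are hypothetical.

```python
# Illustrative sketch only: functional predictors reduced to FPCA scores, then
# fed to a survival tree. NOT the authors' FST implementation; scikit-survival's
# SurvivalTree is a stand-in, and all names and data below are hypothetical.
import numpy as np
from sksurv.tree import SurvivalTree
from sksurv.util import Surv

rng = np.random.default_rng(0)

# Hypothetical data: 200 subjects, each observed as a curve on 50 grid points.
n, p = 200, 50
curves = np.cumsum(rng.normal(size=(n, p)), axis=1)   # functional predictors
times = rng.exponential(scale=10.0, size=n)           # observed times
events = rng.integers(0, 2, size=n).astype(bool)      # True = event, False = censored

# Step 1: functional PCA via SVD of the centered, discretized curves.
centered = curves - curves.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
n_components = 3
fpc_scores = U[:, :n_components] * S[:n_components]   # FPC scores as tree features

# Step 2: grow a survival tree on the FPC scores.
y = Surv.from_arrays(event=events, time=times)
tree = SurvivalTree(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(fpc_scores, y)

# Each leaf holds a survival estimate for its subgroup, and the split rules on
# FPC scores are the objects that interpretability tools would inspect.
print(tree.predict(fpc_scores[:5]))                   # leaf-level risk scores
```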
Related papers
- Learning Decision Trees as Amortized Structure Inference [59.65621207449269]
We propose a hybrid amortized structure inference approach to learn predictive decision tree ensembles given data. We show that our approach, DT-GFN, outperforms state-of-the-art decision tree and deep learning methods on standard classification benchmarks.
arXiv Detail & Related papers (2025-03-10T07:05:07Z) - Complex LLM Planning via Automated Heuristics Discovery [48.07520536415374]
We consider enhancing large language models (LLMs) for complex planning tasks. We propose automated heuristics discovery (AutoHD), a novel approach that enables LLMs to explicitly generate heuristic functions to guide inference-time search. Our proposed method requires no additional model training or finetuning, and the explicit definition of the heuristic functions generated by the LLMs provides interpretability and insights into the reasoning process.
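As a rough illustration of the search side of this idea (the LLM that would generate the heuristic is out of scope here), a hand-written heuristic can be plugged into a generic best-first search. The example below is a conceptual sketch, not the AutoHD implementation.

```python
# Conceptual sketch of heuristic-guided inference-time search, with a
# hand-written heuristic standing in for one an LLM would generate.
# Not the AutoHD implementation; the toy state space is hypothetical.
import heapq

def heuristic(state, goal):
    # Placeholder: AutoHD would have the LLM emit this function explicitly,
    # which is what makes the search process inspectable.
    return abs(goal - state)

def best_first_search(start, goal, neighbors, heuristic, max_steps=10_000):
    frontier = [(heuristic(start, goal), start)]
    came_from = {start: None}
    for _ in range(max_steps):
        if not frontier:
            break
        _, state = heapq.heappop(frontier)
        if state == goal:
            # Reconstruct the path for inspection.
            path = [state]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt))
    return None

# Toy usage: reach 17 from 1 using +1 and *2 moves.
print(best_first_search(1, 17, lambda s: [s + 1, s * 2], heuristic))
```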
arXiv Detail & Related papers (2025-02-26T16:52:31Z) - XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which highlight feature or temporal importance, often require expert knowledge.
However, evaluating natural language explanations (NLEs) of forecasts is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - Deep End-to-End Survival Analysis with Temporal Consistency [49.77103348208835]
We present a novel Survival Analysis algorithm designed to efficiently handle large-scale longitudinal data.
A central idea in our method is temporal consistency, a hypothesis that past and future outcomes in the data evolve smoothly over time.
Our framework uniquely incorporates temporal consistency into large datasets by providing a stable training signal.
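One plausible reading of temporal consistency is a smoothness penalty on predictions made at adjacent time points; the minimal sketch below illustrates that reading and is not the paper's actual training loss.

```python
# Minimal sketch of a temporal-consistency penalty: it rewards predictions that
# evolve smoothly across adjacent time points. One plausible reading of the
# summary above, not the paper's actual loss.
import numpy as np

def temporal_consistency_penalty(pred_over_time):
    """pred_over_time: array of shape (n_subjects, n_time_points) with model
    outputs (e.g. predicted risks) on a common time grid."""
    diffs = np.diff(pred_over_time, axis=1)   # change between consecutive times
    return np.mean(diffs ** 2)                # penalize abrupt jumps

# Toy usage: smooth trajectories incur a smaller penalty than noisy ones.
t = np.linspace(0, 1, 50)
smooth = np.tile(t, (10, 1))
noisy = smooth + np.random.default_rng(0).normal(scale=0.2, size=smooth.shape)
print(temporal_consistency_penalty(smooth), temporal_consistency_penalty(noisy))
```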
arXiv Detail & Related papers (2024-10-09T11:37:09Z) - FPBoost: Fully Parametric Gradient Boosting for Survival Analysis [4.09225917049674]
FPBoost is a survival model that combines a weighted sum of fully parametric hazard functions with gradient boosting. We show how FPBoost is a universal approximator of hazard functions, offering full event-time modeling flexibility.
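The modeling idea can be illustrated numerically: a hazard built as a weighted sum of fully parametric components (Weibull hazards are assumed here), with the survival function recovered from the cumulative hazard. The boosting procedure that FPBoost uses to fit the weights is omitted.

```python
# Numerical sketch of the modeling idea: a hazard expressed as a weighted sum
# of fully parametric (here Weibull) hazards, with survival recovered from the
# cumulative hazard. The component choices and weights below are arbitrary
# assumptions; the gradient-boosting fit is omitted.
import numpy as np

def weibull_hazard(t, shape, scale):
    # h(t) = (k / lambda) * (t / lambda) ** (k - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(1e-3, 15, 500)
components = [(0.8, 5.0), (2.5, 10.0)]          # (shape, scale) pairs
weights = [0.6, 0.4]                            # would be fitted by boosting

hazard = sum(w * weibull_hazard(t, k, lam) for w, (k, lam) in zip(weights, components))
cum_hazard = np.cumsum(hazard) * (t[1] - t[0])  # crude numerical integration
survival = np.exp(-cum_hazard)                  # S(t) = exp(-H(t))

print(survival[::100])                          # survival probabilities along the grid
```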
arXiv Detail & Related papers (2024-09-20T09:57:17Z) - Demystifying Functional Random Forests: Novel Explainability Tools for Model Transparency in High-Dimensional Spaces [0.0]
This paper introduces a novel suite of explainability tools to illuminate the inner mechanisms of Functional Random Forests (FRF).
These tools collectively enhance the transparency of FRF models by providing a detailed analysis of how individual FPCs contribute to model predictions.
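A common, generic way to quantify how individual FPCs contribute to a fitted model is permutation importance over the FPC-score columns; the sketch below shows that recipe and should not be read as the specific tools proposed in the paper.

```python
# Generic sketch of measuring how much each functional principal component (FPC)
# contributes to a fitted model: permute one FPC-score column at a time and
# record the drop in performance. Standard permutation-importance recipe, not
# the paper's specific explainability tools; model and metric are hypothetical.
import numpy as np

def fpc_permutation_importance(model, scores, y, metric, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(scores))
    importances = np.zeros(scores.shape[1])
    for j in range(scores.shape[1]):              # one FPC at a time
        drops = []
        for _ in range(n_repeats):
            perm = rng.permutation(scores.shape[0])
            shuffled = scores.copy()
            shuffled[:, j] = shuffled[perm, j]    # break the link between FPC j and y
            drops.append(baseline - metric(y, model.predict(shuffled)))
        importances[j] = np.mean(drops)           # large drop = influential FPC
    return importances
```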
arXiv Detail & Related papers (2024-08-22T10:52:32Z) - Random Survival Forest for Censored Functional Data [0.0]
This paper introduces a Random Survival Forest (RSF) method for functional data.
The focus is specifically on defining a new functional data structure, the Censored Functional Data (CFD).
This approach allows for precise modelling of functional survival trajectories, leading to improved interpretation and prediction of survival dynamics across different groups.
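A minimal sketch of one way to represent such censored functional data (each subject pairs a discretized curve with a time and an event indicator) and to fit a forest on FPC scores is given below; scikit-survival's RandomSurvivalForest is used as a stand-in, and the paper's actual CFD structure is not reproduced.

```python
# Sketch of one way to hold "censored functional data": each subject pairs a
# discretized curve with a (time, event) label. scikit-survival's
# RandomSurvivalForest is a stand-in for the functional RSF; all data and
# names below are hypothetical.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(1)
n, p = 300, 40
curves = np.cumsum(rng.normal(size=(n, p)), axis=1)   # hypothetical functional observations
times = rng.exponential(scale=8.0, size=n)
events = rng.random(n) < 0.7                           # roughly 30% right-censoring

# Reduce curves to FPC scores (same SVD-based reduction as the earlier sketch).
centered = curves - curves.mean(axis=0)
U, S, _ = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :4] * S[:4]

forest = RandomSurvivalForest(n_estimators=100, min_samples_leaf=15, random_state=1)
forest.fit(scores, Surv.from_arrays(event=events, time=times))

# Predicted survival curves allow group-level comparisons of survival dynamics.
surv_fns = forest.predict_survival_function(scores[:3])
print([fn(5.0) for fn in surv_fns])                    # survival probability at t = 5
```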
arXiv Detail & Related papers (2024-07-22T02:54:06Z) - Composite Survival Analysis: Learning with Auxiliary Aggregated Baselines and Survival Scores [0.0]
Survival Analysis (SA) constitutes the default method for time-to-event modeling.
We show how to improve the training and inference of SA models by decoupling their full expression into (1) an aggregated baseline hazard, which captures the overall behavior of a given population, and (2) independently distributed survival scores, which model the idiosyncratic probabilistic dynamics of its individual members, in a fully parametric setting.
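Read as a proportional-hazards-style factorization, the decoupling can be illustrated with a shared baseline hazard modulated by per-subject scores, as in the toy example below; the exact parametric forms used in the paper are not claimed here.

```python
# Minimal numeric illustration of the decoupling described above, read as a
# proportional-hazards-style factorization: a shared population baseline hazard
# modulated by an independent per-subject score. The paper's exact parametric
# forms are not claimed here; all numbers are assumptions.
import numpy as np

t = np.linspace(1e-3, 20, 400)
baseline_hazard = 0.05 + 0.01 * t                 # assumed population-level hazard
baseline_cumhaz = np.cumsum(baseline_hazard) * (t[1] - t[0])

subject_scores = np.array([-0.5, 0.0, 1.2])       # idiosyncratic (log-)scores per member
# S_i(t) = exp(-exp(score_i) * H_0(t)) = S_0(t) ** exp(score_i)
survival = np.exp(-np.exp(subject_scores)[:, None] * baseline_cumhaz[None, :])

print(survival[:, [100, 200, 300]])               # three subjects at three time points
```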
arXiv Detail & Related papers (2023-12-10T11:13:22Z) - Learning Survival Distribution with Implicit Survival Function [15.588273962274393]
We propose Implicit Survival Function (ISF) based on Implicit Neural Representation for survival distribution estimation without strong assumptions.
Experimental results show ISF outperforms the state-of-the-art methods in three public datasets.
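The underlying idea can be sketched as a network that maps (covariates, time) to a nonnegative hazard, with the survival curve obtained by numerically integrating that hazard; the toy example below uses random weights and makes no claim about the actual ISF architecture or training.

```python
# Toy illustration of an implicit survival representation: a small network maps
# (covariates, time) to a nonnegative hazard, and the survival curve is obtained
# by numerically integrating that hazard. Weights are random; the actual ISF
# architecture and training procedure are not shown.
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 16)) * 0.5, np.zeros(16)   # 3 covariates + time -> hidden
W2, b2 = rng.normal(size=(16, 1)) * 0.5, np.zeros(1)

def hazard(x, t):
    inp = np.concatenate([x, [t]])
    h = np.tanh(inp @ W1 + b1)
    return np.log1p(np.exp(h @ W2 + b2))[0]             # softplus keeps the hazard >= 0

def survival_curve(x, t_grid):
    hz = np.array([hazard(x, t) for t in t_grid])
    cumhaz = np.cumsum(hz) * (t_grid[1] - t_grid[0])    # numerical integration of h
    return np.exp(-cumhaz)                              # S(t | x) = exp(-H(t | x))

x = np.array([0.2, -1.0, 0.5])                          # hypothetical covariates
print(survival_curve(x, np.linspace(0.0, 10.0, 50))[::10])
```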
arXiv Detail & Related papers (2023-05-24T02:51:29Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Deep Active Learning with Noise Stability [24.54974925491753]
Uncertainty estimation for unlabeled data is crucial to active learning.
We propose a novel algorithm that leverages noise stability to estimate data uncertainty.
Our method is generally applicable in various tasks, including computer vision, natural language processing, and structural data analysis.
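One plausible reading of noise stability is to perturb the inputs (or the model) with small noise and treat the spread of the resulting predictions as the uncertainty used to rank unlabeled points; the generic sketch below follows that reading and is not the paper's exact algorithm.

```python
# Generic sketch of noise-stability-style uncertainty scoring for active
# learning: perturb the inputs with small Gaussian noise and use the spread of
# the resulting predictions to rank unlabeled points. One plausible reading of
# the summary above; the paper's own algorithm may differ (e.g., by perturbing
# model parameters instead of the data).
import numpy as np

def noise_stability_uncertainty(predict, X_unlabeled, noise_scale=0.05, n_draws=20, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_draws):
        noisy = X_unlabeled + rng.normal(scale=noise_scale, size=X_unlabeled.shape)
        preds.append(predict(noisy))
    preds = np.stack(preds)                    # (n_draws, n_points)
    return preds.std(axis=0)                   # larger spread = less stable = query first

# Usage idea (model and X_pool are hypothetical names): label the least stable points first.
# query_order = np.argsort(-noise_stability_uncertainty(model.predict, X_pool))
```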
arXiv Detail & Related papers (2022-05-26T13:21:01Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster than their differential equation-based counterparts.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Progressive Self-Guided Loss for Salient Object Detection [102.35488902433896]
We present a progressive self-guided loss function to facilitate deep learning-based salient object detection in images.
Our framework takes advantage of adaptively aggregated multi-scale features to locate and detect salient objects effectively.
arXiv Detail & Related papers (2021-01-07T07:33:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.