Statistical Exploration of Relationships Between Routine and Agnostic
Features Towards Interpretable Risk Characterization
- URL: http://arxiv.org/abs/2001.10353v1
- Date: Tue, 28 Jan 2020 14:27:09 GMT
- Title: Statistical Exploration of Relationships Between Routine and Agnostic
Features Towards Interpretable Risk Characterization
- Authors: Eric Wolsztynski
- Abstract summary: How do we interpret the prognostic model for clinical implementation?
How can we identify potential information structures within sets of radiomic features?
And how can we recombine or exploit potential relationships between features towards improved interpretability?
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As is typical in other fields of application of high-throughput systems,
radiology is faced with the challenge of interpreting increasingly
sophisticated predictive models such as those derived from radiomics analyses.
Interpretation may be guided by the learning output from machine learning
models, which may, however, vary greatly from one technique to another. Whatever
the output model, it raises some essential questions. How do we interpret the
prognostic model for clinical implementation? How can we identify potential
information structures within sets of radiomic features, in order to create
clinically interpretable models? And how can we recombine or exploit potential
relationships between features towards improved interpretability? A number of
statistical techniques are explored to assess (possibly nonlinear)
relationships between radiological features from different angles.
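The abstract does not name its specific techniques, so the following is only an illustrative sketch: Spearman rank correlation and mutual information are two standard measures for assessing possibly nonlinear relationships between a pair of features.

```python
# Illustrative only: the abstract does not specify its techniques; Spearman
# rank correlation and mutual information are two common choices for
# detecting possibly nonlinear association between features.
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.normal(size=500)                       # stand-in radiomic feature
y = x**2 + rng.normal(scale=0.1, size=500)     # nonlinearly related feature

rho, _ = spearmanr(x, y)                       # near 0: relation is non-monotone
mi = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]  # clearly positive
print(f"Spearman rho = {rho:.2f}, mutual information = {mi:.2f}")
```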
Related papers
- Automated Radiology Report Generation: A Review of Recent Advances [5.965255286239531]
Recent technological advances in artificial intelligence have demonstrated great potential for automatic radiology report generation.
arXiv Detail & Related papers (2024-05-17T15:06:08Z)
- Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach [0.0]
We use human-in-the-loop techniques and medical guidelines as a source of domain knowledge to establish the importance of the features relevant to pancreatic cancer treatment.
We propose the use of similarity measures such as the weighted Jaccard similarity coefficient (sketched after this entry) to facilitate interpretation of explanatory results.
arXiv Detail & Related papers (2024-03-28T20:11:34Z)
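The weighted Jaccard similarity coefficient mentioned in the entry above reduces to a one-liner; here is a minimal sketch, with made-up importance vectors purely for illustration.

```python
# Minimal sketch of the weighted Jaccard similarity coefficient; the two
# importance vectors below are hypothetical, purely for illustration.
import numpy as np

def weighted_jaccard(u, v):
    """Sum of elementwise minima over sum of elementwise maxima (nonnegative inputs)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.minimum(u, v).sum() / np.maximum(u, v).sum()

model_importance = [0.4, 0.3, 0.2, 0.1]   # hypothetical model-derived weights
guideline_weight = [0.5, 0.1, 0.3, 0.1]   # hypothetical guideline-derived weights
print(weighted_jaccard(model_importance, guideline_weight))  # -> 0.666..., 1.0 would mean identical
```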
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of the constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes; a sketch of one such heuristic follows this entry.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
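The entry above does not detail its estimator, so the following is a hedged sketch of one classical spectral graph-theoretic heuristic, the eigengap of a normalized graph Laplacian, for estimating the number of classes.

```python
# Hedged sketch of the eigengap heuristic on a similarity graph; the paper's
# actual estimator may differ.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import rbf_kernel

X, _ = make_blobs(n_samples=150, centers=4, random_state=0)   # 4 hidden classes
W = rbf_kernel(X, gamma=0.5)                                  # similarity graph
d = W.sum(axis=1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))              # normalized Laplacian
eigvals = np.linalg.eigvalsh(L)                               # ascending eigenvalues
k = int(np.argmax(np.diff(eigvals[:10]))) + 1                 # largest gap among the smallest
print("estimated number of classes:", k)                      # typically 4 for this toy data
```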
- survex: an R package for explaining machine learning survival models [8.028581359682239]
We introduce the survex R package, which provides a framework for explaining any survival model by applying explainable artificial intelligence techniques.
The capabilities of the proposed software encompass understanding and diagnosing survival models, which can lead to their improvement; a rough Python analogue of such model-agnostic explanation is sketched after this entry.
arXiv Detail & Related papers (2023-08-30T16:14:20Z)
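survex itself is an R package; as a hedged, language-neutral illustration of the kind of model-agnostic explanation it provides, here is a small Python sketch of permutation importance for a Cox model. This uses scikit-survival and is not the survex API.

```python
# Illustration only: permutation importance for a survival model, analogous
# in spirit to survex's model-agnostic explanations (not its API).
import numpy as np
from sksurv.util import Surv
from sksurv.linear_model import CoxPHSurvivalAnalysis

rng = np.random.default_rng(0)
n, p = 300, 5
X = rng.normal(size=(n, p))
risk = X[:, 0] + 0.5 * X[:, 1]                    # features 0 and 1 carry signal
time = rng.exponential(scale=np.exp(-risk))       # higher risk -> shorter survival
event = rng.random(n) < 0.7                       # ~70% observed events
y = Surv.from_arrays(event=event, time=time)

model = CoxPHSurvivalAnalysis().fit(X, y)
baseline = model.score(X, y)                      # Harrell's concordance index

# Permutation importance: drop in concordance when a feature is shuffled.
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    print(f"feature {j}: importance = {baseline - model.score(Xp, y):.3f}")
```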
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions; see the sketch after this entry.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
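As a hedged illustration of the pipeline discussed above, the sketch below fits a simple T-learner (one common heterogeneous treatment effect estimator, not necessarily one the paper benchmarks) on synthetic data and then applies a post-hoc interpretability step, a shallow surrogate tree over the estimated effects.

```python
# Hedged sketch: T-learner CATE estimation plus post-hoc interpretation via
# a shallow surrogate tree; data and estimator choices are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, p = 2000, 4
X = rng.normal(size=(n, p))
t = rng.integers(0, 2, size=n)                    # binary treatment indicator
y = X[:, 0] + t * (1.0 + X[:, 1]) + rng.normal(scale=0.5, size=n)

# T-learner: separate outcome models for treated and control groups.
m1 = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[t == 1], y[t == 1])
m0 = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[t == 0], y[t == 0])
cate = m1.predict(X) - m0.predict(X)              # estimated individual effects

# Post-hoc step: a shallow surrogate tree explains the estimated CATE.
surrogate = DecisionTreeRegressor(max_depth=2).fit(X, cate)
print("surrogate importances:", surrogate.feature_importances_)  # feature 1 should dominate
```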
- Analyzing the Effects of Handling Data Imbalance on Learned Features from Medical Images by Looking Into the Models [50.537859423741644]
Training a model on an imbalanced dataset can introduce unique challenges to the learning problem.
We look deeper into the internal units of neural networks to observe how handling data imbalance affects the learned features.
arXiv Detail & Related papers (2022-04-04T09:38:38Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss (sketched after this entry) and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
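The paper's architecture is more elaborate, but the core idea of a cross reconstruction loss can be sketched as follows; the module sizes and the exact loss pairing are assumptions, not the paper's specification.

```python
# Hedged sketch of a cross reconstruction loss for two views: each view is
# decoded from the OTHER view's latent code, pushing the latents toward a
# shared representation. Sizes are made up for illustration.
import torch
import torch.nn as nn

d1, d2, k = 32, 24, 8                                # view dims and latent dim (assumed)
enc1, enc2 = nn.Linear(d1, k), nn.Linear(d2, k)
dec1, dec2 = nn.Linear(k, d1), nn.Linear(k, d2)

x1, x2 = torch.randn(16, d1), torch.randn(16, d2)    # a toy batch of paired views
z1, z2 = enc1(x1), enc2(x2)

mse = nn.MSELoss()
loss = mse(dec1(z2), x1) + mse(dec2(z1), x2)         # cross reconstruction
loss.backward()
```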
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta-learning techniques to develop a new model, which can extract common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification; a toy sketch of its classification rule follows this entry.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
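A toy sketch of the Prototypical Network classification rule referenced above: average each class's support embeddings into a prototype and assign queries to the nearest prototype. The embeddings here are random stand-ins, not the output of a trained encoder.

```python
# Minimal sketch of the Prototypical Network decision rule with toy embeddings.
import numpy as np

rng = np.random.default_rng(0)
# 5 support "embeddings" per class, 16-dimensional, shifted by class index.
support = {c: rng.normal(loc=c, size=(5, 16)) for c in range(3)}
query = rng.normal(loc=1, size=16)

prototypes = {c: emb.mean(axis=0) for c, emb in support.items()}   # class means
pred = min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))
print("predicted class:", pred)   # nearest prototype; likely class 1 here
```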
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linear issues are traditionally handled by kernel methods.
We propose merging both approaches into a single model; a toy multi-view baseline is sketched below.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
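The paper merges kernel methods with Bayesian sparse factor analysis; as a much simpler stand-in for the multi-view latent variable idea (not the paper's model), the sketch below fits canonical correlation analysis to two synthetic views sharing a latent signal.

```python
# A deliberately simple multi-view latent variable baseline (CCA), standing in
# for the paper's Bayesian sparse, kernelized factor model.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))                                         # shared latent factors
X1 = z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))  # view 1
X2 = z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(200, 8))    # view 2

cca = CCA(n_components=2).fit(X1, X2)
U, V = cca.transform(X1, X2)                                          # paired latent projections
corr = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", np.round(corr, 3))                   # should be near 1
```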
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.