PRISM: A 3D Probabilistic Neural Representation for Interpretable Shape Modeling
- URL: http://arxiv.org/abs/2602.11467v1
- Date: Thu, 12 Feb 2026 00:55:31 GMT
- Title: PRISM: A 3D Probabilistic Neural Representation for Interpretable Shape Modeling
- Authors: Yining Jiao, Sreekalyani Bhamidi, Carlton Jude Zdanski, Julia S Kimbell, Andrew Prince, Cameron P Worden, Samuel Kirse, Christopher Rutter, Benjamin H Shields, Jisan Mahmud, Marc Niethammer,
- Abstract summary: PRISM is a novel framework that bridges implicit neural representations with uncertainty-aware statistical shape analysis. A key theoretical contribution is a closed-form Fisher Information metric that enables efficient, analytically tractable local temporal uncertainty quantification.
- Score: 9.456135223836181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how anatomical shapes evolve in response to developmental covariates and quantifying their spatially varying uncertainties is critical in healthcare research. Existing approaches typically rely on global time-warping formulations that ignore spatially heterogeneous dynamics. We introduce PRISM, a novel framework that bridges implicit neural representations with uncertainty-aware statistical shape analysis. PRISM models the conditional distribution of shapes given covariates, providing spatially continuous estimates of both the population mean and covariate-dependent uncertainty at arbitrary locations. A key theoretical contribution is a closed-form Fisher Information metric that enables efficient, analytically tractable local temporal uncertainty quantification via automatic differentiation. Experiments on three synthetic datasets and one clinical dataset demonstrate PRISM's strong performance across diverse tasks within a unified framework, while providing interpretable and clinically meaningful uncertainty estimates.
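The abstract's closed-form Fisher information for covariate-dependent uncertainty can be illustrated with a small sketch. For a one-dimensional Gaussian observation model N(mu(t), sigma(t)^2) with a scalar covariate t, the Fisher information is I(t) = mu'(t)^2 / sigma(t)^2 + 2 sigma'(t)^2 / sigma(t)^2. The mean and scale functions below are illustrative stand-ins, not PRISM's actual networks, and the tiny forward-mode dual-number class stands in for a framework's automatic differentiation:

```python
import math

class Dual:
    """Minimal forward-mode autodiff value: carries f and f' together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def exp(x):  # exp lifted to Dual numbers
    e = math.exp(x.val)
    return Dual(e, e * x.dot)

# Hypothetical covariate-conditioned model at one spatial location:
# observation ~ N(mu(t), sigma(t)^2), t = developmental covariate (e.g. age).
def mu(t):     # mean trajectory (illustrative choice, not PRISM's network)
    return 0.5 * t + 0.1 * t * t

def sigma(t):  # covariate-dependent spread (illustrative choice)
    return exp(0.2 * t)

def fisher_info(t):
    """Closed-form Fisher information of N(mu(t), sigma(t)^2) w.r.t. t:
    I(t) = mu'(t)^2 / sigma(t)^2 + 2 * sigma'(t)^2 / sigma(t)^2,
    with mu'(t) and sigma'(t) obtained by forward-mode autodiff."""
    td = Dual(t, 1.0)
    m, s = mu(td), sigma(td)
    return (m.dot ** 2) / (s.val ** 2) + 2.0 * (s.dot ** 2) / (s.val ** 2)

print(fisher_info(1.0))  # ≈ 0.4085
```

A large I(t) marks covariate values where the data are locally informative about t, i.e. where temporal uncertainty is low; in PRISM this quantity varies over space as well.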
Related papers
- Generative Modeling of Clinical Time Series via Latent Stochastic Differential Equations [0.5753241925582826]
We propose a generative modeling framework that views clinical time series as discrete-time partial observations of an underlying controlled dynamical system. Our approach models latent dynamics via neural SDEs with modality-dependent emission models, while performing state estimation and parameter learning. This formulation naturally handles irregularly sampled observations, learns complex non-linear interactions, and captures the complementarity of disease progression and measurement noise.
arXiv Detail & Related papers (2025-11-20T14:50:49Z)
- Counterfactual Explanations in Medical Imaging: Exploring SPN-Guided Latent Space Manipulation [2.9810923705287524]
In medical image analysis, deep learning models have demonstrated remarkable performance. Deep generative models such as variational autoencoders (VAEs) exhibit significant generative power. Probabilistic models like sum-product networks (SPNs) efficiently represent complex joint probability distributions.
arXiv Detail & Related papers (2025-07-25T15:19:32Z)
- Nonparametric Factor Analysis and Beyond [14.232694150264628]
We propose a general framework for identifying latent variables in nonparametric settings. We show that the generative model is identifiable up to certain submanifold indeterminacies even in the presence of non-negligible noise. We have also developed corresponding estimation methods and validated them in various synthetic and real-world settings.
arXiv Detail & Related papers (2025-03-21T05:45:03Z)
- LucidAtlas: Learning Uncertainty-Aware, Covariate-Disentangled, Individualized Atlas Representations [30.072620549688953]
We develop $\texttt{LucidAtlas}$, an approach that can represent spatially varying information. Our findings underscore the critical role of by-construction interpretable models in advancing scientific discovery.
arXiv Detail & Related papers (2025-02-12T14:36:25Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
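The spectral graph-theoretic estimate of the number of novel classes mentioned above is commonly realized via the eigengap heuristic: build a similarity graph over the embeddings, take its normalized Laplacian, and read off the cluster count at the largest jump in the sorted spectrum. A minimal sketch under assumed toy data (the embeddings, kernel bandwidth, and cluster layout here are illustrative, not the paper's actual construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for latent embeddings: three well-separated clusters.
pts = np.concatenate([rng.normal(c, 0.1, size=(30, 2)) for c in (0.0, 3.0, 6.0)])

# Similarity graph via an RBF kernel, then the symmetric normalized Laplacian
# L = I - D^{-1/2} W D^{-1/2}.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
D = W.sum(axis=1)
L = np.eye(len(pts)) - W / np.sqrt(D)[:, None] / np.sqrt(D)[None, :]

# Eigengap heuristic: the estimated number of clusters is where the
# sorted Laplacian spectrum jumps the most.
eigvals = np.sort(np.linalg.eigvalsh(L))
gaps = np.diff(eigvals[:10])
n_clusters = int(np.argmax(gaps)) + 1
print(n_clusters)  # 3
```

The heuristic works because a graph with k near-disconnected components has k near-zero Laplacian eigenvalues followed by a sharp jump.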
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data [65.28160163774274]
We apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
arXiv Detail & Related papers (2022-11-09T14:48:13Z)
- From Images to Probabilistic Anatomical Shapes: A Deep Variational Bottleneck Approach [0.0]
Statistical shape modeling (SSM) directly from 3D medical images is an underutilized tool for detecting pathology, diagnosing disease, and conducting population-level morphology analysis.
In this paper, we propose a principled framework based on the variational information bottleneck theory to relax these assumptions.
Our experiments demonstrate that the proposed method provides improved accuracy and better calibrated aleatoric uncertainty estimates.
arXiv Detail & Related papers (2022-05-13T19:39:08Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize this goal and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
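The last entry's idea of using the distribution of latent representations to detect out-of-distribution data is often realized by fitting a Gaussian to in-distribution activations and scoring new inputs by Mahalanobis distance. A minimal sketch with synthetic features standing in for a trained network's activations (the dimensions, shift, and data here are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for penultimate-layer activations of in-distribution training data.
train_feats = rng.normal(0.0, 1.0, size=(500, 8))
mean = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(8)  # regularized
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of a feature vector from the fitted Gaussian; large values
    flag inputs whose activations are unlike anything seen in training."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

in_dist = rng.normal(0.0, 1.0, size=8)   # looks like training features
ood = rng.normal(5.0, 1.0, size=8)       # shifted: stands in for novel inputs
print(mahalanobis(in_dist) < mahalanobis(ood))  # True
```

The cited work asks whether this same latent distribution also tracks epistemic uncertainty, i.e. whether high-distance inputs are the ones the model generalises to poorly.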
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.