Fitting a Directional Microstructure Model to Diffusion-Relaxation MRI
Data with Self-Supervised Machine Learning
- URL: http://arxiv.org/abs/2210.02349v1
- Date: Wed, 5 Oct 2022 15:51:39 GMT
- Authors: Jason P. Lim and Stefano B. Blumberg and Neil Narayan and Sean C.
Epstein and Daniel C. Alexander and Marco Palombo and Paddy J. Slator
- Abstract summary: Self-supervised machine learning is emerging as an attractive alternative to supervised learning.
In this paper, we demonstrate self-supervised machine learning model fitting for a directional microstructural model.
Our approach shows clear improvements in parameter estimation and computational time, compared to standard non-linear least squares fitting.
- Score: 2.8167227950959206
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning is a powerful approach for fitting microstructural models to
diffusion MRI data. Early machine learning microstructure imaging
implementations trained regressors to estimate model parameters in a supervised
way, using synthetic training data with known ground truth. However, a drawback
of this approach is that the choice of training data impacts fitted parameter
values. Self-supervised learning is emerging as an attractive alternative to
supervised learning in this context. Thus far, both supervised and
self-supervised learning have typically been applied to isotropic models, such
as intravoxel incoherent motion (IVIM), as opposed to models where the
directionality of anisotropic structures is also estimated. In this paper, we
demonstrate self-supervised machine learning model fitting for a directional
microstructural model. In particular, we fit a combined T1-ball-stick model to
the multidimensional diffusion (MUDI) challenge diffusion-relaxation dataset.
Our self-supervised approach shows clear improvements in parameter estimation
and computational time, for both simulated and in-vivo brain data, compared to
standard non-linear least squares fitting. Code for the artificial neural net
constructed for this study is available for public use from the following
GitHub repository: https://github.com/jplte/deep-T1-ball-stick
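To make the idea concrete, below is a minimal sketch of a combined T1 ball-stick forward model and the self-supervised reconstruction loss it enables. This is illustrative only, not the authors' code (see their GitHub repository for that): the inversion-recovery T1 term, parameter names, and the assumption of a long TR are all assumptions made here for the sketch.

```python
import numpy as np

def t1_ball_stick(bvals, bvecs, ti, s0, f, d_par, d_iso, t1, theta, phi):
    """Forward signal for a combined T1 ball-stick model (sketch).

    Assumes inversion-recovery T1 weighting with long TR; the exact
    relaxation term and parameterization are illustrative.
    """
    # Stick (anisotropic compartment) orientation from spherical angles.
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    cos_gn = bvecs @ n                        # gradient/stick alignment
    stick = np.exp(-bvals * d_par * cos_gn ** 2)
    ball = np.exp(-bvals * d_iso)             # isotropic compartment
    t1w = np.abs(1.0 - 2.0 * np.exp(-ti / t1))
    return s0 * t1w * (f * stick + (1.0 - f) * ball)

def self_supervised_loss(signal, params, bvals, bvecs, ti):
    """Mean-squared error between measured and model-predicted signal.

    In the self-supervised setting, a neural network maps `signal` to
    `params` and this reconstruction loss is minimized -- no ground-truth
    parameter labels are required.
    """
    pred = t1_ball_stick(bvals, bvecs, ti, *params)
    return float(np.mean((signal - pred) ** 2))
```

In practice the parameters would be the outputs of a small fully connected network applied voxel-wise, trained with a gradient-based optimizer on this loss; the non-linear least squares baseline instead minimizes the same residual per voxel with an iterative solver.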
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Diffusion-Model-Assisted Supervised Learning of Generative Models for Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulation model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- On the Influence of Enforcing Model Identifiability on Learning dynamics of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Lower Bounds (ELBOs) for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Training Structured Mechanical Models by Minimizing Discrete Euler-Lagrange Residual [36.52097893036073]
Structured Mechanical Models (SMMs) are a data-efficient black-box parameterization of mechanical systems.
We propose a methodology for fitting SMMs to data by minimizing the discrete Euler-Lagrange residual.
Experiments show that our methodology learns models that are more accurate than those produced by conventional schemes for fitting SMMs.
arXiv Detail & Related papers (2021-05-05T00:44:01Z)
- Iterative Semi-parametric Dynamics Model Learning For Autonomous Racing [2.40966076588569]
We develop and apply an iterative learning semi-parametric model, with a neural network, to the task of autonomous racing.
We show that our model can learn more accurately than a purely parametric model and generalize better than a purely non-parametric model.
arXiv Detail & Related papers (2020-11-17T16:24:10Z)
- A machine learning based plasticity model using proper orthogonal decomposition [0.0]
Data-driven material models have many advantages over classical numerical approaches.
One approach to develop a data-driven material model is to use machine learning tools.
A machine learning based material modelling framework is proposed for both elasticity and plasticity.
arXiv Detail & Related papers (2020-01-07T15:46:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.