Uncertainty-Aware Self-supervised Neural Network for Liver $T_{1\rho}$
Mapping with Relaxation Constraint
- URL: http://arxiv.org/abs/2207.03105v1
- Date: Thu, 7 Jul 2022 06:10:34 GMT
- Title: Uncertainty-Aware Self-supervised Neural Network for Liver $T_{1\rho}$
Mapping with Relaxation Constraint
- Authors: Chaoxing Huang, Yurui Qian, Simon Chun Ho Yu, Jian Hou, Baiyan Jiang,
Queenie Chan, Vincent Wai-Sun Wong, Winnie Chiu-Wing Chu, Weitian Chen
- Abstract summary: Learning-based approaches can map $T_{1\rho}$ from a reduced number of $T_{1\rho}$-weighted images.
Existing methods do not provide the confidence level of the $T_{1\rho}$ estimation.
We propose a self-supervised learning neural network that learns a $T_{1\rho}$ mapping using the relaxation constraint in the learning process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: $T_{1\rho}$ mapping is a promising quantitative MRI technique for the
non-invasive assessment of tissue properties. Learning-based approaches can map
$T_{1\rho}$ from a reduced number of $T_{1\rho}$-weighted images, but require
significant amounts of high-quality training data. Moreover, existing methods
do not provide the confidence level of the $T_{1\rho}$ estimation. To address
these problems, we propose a self-supervised learning neural network that
learns a $T_{1\rho}$ mapping using the relaxation constraint in the learning
process. Epistemic uncertainty and aleatoric uncertainty are modelled for the
$T_{1\rho}$ quantification network to provide a Bayesian confidence estimation
of the $T_{1\rho}$ mapping. The uncertainty estimation can also regularize the
model to prevent it from learning imperfect data. We conducted experiments on
$T_{1\rho}$ data collected from 52 patients with non-alcoholic fatty liver
disease. The results showed that our method outperformed the existing methods
for $T_{1\rho}$ quantification of the liver using as few as two
$T_{1\rho}$-weighted images. Our uncertainty estimation provided a feasible way
of modelling the confidence of the self-supervised learning based $T_{1\rho}$
estimation, which is consistent with the reality in liver $T_{1\rho}$ imaging.
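The relaxation constraint at the heart of the method is the standard mono-exponential $T_{1\rho}$ signal model, $S(\mathrm{TSL}) = S_0 \exp(-\mathrm{TSL}/T_{1\rho})$, where TSL is the spin-lock time. As a hedged illustration, the sketch below shows the conventional per-voxel least-squares fit that learning-based mapping replaces (not the paper's network); the TSL values and voxel parameters are invented for the example:

```python
import math

# Conventional per-voxel T1rho fitting via the mono-exponential model
#   S(TSL) = S0 * exp(-TSL / T1rho),
# linearized as ln S = ln S0 - TSL / T1rho and solved by least squares.
def fit_t1rho(tsl, signals):
    n = len(tsl)
    logs = [math.log(s) for s in signals]
    mx, my = sum(tsl) / n, sum(logs) / n
    sxx = sum((x - mx) ** 2 for x in tsl)
    sxy = sum((x - mx) * (y - my) for x, y in zip(tsl, logs))
    slope = sxy / sxx                   # = -1 / T1rho
    return math.exp(my - slope * mx), -1.0 / slope  # (S0, T1rho)

# Synthetic noiseless voxel: S0 = 100, T1rho = 45 ms, four spin-lock times
tsl = [0.0, 10.0, 30.0, 50.0]           # ms (illustrative values)
signals = [100.0 * math.exp(-t / 45.0) for t in tsl]
s0, t1rho = fit_t1rho(tsl, signals)
print(round(s0, 1), round(t1rho, 1))    # → 100.0 45.0
```

In the paper's setting the network instead predicts $T_{1\rho}$ (and its uncertainty) directly from as few as two weighted images, with this relaxation model supplying the self-supervision signal.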
Related papers
- Certified Robustness Under Bounded Levenshtein Distance [55.54271307451233]
We propose the first method for computing the Lipschitz constant of convolutional classifiers with respect to the Levenshtein distance.
Our method, LipsLev, obtains $38.80\%$ and $13.93\%$ verified accuracy at distances $1$ and $2$, respectively.
arXiv Detail & Related papers (2025-01-23T13:58:53Z) - Active Subsampling for Measurement-Constrained M-Estimation of Individualized Thresholds with High-Dimensional Data [3.1138411427556445]
In measurement-constrained problems, despite the availability of large datasets, we may only be able to afford observing the labels of a small portion of the data.
This poses a critical question: which data points are most beneficial to label under a budget constraint?
In this paper, we focus on the estimation of the optimal individualized threshold in a measurement-constrained M-estimation framework.
arXiv Detail & Related papers (2024-11-21T00:21:17Z) - Transfer Learning for Latent Variable Network Models [18.31057192626801]
We study transfer learning for estimation in latent variable network models.
We show that if the latent variables are shared, then vanishing error is possible.
Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks.
arXiv Detail & Related papers (2024-06-05T16:33:30Z) - Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent [83.85536329832722]
We show that stochastic gradient descent (SGD) can efficiently solve the $k$-parity problem on the $d$-dimensional hypercube.
We then demonstrate that a neural network trained with SGD solves the $k$-parity problem with small statistical error.
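For context on why $k$-sparse parity is the standard hard case here: the label is the product of a hidden subset of $k$ coordinates of $x \in \{-1,+1\}^d$, so no single coordinate correlates with the label, which is what defeats statistical-query-style learners. A minimal sketch (the hidden subset and dimension are chosen arbitrarily for illustration):

```python
import itertools

# k-sparse parity target on the d-dimensional hypercube {-1,+1}^d:
# y = product of a hidden subset S of k coordinates.
def parity(x, subset):
    y = 1
    for i in subset:
        y *= x[i]
    return y

d, subset = 4, (0, 2)                    # illustrative: d = 4, k = 2
corr = [0.0] * d
for x in itertools.product((-1, 1), repeat=d):   # all 2^d inputs
    y = parity(x, subset)
    for i in range(d):
        corr[i] += x[i] * y / 2 ** d     # E[x_i * y]
print(corr)   # every single-coordinate correlation is exactly 0
```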
arXiv Detail & Related papers (2024-04-18T17:57:53Z) - Collaborative non-parametric two-sample testing [55.98760097296213]
The goal is to identify nodes where the null hypothesis $p_v = q_v$ should be rejected.
We propose the non-parametric collaborative two-sample testing (CTST) framework that efficiently leverages the graph structure.
Our methodology integrates elements from f-divergence estimation, kernel methods, and multitask learning.
arXiv Detail & Related papers (2024-02-08T14:43:56Z) - Online non-parametric likelihood-ratio estimation by Pearson-divergence
functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of i.i.d. observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
arXiv Detail & Related papers (2023-11-03T13:20:11Z) - An Uncertainty Aided Framework for Learning based Liver $T_{1\rho}$
Mapping and Analysis [0.7087237546722617]
We propose a learning-based quantitative MRI system for trustworthy mapping of the liver.
The framework was tested on a dataset of 51 patients with different liver fibrosis stages.
arXiv Detail & Related papers (2023-07-06T02:44:32Z) - On the Identifiability and Estimation of Causal Location-Scale Noise
Models [122.65417012597754]
We study the class of location-scale or heteroscedastic noise models (LSNMs).
We show the causal direction is identifiable up to some pathological cases.
We propose two estimators for LSNMs: an estimator based on (non-linear) feature maps, and one based on neural networks.
arXiv Detail & Related papers (2022-10-13T17:18:59Z) - Supervised Training of Conditional Monge Maps [107.78770597815242]
Optimal transport (OT) theory describes general principles to define and select, among many possible choices, the most efficient way to map a probability measure onto another.
We introduce CondOT, a multi-task approach to estimate a family of OT maps conditioned on a context variable.
We demonstrate the ability of CondOT to infer the effect of an arbitrary combination of genetic or therapeutic perturbations on single cells.
arXiv Detail & Related papers (2022-06-28T19:34:44Z) - Time-Constrained Learning [3.9093825078189006]
We present an experimental study involving 5 different learners and 20 datasets.
We show that TCT consistently outperforms two other algorithms.
While our work is primarily practical, we also show that a stripped-down version of TCT has provable guarantees.
arXiv Detail & Related papers (2022-02-04T00:15:01Z) - Differentiable Linear Bandit Algorithm [6.849358422233866]
The Upper Confidence Bound (UCB) is arguably the most commonly used method for linear multi-armed bandit problems.
We introduce a gradient estimator, which allows the confidence bound to be learned via gradient ascent.
We show that the proposed algorithm achieves a $\tilde{\mathcal{O}}(\hat{\beta}\sqrt{dT})$ upper bound on the $T$-round regret, where $d$ is the dimension of arm features and $\hat{\beta}$ is the learned size of the confidence bound.
arXiv Detail & Related papers (2020-06-04T16:43:55Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
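The plain model-based planning recipe analyzed there can be sketched as: draw samples for each (state, action) pair from the generative model, form the empirical MDP, and plan in it with value iteration. A toy illustration (the two-state MDP, transition probabilities, sample budget, and iteration count below are all invented for the sketch):

```python
import random

# Model-based planning from a generative model: estimate P_hat(s'|s,a)
# from n_samples draws per (s, a), then run value iteration on the
# empirical MDP.
def plan_from_generative_model(sample, states, actions, reward, gamma,
                               n_samples=200, iters=200):
    p_hat = {}
    for s in states:
        for a in actions:
            counts = {sp: 0 for sp in states}
            for _ in range(n_samples):
                counts[sample(s, a)] += 1       # query the simulator
            p_hat[s, a] = {sp: c / n_samples for sp, c in counts.items()}
    v = {s: 0.0 for s in states}
    for _ in range(iters):                      # value iteration
        v = {s: max(reward(s, a) + gamma * sum(p * v[sp]
                    for sp, p in p_hat[s, a].items())
                    for a in actions)
             for s in states}
    return v

# Toy 2-state MDP: action a moves to state a with prob. 0.9; state 1 pays 1.
random.seed(0)
def sample(s, a):                               # assumed simulator interface
    return a if random.random() < 0.9 else 1 - a

v = plan_from_generative_model(sample, [0, 1], [0, 1],
                               lambda s, a: float(s == 1), 0.9)
print(v[1] > v[0])  # → True: being in the rewarding state is worth more
```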