An Optimization-Based Meta-Learning Model for MRI Reconstruction with
Diverse Dataset
- URL: http://arxiv.org/abs/2110.00715v1
- Date: Sat, 2 Oct 2021 03:21:52 GMT
- Title: An Optimization-Based Meta-Learning Model for MRI Reconstruction with
Diverse Dataset
- Authors: Wanyu Bian, Yunmei Chen, Xiaojing Ye, Qingchao Zhang
- Abstract summary: We develop a generalizable MRI reconstruction model in the meta-learning framework.
The proposed network learns the regularization function in a variational model with a task-adaptive learner.
After meta-training, the network adapts quickly to unseen tasks while saving half of the training time.
- Score: 4.9259403018534496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: This work aims at developing a generalizable MRI reconstruction
model in the meta-learning framework. The standard benchmarks in meta-learning
are challenged by learning on diverse task distributions. The proposed network
learns the regularization function in a variational model and reconstructs MR
images with various under-sampling ratios or patterns that may or may not be
seen in the training data by leveraging a heterogeneous dataset. Methods: We
propose an unrolling network induced by learnable optimization algorithms (LOA)
for solving our nonconvex nonsmooth variational model for MRI reconstruction.
In this model, the learnable regularization function contains a task-invariant
common feature encoder and task-specific learner represented by a shallow
network. To train the network, we split the training data into two parts:
training and validation, and introduce a bilevel optimization algorithm. The
lower-level optimization trains task-invariant parameters for the feature
encoder with fixed parameters of the task-specific learner on the training
dataset, and the upper-level optimizes the parameters of the task-specific
learner on the validation dataset. Results: The average PSNR increases
significantly compared to the network trained through conventional supervised
learning on the seen CS ratios. After meta-training, the network adapts quickly
to unseen tasks while saving half of the training time. Conclusion: We propose a
meta-learning framework consisting of the base network architecture, the design
of the regularization, and bilevel
optimization-based training. The network inherits the convergence property of
the LOA and interpretation of the variational model. The generalization ability
is improved by the designated regularization and bilevel optimization-based
training algorithm.
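The training procedure described above is a bilevel program. In rough notation (our paraphrase, not taken verbatim from the paper), with theta the task-invariant encoder parameters and omega the task-specific learner parameters:

    \min_{\omega} \; \ell_{\mathrm{val}}\big(\theta^{*}(\omega), \omega\big)
    \quad \text{s.t.} \quad
    \theta^{*}(\omega) \in \operatorname*{arg\,min}_{\theta} \; \ell_{\mathrm{tr}}(\theta, \omega)

A minimal PyTorch sketch of one alternating approximation of this scheme follows. All names here (Regularizer, unrolled_recon, the synthetic data) are hypothetical illustrations rather than the authors' code; the data-fidelity term is a simplified real-valued stand-in for the k-space term, and the paper's actual LOA unrolling and convergence safeguards are omitted:

import torch
import torch.nn.functional as F

class Regularizer(torch.nn.Module):
    # Learned regularization: a task-invariant feature encoder (theta) shared
    # across tasks, plus one shallow task-specific learner (omega) per task.
    def __init__(self, channels=32, num_tasks=4):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Conv2d(1, channels, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(channels, channels, 3, padding=1))
        self.learners = torch.nn.ModuleList(
            torch.nn.Conv2d(channels, 1, 3, padding=1) for _ in range(num_tasks))

    def forward(self, x, task_id):
        return self.learners[task_id](self.encoder(x))

def unrolled_recon(y, mask, reg, task_id, steps=5, step_size=0.1):
    # Unrolled gradient iterations alternating a (toy, real-valued) data-fidelity
    # gradient with the learned regularization, loosely mimicking LOA unrolling.
    x = torch.zeros_like(y)
    for _ in range(steps):
        grad_fid = mask * (mask * x - y)        # stand-in for F^H(M F x - y)
        x = x - step_size * (grad_fid + reg(x, task_id))
    return x

# Synthetic stand-in data: a batch of two 64x64 images, one "task" (task 0).
x_full = torch.rand(2, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()  # toy under-sampling mask
y = mask * x_full                                # toy under-sampled measurements
(y_tr, x_tr), (y_val, x_val), task = (y[:1], x_full[:1]), (y[1:], x_full[1:]), 0

reg = Regularizer()
enc_opt = torch.optim.Adam(reg.encoder.parameters(), lr=1e-4)
lrn_opt = torch.optim.Adam(reg.learners.parameters(), lr=1e-4)

# Lower level: update the task-invariant encoder on the training split,
# holding the task-specific learner fixed.
enc_opt.zero_grad()
F.mse_loss(unrolled_recon(y_tr, mask, reg, task), x_tr).backward()
enc_opt.step()

# Upper level: update the task-specific learner on the validation split.
lrn_opt.zero_grad()
F.mse_loss(unrolled_recon(y_val, mask, reg, task), x_val).backward()
lrn_opt.step()

Note that this sketch approximates the bilevel coupling by simple alternation between the two levels rather than by differentiating through the lower-level solution, which is where the paper's dedicated bilevel algorithm does the real work.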
Related papers
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Transfer Learning with Reconstruction Loss [12.906500431427716]
This paper proposes a novel approach to model training that adds an additional reconstruction stage to the model, with an associated reconstruction loss.
The proposed approach encourages the learned features to be general and transferable, and therefore can be readily used for efficient transfer learning.
For numerical simulations, three applications are studied: transfer learning on classifying MNIST handwritten digits, the device-to-device wireless network power allocation, and the multiple-input-single-output network downlink beamforming and localization.
arXiv Detail & Related papers (2024-03-31T00:22:36Z)
- Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z)
- Task Aware Modulation using Representation Learning: An Approach for Few Shot Learning in Environmental Systems [15.40286222692196]
TAM-RL is a novel framework for few-shot learning in heterogeneous systems.
We evaluate TAM-RL on two real-world environmental datasets.
arXiv Detail & Related papers (2023-10-07T07:55:22Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- Meta Feature Modulator for Long-tailed Recognition [37.90990378643794]
We propose a meta-learning framework to model the difference between the long-tailed training data and the balanced meta data from the perspective of representation learning.
We further design a modulator network to guide the generation of the modulation parameters, and such a meta-learner can be readily adapted to train the classification network on other long-tailed datasets.
arXiv Detail & Related papers (2020-08-08T03:19:03Z)
- A Differential Game Theoretic Neural Optimizer for Training Residual Networks [29.82841891919951]
We propose a generalized Differential Dynamic Programming (DDP) neural architecture that accepts both residual connections and convolution layers.
The resulting optimal control representation admits a game-theoretic perspective, in which training residual networks can be interpreted as cooperative trajectory optimization on state-augmented systems.
arXiv Detail & Related papers (2020-07-17T10:19:17Z)
- Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z)
- Meta-learning framework with applications to zero-shot time-series forecasting [82.61728230984099]
This work provides positive evidence using a broad meta-learning framework.
Residual connections act as a meta-learning adaptation mechanism.
We show that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining.
arXiv Detail & Related papers (2020-02-07T16:39:43Z)