MOST: MR reconstruction Optimization for multiple downStream Tasks via continual learning
- URL: http://arxiv.org/abs/2409.10394v1
- Date: Mon, 16 Sep 2024 15:31:04 GMT
- Title: MOST: MR reconstruction Optimization for multiple downStream Tasks via continual learning
- Authors: Hwihun Jeong, Se Young Chun, Jongho Lee
- Abstract summary: Cascading a separately trained reconstruction network with a downstream task network has been shown to introduce performance degradation.
We extend downstream-task-oriented reconstruction optimization to multiple, sequentially introduced downstream tasks and demonstrate that a single MR reconstruction network can be optimized for all of them.
- Score: 12.0749219807816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based Magnetic Resonance (MR) reconstruction methods have focused on generating high-quality images but they often overlook the impact on downstream tasks (e.g., segmentation) that utilize the reconstructed images. Cascading a separately trained reconstruction network with a downstream task network has been shown to introduce performance degradation due to error propagation and domain gaps between training datasets. To mitigate this issue, downstream task-oriented reconstruction optimization has been proposed for a single downstream task. Expanding this optimization to multi-task scenarios is not straightforward. In this work, we extended this optimization to multiple, sequentially introduced downstream tasks and demonstrated that a single MR reconstruction network can be optimized for multiple downstream tasks by deploying continual learning (MOST). MOST integrated techniques from replay-based continual learning and image-guided loss to overcome catastrophic forgetting. Comparative experiments demonstrated that MOST outperformed a reconstruction network without finetuning, a reconstruction network with naïve finetuning, and conventional continual learning methods. This advancement empowers the application of a single MR reconstruction network for multiple downstream tasks. The source code is available at: https://github.com/SNU-LIST/MOST
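As a rough illustration of the approach described in the abstract, the snippet below shows one way replay-based continual fine-tuning of a reconstruction network with an image-guided loss might be wired up in PyTorch. The module names, the replay-buffer interface, and the loss choices are assumptions for illustration only; the authors' actual implementation is in the linked repository.

```python
# Minimal PyTorch-style sketch of replay-based continual fine-tuning of a
# reconstruction network for sequentially arriving downstream tasks.
# All names and interfaces here are illustrative, not the released MOST code.
import torch
import torch.nn.functional as F

def finetune_for_tasks(recon_net, task_nets, task_loaders, replay_buffer,
                       lambda_img=1.0, lr=1e-4, steps_per_task=1000):
    """Adapt `recon_net` to each downstream task in turn while replaying
    stored examples from earlier tasks to limit catastrophic forgetting."""
    opt = torch.optim.Adam(recon_net.parameters(), lr=lr)

    for task_id, (task_net, loader) in enumerate(zip(task_nets, task_loaders)):
        task_net.eval()  # downstream networks stay frozen; only recon_net adapts
        for step, (undersampled, fullysampled, label) in zip(range(steps_per_task), loader):
            recon = recon_net(undersampled)

            # Downstream-task-oriented loss on the current task
            # (a classification-style loss stands in for whatever each task uses).
            task_loss = F.cross_entropy(task_net(recon), label)

            # Image-guided loss keeps the reconstruction close to the reference image.
            img_loss = F.l1_loss(recon, fullysampled)
            loss = task_loss + lambda_img * img_loss

            # Replay a few stored samples from previously seen tasks
            # (hypothetical replay-buffer interface).
            for old_id, (old_x, old_ref, old_y) in replay_buffer.sample(k=2):
                old_recon = recon_net(old_x)
                old_task_loss = F.cross_entropy(task_nets[old_id](old_recon), old_y)
                loss = loss + old_task_loss + lambda_img * F.l1_loss(old_recon, old_ref)

            opt.zero_grad()
            loss.backward()
            opt.step()

        replay_buffer.store(task_id, loader)  # retain exemplars of this task
    return recon_net
```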
Related papers
- AdaIR: Exploiting Underlying Similarities of Image Restoration Tasks with Adapters [57.62742271140852]
AdaIR is a novel framework that enables low storage cost and efficient training without sacrificing performance.
AdaIR requires training only lightweight, task-specific modules, ensuring a more efficient storage and training regimen.
arXiv Detail & Related papers (2024-04-17T15:31:06Z)
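The AdaIR entry above rests on adding small task-specific modules to a shared restoration backbone. As a hedged sketch of the general adapter idea (not AdaIR's actual architecture), a residual bottleneck adapter might look like this:

```python
# Generic bottleneck adapter added to a frozen shared backbone, in the spirit
# of adapter-based multi-task restoration. Shapes and placement are illustrative.
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    """Lightweight task-specific module: project down, transform, project up,
    and add the result back to the frozen backbone feature (residual form)."""
    def __init__(self, channels: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.act = nn.GELU()
        self.up = nn.Conv2d(bottleneck, channels, kernel_size=1)
        nn.init.zeros_(self.up.weight)  # start as identity so training is stable
        nn.init.zeros_(self.up.bias)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat + self.up(self.act(self.down(feat)))

def freeze_backbone(backbone: nn.Module, adapters: nn.ModuleList):
    """Only the adapters are trained per task; the shared backbone stays frozen,
    keeping per-task storage small."""
    for p in backbone.parameters():
        p.requires_grad_(False)
    return list(adapters.parameters())
```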
- A Lightweight Recurrent Learning Network for Sustainable Compressed Sensing [27.964167481909588]
We propose a lightweight but effective deep neural network based on recurrent learning to achieve a sustainable CS system.
Our proposed model can achieve a better reconstruction quality than existing state-of-the-art CS algorithms.
arXiv Detail & Related papers (2023-04-23T14:54:15Z)
- ClassPruning: Speed Up Image Restoration Networks by Dynamic N:M Pruning [25.371802581339576]
ClassPruning can help existing methods save approximately 40% FLOPs while maintaining performance.
We propose a novel training strategy along with two additional loss terms to stabilize training and improve performance.
arXiv Detail & Related papers (2022-11-10T11:14:15Z)
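For context on the N:M sparsity that the ClassPruning entry refers to, the snippet below builds a generic 2:4 mask that keeps the two largest-magnitude weights in every group of four. This shows the sparsity pattern only; the paper's dynamic, per-class pruning decisions are not modeled.

```python
# Generic N:M weight masking (e.g., 2:4): within every group of M consecutive
# weights, keep the N largest-magnitude entries and zero the rest.
import torch

def nm_prune_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    flat = weight.reshape(-1, m)                  # groups of M consecutive weights
    idx = flat.abs().topk(n, dim=1).indices       # N largest-magnitude per group
    mask = torch.zeros_like(flat)
    mask.scatter_(1, idx, 1.0)
    return mask.reshape(weight.shape)

w = torch.randn(64, 64)          # weight matrix whose size is divisible by M
w_sparse = w * nm_prune_mask(w)  # roughly 50% of entries are zeroed for 2:4
```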
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
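The DiffuseRecon entry describes guiding a pretrained diffusion model's sampling with the observed k-space data. A generic, hedged version of such a measurement-guided step is sketched below; the `denoiser` callable and the simplified update are placeholders, not the paper's exact sampling procedure.

```python
# Hedged sketch of measurement-guided reverse diffusion for MR reconstruction:
# after each denoising step, re-impose the observed k-space samples on the
# current estimate. Names and the simplified update are illustrative only.
import torch

def data_consistency(image: torch.Tensor, kspace_obs: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """Replace sampled k-space locations of `image` with observed values."""
    k = torch.fft.fft2(image)
    k = torch.where(mask.bool(), kspace_obs, k)
    return torch.fft.ifft2(k).real

@torch.no_grad()
def guided_sampling(denoiser, kspace_obs, mask, timesteps):
    x = torch.randn_like(torch.fft.ifft2(kspace_obs).real)  # start from noise
    for t in reversed(timesteps):
        x = denoiser(x, t)                         # generic reverse-diffusion step
        x = data_consistency(x, kspace_obs, mask)  # guide with observed signals
    return x
```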
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
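The Adaptive Gradient Balancing entry above is about keeping the adversarial term from overwhelming the pixel-wise reconstruction term. One generic way to do this, sketched below under the assumption of a WGAN-style critic, is to scale the adversarial loss so its gradient norm stays at a fixed fraction of the reconstruction gradient norm; the paper's exact rule may differ.

```python
# Generic gradient-balancing sketch for a GAN-based reconstruction loss:
# scale the adversarial term so its gradient norm is a fixed fraction of the
# reconstruction gradient norm. Illustrative only; not the paper's exact rule.
import torch
import torch.nn.functional as F

def balanced_loss(recon, target, critic, generator_params, alpha=0.1, eps=1e-8):
    rec_loss = F.l1_loss(recon, target)
    adv_loss = -critic(recon).mean()          # WGAN-style generator loss

    rec_grads = torch.autograd.grad(rec_loss, generator_params, retain_graph=True)
    adv_grads = torch.autograd.grad(adv_loss, generator_params, retain_graph=True)
    rec_norm = torch.sqrt(sum((g ** 2).sum() for g in rec_grads))
    adv_norm = torch.sqrt(sum((g ** 2).sum() for g in adv_grads))

    weight = (alpha * rec_norm / (adv_norm + eps)).detach()  # adaptive weight
    return rec_loss + weight * adv_loss
```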
- An End-To-End-Trainable Iterative Network Architecture for Accelerated Radial Multi-Coil 2D Cine MR Image Reconstruction [4.233498905999929]
We propose a CNN-architecture for image reconstruction of accelerated 2D radial cine MRI with multiple receiver coils.
We investigate the proposed training-strategy and compare our method to other well-known reconstruction techniques with learned and non-learned regularization methods.
arXiv Detail & Related papers (2021-02-01T11:42:04Z)
- Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted Deep Learning [14.62432715967572]
We develop a re-weighted multi-task deep learning method to learn prior knowledge from existing large datasets.
We then utilize this prior knowledge to assist simultaneous MR reconstruction and segmentation from under-sampled k-space data.
Results show that the proposed method possesses encouraging capabilities for simultaneous and accurate MR reconstruction and segmentation.
arXiv Detail & Related papers (2020-11-27T09:08:05Z)
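The multi-task entry above couples reconstruction and segmentation from under-sampled k-space. A minimal joint training step with a re-weighted two-term loss is sketched below; the fixed weights and loss choices are illustrative assumptions, and the paper's iterative teacher-forcing scheme is not reproduced.

```python
# Generic re-weighted joint loss for simultaneous MR reconstruction and
# segmentation. Weights and interfaces are illustrative assumptions.
import torch
import torch.nn.functional as F

def joint_step(recon_net, seg_net, optimizer, undersampled, reference, seg_label,
               w_recon=1.0, w_seg=1.0):
    recon = recon_net(undersampled)     # image estimate from under-sampled data
    seg_logits = seg_net(recon)         # segmentation computed from the estimate

    recon_loss = F.l1_loss(recon, reference)
    seg_loss = F.cross_entropy(seg_logits, seg_label)
    loss = w_recon * recon_loss + w_seg * seg_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return recon_loss.item(), seg_loss.item()
```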
- A Deep-Unfolded Reference-Based RPCA Network For Video Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
arXiv Detail & Related papers (2020-10-02T11:40:09Z)
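As a hedged illustration of what unfolding RPCA into a network means in the last entry, the sketch below unrolls a few proximal iterations (singular-value thresholding for the low-rank background, soft thresholding for the sparse foreground) with learnable per-layer thresholds. The reference-based design and the temporal-correlation modeling of the actual paper are not included.

```python
# Hedged sketch of unfolding RPCA (D ≈ L + S) into a small network: each layer
# applies singular-value thresholding for the low-rank part and soft
# thresholding for the sparse part, with learnable thresholds.
import torch
import torch.nn as nn

def svt(x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Singular-value thresholding: shrink singular values by tau."""
    u, s, vh = torch.linalg.svd(x, full_matrices=False)
    return u @ torch.diag(torch.clamp(s - tau, min=0.0)) @ vh

def soft(x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Elementwise soft thresholding."""
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

class UnfoldedRPCA(nn.Module):
    def __init__(self, n_layers: int = 5):
        super().__init__()
        self.tau_l = nn.Parameter(torch.full((n_layers,), 0.1))  # low-rank thresholds
        self.tau_s = nn.Parameter(torch.full((n_layers,), 0.1))  # sparse thresholds

    def forward(self, d: torch.Tensor):
        # d: (pixels, frames) matrix of vectorized video frames
        low_rank = torch.zeros_like(d)
        sparse = torch.zeros_like(d)
        for k in range(self.tau_l.shape[0]):
            low_rank = svt(d - sparse, self.tau_l[k])   # update background
            sparse = soft(d - low_rank, self.tau_s[k])  # update foreground
        return low_rank, sparse
```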