Light-weight Deformable Registration using Adversarial Learning with
Distilling Knowledge
- URL: http://arxiv.org/abs/2110.01293v1
- Date: Mon, 4 Oct 2021 09:59:01 GMT
- Title: Light-weight Deformable Registration using Adversarial Learning with
Distilling Knowledge
- Authors: Minh Q. Tran, Tuong Do, Huy Tran, Erman Tjiputra, Quang D. Tran, Anh
Nguyen
- Abstract summary: We introduce a new Light-weight Deformable Registration network that significantly reduces the computational cost while achieving competitive accuracy.
In particular, we propose a new adversarial learning with distilling knowledge algorithm that effectively transfers meaningful information from the accurate but expensive teacher network to the student network.
Extensive experimental results on different public datasets show that our proposed method achieves state-of-the-art accuracy while being significantly faster than recent methods.
- Score: 17.475408305030278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deformable registration is a crucial step in many medical procedures such as
image-guided surgery and radiation therapy. Most recent learning-based methods
focus on improving the accuracy by optimizing the non-linear spatial
correspondence between the input images. Therefore, these methods are
computationally expensive and require modern graphic cards for real-time
deployment. In this paper, we introduce a new Light-weight Deformable
Registration network that significantly reduces the computational cost while
achieving competitive accuracy. In particular, we propose a new adversarial
learning with distilling knowledge algorithm that effectively transfers
meaningful information from the accurate but expensive teacher network to the
student network. We design the student network to be light-weight and well
suited for deployment on a typical CPU. Extensive experimental results on
different public datasets show that our proposed method achieves
state-of-the-art accuracy while being significantly faster than recent methods.
We further show that our adversarial learning algorithm is essential for a
time-efficient deformable registration method. Finally, our source code
and trained models are available at: https://github.com/aioz-ai/LDR_ALDK.
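The core idea above, distilling a teacher's deformation fields into a light-weight student with an added adversarial term, can be sketched as a combined loss. This is a minimal illustrative sketch with numpy, not the authors' implementation: the toy field shapes, the stand-in discriminator score, and the `lambda_adv` weighting are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deformation fields of shape (batch, H, W, 2): the teacher's output
# and a slightly perturbed student output. Shapes are illustrative.
teacher_field = rng.normal(size=(1, 8, 8, 2))
student_field = teacher_field + 0.1 * rng.normal(size=(1, 8, 8, 2))

def distillation_loss(student, teacher):
    """MSE between student and teacher deformation fields."""
    return float(np.mean((student - teacher) ** 2))

def adversarial_loss(disc_score_on_student):
    """Non-saturating GAN-style loss: the student is rewarded when the
    discriminator scores its field close to 1 ("looks like a teacher field")."""
    eps = 1e-7
    return float(-np.log(disc_score_on_student + eps))

# Stand-in discriminator score in (0, 1); in the actual method this would
# come from a trained discriminator network.
disc_score = 0.8

lambda_adv = 0.1  # assumed weighting between the two terms
total = distillation_loss(student_field, teacher_field) \
    + lambda_adv * adversarial_loss(disc_score)
print(round(total, 4))
```

In training, the student's parameters would be updated to minimize this combined objective while the discriminator is trained to tell student fields from teacher fields.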
Related papers
- Recurrent Inference Machine for Medical Image Registration [11.351457718409788]
We propose a novel image registration method, termed Recurrent Inference Image Registration (RIIR) network.
RIIR is formulated as a meta-learning solver to the registration problem in an iterative manner.
Our experiments showed that RIIR outperformed a range of deep learning-based methods, even with only 5% of the training data.
arXiv Detail & Related papers (2024-06-19T10:06:35Z)
- Scheduled Knowledge Acquisition on Lightweight Vector Symbolic Architectures for Brain-Computer Interfaces [18.75591257735207]
Classical feature engineering is computationally efficient but has low accuracy, whereas recent deep neural networks (DNNs) improve accuracy but are computationally expensive and incur high latency.
As a promising alternative, the low-dimensional computing (LDC) classifier based on vector symbolic architecture (VSA), achieves small model size yet higher accuracy than classical feature engineering methods.
arXiv Detail & Related papers (2024-03-18T01:06:29Z)
- Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation [21.587358050012032]
We propose a fast and novel machine unlearning paradigm at the layer level called layer attack unlearning.
In this work, we introduce the Partial-PGD algorithm to locate the samples to forget efficiently.
We also use Knowledge Distillation (KD) to reliably learn the decision boundaries from the teacher.
arXiv Detail & Related papers (2023-12-28T04:38:06Z)
- Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs [75.40636935415601]
Deep learning often faces the challenge of efficiently processing dynamic inputs, such as sensor data or user inputs.
We take an incremental computing approach, looking to reuse calculations as the inputs change.
We apply this approach to the transformers architecture, creating an efficient incremental inference algorithm with complexity proportional to the fraction of modified inputs.
arXiv Detail & Related papers (2023-07-27T16:30:27Z)
- Efficient human-in-loop deep learning model training with iterative refinement and statistical result validation [0.0]
We demonstrate a method for creating segmentations, a necessary part of a data cleaning for ultrasound imaging machine learning pipelines.
We propose a four-step method to leverage automatically generated training data and fast human visual checks to improve model accuracy while keeping the time/effort and cost low.
The method is demonstrated on a cardiac ultrasound segmentation task, removing background data, including static protected health information (PHI).
arXiv Detail & Related papers (2023-04-03T13:56:01Z)
- Data Efficient Contrastive Learning in Histopathology using Active Sampling [0.0]
Deep learning algorithms can provide robust quantitative analysis in digital pathology.
These algorithms require large amounts of annotated training data.
Self-supervised methods have been proposed to learn features using ad-hoc pretext tasks.
We propose a new method for actively sampling informative members from the training set using a small proxy network.
arXiv Detail & Related papers (2023-03-28T18:51:22Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
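The weight-averaging idea behind WEAVER, blending the new model's parameters with the old model's to retain prior knowledge, can be sketched in a few lines. This is an illustrative sketch only: the parameter names and the plain 50/50 blend are assumptions, not WEAVER's exact procedure.

```python
import numpy as np

# Two toy parameter sets standing in for an old and a newly trained model.
old_weights = {"layer1": np.array([1.0, 2.0]), "layer2": np.array([[0.5]])}
new_weights = {"layer1": np.array([3.0, 4.0]), "layer2": np.array([[1.5]])}

def weight_average(old, new, alpha=0.5):
    """Blend each parameter tensor: alpha * old + (1 - alpha) * new."""
    return {k: alpha * old[k] + (1 - alpha) * new[k] for k in old}

merged = weight_average(old_weights, new_weights)
print(merged["layer1"])  # element-wise midpoint of the two models
```

Applied sequentially after each training round, such a post-processing step infuses old knowledge into the new model without retraining on the combined data.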
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize k-nn non-parametric density estimation technique for estimating the unknown probability distributions of the data samples in the output feature space.
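The k-NN density estimate mentioned above takes a standard non-parametric form: the density at a point is proportional to k divided by the volume of the smallest ball containing its k nearest samples. A minimal sketch in 2-D, with illustrative sample data and k:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=(200, 2))  # toy points in a 2-D feature space

def knn_density(x, data, k=10):
    """k-NN density estimate: k / (n * volume of ball to the k-th neighbour)."""
    dists = np.sort(np.linalg.norm(data - x, axis=1))
    r_k = dists[k - 1]            # distance to the k-th nearest neighbour
    volume = np.pi * r_k ** 2     # area of a 2-D ball of radius r_k
    return k / (len(data) * volume)

# The estimate should be higher near the mode (origin) than in the tail.
d_center = knn_density(np.zeros(2), samples)
d_tail = knn_density(np.array([5.0, 5.0]), samples)
print(d_center > d_tail)
```

In higher-dimensional output feature spaces the ball volume formula changes accordingly, but the estimator has the same shape.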
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
- Information Theoretic Meta Learning with Gaussian Processes [74.54485310507336]
We formulate meta learning using information theoretic concepts; namely, mutual information and the information bottleneck.
By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta learning.
arXiv Detail & Related papers (2020-09-07T16:47:30Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.