Few-shot Transfer Learning for Holographic Image Reconstruction using a
Recurrent Neural Network
- URL: http://arxiv.org/abs/2201.11333v1
- Date: Thu, 27 Jan 2022 05:51:36 GMT
- Authors: Luzhe Huang, Xilin Yang, Tairan Liu, Aydogan Ozcan
- Abstract summary: We show a few-shot transfer learning method that helps a holographic image reconstruction deep neural network rapidly generalize to new types of samples using small datasets.
We validated the effectiveness of this approach by successfully generalizing to new types of samples using small holographic datasets for training.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based methods in computational microscopy have been shown to be
powerful but in general face some challenges due to limited generalization to
new types of samples and requirements for large and diverse training data.
Here, we demonstrate a few-shot transfer learning method that helps a
holographic image reconstruction deep neural network rapidly generalize to new
types of samples using small datasets. We pre-trained a convolutional recurrent
neural network on a large dataset with diverse types of samples, which serves
as the backbone model. By fixing the recurrent blocks and transferring the rest
of the convolutional blocks of the pre-trained model, we reduced the number of
trainable parameters by ~90% compared with standard transfer learning, while
achieving equivalent generalization. We validated the effectiveness of this
approach by successfully generalizing to new types of samples using small
holographic datasets for training, and achieved (i) ~2.5-fold convergence speed
acceleration, (ii) ~20% computation time reduction per epoch, and (iii)
improved reconstruction performance over baseline network models trained from
scratch. This few-shot transfer learning approach can potentially be applied in
other microscopic imaging methods, helping to generalize to new types of
samples without the need for extensive training time and data.
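The core of the transfer scheme is a parameter split: the recurrent blocks of the pre-trained backbone hold the bulk of the weights and are frozen, while only the convolutional blocks remain trainable on the new sample type, which is what yields the ~90% reduction in trainable parameters versus fine-tuning everything. The sketch below illustrates that arithmetic only; the block names and parameter counts are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): parameter budgeting for the
# freeze-recurrent / transfer-convolutional few-shot scheme described above.
# Block names and parameter counts are assumptions chosen so the recurrent
# blocks dominate the backbone, as the ~90% reduction implies.

def count_trainable(blocks, frozen=()):
    """Sum parameter counts over all blocks whose name is not frozen."""
    return sum(n for name, n in blocks.items() if name not in frozen)

# Assumed convolutional recurrent backbone (parameter counts are made up):
backbone = {
    "conv_encoder": 400_000,    # convolutional blocks: transferred, trainable
    "conv_decoder": 500_000,    # convolutional blocks: transferred, trainable
    "recurrent":  8_100_000,    # recurrent blocks: frozen after pre-training
}

standard = count_trainable(backbone)                       # fine-tune all blocks
few_shot = count_trainable(backbone, frozen={"recurrent"})  # freeze recurrent blocks

reduction = 1 - few_shot / standard
print(f"trainable: {few_shot:,} of {standard:,} ({reduction:.0%} reduction)")
```

In a deep-learning framework the same split is expressed by disabling gradients on the recurrent blocks (e.g. setting their parameters' `requires_grad` flag to false in PyTorch) before fine-tuning on the small target dataset.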
Related papers
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy (2024-03-25)
  We introduce deep unrolled self-supervised learning, which alleviates the need for training data by training a sequence-specific, model-based autoencoder. Our proposed method exceeds the performance of its supervised counterparts.
- Diffusion-Based Neural Network Weights Generation (2024-02-28)
  D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning. Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation. Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
- Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset (2023-06-14)
  An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study. The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
- With Greater Distance Comes Worse Performance: On the Perspective of Layer Utilization and Model Generalization (2022-01-28)
  Generalization of deep neural networks remains one of the main open problems in machine learning. Early layers generally learn representations relevant to performance on both training and testing data, while deeper layers only minimize training risk and fail to generalize well to testing or mislabeled data.
- Is Deep Image Prior in Need of a Good Education? (2021-11-23)
  Deep image prior was introduced as an effective prior for image reconstruction. Despite its impressive reconstructive properties, the approach is slow compared to learned or traditional reconstruction techniques. We develop a two-stage learning paradigm to address this computational challenge.
- Conditional Variational Autoencoder for Learned Image Reconstruction (2021-10-22)
  We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation. It handles implicit noise models and priors, incorporates the data-formation process (i.e., the forward operator), and its learned reconstructive properties are transferable between different datasets.
- Transfer of Pretrained Model Weights Substantially Improves Semi-Supervised Image Classification (2021-09-02)
  Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples, but tend to overfit when only small amounts of labeled data are available. We show that transfer learning always substantially improves the model's accuracy when few labeled examples are available.
- On Robustness and Transferability of Convolutional Neural Networks (2020-07-16)
  Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts. We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time, and find that increasing both the training set and model sizes significantly improves robustness to distributional shift.
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification (2020-07-11)
  Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains. In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation (2020-05-07)
  Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration. We show that an un-trained convolutional neural network can approximately reconstruct sufficiently structured signals and images from a near-minimal number of random measurements.
- Two-Stage Resampling for Convolutional Neural Network Training in the Imbalanced Colorectal Cancer Image Classification (2020-04-07)
  Data imbalance is one of the open challenges in contemporary machine learning. Traditional data-level approaches for dealing with it are ill-suited for image data, so we propose a novel two-stage resampling methodology to alleviate the problems associated with over- and undersampling.
This list is automatically generated from the titles and abstracts of the papers in this site.