Ensemble of Pre-Trained Neural Networks for Segmentation and Quality
Detection of Transmission Electron Microscopy Images
- URL: http://arxiv.org/abs/2209.01908v1
- Date: Mon, 5 Sep 2022 11:15:25 GMT
- Title: Ensemble of Pre-Trained Neural Networks for Segmentation and Quality
Detection of Transmission Electron Microscopy Images
- Authors: Arun Baskaran, Yulin Lin, Jianguo Wen, Maria K.Y. Chan
- Abstract summary: Two types of ensembles of pre-trained neural networks were implemented in this work.
The ensembles performed semantic segmentation of ice crystal within a two-phase mixture.
The performance of EA and ER was evaluated on three different metrics: accuracy, calibration, and uncertainty.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated analysis of electron microscopy datasets poses multiple challenges,
such as limited training-dataset size and variation in data distribution induced
by differences in sample quality and experimental conditions. It is crucial for
the trained model to continue to provide acceptable segmentation/classification
performance on new data, and to quantify the uncertainty associated with its
predictions. Across the broad applications of machine learning, various
approaches have been adopted to quantify uncertainty, such as Bayesian modeling,
Monte Carlo dropout, and ensembles. With the aim of addressing the challenges
specific to the data domain of electron microscopy, two different types of
ensembles of pre-trained neural networks were implemented in this work. The
ensembles performed semantic segmentation of ice crystals within a two-phase
mixture, thereby tracking the phase transformation to water. The first ensemble
(EA) is composed of U-net style networks having different underlying
architectures, whereas the second series of ensembles (ER-i) is composed of
randomly initialized U-net style networks, wherein each base learner has the
same underlying architecture 'i'. The encoders of the base learners were
pre-trained on the ImageNet dataset. The performance of EA and ER was evaluated
on three different metrics: accuracy, calibration, and uncertainty. EA exhibits
greater classification accuracy and is better calibrated than ER. While the
uncertainty quantification of the two types of ensembles is comparable, the
uncertainty scores exhibited by ER were found to depend on the specific
architecture of its base members ('i') and were not consistently better than
those of EA. Thus, the challenges posed by the analysis of electron microscopy
datasets appear to be better addressed by an ensemble design like EA than by
one like ER.
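The abstract describes combining the per-pixel predictions of ensemble members and scoring the uncertainty of the combined prediction. As a minimal sketch of that idea (not the authors' code; `ensemble_predict`, the toy probabilities, and the use of predictive entropy as the uncertainty score are illustrative assumptions), the mean-softmax prediction and per-pixel uncertainty of an ensemble could be computed as:

```python
import numpy as np

def ensemble_predict(member_probs):
    """Combine per-pixel class probabilities from M ensemble members.

    member_probs: array of shape (M, H, W, C), where each member's slice
    holds softmax probabilities over C classes for every pixel.
    Returns the averaged probability map and the predictive entropy,
    a common per-pixel uncertainty score for ensembles.
    """
    mean_probs = member_probs.mean(axis=0)  # (H, W, C)
    eps = 1e-12  # guard against log(0)
    entropy = -(mean_probs * np.log(mean_probs + eps)).sum(axis=-1)  # (H, W)
    return mean_probs, entropy

# Toy example: 3 members, a 2x2 image, 2 classes (ice vs. water).
probs = np.array([
    [[[0.9, 0.1], [0.6, 0.4]], [[0.2, 0.8], [0.5, 0.5]]],
    [[[0.8, 0.2], [0.7, 0.3]], [[0.1, 0.9], [0.5, 0.5]]],
    [[[0.7, 0.3], [0.5, 0.5]], [[0.3, 0.7], [0.5, 0.5]]],
])
mean_probs, uncertainty = ensemble_predict(probs)
# Pixels where members disagree, or are individually unsure, receive
# higher entropy; the bottom-right pixel here is maximally uncertain.
```

In this sketch the members of an EA-style ensemble would differ in architecture, while an ER-i ensemble would hold the architecture fixed and vary the random initialization; either way the combination step above is the same.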
Related papers
- PRISM: Exploring Heterogeneous Pretrained EEG Foundation Model Transfer to Clinical Differential Diagnosis [5.616707402426108]
We introduce PRISM, a masked autoencoder ablated along two axes: pretraining population and downstream adaptation. We compare a narrow-source EU/US corpus against a geographically diverse pool augmented with multi-center South Asian clinical recordings. PRISM matches or outperforms REVE (92 datasets, 60,000+ hours) on the majority of tasks.
arXiv Detail & Related papers (2026-02-28T19:50:28Z) - mixEEG: Enhancing EEG Federated Learning for Cross-subject EEG Classification with Tailored mixup [5.367329958716485]
Cross-subject electroencephalography (EEG) classification exhibits great challenges due to the diversity of cognitive processes and physiological structures between different subjects.
Privacy concerns associated with EEG pose significant limitations to data sharing between different hospitals and institutions.
Federated learning (FL) enables multiple decentralized clients to collaboratively train a global model without direct communication of raw data.
arXiv Detail & Related papers (2025-04-07T06:24:23Z) - Few-shot learning for COVID-19 Chest X-Ray Classification with
Imbalanced Data: An Inter vs. Intra Domain Study [49.5374512525016]
Medical image datasets are essential for training models used in computer-aided diagnosis, treatment planning, and medical research.
Some challenges are associated with these datasets, including variability in data distribution, data scarcity, and transfer learning issues when using models pre-trained from generic images.
We propose a methodology based on Siamese neural networks in which a series of techniques are integrated to mitigate the effects of data scarcity and distribution imbalance.
arXiv Detail & Related papers (2024-01-18T16:59:27Z) - Role of Structural and Conformational Diversity for Machine Learning
Potentials [4.608732256350959]
We investigate the relationship between data biases and model generalization in Quantum Mechanics.
Our results reveal nuanced patterns in generalization metrics.
These findings provide valuable insights and guidelines for QM data generation efforts.
arXiv Detail & Related papers (2023-10-30T19:33:12Z) - The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease
detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of challenging problem in healthcare.
Within this framework, we train predictive 15 models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z) - DA-VEGAN: Differentiably Augmenting VAE-GAN for microstructure
reconstruction from extremely small data sets [110.60233593474796]
DA-VEGAN is a model with two central innovations.
A β-variational autoencoder is incorporated into a hybrid GAN architecture.
A custom differentiable data augmentation scheme is developed specifically for this architecture.
arXiv Detail & Related papers (2023-02-17T08:49:09Z) - Using Mixed-Effect Models to Learn Bayesian Networks from Related Data
Sets [0.04297070083645048]
We provide an analogous solution for learning a Bayesian network from continuous data using mixed-effects models.
We study its structural, parametric, predictive and classification accuracy.
The improvement is marked for low sample sizes and for unbalanced data sets.
arXiv Detail & Related papers (2022-06-08T08:32:32Z) - Correlator Convolutional Neural Networks: An Interpretable Architecture
for Image-like Quantum Matter Data [15.283214387433082]
We develop a network architecture that discovers features in the data which are directly interpretable in terms of physical observables.
Our approach lends itself well to the construction of simple, end-to-end interpretable architectures.
arXiv Detail & Related papers (2020-11-06T17:04:10Z) - The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network
Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z) - Repulsive Mixture Models of Exponential Family PCA for Clustering [127.90219303669006]
The mixture extension of exponential family principal component analysis (EPCA) was designed to encode much more structural information about data distribution than the traditional EPCA.
The traditional mixture of local EPCAs has the problem of model redundancy, i.e., overlaps among mixing components, which may cause ambiguity for data clustering.
In this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
arXiv Detail & Related papers (2020-04-07T04:07:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.