Generalization Across Experimental Parameters in Machine Learning
Analysis of High Resolution Transmission Electron Microscopy Datasets
- URL: http://arxiv.org/abs/2306.11853v1
- Date: Tue, 20 Jun 2023 19:13:49 GMT
- Title: Generalization Across Experimental Parameters in Machine Learning
Analysis of High Resolution Transmission Electron Microscopy Datasets
- Authors: Katherine Sytwu, Luis Rangel DaCosta, Mary C. Scott
- Abstract summary: We train and validate neural networks across curated, experimentally-collected high-resolution TEM image datasets of nanoparticles.
We find that our neural networks are not robust across microscope parameters, but do generalize across certain sample parameters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks are promising tools for high-throughput and accurate
transmission electron microscopy (TEM) analysis of nanomaterials, but are known
to generalize poorly on data that is "out-of-distribution" from their training
data. Given the limited set of image features typically seen in high-resolution
TEM imaging, it is unclear which images are considered out-of-distribution from
others. Here, we investigate how the choice of metadata features in the
training dataset influences neural network performance, focusing on the example
task of nanoparticle segmentation. We train and validate neural networks across
curated, experimentally-collected high-resolution TEM image datasets of
nanoparticles under controlled imaging and material parameters, including
magnification, dosage, nanoparticle diameter, and nanoparticle material.
Overall, we find that our neural networks are not robust across microscope
parameters, but do generalize across certain sample parameters. Additionally,
data preprocessing heavily influences the generalizability of neural networks
trained on nominally similar datasets. Our results highlight the need to
understand how dataset features affect deployment of data-driven algorithms.
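The paper itself does not include code here, but the evaluation it describes, training a segmentation network on images that share one value of a metadata field and testing on images with other values, can be sketched in a few lines. The snippet below is a hypothetical illustration in PyTorch: the record format, the normalize preprocessing, and the tiny stand-in network are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: probe cross-parameter generalization of a nanoparticle
# segmentation network by splitting a dataset on one metadata field.
from collections import defaultdict

import numpy as np
import torch
import torch.nn as nn


def normalize(img):
    # Per-image standardization; the abstract notes that preprocessing
    # choices like this strongly influence generalizability.
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)


def split_by_metadata(records, key):
    # Group records by one metadata field, e.g. "magnification",
    # "dose", "diameter_nm", or "material" (field names assumed here).
    groups = defaultdict(list)
    for rec in records:
        groups[rec["meta"][key]].append(rec)
    return groups


def to_tensors(recs):
    x = torch.stack([torch.from_numpy(normalize(r["image"]))[None] for r in recs])
    y = torch.stack([torch.from_numpy(r["mask"].astype(np.float32))[None] for r in recs])
    return x, y


def iou(logits, target, thresh=0.5):
    pred = (torch.sigmoid(logits) > thresh).float()
    inter = (pred * target).sum()
    union = ((pred + target) > 0).float().sum()
    return (inter / (union + 1e-8)).item()


def make_net():
    # Tiny stand-in segmentation network, not the architecture from the paper.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),
    )


def cross_parameter_eval(records, key, model_fn=make_net, epochs=20):
    # Train on each metadata group in turn, then test on every group.
    groups = split_by_metadata(records, key)
    results = {}
    for train_val, train_recs in groups.items():
        net, loss_fn = model_fn(), nn.BCEWithLogitsLoss()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        x, y = to_tensors(train_recs)
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(net(x), y).backward()
            opt.step()
        with torch.no_grad():
            for test_val, test_recs in groups.items():
                xt, yt = to_tensors(test_recs)
                results[(train_val, test_val)] = iou(net(xt), yt)
    return results
```

Given records of the form {"image": np.ndarray, "mask": np.ndarray, "meta": {"magnification": "80kx", ...}}, calling cross_parameter_eval(records, "magnification") would yield a train-group by test-group IoU table, the kind of cross-comparison the abstract refers to when it reports that the networks are not robust across microscope parameters.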
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embeddings.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- A robust synthetic data generation framework for machine learning in High-Resolution Transmission Electron Microscopy (HRTEM) [1.0923877073891446]
Construction Zone is a Python package for rapidly generating complex nanoscale atomic structures.
We develop an end-to-end workflow for creating large simulated databases for training neural networks.
Using our results, we are able to achieve state-of-the-art segmentation performance on experimental HRTEM images of nanoparticles.
arXiv Detail & Related papers (2023-09-12T10:44:15Z)
- Spherical convolutional neural networks can improve brain microstructure estimation from diffusion MRI data [0.35998666903987897]
Diffusion magnetic resonance imaging is sensitive to the microstructural properties of brain tissue.
Estimating clinically and scientifically relevant microstructural properties from the measured signals remains a highly challenging inverse problem that machine learning may help solve.
We trained a spherical convolutional neural network to predict the ground-truth parameter values from efficiently simulated noisy data.
arXiv Detail & Related papers (2022-11-17T20:52:00Z)
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals from each sensor separately and to take into account their correlations and hidden relationships with each other.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
It is proposed to construct the graph during training of the graph neural network. This allows models to be trained on data where the dependencies between the sensors are not known in advance.
arXiv Detail & Related papers (2022-10-20T11:03:21Z)
- Automated Classification of Nanoparticles with Various Ultrastructures and Sizes [0.6927055673104933]
We present a deep-learning-based method for nanoparticle measurement and classification trained on a small dataset of scanning transmission electron microscopy images.
Our approach consists of two stages: localization, i.e., detection of nanoparticles, and classification, i.e., categorization of their ultrastructure.
We show how the generation of synthetic images, either using image processing or using various image generation neural networks, can be used to improve the results in both stages.
arXiv Detail & Related papers (2022-07-28T11:31:43Z)
- Understanding the Influence of Receptive Field and Network Complexity in Neural-Network-Guided TEM Image Analysis [0.0]
We systematically examine how neural network architecture choices affect how neural networks segment nanoparticles in transmission electron microscopy (TEM) images.
We find that for low-resolution TEM images which rely on amplitude contrast to distinguish nanoparticles from background, the receptive field does not significantly influence segmentation performance.
On the other hand, for high-resolution TEM images which rely on a combination of amplitude and phase contrast changes to identify nanoparticles, receptive field is a key parameter for increased performance (a short receptive-field calculation is sketched after this list).
arXiv Detail & Related papers (2022-04-08T18:45:15Z)
- Imaging Conductivity from Current Density Magnitude using Neural Networks [1.8692254863855962]
We develop a neural network based reconstruction technique for imaging the conductivity from the magnitude of the internal current density.
It is observed that the approach enjoys remarkable robustness with respect to the presence of data noise.
arXiv Detail & Related papers (2022-04-05T18:31:03Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Spectral Complexity-scaled Generalization Bound of Complex-valued Neural Networks [78.64167379726163]
This paper is the first work to prove a generalization bound for complex-valued neural networks.
We conduct experiments by training complex-valued convolutional neural networks on different datasets.
arXiv Detail & Related papers (2021-12-07T03:25:25Z)
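The receptive-field finding quoted above for "Understanding the Influence of Receptive Field and Network Complexity in Neural-Network-Guided TEM Image Analysis" can be made concrete with a small calculation. The helper below is an illustrative sketch, not code from that paper, and the example layer stack is hypothetical; it uses the standard recurrence for the receptive field of a plain chain of convolution and pooling layers.

```python
def receptive_field(layers):
    """Receptive field of a plain chain of conv/pool layers.

    Each layer is (kernel_size, stride, dilation). Standard recurrence:
    rf_out = rf_in + dilation * (kernel - 1) * jump_in, jump_out = jump_in * stride.
    """
    rf, jump = 1, 1
    for kernel, stride, dilation in layers:
        rf += dilation * (kernel - 1) * jump
        jump *= stride
    return rf


# Hypothetical encoder: four 3x3 convs separated by 2x2 max-pools.
encoder = [(3, 1, 1), (2, 2, 1), (3, 1, 1), (2, 2, 1), (3, 1, 1), (2, 2, 1), (3, 1, 1)]
print(receptive_field(encoder))  # -> 38 pixels
```

Whether roughly 38 pixels of context is sufficient plausibly depends on the imaging mode: for high-resolution TEM, where nanoparticles are identified from amplitude and phase contrast rather than amplitude contrast alone, that paper reports receptive field becomes a key parameter.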