Identification and classification of exfoliated graphene flakes from
microscopy images using a hierarchical deep convolutional neural network
- URL: http://arxiv.org/abs/2203.15252v1
- Date: Tue, 29 Mar 2022 05:54:06 GMT
- Title: Identification and classification of exfoliated graphene flakes from
microscopy images using a hierarchical deep convolutional neural network
- Authors: Soroush Mahjoubi, Fan Ye, Yi Bao, Weina Meng, Xian Zhang
- Abstract summary: This paper presents a deep learning method to automatically identify and classify the thickness of exfoliated graphene flakes on Si/SiO2 substrates.
The presented method uses a hierarchical deep convolutional neural network that is capable of learning new images while preserving the knowledge from previous images.
The results indicated that our deep learning model has accuracy as high as 99% in identifying and classifying exfoliated graphene flakes.
- Score: 4.464084686836888
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Identification of mechanically exfoliated graphene flakes and
classification of their thickness are important in the nanomanufacturing of
next-generation materials and devices that overcome the bottleneck of Moore's
Law. Currently, identification and classification of exfoliated graphene flakes
are performed manually by inspecting optical microscope images. Existing
state-of-the-art machine learning methods for automatic identification cannot
accommodate images with different backgrounds, even though varying backgrounds
are unavoidable in experiments. This paper presents a deep learning
method to automatically identify and classify the thickness of exfoliated
graphene flakes on Si/SiO2 substrates from optical microscope images with
various settings and background colors. The presented method uses a
hierarchical deep convolutional neural network that is capable of learning new
images while preserving the knowledge from previous images. The deep learning
model was trained and used to classify exfoliated graphene flakes into
monolayer (1L), bi-layer (2L), tri-layer (3L), four-to-six-layer (4-6L),
seven-to-ten-layer (7-10L), and bulk categories. Compared with existing machine
learning methods, the presented method possesses high accuracy and efficiency
as well as robustness to the backgrounds and resolutions of images. The results
indicated that our deep learning model has accuracy as high as 99% in
identifying and classifying exfoliated graphene flakes. This research will shed
light on scaled-up manufacturing and characterization of graphene for advanced
materials and devices.
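The paper does not include code; the snippet below is a minimal sketch, in PyTorch, of the kind of six-class thickness classifier the abstract describes. The ResNet-18 backbone, input size, and preprocessing are illustrative assumptions and do not reproduce the authors' hierarchical network or its knowledge-preserving training scheme.

```python
# Minimal sketch (PyTorch): a six-class flake-thickness classifier.
# Backbone and hyperparameters are illustrative assumptions, not the
# paper's hierarchical deep CNN.
import torch
import torch.nn as nn
from torchvision import models, transforms

THICKNESS_CLASSES = ["1L", "2L", "3L", "4-6L", "7-10L", "bulk"]

# Preprocessing for optical microscope images of flakes on Si/SiO2.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Stand-in backbone: ResNet-18 with a six-way classification head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(THICKNESS_CLASSES))
model.eval()

def classify_flake(pil_image):
    """Return the predicted thickness category for one microscope image."""
    x = preprocess(pil_image).unsqueeze(0)      # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    return THICKNESS_CLASSES[logits.argmax(dim=1).item()]
```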
Related papers
- MaskTerial: A Foundation Model for Automated 2D Material Flake Detection [48.73213960205105]
We present a deep learning model, called MaskTerial, that uses an instance segmentation network to reliably identify 2D material flakes.
The model is extensively pre-trained using a synthetic data generator that produces realistic microscopy images from unlabeled data.
We demonstrate significant improvements over existing techniques in the detection of low-contrast materials such as hexagonal boron nitride.
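MaskTerial's own model is not reproduced here; as a rough sketch of the instance-segmentation approach the summary describes, the snippet below runs an off-the-shelf Mask R-CNN from torchvision on a microscopy image. The weights, score threshold, and file name are assumptions.

```python
# Illustrative only: generic instance segmentation with torchvision's
# Mask R-CNN, standing in for a 2D-material flake detector such as MaskTerial.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = Image.open("flake_micrograph.png").convert("RGB")  # hypothetical file
x = transforms.ToTensor()(image)

with torch.no_grad():
    output = model([x])[0]           # dict with boxes, labels, scores, masks

# Keep confident detections; each mask is a per-pixel probability map.
keep = output["scores"] > 0.5
masks = output["masks"][keep]        # shape: (N, 1, H, W)
boxes = output["boxes"][keep]
print(f"{masks.shape[0]} candidate flakes detected")
```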
arXiv Detail & Related papers (2024-12-12T15:01:39Z)
- Mew: Multiplexed Immunofluorescence Image Analysis through an Efficient Multiplex Network [84.88767228835928]
We introduce Mew, a novel framework designed to efficiently process mIF images through the lens of multiplex networks.
Mew innovatively constructs a multiplex network comprising two distinct layers: a Voronoi network for geometric information and a Cell-type network for capturing cell-wise homogeneity.
The framework is equipped with a scalable and efficient Graph Neural Network (GNN) capable of processing the entire graph during training.
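As a loose illustration of the two-layer construction described above, the sketch below builds a geometric (Voronoi/Delaunay) adjacency and a cell-type adjacency from toy cell centroids; the data and construction details are assumptions, not Mew's implementation.

```python
# Toy sketch: the two layers of a multiplex cell graph, a geometric
# (Voronoi/Delaunay) layer and a cell-type layer. Not Mew's actual GNN.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
coords = rng.random((50, 2))                  # toy cell centroids
cell_type = rng.integers(0, 3, size=50)       # toy cell-type labels

n = len(coords)
geom_adj = np.zeros((n, n), dtype=bool)       # layer 1: spatial neighbours
type_adj = np.zeros((n, n), dtype=bool)       # layer 2: same-type cells

# Delaunay triangulation is the dual of the Voronoi diagram: cells whose
# Voronoi regions touch end up connected here.
for simplex in Delaunay(coords).simplices:
    for i in simplex:
        for j in simplex:
            if i != j:
                geom_adj[i, j] = True

# Cell-type layer: connect cells sharing a type label.
for t in np.unique(cell_type):
    idx = np.where(cell_type == t)[0]
    type_adj[np.ix_(idx, idx)] = True
np.fill_diagonal(type_adj, False)

print("geometric edges:", geom_adj.sum() // 2, "| type edges:", type_adj.sum() // 2)
```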
arXiv Detail & Related papers (2024-07-25T08:22:30Z)
- High-Throughput Phenotyping using Computer Vision and Machine Learning [0.0]
We used a dataset provided by Oak Ridge National Laboratory with 1,672 images of Populus trichocarpa with white labels displaying the treatment.
Optical character recognition (OCR) was used to read these labels on the plants.
Machine learning models were used to predict treatment from those classifications, and encoded EXIF tags were analyzed to determine leaf size and correlations between phenotypes.
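A minimal sketch of such an OCR-plus-EXIF step, assuming pytesseract for label reading and Pillow for EXIF access; the file name and tag handling are hypothetical, not the study's exact tooling.

```python
# Minimal sketch of an OCR + EXIF phenotyping step.
import pytesseract
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("plant_0001.jpg")          # hypothetical plant image

# Read the white treatment label in the frame with OCR.
label_text = pytesseract.image_to_string(image).strip()

# Pull EXIF metadata (e.g., focal length, resolution) for downstream
# leaf-size estimation and phenotype correlations.
exif = {TAGS.get(tag_id, tag_id): value
        for tag_id, value in image.getexif().items()}

print("treatment label:", label_text)
print("exif tags found:", len(exif))
```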
arXiv Detail & Related papers (2024-07-08T19:46:31Z)
- Phenotype-preserving metric design for high-content image reconstruction by generative inpainting [0.0]
We evaluate the state-of-the-art inpainting methods for image restoration in a high-content fluorescence microscopy dataset of cultured cells.
We show that architectures like DeepFill V2 and Edge Connect can faithfully restore microscopy images upon fine-tuning with relatively little data.
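DeepFill V2 and Edge Connect are not sketched here; as a toy illustration of the mask-restore-score setup the summary implies, the snippet below uses a classical OpenCV inpainter as a stand-in and a generic fidelity metric. The file name, mask geometry, and metric choice are assumptions.

```python
# Toy mask-restore-score loop with a classical inpainter standing in for
# neural models such as DeepFill V2 / Edge Connect.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

original = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file

# Simulate a corrupted acquisition by masking a square region.
mask = np.zeros_like(original, dtype=np.uint8)
mask[100:160, 100:160] = 255
corrupted = original.copy()
corrupted[mask > 0] = 0

# Classical inpainting baseline.
restored = cv2.inpaint(corrupted, mask, 5, cv2.INPAINT_TELEA)

# A generic fidelity score; the paper argues for phenotype-preserving
# metrics beyond scores like this one.
print("SSIM vs. original:", ssim(original, restored, data_range=255))
```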
arXiv Detail & Related papers (2023-07-26T18:13:16Z)
- Application of Artificial Intelligence in the Classification of Microscopical Starch Images for Drug Formulation [0.0]
Starches are important energy sources found in plants with many uses in the pharmaceutical industry.
In this work, we applied artificial intelligence techniques (transfer learning with deep convolutional neural networks, CNNs) to microscopical images obtained from 9 starch samples of different botanical sources.
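A minimal transfer-learning sketch consistent with that description (pretrained CNN backbone, nine-way classification head); the ResNet-50 choice, frozen backbone, and hyperparameters are assumptions, not the study's configuration.

```python
# Minimal transfer-learning sketch for nine starch classes.
import torch.nn as nn
from torch.optim import Adam
from torchvision import models

NUM_SOURCES = 9  # starch samples from nine botanical sources

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():        # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_SOURCES)  # new trainable head

optimizer = Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A training loop over labelled micrograph batches would go here.
```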
arXiv Detail & Related papers (2023-05-09T10:16:02Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on deepfake detection generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaves images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
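A brief sketch of a leaf-versus-background semantic segmentation setup; the DeepLabV3 backbone and the two-class labeling are assumptions, not the study's model.

```python
# Illustrative semantic-segmentation setup: leaf vs. background.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=2).eval()  # 0: background, 1: leaf

x = torch.rand(1, 3, 512, 512)          # stand-in for a grapevine image batch
with torch.no_grad():
    logits = model(x)["out"]            # shape: (1, 2, 512, 512)
leaf_mask = logits.argmax(dim=1)        # per-pixel class map
print("leaf pixels:", int((leaf_mask == 1).sum()))
```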
arXiv Detail & Related papers (2022-10-24T14:37:09Z)
- Automated Classification of Nanoparticles with Various Ultrastructures and Sizes [0.6927055673104933]
We present a deep-learning-based method for nanoparticle measurement and classification, trained on a small data set of scanning transmission electron microscopy images.
Our approach is comprised of two stages: localization, i.e., detection of nanoparticles, and classification, i.e., categorization of their ultrastructure.
We show how the generation of synthetic images, either using image processing or using various image generation neural networks, can be used to improve the results in both stages.
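A rough two-stage sketch matching that localization-then-classification structure; both networks, the score threshold, and the four assumed ultrastructure classes are illustrative stand-ins, not the paper's models.

```python
# Two-stage sketch: (1) localize particles, (2) classify each crop's
# ultrastructure.
import torch
from torchvision import models
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resize

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # stage 1: localization
classifier = models.resnet18(num_classes=4).eval()            # stage 2: 4 assumed classes

image = torch.rand(3, 1024, 1024)        # stand-in for a STEM micrograph
with torch.no_grad():
    detections = detector([image])[0]
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < 0.5:
            continue
        x0, y0, x1, y1 = box.int().tolist()
        crop = resize(image[:, y0:y1, x0:x1], [64, 64])   # per-particle patch
        ultra_class = classifier(crop.unsqueeze(0)).argmax(dim=1).item()
        print("particle at", (x0, y0, x1, y1), "-> class", ultra_class)
```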
arXiv Detail & Related papers (2022-07-28T11:31:43Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
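A small sketch of the underlying idea: reuse a localization classifier as a representation extractor by taking its penultimate-layer activations. The backbone and the 16 assumed localization classes are illustrative.

```python
# Sketch: protein representations from a localization classifier's
# penultimate layer.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(num_classes=16)   # 16 assumed localization classes
# Drop the classification head; keep everything up to global pooling.
encoder = nn.Sequential(*list(backbone.children())[:-1])

cells = torch.rand(8, 3, 224, 224)           # stand-in single-cell crops
with torch.no_grad():
    embeddings = encoder(cells).flatten(1)   # shape: (8, 512)
print(embeddings.shape)
```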
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)