Automatic identification of fossils and abiotic grains during carbonate
microfacies analysis using deep convolutional neural networks
- URL: http://arxiv.org/abs/2009.11429v2
- Date: Wed, 4 Nov 2020 02:04:02 GMT
- Title: Automatic identification of fossils and abiotic grains during carbonate
microfacies analysis using deep convolutional neural networks
- Authors: Xiaokang Liu, Haijun Song
- Abstract summary: Petrographic analysis based on microfacies identification in thin sections is widely used in sedimentary environment interpretation and paleoecological reconstruction.
Distinguishing the morphological and microstructural diversity of skeletal fragments requires extensive prior knowledge of fossil morphotypes in microfacies.
- Score: 1.520387509697271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Petrographic analysis based on microfacies identification in thin sections is
widely used in sedimentary environment interpretation and paleoecological
reconstruction. Fossil recognition from microfacies is an essential procedure
for petrographers to complete this task. Distinguishing the morphological and
microstructural diversity of skeletal fragments requires extensive prior
knowledge of fossil morphotypes in microfacies and long training sessions under
the microscope. This requirement poses challenges for sedimentologists and
paleontologists, especially novices, which a machine classifier can help
address. In this study, we collected a
microfacies image dataset comprising both public data from 1,149 references and
our own materials (including 30,815 images of 22 fossil and abiotic grain
groups). We employed a high-performance workstation to implement four classic
deep convolutional neural networks (DCNNs), which have proven to be highly
efficient in computer vision over the last several years. Our framework uses a
transfer learning technique, which reuses parameters pre-trained on the larger
ImageNet dataset as initialization for the network to achieve high accuracy
with low computing costs. We obtained top-one and top-three test accuracies of
up to 95% and 99%, respectively, with the Inception ResNet v2 architecture. The
machine classifier exhibited a precision of 0.99 on minerals such as dolomite
and pyrite. Although it had some difficulty with samples of similar
morphologies, such as bivalves, brachiopods, and ostracods, it nevertheless
achieved a precision of 0.88. Our machine learning framework demonstrated high
accuracy, with reproducibility and bias avoidance comparable to those of human
classifiers. Its application can thus eliminate much of the tedious,
labor-intensive effort of routine identification by human experts.
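As a rough illustration of the workflow described in the abstract, the sketch below sets up an ImageNet-pretrained Inception ResNet v2 backbone for fine-tuning on the 22 fossil and abiotic grain classes and tracks top-one and top-three accuracy. The framework choice (TensorFlow/Keras), input size, and hyperparameters are assumptions for illustration, not details taken from the paper.
```python
# Hypothetical sketch of the transfer-learning setup described above:
# an Inception-ResNet-v2 backbone initialized with ImageNet weights and
# fine-tuned to classify 22 fossil and abiotic grain groups.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 22          # 22 fossil and abiotic grain groups (from the abstract)
IMAGE_SIZE = (299, 299)   # default Inception-ResNet-v2 input size (assumption)

# Backbone pre-trained on ImageNet; its parameters serve as the initialization.
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=IMAGE_SIZE + (3,), pooling="avg",
)
backbone.trainable = False  # optionally freeze for a first training stage

inputs = tf.keras.Input(shape=IMAGE_SIZE + (3,))
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # scale pixels to [-1, 1]
x = backbone(x, training=False)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=[
        "accuracy",                                           # top-1 accuracy
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3),  # top-3 accuracy
    ],
)

# Hypothetical directory layout: one subfolder per grain class.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "microfacies/train", image_size=IMAGE_SIZE, batch_size=32)
# model.fit(train_ds, epochs=10)
```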
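The per-class precision figures quoted above (0.99 on dolomite and pyrite, 0.88 on morphologically similar groups) could be computed from a trained classifier's predictions along the lines of the following sketch; the class list and label arrays are illustrative placeholders, not data from the study.
```python
# Hypothetical per-class precision computation; names and labels are placeholders.
import numpy as np
from sklearn.metrics import classification_report, precision_score

class_names = ["bivalve", "brachiopod", "dolomite", "ostracod", "pyrite"]  # illustrative subset

# y_true: integer test labels; y_pred: argmax of the model's softmax output.
y_true = np.array([0, 1, 2, 3, 4, 2, 4, 0, 1, 3])
y_pred = np.array([0, 0, 2, 3, 4, 2, 4, 1, 1, 3])

# Per-class precision, one value per class in `class_names` order.
per_class_precision = precision_score(y_true, y_pred, average=None)
print(dict(zip(class_names, per_class_precision.round(2))))

# Full per-class report (precision, recall, F1) in one call.
print(classification_report(y_true, y_pred, target_names=class_names))
```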
Related papers
- Application of Artificial Intelligence in the Classification of
Microscopical Starch Images for Drug Formulation [0.0]
Starches are important energy sources found in plants with many uses in the pharmaceutical industry.
In this work, we applied artificial intelligence techniques (transfer learning and deep convolutional neural networks, CNNs) to microscopical images obtained from 9 starch samples of different botanical sources.
arXiv Detail & Related papers (2023-05-09T10:16:02Z) - ForamViT-GAN: Exploring New Paradigms in Deep Learning for
Micropaleontological Image Analysis [0.0]
We propose a novel deep learning workflow combining hierarchical vision transformers with style-based generative adversarial network algorithms.
Our study shows that this workflow can generate high-resolution images with a high signal-to-noise ratio (39.1 dB) and realistic synthetic images with a Frechet distance similarity score of 14.88.
For the first time, we performed few-shot semantic segmentation of different foraminifera chambers on both generated and synthetic images with high accuracy.
arXiv Detail & Related papers (2023-04-09T18:49:38Z) - Fast spline detection in high density microscopy data [0.0]
In microscopy studies of multi-organism systems, the problem of collision and overlap remains challenging.
Here, we develop a novel end-to-end deep learning approach to extract precise shape trajectories of generally motile and overlapping splines.
We present it in the setting of, and exemplify its usability on, dense experiments of crawling Caenorhabditis elegans.
arXiv Detail & Related papers (2023-01-11T13:40:05Z) - Three-dimensional microstructure generation using generative adversarial
neural networks in the context of continuum micromechanics [77.34726150561087]
This work proposes a generative adversarial network tailored towards three-dimensional microstructure generation.
The lightweight algorithm is able to learn the underlying properties of the material from a single microCT-scan without the need of explicit descriptors.
arXiv Detail & Related papers (2022-05-31T13:26:51Z) - AI for Porosity and Permeability Prediction from Geologic Core X-Ray
Micro-Tomography [0.0]
We propose to use self-supervised pretraining of the very small CNN-transformer-based model to predict the physical properties of the rocks.
We show that this technique prevents overfitting even for extremely small datasets.
arXiv Detail & Related papers (2022-05-26T06:55:03Z) - MiNet: A Convolutional Neural Network for Identifying and Categorising
Minerals [0.0]
We develop a single-label image classification model to identify and categorise seven classes of minerals.
Experiments conducted using real-world datasets show that the model achieves an accuracy of 90.75%.
arXiv Detail & Related papers (2021-11-22T15:00:28Z) - Medulloblastoma Tumor Classification using Deep Transfer Learning with
Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z) - Deep neural networks approach to microbial colony detection -- a
comparative analysis [52.77024349608834]
This study investigates the performance of three deep learning approaches for object detection on the AGAR dataset.
The achieved results may serve as a benchmark for future experiments.
arXiv Detail & Related papers (2021-08-23T12:06:00Z) - Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z) - A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation in propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.