Automated segmentation of microtomography imaging of Egyptian mummies
- URL: http://arxiv.org/abs/2105.06738v1
- Date: Fri, 14 May 2021 09:56:13 GMT
- Title: Automated segmentation of microtomography imaging of Egyptian mummies
- Authors: Marc Tanti, Camille Berruyer, Paul Tafforeau, Adrian Muscat, Reuben Farrugia, Kenneth Scerri, Gianluca Valentino, V. Armando Solé and Johann A. Briffa
- Abstract summary: We develop a tool to automatically segment images using manually segmented samples to tune and train a machine learning model.
For a set of four specimens of ancient Egyptian animal mummies we achieve an overall accuracy of 94-98% when compared with manually segmented slices.
A qualitative analysis of the segmented output shows that our results are close in terms of usability to those from deep learning, justifying the use of these techniques.
- Score: 3.8328962782003964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Propagation Phase Contrast Synchrotron Microtomography (PPC-SR$\mu$CT) is
the gold standard for non-invasive and non-destructive access to internal
structures of archaeological remains. In this analysis, the virtual specimen
needs to be segmented to separate different parts or materials, a process that
normally requires considerable human effort. In the Automated SEgmentation of
Microtomography Imaging (ASEMI) project, we developed a tool to automatically
segment these volumetric images, using manually segmented samples to tune and
train a machine learning model. For a set of four specimens of ancient Egyptian
animal mummies we achieve an overall accuracy of 94-98% when compared with
manually segmented slices, approaching the results of off-the-shelf commercial
software using deep learning (97-99%) at much lower complexity. A qualitative
analysis of the segmented output shows that our results are close in terms of
usability to those from deep learning, justifying the use of these techniques.
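As a rough illustration of the workflow described above (not the ASEMI implementation itself), the sketch below trains a per-voxel classifier on a few manually segmented slices of a volume, labels the remaining voxels, and scores the predictions against held-out manually segmented slices. The random-forest classifier, the local-mean features, and all function names are assumptions chosen for brevity; the abstract does not specify these details.
```python
# Minimal sketch of slice-supervised voxel classification, in the spirit of the
# workflow above. NOT the ASEMI implementation: the random forest, the
# local-mean features and all names here are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def voxel_features(volume, sizes=(3, 7, 15)):
    """Stack raw intensity with local means at a few scales -> (Z, Y, X, F)."""
    return np.stack([volume] + [uniform_filter(volume, size=s) for s in sizes],
                    axis=-1)

def train_and_segment(volume, labels, train_slices):
    """volume: float32 (Z, Y, X); labels: int (Z, Y, X) with -1 = unlabelled.
    Train on the manually segmented slices, then label every voxel."""
    feats = voxel_features(volume)
    X = feats[train_slices].reshape(-1, feats.shape[-1])
    y = labels[train_slices].reshape(-1)
    keep = y >= 0                                  # only manually labelled voxels
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X[keep], y[keep])
    pred = np.empty(volume.shape, dtype=np.int32)
    for z in range(volume.shape[0]):               # slice-by-slice to bound memory
        pred[z] = clf.predict(
            feats[z].reshape(-1, feats.shape[-1])).reshape(volume.shape[1:])
    return pred

def overall_accuracy(pred, labels, test_slices):
    """Voxel-wise accuracy on held-out manually segmented slices."""
    y_true = labels[test_slices].reshape(-1)
    y_pred = pred[test_slices].reshape(-1)
    keep = y_true >= 0
    return accuracy_score(y_true[keep], y_pred[keep])
```
A baseline of this kind only mirrors the shape of the evaluation (accuracy against held-out manual slices); the 94-98% figures quoted above refer to the ASEMI tool itself.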
Related papers
- MatSAM: Efficient Extraction of Microstructures of Materials via Visual Large Model [11.130574172301365]
Segment Anything Model (SAM) is a large visual model with powerful deep feature representation and zero-shot generalization capabilities.
In this paper, we propose MatSAM, a general and efficient microstructure extraction solution based on SAM.
A simple yet effective point-based prompt generation strategy is designed, grounded on the distribution and shape of microstructures.
arXiv Detail & Related papers (2024-01-11T03:18:18Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Automated Grain Boundary (GB) Segmentation and Microstructural Analysis in 347H Stainless Steel Using Deep Learning and Multimodal Microscopy [2.0445155106382797]
Austenitic 347H stainless steel offers superior mechanical properties and corrosion resistance required for extreme operating conditions.
CNN-based deep-learning models are a powerful technique for detecting features in material micrographs in an automated manner.
We combine scanning electron microscopy (SEM) images of 347H stainless steel as training data and electron backscatter diffraction (EBSD) micrographs as pixel-wise labels for grain boundary detection.
arXiv Detail & Related papers (2023-05-12T22:49:36Z)
- Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model for which we employ a type of Artificial Neural Network - Deep Learning Autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to autonomously correct the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Deep Learning Segmentation of Complex Features in Atomic-Resolution Phase Contrast Transmission Electron Microscopy Images [0.8049701904919516]
It is difficult to develop fully-automated analysis routines for phase contrast TEM studies using conventional image processing tools.
For automated analysis of large sample regions of graphene, one of the key problems is segmentation between the structure of interest and unwanted structures.
We show that the deep learning method is more general, simpler to apply in practice, and produces more accurate and robust results than the conventional algorithm.
arXiv Detail & Related papers (2020-12-09T21:17:34Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)