Digital Fingerprinting of Microstructures
- URL: http://arxiv.org/abs/2203.13718v2
- Date: Mon, 22 Jan 2024 12:47:52 GMT
- Title: Digital Fingerprinting of Microstructures
- Authors: Michael D. White, Alexander Tarakanov, Christopher P. Race, Philip J.
Withers, Kody J.H. Law
- Abstract summary: Finding efficient means of fingerprinting microstructural information is a critical step towards harnessing data-centric machine learning approaches.
Here, we consider microstructure classification and utilise the resulting features over a range of related machine learning tasks.
In particular, methods that leverage transfer learning with convolutional neural networks (CNNs), pretrained on the ImageNet dataset, are generally shown to outperform other methods.
- Score: 44.139970905896504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding efficient means of fingerprinting microstructural information is a
critical step towards harnessing data-centric machine learning approaches. A
statistical framework is systematically developed for compressed
characterisation of a population of images, which includes some classical
computer vision methods as special cases. The focus is on materials
microstructure. The ultimate purpose is to rapidly fingerprint sample images in
the context of various high-throughput design/make/test scenarios. This
includes, but is not limited to, quantification of the disparity between
microstructures for quality control, classifying microstructures, predicting
materials properties from image data and identifying potential processing
routes to engineer new materials with specific properties. Here, we consider
microstructure classification and utilise the resulting features over a range
of related machine learning tasks, namely supervised, semi-supervised, and
unsupervised learning.
  The approach is applied to two distinct datasets to illustrate various
aspects, and some recommendations are made based on the findings. In particular,
methods that leverage transfer learning with convolutional neural networks
(CNNs), pretrained on the ImageNet dataset, are generally shown to outperform
other methods. Additionally, dimensionality reduction of these CNN-based
fingerprints is shown to have negligible impact on classification accuracy for
the supervised learning approaches considered. In situations where there is a
large dataset with only a handful of images labelled, graph-based label
propagation to unlabelled data is shown to be favourable over discarding
unlabelled data and performing supervised learning. In particular, label
propagation by Poisson learning is shown to be highly effective at low label
rates.
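To make the fingerprinting pipeline concrete, the sketch below shows one way such CNN-based fingerprints could be extracted and compressed: an ImageNet-pretrained network maps each micrograph to a feature vector, PCA reduces the dimensionality, and a standard classifier is trained on the result. The specific architecture (VGG16), the number of principal components and the SVM classifier are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumed setup, not the paper's code): CNN fingerprints from an
# ImageNet-pretrained network, followed by PCA compression and classification.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# ImageNet-pretrained VGG16; drop the final 1000-way layer so the network
# returns 4096-d fc7 features that serve as the microstructure "fingerprint".
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

preprocess = T.Compose([
    T.Lambda(lambda im: im.convert("RGB")),  # grayscale micrographs -> 3 channels
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def fingerprint(images):
    """Map a list of PIL micrograph images to 4096-d CNN fingerprints."""
    batch = torch.stack([preprocess(img) for img in images])
    return vgg(batch).numpy()

def reduce_and_classify(X_train, y_train, X_test, n_components=50):
    """Compress fingerprints with PCA, then train an SVM (values illustrative)."""
    pca = PCA(n_components=n_components).fit(X_train)
    clf = SVC(kernel="rbf").fit(pca.transform(X_train), y_train)
    return clf.predict(pca.transform(X_test))
```

Reducing the 4096-dimensional fingerprints to a few tens of principal components mirrors the paper's finding that such compression has a negligible impact on supervised classification accuracy.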
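For the low-label-rate setting, the following is a rough sketch of graph-based label propagation in the spirit of Poisson learning: a k-NN graph is built over the fingerprints, a zero-mean source term is placed at the labelled nodes, and the graph Poisson equation is solved approximately by fixed-point iteration. This is a simplified reimplementation for illustration; the neighbourhood size, weighting scheme and solver are assumptions rather than the paper's implementation.

```python
# Rough sketch (assumed details) of Poisson-learning-style label propagation
# over a k-NN graph of CNN fingerprints.
import numpy as np
from scipy.sparse import csgraph
from sklearn.neighbors import kneighbors_graph

def poisson_label_propagation(X, y_labeled, labeled_idx, k=10, n_iter=2000, tol=1e-8):
    """Propagate a handful of labels (at indices `labeled_idx`) to every point."""
    n = X.shape[0]
    classes = np.unique(y_labeled)

    # Symmetric k-NN graph over the fingerprints (binary weights for simplicity).
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)
    L = csgraph.laplacian(W, normed=False)       # graph Laplacian L = D - W
    deg = np.asarray(W.sum(axis=1)).ravel()      # node degrees (positive for a k-NN graph)

    # Source term: one-hot labels at the labelled nodes, centred to zero mean.
    F = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        F[labeled_idx[y_labeled == c], j] = 1.0
    F[labeled_idx] -= F[labeled_idx].mean(axis=0)

    # Approximately solve L U = F with a Jacobi-style fixed-point iteration.
    U = np.zeros_like(F)
    for _ in range(n_iter):
        U_next = U + (F - L @ U) / deg[:, None]
        if np.abs(U_next - U).max() < tol:
            U = U_next
            break
        U = U_next

    return classes[U.argmax(axis=1)]             # predicted class for every node
```

With only a few labelled images per class, this propagates label information through the unlabelled fingerprints instead of discarding them, which is the low-label-rate regime where the paper finds Poisson learning most effective.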
Related papers
- Leveraging Internal Representations of Model for Magnetic Image
Classification [0.13654846342364302]
This paper introduces a potentially groundbreaking paradigm for machine learning model training, specifically designed for scenarios with only a single magnetic image and its corresponding label image available.
We harness the capabilities of Deep Learning to generate concise yet informative samples, aiming to overcome data scarcity.
arXiv Detail & Related papers (2024-03-11T15:15:50Z) - CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z) - Exploiting the relationship between visual and textual features in
social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, based on image classification tasks using the labels of the Places dataset, first consider only the visual part of the images.
Taking the texts associated with the images into account can help to improve accuracy, depending on the goal.
arXiv Detail & Related papers (2021-07-08T10:54:59Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
arXiv Detail & Related papers (2020-11-27T19:18:45Z) - Unsupervised machine learning via transfer learning and k-means
clustering to classify materials image data [0.0]
This paper demonstrates how to construct, use, and evaluate a high-performance unsupervised machine learning system for classifying images.
We use the VGG16 convolutional neural network pre-trained on the ImageNet dataset of natural images to extract feature representations for each micrograph.
The approach achieves $99.4\% \pm 0.16\%$ accuracy, and the resulting model can be used to classify new images without retraining.
arXiv Detail & Related papers (2020-07-16T14:36:04Z) - Semi-supervised Learning with a Teacher-student Network for Generalized
Attribute Prediction [7.462336024223667]
This paper presents a study on semi-supervised learning to solve the visual attribute prediction problem.
Our method achieves competitive performance on various benchmarks for fashion attribute prediction.
arXiv Detail & Related papers (2020-07-14T02:06:24Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
Since the enlarged training set also raises the computational cost, we further apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)