Revealing the Evolution of Order in Materials Microstructures Using Multi-Modal Computer Vision
- URL: http://arxiv.org/abs/2411.09896v1
- Date: Fri, 15 Nov 2024 02:44:32 GMT
- Title: Revealing the Evolution of Order in Materials Microstructures Using Multi-Modal Computer Vision
- Authors: Arman Ter-Petrosyan, Michael Holden, Jenna A. Bilbrey, Sarah Akers, Christina Doty, Kayla H. Yano, Le Wang, Rajendra Paudel, Eric Lang, Khalid Hattar, Ryan B. Comes, Yingge Du, Bethany E. Matthews, Steven R. Spurgeon
- Abstract summary: Development of high-performance materials for microelectronics depends on our ability to describe and direct property-defining microstructural order.
Here, we demonstrate a multi-modal machine learning (ML) approach to describe order from electron microscopy analysis of the complex oxide La$_{1-x}$Sr$_x$FeO$_3$.
We observe distinct differences in the performance of uni- and multi-modal models, from which we draw general lessons in describing crystal order using computer vision.
- Score: 4.6481041987538365
- License:
- Abstract: The development of high-performance materials for microelectronics, energy storage, and extreme environments depends on our ability to describe and direct property-defining microstructural order. Our present understanding is typically derived from laborious manual analysis of imaging and spectroscopy data, which is difficult to scale, challenging to reproduce, and lacks the ability to reveal latent associations needed for mechanistic models. Here, we demonstrate a multi-modal machine learning (ML) approach to describe order from electron microscopy analysis of the complex oxide La$_{1-x}$Sr$_x$FeO$_3$. We construct a hybrid pipeline based on fully and semi-supervised classification, allowing us to evaluate both the characteristics of each data modality and the value each modality adds to the ensemble. We observe distinct differences in the performance of uni- and multi-modal models, from which we draw general lessons in describing crystal order using computer vision.
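To make the uni- versus multi-modal comparison above concrete, here is a minimal sketch contrasting fully supervised and semi-supervised classifiers on two feature modalities and on their concatenation. The modality names, feature dimensions, synthetic labels, and the use of scikit-learn's SelfTrainingClassifier are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch only: synthetic stand-ins for two microscopy-derived feature modalities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 600
y = rng.integers(0, 3, size=n)                         # order classes (e.g., ordered / defective / amorphous)
X_img = rng.normal(size=(n, 64)) + 0.5 * y[:, None]    # modality A: image-patch features (hypothetical)
X_spec = rng.normal(size=(n, 32)) + 0.3 * y[:, None]   # modality B: spectral features (hypothetical)

def evaluate(X, y):
    """Accuracy of a fully supervised and a semi-supervised classifier on one feature set."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    sup = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)            # fully supervised baseline
    y_semi = y_tr.copy()
    y_semi[rng.random(len(y_semi)) < 0.8] = -1                         # hide 80% of labels (-1 = unlabeled)
    semi = SelfTrainingClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_semi)
    return accuracy_score(y_te, sup.predict(X_te)), accuracy_score(y_te, semi.predict(X_te))

print("image only   (sup, semi):", evaluate(X_img, y))
print("spectra only (sup, semi):", evaluate(X_spec, y))
print("multi-modal  (sup, semi):", evaluate(np.hstack([X_img, X_spec]), y))  # feature-level fusion
```

Feature-level concatenation is only the simplest fusion choice; whatever fusion is used, the same per-modality versus ensemble comparison applies.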
Related papers
- Foundational Model for Electron Micrograph Analysis: Instruction-Tuning Small-Scale Language-and-Vision Assistant for Enterprise Adoption [0.0]
We introduce MAEMI, a small-scale framework for analyzing semiconductor electron microscopy images.
We generate a customized instruction-following dataset using large multimodal models on microscopic image analysis.
We perform knowledge transfer from larger to smaller models through knowledge distillation, resulting in improved accuracy of smaller models on visual question answering tasks.
arXiv Detail & Related papers (2024-08-23T17:42:11Z) - Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs [56.391404083287235]
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach.
Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations.
We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes.
arXiv Detail & Related papers (2024-06-24T17:59:42Z) - MatSAM: Efficient Extraction of Microstructures of Materials via Visual Large Model [11.130574172301365]
Segment Anything Model (SAM) is a large visual model with powerful deep feature representation and zero-shot generalization capabilities.
In this paper, we propose MatSAM, a general and efficient microstructure extraction solution based on SAM.
A simple yet effective point-based prompt generation strategy is designed, grounded in the distribution and shape of the microstructures; a hedged point-prompt sketch appears after this list.
arXiv Detail & Related papers (2024-01-11T03:18:18Z) - StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data [129.92449761766025]
We propose a novel data collection methodology that synchronously synthesizes images and dialogues for visual instruction tuning.
This approach harnesses the power of generative models, marrying the abilities of ChatGPT and text-to-image generative models.
Our research includes comprehensive experiments conducted on various datasets.
arXiv Detail & Related papers (2023-08-20T12:43:52Z) - MeLM, a generative pretrained language modeling framework that solves forward and inverse mechanics problems [0.0]
We report a flexible multi-modal mechanics language model, MeLM, applied to solve various nonlinear forward and inverse problems.
The framework is applied to various examples including bio-inspired hierarchical honeycomb design and carbon nanotube mechanics.
arXiv Detail & Related papers (2023-06-30T10:28:20Z) - Parameters, Properties, and Process: Conditional Neural Generation of Realistic SEM Imagery Towards ML-assisted Advanced Manufacturing [1.5234614694413722]
We build upon prior work by applying conditional generative adversarial networks (GANs) to scanning electron microscope (SEM) imagery.
We generate realistic images conditioned on temper and either experimental parameters or material properties.
This work forms a technical backbone for a fundamentally new approach for understanding manufacturing processes.
arXiv Detail & Related papers (2023-01-13T00:48:39Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Three-dimensional microstructure generation using generative adversarial neural networks in the context of continuum micromechanics [77.34726150561087]
This work proposes a generative adversarial network tailored towards three-dimensional microstructure generation.
The lightweight algorithm is able to learn the underlying properties of the material from a single microCT scan without the need for explicit descriptors.
arXiv Detail & Related papers (2022-05-31T13:26:51Z) - How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning [82.67551367327634]
We develop a new interpretable, multi-resolution machine learning framework for finding patterns in the unit-cells of materials.
Specifically, we propose two new interpretable representations of metamaterials, called shape-frequency features and unit-cell templates.
arXiv Detail & Related papers (2021-11-10T21:19:02Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Intelligent multiscale simulation based on process-guided composite database [0.0]
We present an integrated data-driven modeling framework based on process modeling, material homogenization, and machine learning.
We focus on injection-molded short-fiber-reinforced composites, which have been identified as key material systems in the automotive, aerospace, and electronics industries.
arXiv Detail & Related papers (2020-03-20T20:39:19Z)
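Picking up the MatSAM entry above: below is a minimal, hypothetical sketch of point-prompted segmentation with the Segment Anything Model. The prompt strategy (bright intensity peaks treated as foreground points), the image file name, and the checkpoint are placeholders, not the published MatSAM prompt-generation method; it requires the `segment-anything` package and a downloaded SAM checkpoint.

```python
# Hypothetical MatSAM-style sketch: derive point prompts from the micrograph itself,
# then segment with SAM. Not the authors' published prompt strategy.
import numpy as np
from skimage import io, feature
from segment_anything import sam_model_registry, SamPredictor

image = io.imread("micrograph.png")                 # hypothetical electron micrograph
if image.ndim == 2:                                 # SAM expects an HxWx3 uint8 image
    image = np.stack([image] * 3, axis=-1).astype(np.uint8)

# Point prompts: here bright local maxima are assumed to lie inside features of interest.
gray = image[..., 0].astype(float)
peaks = feature.peak_local_max(gray, min_distance=20)      # (row, col) coordinates
point_coords = peaks[:, ::-1].astype(float)                 # SAM expects (x, y)
point_labels = np.ones(len(point_coords), dtype=int)        # 1 = foreground point

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)

# One mask per prompt point, unioned into a single microstructure map.
union = np.zeros(image.shape[:2], dtype=bool)
for xy in point_coords:
    masks, scores, _ = predictor.predict(
        point_coords=xy[None, :],
        point_labels=np.array([1]),
        multimask_output=False)
    union |= masks[0]
```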
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and accepts no responsibility for any consequences arising from its use.