On the Robustness of Machine Learning Models in Predicting Thermodynamic Properties: a Case of Searching for New Quasicrystal Approximants
- URL: http://arxiv.org/abs/2410.13873v2
- Date: Thu, 07 Nov 2024 14:50:06 GMT
- Title: On the Robustness of Machine Learning Models in Predicting Thermodynamic Properties: a Case of Searching for New Quasicrystal Approximants
- Authors: Fedor S. Avilov, Roman A. Eremin, Semen A. Budennyy, Innokentiy S. Humonen,
- Abstract summary: In this work we composed a series of nested datasets of intermetallic approximants of quasicrystals and trained various machine learning models on them.
Our qualitative and, more importantly, quantitative assessment of the differences in predictions clearly shows that different reasonable changes in the training sample can lead to completely different sets of predicted potentially new materials.
We also show the advantage of pre-training and propose a simple yet effective trick of sequential training to increase stability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Although artificial intelligence-assisted modeling of disordered crystals is a widely used and well-tried method of new materials design, the issues of its robustness, reliability, and stability are still not resolved and are not even discussed enough. To highlight this, in this work we composed a series of nested datasets of intermetallic approximants of quasicrystals and trained various machine learning models on them. Our qualitative and, more importantly, quantitative assessment of the differences in predictions clearly shows that different reasonable changes in the training sample can lead to completely different sets of predicted potentially new materials. We also show the advantage of pre-training and propose a simple yet effective trick of sequential training to increase stability.
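The abstract's quantitative comparison of predicted candidate sets can be illustrated with a simple overlap metric (a hypothetical sketch, not the authors' exact protocol): the Jaccard similarity between the sets of materials flagged as promising by two models trained on different variants of the sample. The compositions below are placeholders for illustration only.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of predicted candidates."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Candidates predicted by models trained on two "reasonable" variants
# of the training sample (hypothetical compositions, not from the paper).
model_a = {"Al13Fe4", "Cd6Yb", "Al72Ni20Co8"}
model_b = {"Cd6Yb", "Al13Os4", "Zn17Sc3"}

print(jaccard(model_a, model_b))  # → 0.2, low overlap signals unstable predictions
```

A value near 1 would indicate that the two training variants lead to essentially the same shortlist; values near 0 are the instability the paper warns about.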
Related papers
- CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design [2.290956583394892]
We explore the applications of reinforcement fine-tuning to the autoregressive transformer-based materials generative model CrystalFormer.
By optimizing reward signals, fine-tuning infuses knowledge from discriminative models into generative models.
The resulting model, CrystalFormer-RL, shows enhanced stability in generated crystals and successfully discovers crystals with desirable yet conflicting material properties.
arXiv Detail & Related papers (2025-04-03T07:59:30Z)
- Large Language Models Are Innate Crystal Structure Generators [30.44669215588058]
We show that pre-trained Large Language Models can inherently generate stable crystal structures without additional training.
Our framework MatLLMSearch integrates pre-trained LLMs with evolutionary search algorithms, achieving a 78.38% metastable rate.
arXiv Detail & Related papers (2025-02-28T10:41:16Z)
- Predictive Modeling and Uncertainty Quantification of Fatigue Life in Metal Alloys using Machine Learning [39.58317527488534]
This study introduces a novel approach for quantifying uncertainty in fatigue life prediction of metal materials.
The proposed approach employs physics-based input features estimated using the Basquin fatigue model.
The synergy between physics-based models and data-driven models enhances the consistency in predicted values.
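The Basquin model mentioned above relates stress amplitude to fatigue life via σ_a = σ'_f (2N_f)^b. A minimal sketch of how such a physics-based feature could be computed (the material constants below are illustrative, not taken from the paper):

```python
def basquin_life(stress_amp, sigma_f=900.0, b=-0.09):
    """Fatigue life N_f (cycles to failure) from the Basquin relation
    sigma_a = sigma_f' * (2 * N_f)**b, solved for N_f.

    sigma_f' (fatigue strength coefficient, MPa) and b (fatigue strength
    exponent) are illustrative values, not from the paper.
    """
    return 0.5 * (stress_amp / sigma_f) ** (1.0 / b)

# Higher stress amplitude implies shorter predicted life.
life_low = basquin_life(300.0)
life_high = basquin_life(450.0)
```

Features like the predicted life (or its logarithm) can then be fed to a data-driven model alongside raw inputs, which is the kind of physics/data synergy the summary describes.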
arXiv Detail & Related papers (2025-01-25T03:43:19Z)
- Foundation Model for Composite Materials and Microstructural Analysis [49.1574468325115]
We present a foundation model specifically designed for composite materials.
Our model is pre-trained on a dataset of short-fiber composites to learn robust latent features.
During transfer learning, the MMAE accurately predicts homogenized stiffness, with an R2 score reaching as high as 0.959 and consistently exceeding 0.91, even when trained on limited data.
arXiv Detail & Related papers (2024-11-10T19:06:25Z)
- Scalable Diffusion for Materials Generation [99.71001883652211]
We develop UniMat, a unified crystal representation that can represent any crystal structure.
UniMat can generate high fidelity crystal structures from larger and more complex chemical systems.
We propose additional metrics for evaluating generative models of materials.
arXiv Detail & Related papers (2023-10-18T15:49:39Z)
- Data-Driven Score-Based Models for Generating Stable Structures with Adaptive Crystal Cells [1.515687944002438]
This work aims at the generation of new crystal structures with desired properties, such as chemical stability and specified chemical composition.
The novelty of the presented approach resides in the fact that the lattice of the crystal cell is not fixed.
A multigraph crystal representation is introduced that respects symmetry constraints, yielding computational advantages.
arXiv Detail & Related papers (2023-10-16T02:53:24Z)
- On the Stability-Plasticity Dilemma of Class-Incremental Learning [50.863180812727244]
A primary goal of class-incremental learning is to strike a balance between stability and plasticity.
This paper aims to shed light on how effectively recent class-incremental learning algorithms address the stability-plasticity trade-off.
arXiv Detail & Related papers (2023-04-04T09:34:14Z)
- Rank-Minimizing and Structured Model Inference [7.067529286680843]
This work introduces a method that infers models from data with physical insights encoded in the form of structure.
The proposed method numerically solves the equations for minimal-rank solutions and so obtains models of low order.
Numerical experiments demonstrate that the combination of structure preservation and rank leads to accurate models with orders of magnitude fewer degrees of freedom than models of comparable prediction quality.
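The idea of obtaining a low-order model via minimal-rank solutions can be sketched generically with a truncated-SVD surrogate (this is a standard low-rank construction, not the structured inference method of the paper itself):

```python
import numpy as np

def low_rank_operator(X, Y, rank):
    """Fit a linear operator A minimizing ||Y - A @ X||_F over the data,
    then truncate it to the given rank via SVD.

    X: (n, m) matrix of input states (m samples), Y: (p, m) outputs.
    A generic low-rank surrogate for the structured, rank-minimizing
    inference described in the paper, not the exact algorithm.
    """
    # Solve X.T @ A.T ≈ Y.T in the least-squares sense.
    At, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    A = At.T
    # Keep only the leading singular directions.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

The rank parameter directly controls the number of degrees of freedom of the resulting operator, which is the trade-off the numerical experiments above quantify.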
arXiv Detail & Related papers (2023-02-19T09:46:35Z)
- Uncertainty-aware Mixed-variable Machine Learning for Materials Design [9.259285449415676]
We survey frequentist and Bayesian approaches to uncertainty quantification of machine learning with mixed variables.
We examine the efficacy of the two models in the optimization of mathematical functions, as well as properties of structural and functional materials.
Our results provide practical guidance on choosing between frequentist and Bayesian uncertainty-aware machine learning models for mixed-variable BO in materials design.
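As a toy illustration of the frequentist side of this comparison (an assumption-laden sketch, not the paper's implementation), the spread of predictions across an ensemble of independently trained models gives a simple uncertainty estimate:

```python
import numpy as np

def ensemble_uncertainty(models, x):
    """Frequentist-style uncertainty: mean and standard deviation of
    predictions across an ensemble of independently trained models."""
    preds = np.array([m(x) for m in models])
    return preds.mean(), preds.std()

# Hypothetical "models": linear predictors with perturbed weights,
# standing in for networks trained on resampled data.
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
mu, sigma = ensemble_uncertainty(models, 2.0)
```

A Bayesian alternative would instead place a posterior over model parameters (e.g. a Gaussian process) and report its predictive variance; the survey above compares exactly these two families for mixed-variable Bayesian optimization.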
arXiv Detail & Related papers (2022-07-11T16:37:17Z)
- Model Stability with Continuous Data Updates [2.439909645714735]
We study the "stability" of machine learning (ML) models within the context of larger, complex NLP systems.
We find that model design choices, including network architecture and input representation, have a critical impact on stability.
We recommend ML model designers account for trade-offs in accuracy and jitter when making modeling choices.
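The "jitter" mentioned above is commonly measured as prediction churn between successive model versions. A minimal sketch of one common definition (the paper's exact metric may differ):

```python
def jitter(preds_v1, preds_v2):
    """Fraction of inputs whose predicted label changes between two
    model versions; one common definition of prediction churn/jitter."""
    assert len(preds_v1) == len(preds_v2), "compare predictions on the same inputs"
    flips = sum(a != b for a, b in zip(preds_v1, preds_v2))
    return flips / len(preds_v1)

# One label out of three flips after a retrain on updated data.
print(jitter(["fcc", "bcc", "hcp"], ["fcc", "fcc", "hcp"]))
```

Tracking this alongside accuracy makes the accuracy/jitter trade-off in the recommendation above concrete: a retrained model can match accuracy while still flipping many individual predictions.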
arXiv Detail & Related papers (2022-01-14T22:11:16Z)
- How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning [82.67551367327634]
We develop a new interpretable, multi-resolution machine learning framework for finding patterns in the unit-cells of materials.
Specifically, we propose two new interpretable representations of metamaterials, called shape-frequency features and unit-cell templates.
arXiv Detail & Related papers (2021-11-10T21:19:02Z)
- Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z)
- Predictive modeling approaches in laser-based material processing [59.04160452043105]
This study aims to automate and forecast the effect of laser processing on material structures.
The focus is centred on the performance of representative statistical and machine learning algorithms.
Results can set the basis for a systematic methodology towards reducing material design, testing and production cost.
arXiv Detail & Related papers (2020-06-13T17:28:52Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the role of noise in its success is still unclear.
We show that heavy-tailed behavior commonly arises in the parameters due to multiplicative noise.
A detailed analysis examines key factors, including step size and data, all of which exhibit similar effects on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.