Advancing Extrapolative Predictions of Material Properties through Learning to Learn
- URL: http://arxiv.org/abs/2404.08657v1
- Date: Mon, 25 Mar 2024 09:30:19 GMT
- Title: Advancing Extrapolative Predictions of Material Properties through Learning to Learn
- Authors: Kohei Noda, Araki Wakiuchi, Yoshihiro Hayashi, Ryo Yoshida
- Abstract summary: We use attention-based architecture of neural networks and meta-learning algorithms to acquire extrapolative generalization capability.
We highlight the potential of such extrapolatively trained models, particularly with their ability to rapidly adapt to unseen material domains.
- Score: 1.3274508420845539
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in machine learning have showcased its potential to significantly accelerate the discovery of new materials. Central to this progress is the development of rapidly computable property predictors, enabling the identification of novel materials with desired properties from vast material spaces. However, the limited availability of data resources poses a significant challenge in data-driven materials research, particularly hindering the exploration of innovative materials beyond the boundaries of existing data. While machine learning predictors are inherently interpolative, establishing a general methodology to create an extrapolative predictor remains a fundamental challenge, limiting the search for innovative materials beyond existing data boundaries. In this study, we leverage an attention-based architecture of neural networks and meta-learning algorithms to acquire extrapolative generalization capability. The meta-learners, experienced repeatedly with arbitrarily generated extrapolative tasks, can acquire outstanding generalization capability in unexplored material spaces. Through the tasks of predicting the physical properties of polymeric materials and hybrid organic-inorganic perovskites, we highlight the potential of such extrapolatively trained models, particularly with their ability to rapidly adapt to unseen material domains in transfer learning scenarios.
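The abstract's core recipe is episodic meta-learning over arbitrarily generated extrapolative tasks: each episode adapts on one region of material space and is evaluated on a held-out region outside it. The paper's attention-based architecture is not reproduced here; the sketch below only illustrates the episodic loop with a toy linear model, synthetic data, and a Reptile-style meta-update, all of which are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a materials dataset: descriptors X, property y.
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def extrapolative_task(X, y, rng):
    """Split along a random descriptor axis so the query set lies
    outside the support set's range -- an 'extrapolative' episode."""
    j = rng.integers(X.shape[1])
    thresh = np.quantile(X[:, j], 0.7)
    support = X[:, j] <= thresh
    return (X[support], y[support]), (X[~support], y[~support])

def sgd_adapt(w, Xs, ys, lr=0.01, steps=20):
    """Inner-loop adaptation: a few gradient steps on the support set."""
    for _ in range(steps):
        grad = 2 * Xs.T @ (Xs @ w - ys) / len(ys)
        w = w - lr * grad
    return w

# Outer loop: Reptile-style meta-training over many extrapolative episodes.
w_meta = np.zeros(d)
for _ in range(200):
    (Xs, ys), _ = extrapolative_task(X, y, rng)
    w_adapted = sgd_adapt(w_meta.copy(), Xs, ys)
    w_meta += 0.1 * (w_adapted - w_meta)  # move toward the adapted weights

# Evaluation: adapt on a fresh support set, score on the held-out
# (out-of-range) query set -- the extrapolative generalization test.
(Xs, ys), (Xq, yq) = extrapolative_task(X, y, rng)
w_final = sgd_adapt(w_meta.copy(), Xs, ys)
mse = float(np.mean((Xq @ w_final - yq) ** 2))
print(round(mse, 3))
```

The key design choice the sketch mirrors is that the meta-learner never sees a task where training and test data come from the same region: every episode forces the model to adapt inward and predict outward, which is what the paper argues confers extrapolative capability.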
Related papers
- Foundation Model for Composite Materials and Microstructural Analysis [49.1574468325115]
We present a foundation model specifically designed for composite materials.
Our model is pre-trained on a dataset of short-fiber composites to learn robust latent features.
During transfer learning, the MMAE accurately predicts homogenized stiffness, with an R2 score reaching as high as 0.959 and consistently exceeding 0.91, even when trained on limited data.
arXiv Detail & Related papers (2024-11-10T19:06:25Z) - Nature-Inspired Local Propagation [68.63385571967267]
Natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect locality.
We show that the algorithmic interpretation of the derived "laws of learning", which takes the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity.
This opens the door to machine learning based on fully online information, in which Backpropagation is replaced by the proposed local algorithm.
arXiv Detail & Related papers (2024-02-04T21:43:37Z) - Materials Expert-Artificial Intelligence for Materials Discovery [39.67752644916519]
We introduce "Materials Expert-Artificial Intelligence" (ME-AI) to encapsulate and articulate this human intuition.
The ME-AI learned descriptors independently reproduce expert intuition and expand upon it.
Our success points to the "machine bottling human insight" approach as promising for machine learning-aided material discovery.
arXiv Detail & Related papers (2023-12-05T14:29:18Z) - Multimodal Learning for Materials [7.167520424757711]
We introduce Multimodal Learning for Materials (MultiMat), which enables self-supervised multi-modality training of foundation models for materials.
We demonstrate our framework's potential using data from the Materials Project database on multiple axes.
arXiv Detail & Related papers (2023-11-30T18:35:29Z) - Is Self-Supervised Pretraining Good for Extrapolation in Molecular Property Prediction? [16.211138511816642]
In material science, the prediction of unobserved values, commonly referred to as extrapolation, is critical for property prediction.
We propose an experimental framework to test this and empirically reveal that, while models are unable to accurately extrapolate absolute property values, self-supervised pretraining enables them to learn the relative tendencies of unobserved property values.
arXiv Detail & Related papers (2023-08-16T03:38:43Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z) - Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is typically high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component.
arXiv Detail & Related papers (2022-06-09T05:19:32Z) - Audacity of huge: overcoming challenges of data scarcity and data quality for machine learning in computational materials discovery [1.0036312061637764]
Machine learning (ML)-accelerated discovery requires large amounts of high-fidelity data to reveal predictive structure-property relationships.
For many properties of interest in materials discovery, the challenging nature and high cost of data generation has resulted in a data landscape that is scarcely populated and of dubious quality.
In the absence of manual curation, increasingly sophisticated natural language processing and automated image analysis are making it possible to learn structure-property relationships from the literature.
arXiv Detail & Related papers (2021-11-02T21:43:58Z) - Machine Learning in Nano-Scale Biomedical Engineering [77.75587007080894]
We review the existing research regarding the use of machine learning in nano-scale biomedical engineering.
The main challenges that can be formulated as ML problems are classified into three main categories.
For each of the presented methodologies, special emphasis is given to its principles, applications, and limitations.
arXiv Detail & Related papers (2020-08-05T15:45:54Z) - Improving neural network predictions of material properties with limited data using transfer learning [3.2851683371946754]
We develop new transfer learning algorithms to accelerate prediction of material properties from ab initio simulations.
Transfer learning has been successfully utilized for data-efficient modeling in applications other than materials science.
arXiv Detail & Related papers (2020-06-29T22:34:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.