End-to-end Material Thermal Conductivity Prediction through Machine
Learning
- URL: http://arxiv.org/abs/2311.03139v1
- Date: Mon, 6 Nov 2023 14:34:30 GMT
- Authors: Yagyank Srivastava and Ankit Jain
- Abstract summary: Machine learning models for thermal conductivity prediction suffer from overfitting.
Best mean absolute percentage error achieved on the test dataset remained in the range of 50-60%.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We investigated the accelerated prediction of the thermal conductivity of
materials through end-to-end structure-based approaches employing machine
learning methods. Due to the non-availability of high-quality thermal
conductivity data, we first performed high-throughput calculations based on
first principles and the Boltzmann transport equation for 225 materials,
effectively more than doubling the size of the existing dataset. We assessed
the performance of state-of-the-art machine learning models for thermal
conductivity prediction on this expanded dataset and observed that all these
models suffered from overfitting. To address this issue, we introduced a novel
graph-based neural network model, which demonstrated more consistent and
regularized performance across all evaluated datasets. Nevertheless, the best
mean absolute percentage error achieved on the test dataset remained in the
range of 50-60%. This suggests that while these models are valuable for
expediting material screening, their current accuracy is still limited.
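For reference, the mean absolute percentage error (MAPE) quoted above can be computed as in the minimal sketch below; the thermal-conductivity values are hypothetical, chosen only to illustrate the metric, not taken from the paper's dataset:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

# Hypothetical thermal conductivities in W/(m K):
k_true = [2.0, 10.0, 150.0]   # reference first-principles values
k_pred = [3.0, 6.0, 90.0]     # model predictions
print(mape(k_true, k_pred))   # relative errors: 50%, 40%, 40% -> MAPE ~43.3%
```

A MAPE of 50-60%, as reported on the test set, thus corresponds to predictions that are on average off by roughly half the true value, which is why the authors position these models as screening tools rather than substitutes for first-principles calculations.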
Related papers
- Physics-Based Hybrid Machine Learning for Critical Heat Flux Prediction with Uncertainty Quantification [4.538224798436768]
This study investigates the development and validation of an uncertainty-aware hybrid modeling approach.
It combines machine learning with physics-based models in the prediction of critical heat flux in nuclear reactors for cases of dryout.
arXiv Detail & Related papers (2025-02-26T17:55:01Z)
- Towards Data-Efficient Pretraining for Atomic Property Prediction [51.660835328611626]
We show that pretraining on a task-relevant dataset can match or surpass large-scale pretraining.
We introduce the Chemical Similarity Index (CSI), a novel metric inspired by computer vision's Fréchet Inception Distance.
arXiv Detail & Related papers (2025-02-16T11:46:23Z)
- Transfer Learning for Deep Learning-based Prediction of Lattice Thermal Conductivity [0.0]
We study the impact of transfer learning on the precision and generalizability of a deep learning model (ParAIsite).
We show that a much greater improvement is obtained when the model is first fine-tuned on a large dataset of low-quality approximations of lattice thermal conductivity (LTC).
The promising results pave the way towards a greater ability to explore large databases in search of low thermal conductivity materials.
arXiv Detail & Related papers (2024-11-27T11:57:58Z)
- Predicting ionic conductivity in solids from the machine-learned potential energy landscape [68.25662704255433]
Superionic materials are essential for advancing solid-state batteries, which offer improved energy density and safety.
Conventional computational methods for identifying such materials are resource-intensive and not easily scalable.
We propose an approach for the quick and reliable evaluation of ionic conductivity through the analysis of a universal interatomic potential.
arXiv Detail & Related papers (2024-11-11T09:01:36Z)
- Foundation Model for Composite Materials and Microstructural Analysis [49.1574468325115]
We present a foundation model specifically designed for composite materials.
Our model is pre-trained on a dataset of short-fiber composites to learn robust latent features.
During transfer learning, the MMAE accurately predicts homogenized stiffness, with an R2 score reaching as high as 0.959 and consistently exceeding 0.91, even when trained on limited data.
arXiv Detail & Related papers (2024-11-10T19:06:25Z)
- Enhancing Indoor Temperature Forecasting through Synthetic Data in Low-Data Environments [42.8983261737774]
We investigate the efficacy of data augmentation techniques leveraging SoTA AI-based methods for synthetic data generation.
Inspired by practical and experimental motivations, we explore fusion strategies of real and synthetic data to improve forecasting models.
arXiv Detail & Related papers (2024-06-07T12:36:31Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Higher-Order Equivariant Neural Networks for Charge Density Prediction in Materials [3.7655047338409893]
ChargE3Net is an E(3)-equivariant graph neural network for predicting electron density in atomic systems.
We show that ChargE3Net exceeds the performance of prior work on diverse sets of molecules and materials.
arXiv Detail & Related papers (2023-12-08T21:56:19Z)
- From Prediction to Action: Critical Role of Performance Estimation for Machine-Learning-Driven Materials Discovery [2.3243389656894595]
We argue that the lack of proper performance estimation methods from pre-computed data collections is a fundamental problem for improving data-driven materials discovery.
We propose a novel such estimator that, in contrast to naive reward estimation, successfully predicts Gaussian processes with the "expected improvement" acquisition function.
arXiv Detail & Related papers (2023-11-27T05:29:43Z)
- On Data Imbalance in Molecular Property Prediction with Pre-training [16.211138511816642]
A technique called pre-training is used to improve the accuracy of machine learning models.
Pre-training involves training the model on a pretext task, which is different from the target task, before training the model on the target task.
In this study, we propose an effective pre-training method that addresses the imbalance in input data.
arXiv Detail & Related papers (2023-08-17T12:04:14Z)
- Exploring the Effectiveness of Dataset Synthesis: An application of Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to predict apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data slightly underperforms a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z)
- Machine-Learning Prediction of the Computed Band Gaps of Double Perovskite Materials [3.2798940914359056]
Prediction of the electronic structure of functional materials is essential for the engineering of new devices.
In this study, we use machine learning to predict the electronic structure of double perovskite materials.
Our results attest to the potential of machine learning regression for the rapid screening of promising candidate functional materials.
arXiv Detail & Related papers (2023-01-04T08:19:18Z)
- Predicting Defects in Laser Powder Bed Fusion using in-situ Thermal Imaging Data and Machine Learning [0.0]
Variation in the local thermal history during the laser powder bed fusion (LPBF) process can cause microporosity defects.
In this work, we develop machine learning (ML) models that can use in-situ thermographic data to predict the microporosity of LPBF stainless steel materials.
arXiv Detail & Related papers (2021-12-16T21:25:16Z)
- Back2Future: Leveraging Backfill Dynamics for Improving Real-time Predictions in Future [73.03458424369657]
In real-time forecasting in public health, data collection is a non-trivial and demanding task.
The "backfill" phenomenon and its effect on model performance have barely been studied in the prior literature.
We formulate a novel problem and neural framework Back2Future that aims to refine a given model's predictions in real-time.
arXiv Detail & Related papers (2021-06-08T14:48:20Z)
- Churn Reduction via Distillation [54.5952282395487]
We show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn.
We then show that distillation performs strongly for low churn training against a number of recent baselines.
arXiv Detail & Related papers (2021-06-04T18:03:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.