Multimodal Learning for Materials
- URL: http://arxiv.org/abs/2312.00111v3
- Date: Fri, 12 Apr 2024 14:17:34 GMT
- Title: Multimodal Learning for Materials
- Authors: Viggo Moro, Charlotte Loh, Rumen Dangovski, Ali Ghorashi, Andrew Ma, Zhuo Chen, Samuel Kim, Peter Y. Lu, Thomas Christensen, Marin Soljačić
- Abstract summary: We introduce Multimodal Learning for Materials (MultiMat), which enables self-supervised multi-modality training of foundation models for materials.
We demonstrate our framework's potential using data from the Materials Project database on multiple axes.
- Score: 7.167520424757711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence is transforming computational materials science, improving the prediction of material properties, and accelerating the discovery of novel materials. Recently, publicly available material data repositories have grown rapidly. This growth encompasses not only more materials, but also a greater variety and quantity of their associated properties. Existing machine learning efforts in materials science focus primarily on single-modality tasks, i.e., relationships between materials and a single physical property, thus not taking advantage of the rich and multimodal set of material properties. Here, we introduce Multimodal Learning for Materials (MultiMat), which enables self-supervised multi-modality training of foundation models for materials. We demonstrate our framework's potential using data from the Materials Project database on multiple axes: (i) MultiMat achieves state-of-the-art performance for challenging material property prediction tasks; (ii) MultiMat enables novel and accurate material discovery via latent space similarity, enabling screening for stable materials with desired properties; and (iii) MultiMat encodes interpretable emergent features that may provide novel scientific insights.
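The abstract does not spell out MultiMat's training objective. A common way to realize "self-supervised multi-modality training" is a CLIP-style contrastive (InfoNCE) loss that pulls embeddings of the same material from different modality encoders together and pushes mismatched materials apart. The following NumPy sketch is illustrative only; the encoder choices, modality names, and loss details are assumptions, not the paper's implementation:

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere (row-wise L2 norm).
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def info_nce(za, zb, temperature=0.1):
    # Symmetric InfoNCE over a batch: embeddings of the *same*
    # material in two modalities should dominate each row/column.
    logits = normalize(za) @ normalize(zb).T / temperature
    labels = np.arange(len(logits))

    def xent(l):
        # Numerically stable cross-entropy with the diagonal as target.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
n, d = 8, 16
# Hypothetical per-material embeddings from two modality encoders
# (e.g. crystal structure and density of states) sharing signal.
shared = rng.normal(size=(n, d))
z_struct = shared + 0.05 * rng.normal(size=(n, d))
z_dos = shared + 0.05 * rng.normal(size=(n, d))
z_random = rng.normal(size=(n, d))  # unaligned baseline

loss_aligned = info_nce(z_struct, z_dos)
loss_random = info_nce(z_struct, z_random)
print(loss_aligned < loss_random)
```

In a real multi-modal setup this pairwise loss would be summed over every pair of modality encoders and minimized by gradient descent; the sketch only shows that aligned modality pairs score a lower contrastive loss than unrelated ones.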
Related papers
- Discovery of sustainable energy materials via the machine-learned material space [0.0]
We show that a machine learning model can gain an understanding of the material space without user-induced bias.
We show how the learned material space can be used to identify more sustainable alternatives to critical materials in energy-related technologies.
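The abstract does not state how alternatives are retrieved from the learned material space; a common realization is cosine nearest-neighbor search over material embeddings. A minimal sketch, where the material names and 4-D vectors are invented purely for illustration:

```python
import numpy as np

def nearest_materials(query, library, names, k=2):
    # Rank library materials by cosine similarity to the query embedding.
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    order = np.argsort(lib @ q)[::-1]  # most similar first
    return [names[i] for i in order[:k]]

# Invented embeddings; the query material is itself in the library.
names = ["cobalt-oxide", "iron-oxide", "manganese-oxide", "lead-halide"]
library = np.array([
    [0.90, 0.10, 0.00, 0.20],
    [0.85, 0.15, 0.05, 0.25],  # near the cobalt-oxide embedding
    [0.80, 0.20, 0.10, 0.20],
    [0.00, 0.90, 0.80, 0.10],  # far from the cobalt-oxide embedding
])
query = library[0]  # screen for substitutes of a critical material
print(nearest_materials(query, library, names, k=2))
```

Screening for sustainable substitutes then amounts to filtering the top-ranked neighbors by availability or toxicity criteria.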
arXiv Detail & Related papers (2025-01-10T12:00:08Z)
- DARWIN 1.5: Large Language Models as Materials Science Adapted Learners [46.7259033847682]
We propose DARWIN 1.5, the largest open-source large language model tailored for materials science.
DARWIN eliminates the need for task-specific descriptors and enables a flexible, unified approach to material property prediction and discovery.
Our approach integrates 6M material domain papers and 21 experimental datasets from 49,256 materials across modalities while enabling cross-task knowledge transfer.
arXiv Detail & Related papers (2024-12-16T16:51:27Z)
- Foundation Model for Composite Materials and Microstructural Analysis [0.0]
We present a foundation model specifically designed for composite materials.
Our findings validate the feasibility and effectiveness of foundation models in composite materials.
This framework enables high-accuracy predictions even when experimental data are scarce.
arXiv Detail & Related papers (2024-11-10T19:06:25Z)
- Knowledge-Aware Reasoning over Multimodal Semi-structured Tables [85.24395216111462]
This study investigates whether current AI models can perform knowledge-aware reasoning on multimodal structured data.
We introduce MMTabQA, a new dataset designed for this purpose.
Our experiments highlight substantial challenges for current AI models in effectively integrating and interpreting multiple text and image inputs.
arXiv Detail & Related papers (2024-08-25T15:17:43Z)
- Multi-Task Multi-Fidelity Learning of Properties for Energetic Materials [34.8008617873679]
We find that multi-task neural networks can learn from multi-modal data and outperform single-task models trained for specific properties.
As expected, the improvement is more significant for data-scarce properties.
This approach is widely applicable to fields outside energetic materials.
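The abstract describes the idea but not an architecture; a typical multi-task design is a shared trunk with one output head per property, so that data-scarce properties benefit from representations learned on data-rich ones. A minimal forward-pass sketch, where the layer sizes and property names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk: one representation reused by every property task.
W_trunk = 0.1 * rng.normal(size=(10, 32))

# One linear head per property; scarce-data tasks share the trunk.
heads = {
    "detonation_velocity": 0.1 * rng.normal(size=(32, 1)),
    "density": 0.1 * rng.normal(size=(32, 1)),
    "sensitivity": 0.1 * rng.normal(size=(32, 1)),
}

def predict(x):
    # x: (batch, 10) material descriptors -> one scalar per task.
    h = relu(x @ W_trunk)  # shared representation
    return {name: (h @ W).ravel() for name, W in heads.items()}

batch = rng.normal(size=(4, 10))
preds = predict(batch)
print(sorted(preds), preds["density"].shape)
```

During training, each task's loss would only be computed on the samples that have labels for that property, which is what lets low-data properties piggyback on the shared trunk.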
arXiv Detail & Related papers (2024-08-21T12:54:26Z)
- OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction [54.706361479680055]
We introduce the OpenMaterial dataset, comprising 1001 objects made of 295 distinct materials.
OpenMaterial provides comprehensive annotations, including 3D shape, material type, camera pose, depth, and object mask.
It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials.
arXiv Detail & Related papers (2024-06-13T07:46:17Z)
- Make-it-Real: Unleashing Large Multimodal Model for Painting 3D Objects with Realistic Materials [108.59709545364395]
GPT-4V can effectively recognize and describe materials, allowing the construction of a detailed material library.
The correctly matched materials are then meticulously applied as reference for the new SVBRDF material generation.
Make-it-Real offers a streamlined integration into the 3D content creation workflow.
arXiv Detail & Related papers (2024-04-25T17:59:58Z)
- Advancing Extrapolative Predictions of Material Properties through Learning to Learn [1.3274508420845539]
We use attention-based architecture of neural networks and meta-learning algorithms to acquire extrapolative generalization capability.
We highlight the potential of such extrapolatively trained models, particularly with their ability to rapidly adapt to unseen material domains.
arXiv Detail & Related papers (2024-03-25T09:30:19Z)
- Alchemist: Parametric Control of Material Properties with Diffusion Models [51.63031820280475]
Our method capitalizes on the generative prior of text-to-image models known for photorealism.
We show the potential application of our model to material edited NeRFs.
arXiv Detail & Related papers (2023-12-05T18:58:26Z)
- Multimodal machine learning for materials science: composition-structure bimodal learning for experimentally measured properties [4.495968252019426]
This paper introduces a novel approach to multimodal machine learning in materials science via composition-structure bimodal learning.
The proposed COmposition-Structure Bimodal Network (COSNet) is designed to enhance learning and predictions of experimentally measured materials properties that have incomplete structure information.
arXiv Detail & Related papers (2023-08-04T02:04:52Z)
- How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning [82.67551367327634]
We develop a new interpretable, multi-resolution machine learning framework for finding patterns in the unit-cells of materials.
Specifically, we propose two new interpretable representations of metamaterials, called shape-frequency features and unit-cell templates.
arXiv Detail & Related papers (2021-11-10T21:19:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.