Extended Low-Rank Approximation Accelerates Learning of Elastic Response in Heterogeneous Materials
- URL: http://arxiv.org/abs/2509.20276v1
- Date: Wed, 24 Sep 2025 16:13:41 GMT
- Title: Extended Low-Rank Approximation Accelerates Learning of Elastic Response in Heterogeneous Materials
- Authors: Prabhat Karmakar, Sayan Gupta, Ilaksh Adlakha
- Abstract summary: This work presents the Extended Low-Rank Approximation (xLRA), a framework that employs canonical polyadic tensor decomposition. It efficiently maps high-dimensional microstructural information to the local elastic response by adaptively incorporating higher-rank terms. The compact formulation of xLRA achieves accurate predictions when trained on just 5% of the dataset. Benchmarking shows that xLRA outperforms contemporary methods in predictive accuracy, generalizability, and computational efficiency.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Predicting how the microstructure governs the mechanical response of heterogeneous materials is essential for optimizing design and performance. Yet this task remains difficult due to the complex, high-dimensional nature of microstructural features. Relying on physics-based simulations to probe the microstructural space is computationally prohibitive. This motivates the development of computational tools to efficiently learn the structure-property linkages governing mechanical behavior. While contemporary data-driven approaches offer new possibilities, they often require large datasets. To address this challenge, this work presents the Extended Low-Rank Approximation (xLRA), a framework that employs canonical polyadic tensor decomposition. It efficiently maps high-dimensional microstructural information to the local elastic response by adaptively incorporating higher-rank terms. xLRA accurately predicts the local elastic strain fields in porous microstructures, requiring a maximum rank of only 4. The compact formulation of xLRA achieves accurate predictions when trained on just 5% of the dataset, demonstrating significant data efficiency. Moreover, xLRA proves transferable by delivering accurate results across representative material systems, including two-phase composites and single- and dual-phase polycrystals. Despite being compact, xLRA retains essential microstructural details, enabling accurate predictions on unseen microstructures. Benchmarking shows that xLRA outperforms contemporary methods in predictive accuracy, generalizability, and computational efficiency, while requiring six orders of magnitude fewer floating-point operations. In summary, xLRA provides an efficient framework for predicting the elastic response from microstructures, enabling scalable mapping of structure-property linkages.
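The canonical polyadic (CP) decomposition at the core of xLRA approximates a multi-way tensor as a sum of a small number of rank-one outer products. As an illustration only (the paper's actual xLRA formulation, its adaptive rank selection, and its microstructure encoding are not reproduced here), a minimal alternating-least-squares CP fit for a generic 3-way tensor can be sketched in NumPy:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (I x R) and Y (J x R) -> (I*J x R)."""
    R = X.shape[1]
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, R)

def cp_als(T, rank, n_iter=500, seed=0):
    """Rank-R canonical polyadic decomposition of a 3-way tensor via
    alternating least squares: T[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings of T (C-order column indexing matches khatri_rao rows)
    T1 = T.reshape(I, J * K)                     # element T[i,j,k] at [i, j*K+k]
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # element T[i,j,k] at [j, i*K+k]
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # element T[i,j,k] at [k, i*J+j]
    for _ in range(n_iter):
        # Each factor update is a linear least-squares problem; the Gram
        # matrix of a Khatri-Rao product is the Hadamard product of Grams.
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C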
Related papers
- Self-Correction Distillation for Structured Data Question Answering [50.98882432829651]
Small-scale large language models (LLMs) are prone to errors in generating structured queries. We propose a self-correction distillation (SCD) method to improve the structured data QA ability of small-scale LLMs.
arXiv Detail & Related papers (2025-11-11T09:01:51Z)
- XxaCT-NN: Structure Agnostic Multimodal Learning for Materials Science [0.27185251060695437]
We propose a scalable framework that learns directly from elemental composition and X-ray diffraction (XRD) data. Our architecture integrates modality-specific encoders with a cross-attention fusion module and is trained on the 5-million-sample Alexandria dataset. Our results establish a path toward structure-free, experimentally grounded foundation models for materials science.
arXiv Detail & Related papers (2025-06-27T21:45:56Z)
- Capacity Matters: a Proof-of-Concept for Transformer Memorization on Real-World Data [6.885357232728911]
This paper studies how the model architecture and data configurations influence the empirical memorization capacity of generative transformers. The models are trained using synthetic text datasets derived from the Systematized Nomenclature of Medicine (SNOMED).
arXiv Detail & Related papers (2025-06-17T16:42:54Z)
- Spectra-to-Structure and Structure-to-Spectra Inference Across the Periodic Table [49.65586812435899]
XAStruct is a learning-based system capable of both predicting XAS spectra from crystal structures and inferring local structural descriptors from XAS input. XAStruct is trained on a large-scale dataset spanning over 70 elements across the periodic table.
arXiv Detail & Related papers (2025-06-13T15:58:05Z)
- High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations [51.90920900332569]
Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data. Recent approaches address this by introducing additional features along rigid geometric structures. We propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR).
arXiv Detail & Related papers (2025-06-07T16:45:17Z)
- Curiosity Driven Exploration to Optimize Structure-Property Learning in Microscopy [0.4517077427559345]
We present an alternative lightweight curiosity algorithm which actively samples regions with unexplored structure-property relations. We show that the algorithm outperforms random sampling for predicting properties from structures, and provides a convenient tool for efficient mapping of structure-property relationships in materials science.
arXiv Detail & Related papers (2025-04-28T17:31:29Z)
- Spectral Normalization and Voigt-Reuss net: A universal approach to microstructure-property forecasting with physical guarantees [0.0]
A crucial step in the design process is the rapid evaluation of effective mechanical, thermal, or, in general, elasticity properties. The classical simulation-based approach, which uses, e.g., finite elements and FFT-based solvers, can require substantial computational resources. We propose a novel spectral normalization scheme that a priori enforces these bounds.
arXiv Detail & Related papers (2025-04-01T12:21:57Z)
- Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices [88.33936714942996]
We present a unifying framework that enables searching among all linear operators expressible via an Einstein summation.
We show that differences in the compute-optimal scaling laws are mostly governed by a small number of variables.
We find that a Mixture-of-Experts (MoE) variant of the framework learns an MoE in every single linear layer of the model, including the projection in the attention blocks.
arXiv Detail & Related papers (2024-10-03T00:44:50Z)
- How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations [98.7450564309923]
This paper takes initial steps on understanding in-context learning (ICL) in more complex scenarios, by studying learning with representations.
We construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but fixed representation function.
We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size.
arXiv Detail & Related papers (2023-10-16T17:40:49Z)
- Self-supervised optimization of random material microstructures in the small-data regime [0.0]
This paper presents a flexible, fully probabilistic formulation of such optimization problems that accounts for the uncertainty in the process-structure and structure-property linkages.
We employ a probabilistic, data-driven surrogate for the structure-property link which expedites computations and enables handling of non-differential objectives.
We demonstrate its efficacy in optimizing the mechanical and thermal properties of two-phase random media, and envision that its applicability encompasses a wide variety of microstructure-sensitive design problems.
arXiv Detail & Related papers (2021-08-05T13:25:39Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight-pruning methods rarely achieve real inference acceleration.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- New advances in enumerative biclustering algorithms with online partitioning [80.22629846165306]
This paper further extends RIn-Close_CVC, a biclustering algorithm capable of performing an efficient, complete, correct and non-redundant enumeration of maximal biclusters with constant values on columns in numerical datasets.
The improved algorithm, called RIn-Close_CVC3, keeps the attractive properties of RIn-Close_CVC and is characterized by a drastic reduction in memory usage and a consistent gain in runtime.
arXiv Detail & Related papers (2020-03-07T14:54:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.