Physics Enhanced Deep Surrogates for the Phonon Boltzmann Transport Equation
- URL: http://arxiv.org/abs/2512.05976v1
- Date: Tue, 25 Nov 2025 16:25:24 GMT
- Title: Physics Enhanced Deep Surrogates for the Phonon Boltzmann Transport Equation
- Authors: Antonio Varagnolo, Giuseppe Romano, Raphaël Pestourie
- Abstract summary: Physics-Enhanced Deep Surrogate (PEDS). The network learns geometry-dependent corrections and a mixing coefficient that interpolates between macroscopic and nano-scale behavior. PEDS reduces training-data requirements by up to 70% compared with purely data-driven baselines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing materials with controlled heat flow at the nano-scale is central to advances in microelectronics, thermoelectrics, and energy-conversion technologies. At these scales, phonon transport follows the Boltzmann Transport Equation (BTE), which captures non-diffusive (ballistic) effects but is too costly to solve repeatedly in inverse-design loops. Existing surrogate approaches trade speed for accuracy: fast macroscopic solvers can overestimate conductivities by hundreds of percent, while recent data-driven operator learners often require thousands of high-fidelity simulations. This creates a need for a fast, data-efficient surrogate that remains reliable across ballistic and diffusive regimes. We introduce a Physics-Enhanced Deep Surrogate (PEDS) that combines a differentiable Fourier solver with a neural generator and couples it with uncertainty-driven active learning. The Fourier solver acts as a physical inductive bias, while the network learns geometry-dependent corrections and a mixing coefficient that interpolates between macroscopic and nano-scale behavior. PEDS reduces training-data requirements by up to 70% compared with purely data-driven baselines, achieves roughly 5% fractional error with only 300 high-fidelity BTE simulations, and enables efficient design of porous geometries spanning 12-85 W m$^{-1}$ K$^{-1}$ with average design errors of 4%. The learned mixing parameter recovers the ballistic-diffusive transition and improves out-of-distribution robustness. These results show that embedding simple, differentiable low-fidelity physics can dramatically increase surrogate data-efficiency and interpretability, making repeated PDE-constrained optimization practical for nano-scale thermal-materials design.
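As described above, PEDS feeds a neural generator's geometry-dependent correction into a differentiable low-fidelity (Fourier) solver and blends the result with a nano-scale estimate through a learned mixing coefficient. The sketch below illustrates that composition only; the toy generator, the solver and ballistic stand-ins, and every parameter name and constant are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def neural_generator(geometry, weights):
    """Stand-in for the neural generator: maps a pore-geometry descriptor
    to (a) a corrected input field for the low-fidelity solver and
    (b) a mixing coefficient alpha in [0, 1]. A real model would be a
    trained network; here a toy affine map keeps the sketch runnable."""
    hidden = np.tanh(weights["W1"] @ geometry + weights["b1"])
    corrected_field = weights["W2"] @ hidden + weights["b2"]
    alpha = 1.0 / (1.0 + np.exp(-(weights["w_mix"] @ hidden)))  # sigmoid
    return corrected_field, float(alpha)

def fourier_solver(field):
    """Stand-in for the differentiable macroscopic (Fourier) solver:
    returns an effective conductivity from the corrected field."""
    return float(np.mean(field))

def ballistic_limit(geometry):
    """Placeholder nano-scale (ballistic) estimate; numbers illustrative."""
    porosity = float(np.clip(geometry.mean(), 0.0, 1.0))
    return 150.0 * (1.0 - porosity) * 0.1

def peds_surrogate(geometry, weights):
    """PEDS prediction: the learned alpha interpolates between the
    macroscopic (diffusive) and nano-scale (ballistic) estimates."""
    corrected, alpha = neural_generator(geometry, weights)
    return alpha * fourier_solver(corrected) + (1.0 - alpha) * ballistic_limit(geometry)

rng = np.random.default_rng(0)
weights = {
    "W1": rng.normal(size=(16, 8)), "b1": np.zeros(16),
    "W2": rng.normal(size=(32, 16)), "b2": np.zeros(32),
    "w_mix": rng.normal(size=16),
}
geometry = rng.uniform(0.0, 1.0, size=8)   # toy pore-layout descriptor
print(peds_surrogate(geometry, weights))    # effective kappa [W m^-1 K^-1]
```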
Related papers
- Physics-informed Neural Operator Learning for Nonlinear Grad-Shafranov Equation [18.564353542797946]
In magnetic confinement nuclear fusion, rapid and accurate solution of the Grad-Shafranov equation (GSE) is essential for real-time plasma control and analysis. Traditional numerical solvers achieve high precision but are computationally prohibitive, while data-driven surrogates infer quickly but fail to enforce physical laws and generalize poorly beyond training distributions. We present a Physics-Informed Neural Operator (PINO) that directly learns the GSE solution operator, mapping shape parameters of the last closed flux surface to equilibrium solutions for realistic nonlinear current profiles.
arXiv Detail & Related papers (2025-11-24T13:46:38Z) - Physics-Constrained Adaptive Neural Networks Enable Real-Time Semiconductor Manufacturing Optimization with Minimal Training Data [0.0]
The semiconductor industry faces a computational crisis in extreme ultraviolet (EUV) lithography optimization. We present a physics-constrained adaptive learning framework that automatically calibrates electromagnetic approximations. We demonstrate consistent sub-nanometer EPE performance (0.664-2.536 nm range) using only 50 training samples per pattern.
arXiv Detail & Related papers (2025-11-16T21:40:57Z) - Fast and Generalizable parameter-embedded Neural Operators for Lithium-Ion Battery Simulation [1.099532646524593]
We benchmark three operator-learning surrogates for the Single Particle Model (SPM): Deep Operator Networks (DeepONets), Fourier Neural Operators (FNOs), and a newly proposed parameter-embedded Fourier Neural Operator (PE-FNO). DeepONet accurately replicates constant-current behaviour but struggles with more dynamic loads. FNO maintains mesh invariance and keeps concentration errors below 1%, with voltage mean-absolute errors under 1.7 mV across all load types. PE-FNO executes approximately 200 times faster than a 16-thread SPM solver.
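The PE-FNO name suggests that cell parameters are embedded directly into the operator's input. One plausible minimal reading, sketched below, tiles scalar parameters as constant channels next to the state before a Fourier-space layer; the layer structure, mode count, and parameter values are assumptions, not the paper's architecture.

```python
import numpy as np

def spectral_layer(x, weights_hat, n_modes):
    """Toy Fourier layer: mix channels in the lowest n_modes Fourier
    modes, then transform back (x has shape channels x grid)."""
    x_hat = np.fft.rfft(x, axis=-1)
    out_hat = np.zeros_like(x_hat)
    for k in range(min(n_modes, x_hat.shape[-1])):
        out_hat[:, k] = weights_hat[k] @ x_hat[:, k]   # per-mode channel mixing
    return np.fft.irfft(out_hat, n=x.shape[-1], axis=-1)

def pe_fno_input(concentration, params, grid):
    """Parameter embedding: broadcast scalar cell parameters (e.g. an
    assumed particle radius and diffusivity) into constant channels."""
    param_channels = np.repeat(params[:, None], grid, axis=1)
    return np.vstack([concentration[None, :], param_channels])

grid, n_modes = 64, 8
concentration = np.linspace(0.2, 0.8, grid)     # toy lithiation profile
params = np.array([2.0e-6, 1.0e-14])             # assumed radius, D_s
x = pe_fno_input(concentration, params, grid)     # shape (3, 64)

rng = np.random.default_rng(1)
weights_hat = rng.normal(size=(n_modes, x.shape[0], x.shape[0])) \
    + 1j * rng.normal(size=(n_modes, x.shape[0], x.shape[0]))
y = spectral_layer(x, weights_hat, n_modes)
print(y.shape)  # (3, 64): one FNO-style layer applied to the embedded input
```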
arXiv Detail & Related papers (2025-08-11T15:31:23Z) - Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout [62.73150122809138]
Federated Learning (FL) is a promising distributed machine learning approach that enables collaborative training of a global model using multiple edge devices. We propose the FedDHAD FL framework, which comes with two novel methods: Dynamic Heterogeneous model aggregation (FedDH) and Adaptive Dropout (FedAD). The combination of these two methods makes FedDHAD significantly outperform state-of-the-art solutions in terms of accuracy (up to 6.7% higher), efficiency (up to 2.02 times faster), and cost (up to 15.0% smaller).
arXiv Detail & Related papers (2025-07-14T16:19:00Z) - PhysicsCorrect: A Training-Free Approach for Stable Neural PDE Simulations [4.7903561901859355]
We present PhysicsCorrect, a training-free correction framework that enforces PDE consistency at each prediction step. Our key innovation is an efficient caching strategy that precomputes the Jacobian and its pseudoinverse during an offline warm-up phase. Across three representative PDE systems, PhysicsCorrect reduces prediction errors by up to 100x while adding negligible inference time.
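The summary above describes a correction applied at every prediction step using a Jacobian pseudoinverse cached during warm-up. Below is a minimal sketch of that idea on a toy linear PDE; the residual, the warm-up step, and all names are assumptions rather than the paper's exact procedure.

```python
import numpy as np

# Toy linear PDE residual r(u) = A u - b; for a nonlinear PDE the Jacobian
# would be evaluated once at a reference state during the offline warm-up.
rng = np.random.default_rng(2)
n = 50
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
b = rng.normal(size=n)

def residual(u):
    return A @ u - b

# Offline warm-up: precompute the Jacobian and its pseudoinverse once,
# so each online correction is just a matrix-vector product.
J = A                        # dr/du for the linear toy problem
J_pinv = np.linalg.pinv(J)   # cached pseudoinverse

def physics_correct(u_pred):
    """One Gauss-Newton-style step that pushes the neural prediction
    toward PDE consistency using the cached pseudoinverse."""
    return u_pred - J_pinv @ residual(u_pred)

u_exact = np.linalg.solve(A, b)
u_pred = u_exact + 0.05 * rng.normal(size=n)   # stand-in neural prediction
u_corr = physics_correct(u_pred)
print(np.linalg.norm(residual(u_pred)), np.linalg.norm(residual(u_corr)))
```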
arXiv Detail & Related papers (2025-07-03T01:22:57Z) - OmniFluids: Physics Pre-trained Modeling of Fluid Dynamics [25.066485418709114]
We propose OmniFluids, a pure physics pre-trained model that captures fundamental fluid dynamics laws and adapts efficiently to diverse downstream tasks. We develop a training framework combining physics-only pre-training, coarse-grid operator distillation, and few-shot fine-tuning. Tests show that OmniFluids outperforms state-of-the-art AI-driven methods in terms of flow field prediction and statistics.
arXiv Detail & Related papers (2025-06-12T16:23:02Z) - Accurate Ab-initio Neural-network Solutions to Large-Scale Electronic Structure Problems [52.19558333652367]
We present finite-range embeddings (FiRE) for accurate large-scale ab-initio electronic structure calculations. FiRE reduces the complexity of neural-network variational Monte Carlo (NN-VMC) by $\sim n_\text{el}$, the number of electrons. We validate our method's accuracy on various challenging systems, including biochemical compounds and organometallic compounds.
arXiv Detail & Related papers (2025-04-08T14:28:54Z) - TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training [91.8932638236073]
We introduce TensorGRaD, a novel method that directly addresses the memory challenges associated with large structured weights. We show that sparseGRaD reduces total memory usage by over 50% while maintaining and sometimes even improving accuracy.
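The method name points to a robust decomposition of gradient tensors. One plausible reading, sketched below, splits each gradient into a top-k sparse part plus a low-rank fit of the remainder so that only compact factors need to be stored; the split rule, threshold, and rank are assumptions, not the paper's algorithm.

```python
import numpy as np

def sparse_plus_lowrank(grad, keep_frac=0.01, rank=4):
    """Split a gradient matrix into a sparse term (largest-magnitude
    entries) and a rank-`rank` term fit to the remainder."""
    k = max(1, int(keep_frac * grad.size))
    idx = np.argpartition(np.abs(grad).ravel(), -k)[-k:]   # top-k entries
    sparse = np.zeros_like(grad)
    sparse.ravel()[idx] = grad.ravel()[idx]

    remainder = grad - sparse
    U, s, Vt = np.linalg.svd(remainder, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return sparse, low_rank

rng = np.random.default_rng(3)
grad = rng.normal(size=(256, 256))
grad[10, 20] += 50.0                        # a few large outlier entries
sparse, low_rank = sparse_plus_lowrank(grad)
approx = sparse + low_rank
print(np.linalg.norm(grad - approx) / np.linalg.norm(grad))  # relative error
```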
arXiv Detail & Related papers (2025-01-04T20:51:51Z) - Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the required additional data.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
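The decomposition into coarser-resolution subtasks can be pictured as interleaved sub-grids of the fine field. The sketch below shows only that spatial staggering and its re-assembly; the stride and naming are assumptions rather than the paper's exact scheme.

```python
import numpy as np

def stagger(field, s=2):
    """Decompose a fine 2D field into s*s offset coarse sub-fields."""
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def unstagger(subfields, s=2):
    """Re-interleave the coarse sub-fields back onto the fine grid."""
    h, w = subfields[0].shape
    out = np.empty((h * s, w * s), dtype=subfields[0].dtype)
    for idx, sub in enumerate(subfields):
        i, j = divmod(idx, s)
        out[i::s, j::s] = sub
    return out

field = np.arange(64.0).reshape(8, 8)        # toy vorticity snapshot
parts = stagger(field)                        # 4 coarser-resolution subtasks
assert np.allclose(unstagger(parts), field)   # lossless re-assembly
print([p.shape for p in parts])
```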
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Physics-enhanced deep surrogates for partial differential equations [30.731686639510517]
We present a "physics-enhanced deep-surrogate" ("PEDS") approach towards developing fast surrogate models for complex physical systems.
Specifically, a combination of a low-fidelity, explainable physics simulator and a neural network generator is proposed, which is trained end-to-end to globally match the output of an expensive high-fidelity numerical solver.
arXiv Detail & Related papers (2021-11-10T18:43:18Z)
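Both the PEDS entry above and the main abstract describe training a neural generator end to end through a cheap, differentiable physics model so that the composition matches an expensive high-fidelity solver. A heavily simplified sketch of that objective on a scalar toy problem follows; the stand-in functions, the finite-difference gradient, and all constants are assumptions for illustration only.

```python
import numpy as np

# Toy setup: the "high-fidelity solver" is an expensive black box sampled a
# few times; the "low-fidelity physics" is a cheap closed form.
rng = np.random.default_rng(4)

def high_fidelity(x):                 # stand-in for the expensive solver
    return np.sin(3.0 * x) + 0.5 * x

def low_fidelity(z):                  # stand-in differentiable physics model
    return 0.5 * z

def surrogate(x, w):                  # generator feeds the physics model
    return low_fidelity(w[0] * x + w[1])   # learned corrected input

xs = rng.uniform(-1.0, 1.0, size=32)  # training "geometries" (toy scalars)
ys = high_fidelity(xs)                # a small high-fidelity dataset

w = np.zeros(2)
lr, eps = 0.1, 1e-6
for _ in range(500):                  # end-to-end fit through the physics
    base = np.mean((surrogate(xs, w) - ys) ** 2)
    grad = np.zeros_like(w)
    for i in range(w.size):           # finite-difference gradient for brevity
        wp = w.copy()
        wp[i] += eps
        grad[i] = (np.mean((surrogate(xs, wp) - ys) ** 2) - base) / eps
    w -= lr * grad
print(w, base)                        # fitted generator weights, final MSE
```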