Physics-Informed Gaussian Process Regression for the Constitutive Modeling of Concrete: A Data-Driven Improvement to Phenomenological Models
- URL: http://arxiv.org/abs/2601.03367v1
- Date: Tue, 06 Jan 2026 19:09:40 GMT
- Title: Physics-Informed Gaussian Process Regression for the Constitutive Modeling of Concrete: A Data-Driven Improvement to Phenomenological Models
- Authors: Chenyang Li, Himanshu Sharma, Youcai Wu, Joseph Magallanes, K. T. Ramesh, Michael D. Shields
- Abstract summary: This work develops a physics-informed framework that retains the modular elastoplastic structure of the Karagozian & Case concrete model. It replaces the empirical failure surface with a constrained Gaussian Process Regression surrogate that can be learned directly from experimentally accessible observables. Results show that an unconstrained GPR interpolates well near training conditions but deteriorates and violates essential physical constraints under extrapolation.
- Score: 15.576831245374906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding and modeling the constitutive behavior of concrete is crucial for civil and defense applications, yet widely used phenomenological models such as the Karagozian & Case concrete (KCC) model depend on empirically calibrated failure surfaces that lack flexibility in model form and associated uncertainty quantification. This work develops a physics-informed framework that retains the modular elastoplastic structure of the KCC model while replacing its empirical failure surface with a constrained Gaussian Process Regression (GPR) surrogate that can be learned directly from experimentally accessible observables. Triaxial compression data under varying confinement levels are used for training, and the surrogate is then evaluated at confinement levels not included in the training set to assess its generalization capability. Results show that an unconstrained GPR interpolates well near training conditions but deteriorates and violates essential physical constraints under extrapolation, even when augmented with simulated data. In contrast, a physics-informed GPR that incorporates derivative-based constraints aligned with known material behavior yields markedly better accuracy and reliability, including at higher confinement levels beyond the training range. Probabilistic enforcement of these constraints also reduces predictive variance, producing tighter confidence intervals in data-scarce regimes. Overall, the proposed approach delivers a robust, uncertainty-aware surrogate that improves generalization and streamlines calibration without sacrificing the interpretability and numerical efficiency of the KCC model, offering a practical path toward improved constitutive models for concrete.
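The interpolation-versus-extrapolation contrast the abstract describes can be illustrated with a minimal, unconstrained Gaussian-process regression sketch in plain NumPy. The saturating "failure surface" data, the squared-exponential kernel, and all hyperparameters below are illustrative assumptions standing in for triaxial strength measurements, not the authors' calibrated KCC model:

```python
import numpy as np

def rbf(x1, x2, ell=2.0, sigma_f=10.0):
    """Squared-exponential kernel; length scale and amplitude are illustrative."""
    return sigma_f**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell**2)

# Synthetic stand-in for triaxial data: strength rising with confinement
# pressure and saturating (a hypothetical curve, not KCC-calibrated).
p_train = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
y_train = 10.0 * p_train / (3.0 + p_train)

# Standard GP posterior via Cholesky factorization with a small jitter.
K = rbf(p_train, p_train) + 1e-4 * np.eye(len(p_train))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

# One interpolation point (inside the training range) and one extrapolation
# point (beyond it, mimicking a higher confinement level).
p_test = np.array([3.0, 8.0])
Ks = rbf(p_test, p_train)
mean = Ks @ alpha
v = np.linalg.solve(L, Ks.T)
var = np.diag(rbf(p_test, p_test) - v.T @ v)
```

At the interpolation point the posterior mean tracks the data closely, while at the extrapolation point the mean drifts toward the prior and the variance grows sharply, which mirrors the failure mode the paper addresses by adding derivative-based physical constraints that this unconstrained sketch lacks.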
Related papers
- Hybrid Model Predictive Control with Physics-Informed Neural Network for Satellite Attitude Control [2.7222301668137483]
Reliable spacecraft attitude control depends on accurate prediction of attitude dynamics. For spacecraft with complex dynamics, obtaining accurate physics-based models can be difficult, time-consuming, or computationally heavy. This work explores Physics-Informed Neural Networks (PINNs) for modeling spacecraft attitude dynamics.
arXiv Detail & Related papers (2026-02-17T19:08:48Z)
- Another Fit Bites the Dust: Conformal Prediction as a Calibration Standard for Machine Learning in High-Energy Physics [0.0]
Conformal prediction provides a distribution-free framework for calibrating arbitrary predictive models. We show that a single conformal formalism can be applied across regression, binary and multi-class classification, anomaly detection, and generative modelling. We argue that conformal calibration should be adopted as a standard component of machine-learning pipelines in collider physics.
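The distribution-free calibration this summary refers to can be sketched with split-conformal prediction on a toy regression problem. The data, the deliberately crude "model" `y_hat = 2x`, and the 90% coverage target are all illustrative choices, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: any black-box predictor works; here the "model" is y_hat = 2x,
# applied to data y = 2x + Gaussian noise (all values are illustrative).
x = rng.uniform(0.0, 1.0, 1000)
y = 2.0 * x + rng.normal(0.0, 0.1, 1000)
pred = 2.0 * x

# Split-conformal calibration: absolute residual scores on a held-out set.
cal, test = slice(0, 500), slice(500, 1000)
scores = np.abs(y[cal] - pred[cal])
alpha = 0.1  # target 90% coverage
n = scores.size
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Distribution-free intervals: coverage >= 1 - alpha under exchangeability.
covered = (y[test] >= pred[test] - q) & (y[test] <= pred[test] + q)
coverage = covered.mean()
```

No assumption about the model or the noise distribution enters the guarantee, which is what makes the approach attractive as a generic calibration layer over existing pipelines.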
arXiv Detail & Related papers (2025-12-18T20:31:25Z)
- Hybrid twinning using PBDW and DeepONet for the effective state estimation and prediction on partially known systems [0.0]
We propose an effective hybrid approach that combines physics-based modeling with data-driven learning to enhance state estimation. We validate the proposed approach on a representative problem involving the Helmholtz equation.
arXiv Detail & Related papers (2025-12-03T12:19:00Z)
- CoRA: Covariate-Aware Adaptation of Time Series Foundation Models [47.20786327020571]
Time Series Foundation Models (TSFMs) have shown significant impact through their model capacity, scalability, and zero-shot generalization. We propose a general covariate-aware adaptation (CoRA) framework for TSFMs.
arXiv Detail & Related papers (2025-10-14T16:20:00Z)
- Bi-level Meta-Policy Control for Dynamic Uncertainty Calibration in Evidential Deep Learning [11.953394478206581]
We propose the Meta-Policy Controller (MPC), a dynamic meta-learning framework that adjusts the KL divergence coefficient and Dirichlet prior strengths for optimal uncertainty modeling. MPC significantly enhances the reliability and calibration of model predictions across various tasks, improving uncertainty calibration, prediction accuracy, and performance retention after confidence-based sample rejection.
arXiv Detail & Related papers (2025-10-10T02:39:26Z)
- Robust Molecular Property Prediction via Densifying Scarce Labeled Data [53.24886143129006]
In drug discovery, compounds most critical for advancing research often lie beyond the training set. We propose a novel bilevel optimization approach that leverages unlabeled data to interpolate between in-distribution (ID) and out-of-distribution (OOD) data.
arXiv Detail & Related papers (2025-06-13T15:27:40Z)
- CLUE: Neural Networks Calibration via Learning Uncertainty-Error alignment [7.702016079410588]
We introduce CLUE (Calibration via Learning Uncertainty-Error Alignment), a novel approach that aligns predicted uncertainty with observed error during training. We show that CLUE achieves superior calibration quality and competitive predictive performance with respect to state-of-the-art approaches.
arXiv Detail & Related papers (2025-05-28T19:23:47Z)
- Model Hemorrhage and the Robustness Limits of Large Language Models [119.46442117681147]
Large language models (LLMs) demonstrate strong performance across natural language processing tasks, yet undergo significant performance degradation when modified for deployment. We define this phenomenon as model hemorrhage - performance decline caused by parameter alterations and architectural changes.
arXiv Detail & Related papers (2025-03-31T10:16:03Z)
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Semi-supervised Regression Analysis with Model Misspecification and High-dimensional Data [8.619243141968886]
We present an inference framework for estimating regression coefficients in conditional mean models.
We develop an augmented inverse probability weighted (AIPW) method, employing regularized estimators for both propensity score (PS) and outcome regression (OR) models.
Our theoretical findings are verified through extensive simulation studies and a real-world data application.
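The AIPW construction named above combines an outcome-regression prediction with an inverse-probability-weighted correction of its residuals, so the estimator stays consistent if either model is correct. A minimal simulated sketch follows, with the paper's regularized high-dimensional PS and OR estimators replaced, for brevity, by the true propensity score and ordinary least squares (both simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)     # outcomes, only partly observed
ps_true = 1.0 / (1.0 + np.exp(-(0.5 + x))) # labeling probability (known here)
r = rng.binomial(1, ps_true)               # r = 1 means y is observed

# Outcome-regression model fit on the labeled subset (plain OLS).
X1 = np.column_stack([np.ones(r.sum()), x[r == 1]])
beta = np.linalg.lstsq(X1, y[r == 1], rcond=None)[0]
m_hat = beta[0] + beta[1] * x              # OR predictions for everyone

# AIPW estimate of E[Y]: OR prediction plus IPW-corrected residual term.
mu_aipw = np.mean(m_hat + r * (y - m_hat) / ps_true)
```

Here the true mean is E[Y] = 1, and the estimate lands close to it; the doubly robust structure means misspecifying either the OR fit or the propensity model alone would still leave the estimator consistent.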
arXiv Detail & Related papers (2024-06-20T00:34:54Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by e.g. the combination of model, parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
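The state-space-over-predictor-weights idea can be sketched with a plain Kalman filter tracking linear-regression weights online; a random-walk transition lets the filter follow non-stationary targets. This is a regression stand-in for the paper's classification setting, with the neural representation omitted and the noise scales chosen as assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

d = 2
w_mean = np.zeros(d)            # posterior mean over predictor weights
w_cov = np.eye(d)               # posterior covariance
q, r_noise = 1e-4, 0.25         # drift and observation noise (assumed known)

true_w = np.array([1.0, -0.5])  # illustrative stationary target
for t in range(2000):
    x = rng.normal(size=d)
    y = true_w @ x + rng.normal(0.0, 0.5)
    # Predict step: random-walk drift inflates uncertainty each round,
    # which is what lets the filter track a changing weight vector.
    w_cov = w_cov + q * np.eye(d)
    # Update step: standard Kalman gain for a scalar observation.
    s = x @ w_cov @ x + r_noise
    k = w_cov @ x / s
    w_mean = w_mean + k * (y - w_mean @ x)
    w_cov = w_cov - np.outer(k, x @ w_cov)
```

Because the drift term never lets the covariance collapse to zero, the filter keeps adapting indefinitely, trading a small steady-state variance for responsiveness to regime changes in the stream.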
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.