Neuro-Symbolic Operator for Interpretable and Generalizable Characterization of Complex Piezoelectric Systems
- URL: http://arxiv.org/abs/2505.24578v1
- Date: Fri, 30 May 2025 13:28:11 GMT
- Title: Neuro-Symbolic Operator for Interpretable and Generalizable Characterization of Complex Piezoelectric Systems
- Authors: Abhishek Chandra, Taniya Kapoor, Mitrofan Curti, Koen Tiels, Elena A. Lomonova
- Abstract summary: This paper presents a neuro-symbolic operator framework that derives the analytical operators governing hysteretic relationships. It contributes to characterizing complex piezoelectric systems while improving the interpretability and generalizability of neural operators.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex piezoelectric systems are foundational in industrial applications. Their performance, however, is challenged by the nonlinear voltage-displacement hysteretic relationships. Efficient characterization methods are, therefore, essential for reliable design, monitoring, and maintenance. Recently proposed neural operator methods serve as surrogates for system characterization but face two pressing issues: interpretability and generalizability. State-of-the-art (SOTA) neural operators are black boxes, providing little insight into the learned operator. Additionally, generalizing them to novel voltages and predicting displacement profiles beyond the training domain are challenging, limiting their practical use. To address these limitations, this paper proposes a neuro-symbolic operator (NSO) framework that derives the analytical operators governing hysteretic relationships. NSO first learns a Fourier neural operator mapping voltage fields to displacement profiles, followed by a library-based sparse model discovery method, generating white-box parsimonious models governing the underlying hysteresis. These models enable accurate and interpretable prediction of displacement profiles across varying and out-of-distribution voltage fields, facilitating generalizability. The potential of NSO is demonstrated by accurately predicting voltage-displacement hysteresis, including butterfly-shaped relationships. Moreover, NSO predicts displacement profiles even for noisy and low-fidelity voltage data, emphasizing its robustness. The results highlight the advantages of NSO compared to SOTA neural operators and model discovery methods on several evaluation metrics. Consequently, NSO contributes to characterizing complex piezoelectric systems while improving the interpretability and generalizability of neural operators, essential for design, monitoring, maintenance, and other real-world scenarios.
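The abstract describes a two-stage pipeline: a Fourier neural operator is first trained as a surrogate mapping voltage fields to displacement profiles, and a library-based sparse regression then distills it into a parsimonious analytical model. The sketch below illustrates only the second stage with a SINDy-style sequentially thresholded least-squares fit on a toy Duhem-type hysteresis model; the library terms, threshold, and data here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def stlsq(theta, target, threshold=0.05, n_iter=10):
    """SINDy-style sequentially thresholded least squares.
    theta  : (n_samples, n_terms) candidate-term library evaluated on data
    target : (n_samples,) response to explain (here, displacement rate)
    Returns a sparse coefficient vector, i.e. the 'white-box' model."""
    xi, *_ = np.linalg.lstsq(theta, target, rcond=None)
    for _ in range(n_iter):
        small = np.abs(xi) < threshold        # prune negligible terms
        xi[small] = 0.0
        keep = ~small
        if keep.any():
            xi[keep], *_ = np.linalg.lstsq(theta[:, keep], target, rcond=None)
    return xi

# Toy Duhem-type hysteresis: dx/dt = a*dv/dt - b*|dv/dt|*x  (a=0.9, b=0.5).
t = np.linspace(0.0, 1.0, 2000)
v = np.sin(2 * np.pi * t)                     # voltage input
dv = np.gradient(v, t)
x = np.zeros_like(t)
dt = t[1] - t[0]
for k in range(len(t) - 1):                   # explicit Euler integration
    x[k + 1] = x[k] + dt * (0.9 * dv[k] - 0.5 * abs(dv[k]) * x[k])
dx = np.gradient(x, t)

# Candidate library: constant, v, x, dv/dt, |dv/dt|*x
library = np.column_stack([np.ones_like(t), v, x, dv, np.abs(dv) * x])
coeffs = stlsq(library, dx)
print(coeffs)  # should be close to [0, 0, 0, 0.9, -0.5]
```

The nonzero coefficients that survive thresholding constitute the white-box model, which can then be integrated forward for voltages outside the training distribution.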
Related papers
- NOBLE -- Neural Operator with Biologically-informed Latent Embeddings to Capture Experimental Variability in Biological Neuron Models
NOBLE is a neural operator framework that learns a mapping from a continuous frequency-modulated embedding of interpretable neuron features to the somatic voltage response induced by current injection. It predicts distributions of neural dynamics accounting for the intrinsic experimental variability. NOBLE is the first scaled-up deep learning framework validated on real experimental data.
arXiv Detail & Related papers (2025-06-05T01:01:18Z)
- The Spectral Bias of Shallow Neural Network Learning is Shaped by the Choice of Non-linearity
We study how non-linear activation functions contribute to shaping neural networks' implicit bias. We show that local dynamical attractors facilitate the formation of clusters of hyperplanes where the input to a neuron's activation function is zero.
arXiv Detail & Related papers (2025-03-13T17:36:46Z)
- Pseudo-Physics-Informed Neural Operators: Enhancing Operator Learning from Limited Data
We propose the Pseudo Physics-Informed Neural Operator (PPI-NO) framework. PPI-NO constructs a surrogate physics system for the target system using partial differential equations (PDEs) derived from basic differential operators. This framework significantly improves the accuracy of standard operator learning models in data-scarce scenarios.
arXiv Detail & Related papers (2025-02-04T19:50:06Z)
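As a rough illustration of the "surrogate physics from basic differential operators" idea in the PPI-NO entry above, the sketch below fits a linear pseudo-PDE u_t ~ c1*u + c2*u_x + c3*u_xx from gridded data using finite differences; the field, grid, and library here are toy assumptions, and the actual PPI-NO coupling between the learned pseudo-PDE and the neural operator is more involved.

```python
import numpy as np

# Toy field u(x,t) = exp(-t) * sin(pi*x); stands in for limited training data.
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 0.5, 40)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-T) * np.sin(np.pi * X)

# Basic differential operators approximated by finite differences.
u_t = np.gradient(u, t, axis=1)
u_x = np.gradient(u, x, axis=0)
u_xx = np.gradient(u_x, x, axis=0)

# Surrogate-physics step: express u_t as a combination of library terms
# and fit coefficients by least squares; the result is a 'pseudo-PDE'.
library = np.column_stack([u.ravel(), u_x.ravel(), u_xx.ravel()])
coeffs, *_ = np.linalg.lstsq(library, u_t.ravel(), rcond=None)

# The residual of the fitted pseudo-PDE can then serve as an extra,
# physics-like training signal for the operator network.
residual = u_t.ravel() - library @ coeffs
print(coeffs, np.abs(residual).mean())
```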
- Characterizing out-of-distribution generalization of neural networks: application to the disordered Su-Schrieffer-Heeger model
We show how interpretability methods can increase trust in predictions of a neural network trained to classify quantum phases. In particular, we show that we can ensure better out-of-distribution generalization in this complex classification problem. This work is an example of how the systematic use of interpretability methods can improve the performance of NNs in scientific problems.
arXiv Detail & Related papers (2024-06-14T13:24:32Z)
- Peridynamic Neural Operators: A Data-Driven Nonlocal Constitutive Model for Complex Material Responses
We introduce a novel integral neural operator architecture called the Peridynamic Neural Operator (PNO) that learns a nonlocal law from data. This neural operator provides a forward model in the form of state-based peridynamics, with objectivity and momentum balance laws automatically guaranteed. We show that, owing to its ability to capture complex responses, our learned neural operator achieves improved accuracy and efficiency compared to baseline models.
arXiv Detail & Related papers (2024-01-11T17:37:20Z)
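For intuition about the nonlocal integral operators behind the PNO entry above, here is a minimal bond-based toy in one dimension with a fixed Gaussian kernel; in PNO the kernel is learned from data and the model is state-based with objectivity and momentum balance built in, none of which this sketch implements.

```python
import numpy as np

def nonlocal_operator(u, x, horizon, kernel):
    """Discrete nonlocal (peridynamics-style) operator:
    L[u](x_i) = sum over |x_j - x_i| <= horizon of w(|x_j - x_i|)*(u_j - u_i)*dx
    Here the kernel w is a fixed toy choice; in PNO it would be learned."""
    dx = x[1] - x[0]
    out = np.zeros_like(u)
    for i in range(len(x)):
        mask = np.abs(x - x[i]) <= horizon    # neighborhood within the horizon
        w = kernel(np.abs(x[mask] - x[i]))
        out[i] = np.sum(w * (u[mask] - u[i])) * dx
    return out

x = np.linspace(0.0, 1.0, 200)
u = np.sin(2 * np.pi * x)
horizon = 0.05
L_u = nonlocal_operator(u, x, horizon, kernel=lambda r: np.exp(-(r / horizon) ** 2))
# For small horizons this behaves like a scaled Laplacian of u.
```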
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
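The Monte Carlo entry above rests on the probabilistic (Feynman-Kac) representation of PDE solutions as ensembles of random particles. The standalone sketch below applies that representation to the 1D heat equation on the infinite line; the parameters are arbitrary, and the paper's contribution, using such estimates to train unsupervised neural solvers, is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def heat_mc(u0, x, t, kappa, n_particles=20000):
    """Monte Carlo solution of u_t = kappa * u_xx via the probabilistic
    (Feynman-Kac) representation u(x,t) = E[u0(x + sqrt(2*kappa*t) * Z)],
    with Z standard normal. Each sample is one random particle."""
    z = rng.standard_normal(n_particles)
    return np.array([u0(xi + np.sqrt(2.0 * kappa * t) * z).mean() for xi in x])

u0 = lambda x: np.sin(np.pi * x)
x = np.linspace(-1.0, 1.0, 11)
approx = heat_mc(u0, x, t=0.1, kappa=0.5)
exact = np.exp(-0.5 * np.pi**2 * 0.1) * np.sin(np.pi * x)   # infinite-line solution
print(np.abs(approx - exact).max())   # shrinks as n_particles grows
```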
- INO: Invariant Neural Operators for Learning Complex Physical Systems with Momentum Conservation
We introduce a novel integral neural operator architecture to learn physical models with fundamental conservation laws automatically guaranteed. As applications, we demonstrate the expressivity and efficacy of our model in learning complex material behaviors from both synthetic and experimental datasets.
arXiv Detail & Related papers (2022-12-29T16:40:41Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory. Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Data-driven emergence of convolutional structure in neural networks
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs. By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL). FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach
We study estimation in a class of generalized structural equation models (SEMs). We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent. For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
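The SEM paper above formulates estimation as a min-max game between two neural networks trained by gradient descent. The toy below uses linear players in place of networks, with an assumed quadratic-regularized objective min over f, max over g of E[g(X)(Y - f(X))] - (1/2) E[g(X)^2]; it shows the descent-ascent mechanics only, not the paper's actual objective or convergence guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conditional-moment problem E[Y - f(X) | X] = 0, solved as a min-max
# game. Both players are linear models standing in for neural networks.
n, d = 2000, 3
X = rng.standard_normal((n, d))
theta_true = np.array([1.0, -2.0, 0.5])
Y = X @ theta_true + 0.1 * rng.standard_normal(n)

theta = np.zeros(d)   # minimizing player f(x) = theta @ x
omega = np.zeros(d)   # maximizing player g(x) = omega @ x
lr = 0.1
for _ in range(2000):                  # simultaneous gradient descent-ascent
    f = X @ theta
    g = X @ omega
    grad_theta = -(X * g[:, None]).mean(axis=0)           # descend in theta
    grad_omega = (X * (Y - f - g)[:, None]).mean(axis=0)  # ascend in omega
    theta -= lr * grad_theta
    omega += lr * grad_omega
print(theta)   # approaches theta_true as the game equilibrates
```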