Is the end of Insight in Sight?
- URL: http://arxiv.org/abs/2505.04627v2
- Date: Wed, 04 Jun 2025 16:57:55 GMT
- Title: Is the end of Insight in Sight?
- Authors: Jean-Michel Tucny, Mihir Durve, Sauro Succi
- Abstract summary: A physics-informed neural network (PINN) is trained on a rarefied gas dynamics problem governed by the Boltzmann equation. Despite the system's clear structure and well-understood governing laws, the trained network's weights resemble Gaussian-distributed random matrices. This suggests that deep learning and traditional simulation may follow distinct cognitive paths to the same outcome.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of deep learning challenges the longstanding scientific ideal of insight - the human capacity to understand phenomena by uncovering underlying mechanisms. In many modern applications, accurate predictions no longer require interpretable models, prompting debate about whether explainability is a realistic or even meaningful goal. From our perspective in physics, we examine this tension through a concrete case study: a physics-informed neural network (PINN) trained on a rarefied gas dynamics problem governed by the Boltzmann equation. Despite the system's clear structure and well-understood governing laws, the trained network's weights resemble Gaussian-distributed random matrices, with no evident trace of the physical principles involved. This suggests that deep learning and traditional simulation may follow distinct cognitive paths to the same outcome - one grounded in mechanistic insight, the other in statistical interpolation. Our findings raise critical questions about the limits of explainable AI and whether interpretability can - or should - remain a universal standard in artificial reasoning.
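The central empirical claim above, that trained PINN weights look like Gaussian random matrices, can be probed with standard statistics. The sketch below is a hypothetical diagnostic, not code from the paper: it tests the entries of a weight matrix for Gaussianity and compares its singular values with the Marchenko-Pastur edges expected for an i.i.d. Gaussian ensemble.

```python
# Hypothetical diagnostic: does a network's weight matrix look like a
# Gaussian random matrix? (Illustrative sketch, not the authors' code.)
import numpy as np
from scipy import stats

def gaussianity_report(W: np.ndarray) -> dict:
    """Compare a weight matrix W (n x m) with an i.i.d. Gaussian ensemble."""
    w = W.ravel()
    # D'Agostino-Pearson test for normality of the entries.
    _, p_normal = stats.normaltest(w)
    # Empirical singular values vs. the Marchenko-Pastur edge predictions
    # for an i.i.d. matrix with the same entry variance.
    n, m = W.shape
    sigma = w.std()
    sv = np.linalg.svd(W, compute_uv=False) / np.sqrt(max(n, m))
    ratio = min(n, m) / max(n, m)
    mp_edges = (sigma * (1 - np.sqrt(ratio)), sigma * (1 + np.sqrt(ratio)))
    return {"p_normality": p_normal,
            "sv_range": (sv.min(), sv.max()),
            "mp_edges": mp_edges}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.05, size=(256, 128))  # stand-in for a trained layer
    print(gaussianity_report(W))
```

A layer whose entries pass the normality test and whose singular values fill the Marchenko-Pastur bulk carries little layer-level structure distinguishing it from a random matrix, which is the kind of observation the paper reports.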
Related papers
- Symbolic or Numerical? Understanding Physics Problem Solving in Reasoning LLMs [12.215295420714787]
This study investigates the application of advanced instruction-tuned reasoning models, such as Deepseek-R1, to address a diverse spectrum of physics problems curated from the challenging SciBench benchmark. Not only do they achieve state-of-the-art accuracy in answering intricate physics questions, but they also generate distinctive reasoning patterns that emphasize symbolic derivation.
arXiv Detail & Related papers (2025-07-02T03:51:16Z) - A Mathematical Philosophy of Explanations in Mechanistic Interpretability -- The Strange Science Part I.i [0.0]
We argue that Mechanistic Interpretability research is a principled approach to understanding models. We show that Explanatory Faithfulness, an assessment of how well an explanation fits a model, is well-defined.
arXiv Detail & Related papers (2025-05-01T19:08:34Z) - When Counterfactual Reasoning Fails: Chaos and Real-World Complexity [1.9223856107206057]
We investigate the limitations of counterfactual reasoning within the framework of Structural Causal Models. We find that realistic assumptions, such as low degrees of model uncertainty or chaotic dynamics, can result in counterintuitive outcomes. This work urges caution when applying counterfactual reasoning in settings characterized by chaos and uncertainty.
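As a toy illustration of why chaos undermines counterfactuals (an assumed example, not taken from the paper), the logistic map below shows a counterfactual trajectory that starts 1e-9 away from the factual one and becomes macroscopically different after a few dozen iterations.

```python
# Tiny illustration of counterfactual fragility under chaos (illustrative,
# not from the paper): a 1e-9 perturbation of the "what if" initial condition
# in the chaotic logistic map quickly yields a completely different trajectory.
def logistic_trajectory(x0: float, r: float = 3.9, steps: int = 60) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

factual = logistic_trajectory(0.200000000)
counterfactual = logistic_trajectory(0.200000001)  # "almost the same" intervention
for t in (10, 30, 50):
    print(t, abs(factual[t] - counterfactual[t]))   # divergence grows with t
```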
arXiv Detail & Related papers (2025-03-31T08:14:51Z) - Hamiltonian Neural Networks approach to fuzzball geodesics [39.58317527488534]
Hamiltonian Neural Networks (HNNs) are tools that minimize a loss function to solve Hamilton equations of motion. In this work, we implement several HNNs trained to solve, with high accuracy, the Hamilton equations for a massless probe moving inside a smooth and horizonless geometry known as D1-D5 circular fuzzball.
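A minimal sketch of this style of training loop, assuming a toy harmonic-oscillator Hamiltonian in place of the paper's D1-D5 fuzzball Hamiltonian: a network maps time to (q, p) and is trained on the residual of Hamilton's equations plus the initial conditions.

```python
# Illustrative HNN-style solver (not the paper's code): a network t -> (q, p)
# is trained so that the residual of Hamilton's equations vanishes.
import torch
import torch.nn as nn

def hamiltonian(q, p):
    # assumed toy Hamiltonian (harmonic oscillator), standing in for the
    # far more involved fuzzball Hamiltonian of the paper
    return 0.5 * p**2 + 0.5 * q**2

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
qp_init = torch.tensor([[1.0, 0.0]])                 # q(0), p(0)

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)       # collocation times in [0, 1)
    q, p = net(t).split(1, dim=1)
    dq = torch.autograd.grad(q.sum(), t, create_graph=True)[0]
    dp = torch.autograd.grad(p.sum(), t, create_graph=True)[0]
    H = hamiltonian(q, p)
    dHdq, dHdp = torch.autograd.grad(H.sum(), (q, p), create_graph=True)
    # residual of dq/dt = dH/dp and dp/dt = -dH/dq, plus initial conditions
    loss_eom = ((dq - dHdp) ** 2).mean() + ((dp + dHdq) ** 2).mean()
    loss_ic = ((net(torch.zeros(1, 1)) - qp_init) ** 2).mean()
    loss = loss_eom + loss_ic
    opt.zero_grad(); loss.backward(); opt.step()
```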
arXiv Detail & Related papers (2025-02-28T09:25:49Z) - Random Matrix Theory for Stochastic Gradient Descent [0.0]
Investigating the dynamics of learning in machine learning algorithms is of paramount importance for understanding how and why an approach may be successful. Here we apply concepts from random matrix theory to describe weight matrix dynamics, using the framework of Dyson Brownian motion. We derive the linear scaling rule between the learning rate (step size) and the batch size, and identify universal and non-universal aspects of weight matrix dynamics.
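In practice, the linear scaling rule amounts to keeping the ratio of learning rate to batch size fixed; the small helper below is an illustrative restatement of that bookkeeping, not the paper's code.

```python
# Linear scaling heuristic implied by keeping the SGD "temperature" eta / B
# fixed when the batch size changes (illustrative; the paper derives the rule
# from Dyson Brownian motion of the weight-matrix eigenvalues).
def scaled_learning_rate(eta_ref: float, batch_ref: int, batch_new: int) -> float:
    """Return the learning rate that keeps eta / B constant."""
    return eta_ref * batch_new / batch_ref

# e.g. a run tuned at eta = 0.1 with batch 256, moved to batch 1024:
print(scaled_learning_rate(0.1, 256, 1024))  # -> 0.4
```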
arXiv Detail & Related papers (2024-12-29T15:21:13Z) - DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [60.58067866537143]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis. To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers. Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - From Neurons to Neutrons: A Case Study in Interpretability [5.242869847419834]
We argue that high-dimensional neural networks can learn low-dimensional representations of their training data that are useful beyond simply making good predictions.
This indicates that such approaches to interpretability can be useful for deriving a new understanding of a problem from models trained to solve it.
arXiv Detail & Related papers (2024-05-27T17:59:35Z) - X-VoE: Measuring eXplanatory Violation of Expectation in Physical Events [75.94926117990435]
This study introduces X-VoE, a benchmark dataset to assess AI agents' grasp of intuitive physics.
X-VoE establishes a higher bar for the explanatory capacities of intuitive physics models.
We present an explanation-based learning system that captures physics dynamics and infers occluded object states.
arXiv Detail & Related papers (2023-08-21T03:28:23Z) - Neural Astrophysical Wind Models [0.0]
We show that deep neural networks embedded as individual terms in the governing coupled ordinary differential equations (ODEs) can robustly discover the relevant physics.
We optimize a loss function based on the Mach number, rather than the explicitly solved-for three conserved variables, and apply a penalty term that discourages near-diverging solutions.
This work further highlights the feasibility of neural ODEs as a promising discovery tool with mechanistic interpretability for non-linear inverse problems.
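A schematic neural-ODE sketch in the spirit of this approach, with assumed toy dynamics and mock observations rather than the paper's wind equations: a small network supplies an unknown term in dM/dr, the system is integrated with Euler steps, and the loss combines a Mach-number fit with a penalty that discourages near-diverging solutions.

```python
# Schematic neural-ODE sketch (toy dynamics, not the paper's wind equations).
import torch
import torch.nn as nn

term = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(term.parameters(), lr=1e-3)

r_obs = torch.linspace(0.1, 1.0, 10)
mach_obs = 0.5 + r_obs                               # assumed mock "observations"

def integrate(m0=0.5, n_steps=100, r_max=1.0):
    """Euler-integrate dM/dr = known term + learned neural term."""
    dr = r_max / n_steps
    r = torch.full((1, 1), 0.01)
    m = torch.full((1, 1), m0)
    rs, ms = [r], [m]
    for _ in range(n_steps):
        dmdr = -m + term(torch.cat([r, m], dim=1))   # known part + learned term
        m = m + dr * dmdr
        r = r + dr
        rs.append(r); ms.append(m)
    return torch.cat(rs).squeeze(1), torch.cat(ms).squeeze(1)

for step in range(300):
    r_grid, m_grid = integrate()
    idx = torch.searchsorted(r_grid, r_obs)          # nearest grid points
    loss_fit = ((m_grid[idx] - mach_obs) ** 2).mean()
    loss_pen = torch.relu(m_grid - 10.0).mean()      # discourage blow-up
    loss = loss_fit + loss_pen
    opt.zero_grad(); loss.backward(); opt.step()
```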
arXiv Detail & Related papers (2023-06-20T16:37:57Z) - Potentiality realism: A realistic and indeterministic physics based on propensities [0.0]
We discuss our specific interpretation of propensities, which requires them to depart from probabilities at the formal level.
This view helps reconcile classical and quantum physics by showing that most of the conceptual problems customarily taken to be unique issues of the latter are in fact common to all indeterministic physical theories.
arXiv Detail & Related papers (2023-05-03T21:01:17Z) - Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy [52.40331776572531]
We show that learning depth-3 ReLU networks under the Gaussian input distribution is hard even in the smoothed-analysis framework.
Our results are under a well-studied assumption on the existence of local pseudorandom generators.
arXiv Detail & Related papers (2023-02-15T02:00:26Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
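For context, the evidential head under discussion follows the standard Normal-Inverse-Gamma parameterization; the sketch below is a generic illustration (not the authors' code) of how the prediction and the usual aleatoric and epistemic uncertainty estimates are read off the four network outputs.

```python
# Sketch of a standard Normal-Inverse-Gamma "evidential" regression head:
# the network emits (gamma, nu, alpha, beta), from which prediction and the
# usual aleatoric/epistemic uncertainty estimates follow.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.out = nn.Linear(in_dim, 4)

    def forward(self, h):
        gamma, log_nu, log_alpha, log_beta = self.out(h).chunk(4, dim=-1)
        nu = F.softplus(log_nu)               # nu > 0
        alpha = F.softplus(log_alpha) + 1.0   # alpha > 1
        beta = F.softplus(log_beta)           # beta > 0
        aleatoric = beta / (alpha - 1.0)             # E[sigma^2]
        epistemic = beta / (nu * (alpha - 1.0))      # Var[mu]
        return gamma, aleatoric, epistemic

head = EvidentialHead(16)
mu, aleatoric, epistemic = head(torch.randn(8, 16))
```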
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias to model the character traits of agents and hence improve mindreading ability.
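One simple way to realize multiplicative fast-weights modulation of this kind (an assumed minimal architecture, not the paper's) is to gate the predictor's hidden units with a trait vector encoded from past trajectories:

```python
# Minimal illustration of a trait vector multiplicatively modulating a
# prediction network (assumed architecture; the paper's model is richer).
import torch
import torch.nn as nn

class TraitModulatedPredictor(nn.Module):
    def __init__(self, obs_dim=8, trait_dim=16, hidden=64, out_dim=4):
        super().__init__()
        self.trait_enc = nn.GRU(obs_dim, trait_dim, batch_first=True)
        self.inp = nn.Linear(obs_dim, hidden)
        self.gate = nn.Linear(trait_dim, hidden)    # fast multiplicative weights
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, past_traj, current_obs):
        # past_traj: (B, T, obs_dim); current_obs: (B, obs_dim)
        _, z = self.trait_enc(past_traj)            # z: (1, B, trait_dim)
        z = z.squeeze(0)
        h = torch.tanh(self.inp(current_obs))
        h = h * torch.sigmoid(self.gate(z))         # multiplicative modulation
        return self.head(h)

model = TraitModulatedPredictor()
pred = model(torch.randn(2, 10, 8), torch.randn(2, 8))
```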
arXiv Detail & Related papers (2022-04-17T11:21:18Z) - Quantum realism: axiomatization and quantification [77.34726150561087]
We build an axiomatization for quantum realism -- a notion of realism compatible with quantum theory.
We explicitly construct some classes of entropic quantifiers that are shown to satisfy almost all of the proposed axioms.
arXiv Detail & Related papers (2021-10-10T18:08:42Z) - Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics-informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
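A minimal conditional-PINN sketch, assuming a toy parameterized ODE in place of the eigenvalue problems treated in the paper: a single network u(x, lambda) is trained on the residual of the whole problem family at once.

```python
# Minimal conditional-PINN sketch (toy problem family, not the paper's setup):
# one network u_theta(x, lam) is trained on the residual of u' + lam * u = 0
# with u(0) = 1, for a whole range of the class parameter lam.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(256, 1, requires_grad=True)      # collocation points
    lam = torch.rand(256, 1) * 4.0                  # problem-class parameter
    u = net(torch.cat([x, lam], dim=1))
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du + lam * u                         # ODE residual
    u0 = net(torch.cat([torch.zeros_like(lam), lam], dim=1))
    loss = (residual ** 2).mean() + ((u0 - 1.0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# after training, u_theta(x, lam) approximates exp(-lam * x) across the class
```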
arXiv Detail & Related papers (2021-04-06T18:29:14Z) - Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.