Towards explainable decision support using hybrid neural models for logistic terminal automation
- URL: http://arxiv.org/abs/2509.07577v2
- Date: Wed, 10 Sep 2025 08:04:57 GMT
- Title: Towards explainable decision support using hybrid neural models for logistic terminal automation
- Authors: Riccardo D'Elia, Alberto Termine, Francesco Flammini
- Abstract summary: This paper presents a novel framework for interpretable-by-design neural system dynamics modeling. The proposed hybrid approach enables the construction of neural network models that operate on semantically meaningful and actionable variables. The framework is conceived to be applied to real-world case studies from the EU-funded project AutoMoTIF.
- Score: 1.5364433104428317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of Deep Learning (DL) in System Dynamics (SD) modeling for transportation logistics offers significant advantages in scalability and predictive accuracy. However, these gains are often offset by the loss of explainability and causal reliability – key requirements in critical decision-making systems. This paper presents a novel framework for interpretable-by-design neural system dynamics modeling that synergizes DL with techniques from Concept-Based Interpretability, Mechanistic Interpretability, and Causal Machine Learning. The proposed hybrid approach enables the construction of neural network models that operate on semantically meaningful and actionable variables, while retaining the causal grounding and transparency typical of traditional SD models. The framework is conceived to be applied to real-world case studies from the EU-funded project AutoMoTIF, focusing on data-driven decision support, automation, and optimization of multimodal logistic terminals. We aim to show how neuro-symbolic methods can bridge the gap between black-box predictive models and the need for critical decision support in complex dynamical environments within cyber-physical systems enabled by the industrial Internet-of-Things.
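As a concrete illustration of the idea in the abstract, the following minimal Python sketch embeds a tiny neural function inside a classical stock-flow System Dynamics model, so the learned component still operates on a semantically meaningful, actionable variable (queue length at a terminal). The model, names, and values are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch (not from the paper): a stock-flow SD model of a
# logistic terminal where one flow rate -- container processing -- is
# parameterised by a tiny neural function instead of a hand-tuned constant.

def neural_rate(queue, w1, b1, w2, b2):
    """Tiny MLP mapping queue length to a non-negative processing rate."""
    h = np.tanh(queue * w1 + b1)                 # hidden layer (4 units)
    return float(np.maximum(0.0, h @ w2 + b2))   # rates cannot be negative

def simulate(arrival_rate, hours, params, dt=1.0):
    """Euler integration of the single stock: queued containers."""
    queue, trace = 50.0, []
    for _ in range(int(hours / dt)):
        outflow = neural_rate(queue, *params)            # learned flow
        queue = max(0.0, queue + dt * (arrival_rate - outflow))
        trace.append(queue)
    return trace

rng = np.random.default_rng(0)
params = (rng.normal(size=4), rng.normal(size=4), rng.normal(size=4), 0.5)
trace = simulate(arrival_rate=8.0, hours=24, params=params)
```

Because every state variable keeps its physical meaning, the learned rate function can be inspected or swapped out without touching the causal structure of the stock-flow model.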
Related papers
- Combining feature-based approaches with graph neural networks and symbolic regression for synergistic performance and interpretability [0.0]
MatterVial is an innovative hybrid framework for feature-based machine learning in materials science. Our approach combines the chemical transparency of traditional feature-based models with the predictive power of deep learning architectures. An integrated interpretability module, employing surrogate models and symbolic regression, decodes the latent GNN-derived descriptors into explicit, physically meaningful formulas.
arXiv Detail & Related papers (2025-09-02T16:45:02Z) - Interpretable Neural System Dynamics: Combining Deep Learning with System Dynamics Modeling to Support Critical Applications [0.0]
This proposal aims to bridge the gap between Deep Learning (DL) and System Dynamics (SD) by developing an interpretable neural system dynamics framework. The efficacy of the proposed pipeline will be validated through real-world applications of the EU-funded AutoMoTIF project.
arXiv Detail & Related papers (2025-05-20T14:38:39Z) - Switch-Based Multi-Part Neural Network [0.15749416770494706]
A decentralized and modular neural network framework designed to enhance the scalability, interpretability, and performance of AI systems. At the heart of this framework is a dynamic switch mechanism that governs the selective activation and training of individual neurons.
arXiv Detail & Related papers (2025-04-25T10:39:42Z) - Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships. Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Physics-Informed Machine Learning for Seismic Response Prediction of Nonlinear Steel Moment Resisting Frame Structures [6.483318568088176]
The PiML method integrates scientific principles and physical laws into deep neural networks to model seismic responses of nonlinear structures.
Manipulating the equation of motion helps learn system nonlinearities and confines solutions within physically interpretable results.
The resulting model handles complex data better than existing physics-guided LSTM models and outperforms other non-physics data-driven networks.
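The role of the equation of motion in such physics-informed training can be shown with a minimal, self-contained residual computation (the oscillator, its parameters, and the closed-form response below are illustrative, not from the paper): a physics loss penalises any predicted response x(t) that violates m·x'' + c·x' + k·x = f(t).

```python
import numpy as np

# Illustrative physics residual for m*x'' + c*x' + k*x = f(t); here an
# undamped free oscillator (f = 0), so the exact response has ~zero residual.
m, c, k = 1.0, 0.0, 4.0            # unit mass, no damping, omega = 2
dt = 0.001
t = np.arange(0.0, 1.0, dt)
x = np.cos(2.0 * t)                # exact free-vibration response
v = np.gradient(x, dt)             # numerical x'
a = np.gradient(v, dt)             # numerical x''
residual = m * a + c * v + k * x   # vanishes for the true response
phys_loss = float(np.mean(residual[2:-2] ** 2))  # drop one-sided edges
```

In an actual PiML setup this residual would be evaluated on the network's predicted response and added to the data-fit loss, steering solutions toward physically interpretable results.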
arXiv Detail & Related papers (2024-02-28T02:16:03Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
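A drastically simplified stand-in for the idea of learning governing equations as explicit representations (the paper's Mechanistic Block and NeuRLP solver are far more general): recovering the coefficient of dx/dt = a·x from trajectory data by least squares.

```python
import numpy as np

# Toy illustration only: recover the explicit governing ODE dx/dt = a*x
# from an observed trajectory -- not the paper's NeuRLP method.
a_true, dt = -0.5, 0.01
t = np.arange(0.0, 2.0, dt)
x = np.exp(a_true * t)                       # observed trajectory
dxdt = np.gradient(x, dt)                    # numerical derivative
# Least-squares fit of dx/dt against x yields the explicit coefficient.
a_hat = float(np.linalg.lstsq(x[:, None], dxdt, rcond=None)[0][0])
```

The point of the example is that the learned representation is a readable differential equation rather than opaque weights.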
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - A Transition System Abstraction Framework for Neural Network Dynamical System Models [2.414910571475855]
This paper proposes a transition system abstraction framework for neural network dynamical system models.
The framework is able to abstract a data-driven neural network model into a transition system, making the neural network model interpretable.
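The abstraction step can be sketched in a few lines: partition the state space of a one-dimensional dynamical model into interval cells and record which cell-to-cell transitions the dynamics can make. The paper applies this to learned neural network models; the hand-written map here is a stand-in.

```python
# Hedged sketch of transition-system abstraction over a toy 1-D system.

def step(x):
    return 0.5 * x + 0.1                      # toy contractive dynamics on [0, 1]

def cell_of(x):
    return min(3, int(x * 4))                 # 4 interval cells over [0, 1]

transitions = set()
for i in range(4):
    lo, hi = i / 4, (i + 1) / 4
    for x in (lo, (lo + hi) / 2, hi - 1e-9):  # sample points in each cell
        transitions.add((cell_of(x), cell_of(step(x))))
```

The resulting finite transition system can then be inspected or model-checked directly, which is what makes the underlying model interpretable.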
arXiv Detail & Related papers (2024-02-18T23:49:18Z) - ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Physics-guided Deep Markov Models for Learning Nonlinear Dynamical Systems with Uncertainty [6.151348127802708]
We propose a physics-guided framework, termed Physics-guided Deep Markov Model (PgDMM).
The proposed framework takes advantage of the expressive power of deep learning, while retaining the driving physics of the dynamical system.
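This division of labour can be caricatured in a few lines (the functions here are illustrative stand-ins, not the PgDMM architecture): a known physics integrator produces the backbone prediction, and a learned term corrects for unmodelled dynamics.

```python
import numpy as np

# Illustrative hybrid update: known physics plus a learned correction.
def physics_step(x, dt=0.1):
    return x + dt * (-x)                 # known linear-decay physics

def correction(x):
    return 0.01 * np.sin(x)              # stand-in for a trained network

def hybrid_step(x):
    return physics_step(x) + correction(x)

x = 1.0
for _ in range(10):
    x = hybrid_step(x)                   # state decays toward equilibrium
```

Keeping the physics term explicit is what lets the hybrid model retain the driving physics while the correction absorbs only the residual dynamics.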
arXiv Detail & Related papers (2021-10-16T16:35:12Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.