Industrial brain: a human-like autonomous neuro-symbolic cognitive decision-making system
- URL: http://arxiv.org/abs/2506.23926v1
- Date: Mon, 30 Jun 2025 14:54:52 GMT
- Title: Industrial brain: a human-like autonomous neuro-symbolic cognitive decision-making system
- Authors: Junping Wang, Bicheng Wang, Yibo Xue, Yuan Xie
- Abstract summary: Industrial brain is a human-like autonomous cognitive decision-making and planning framework. It integrates a higher-order activity-driven network and CT-OODA symbolic reasoning to autonomously plan resilience.
- Score: 8.351047624197255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Resilience measurement under non-equilibrium conditions, that is, assessing a system's ability to maintain fundamental functionality amid failures and errors, is crucial for the scientific management and engineering applications of industrial chains. The problem is particularly challenging when the number or types of co-evolving resilience factors (for example, randomly placed failures) are highly chaotic. Existing end-to-end deep learning methods ordinarily do not generalize well to unseen full-field reconstruction of spatiotemporal co-evolution structure or to resilience prediction over network topology, especially in the chaotic data regimes typically seen in real-world applications. To address this challenge, here we propose the industrial brain, a human-like autonomous cognitive decision-making and planning framework that integrates a higher-order activity-driven neural network with CT-OODA symbolic reasoning to autonomously plan resilience directly from observational data of global variables. The industrial brain not only understands and models the structure of node activity dynamics and network co-evolution topology without simplifying assumptions, revealing the underlying laws hidden behind complex networks, but also enables accurate resilience prediction, inference, and planning. Experimental results show that the industrial brain significantly outperforms existing resilience prediction and planning methods, with an accuracy improvement of up to 10.8% over the GoT and OlaGPT frameworks and 11.03% over spectral dimension reduction. It also generalizes to unseen topologies and dynamics and maintains robust performance despite observational disturbances. Our findings suggest that the industrial brain addresses an important gap in resilience prediction and planning for industrial chains.
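The paper's higher-order activity-driven network and CT-OODA reasoning components are not detailed in this digest, so the sketch below is only a rough illustration of the resilience-prediction task the abstract describes: it simulates toy node-activity dynamics on random topologies, labels each network by whether its mean steady-state activity survives, and fits a simple logistic predictor. The function names (`simulate_activity`, `make_sample`), the Michaelis-Menten-style dynamics, and the thresholds are illustrative assumptions, not details from the paper.

```python
# Toy resilience-prediction pipeline (illustrative only, not the paper's method).
# Simulate node-activity dynamics on random topologies, label each network as
# resilient if mean steady-state activity survives, and fit a logistic predictor.
import numpy as np

rng = np.random.default_rng(0)

def simulate_activity(A, beta=2.0, steps=400, dt=0.05):
    """Euler-integrate dx_i/dt = -x_i + beta * sum_j A_ij * x_j / (1 + x_j)."""
    x = rng.uniform(0.5, 1.5, size=A.shape[0])
    for _ in range(steps):
        x = np.clip(x + dt * (-x + beta * (A @ (x / (1.0 + x)))), 0.0, 50.0)
    return x

def make_sample(n=30):
    p = rng.uniform(0.01, 0.12)                    # random edge density
    A = (rng.random((n, n)) < p).astype(float)
    np.fill_diagonal(A, 0.0)
    features = np.array([A.sum() / n,              # mean degree
                         A.sum(axis=1).std()])     # degree heterogeneity
    label = float(simulate_activity(A).mean() > 1.0)   # toy resilience label
    return features, label

# Small dataset plus a hand-rolled logistic-regression fit.
X, y = map(np.array, zip(*(make_sample() for _ in range(200))))
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                               # plain gradient descent
    p_hat = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p_hat - y)) / len(y)
    b -= 0.5 * (p_hat - y).mean()
print("train accuracy:", float(((p_hat > 0.5) == (y > 0.5)).mean()))
```

In a full system, the two hand-crafted topology features would be replaced by a representation learned from the observed trajectories, and the hard threshold by a principled resilience criterion.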
Related papers
- Predicting Large-scale Urban Network Dynamics with Energy-informed Graph Neural Diffusion [51.198001060683296]
Networked urban systems facilitate the flow of people, resources, and services.
Current models such as graph neural networks have shown promise but face a trade-off between efficacy and efficiency.
This paper addresses this trade-off by drawing inspiration from physical laws to inform essential model designs.
arXiv Detail & Related papers (2025-07-31T01:24:01Z) - Advancing network resilience theories with symbolized reinforcement learning [29.97738497697876]
Current resilience theories address the problem from a single perspective of topology, neglecting the crucial role of system dynamics.
Here, we report an automatic method for resilience theory discovery, which learns from how AI solves a complicated network dismantling problem.
This proposed self-inductive approach discovers the first resilience theory that accounts for both topology and dynamics.
arXiv Detail & Related papers (2025-07-04T19:19:35Z) - Self-orthogonalizing attractor neural networks emerging from the free energy principle [0.0]
We formalize how attractor networks emerge from the free energy principle applied to a universal partitioning of random dynamical systems.
Our approach obviates the need for explicitly imposed learning and inference rules.
Our findings offer a unifying theory of self-organizing attractor networks, providing novel insights for AI and neuroscience.
arXiv Detail & Related papers (2025-05-28T18:10:03Z) - Learning Interpretable Network Dynamics via Universal Neural Symbolic Regression [5.813728143193046]
We develop a universal computational tool that can automatically, efficiently, and accurately learn the symbolic changing patterns of complex system states.
Results demonstrate the outstanding effectiveness and efficiency of our tool by comparing with the state-of-the-art symbolic regression techniques for network dynamics.
The application to real-world systems including global epidemic transmission and pedestrian movements has verified its practical applicability.
arXiv Detail & Related papers (2024-11-11T09:51:22Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that ''binding'' between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network.
We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms (a minimal sketch of the classical Kuramoto update they build on appears after this list).
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty quantification, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - TDNetGen: Empowering Complex Network Resilience Prediction with Generative Augmentation of Topology and Dynamics [14.25304439234864]
We introduce a novel resilience prediction framework for complex networks, designed to tackle this issue through generative data augmentation of network topology and dynamics.
Experiment results on three network datasets demonstrate that our proposed framework TDNetGen can achieve high prediction accuracy up to 85%-95%.
arXiv Detail & Related papers (2024-08-19T09:20:31Z) - Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
arXiv Detail & Related papers (2024-02-29T00:02:33Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Stretched and measured neural predictions of complex network dynamics [2.1024950052120417]
Data-driven approximations of differential equations present a promising alternative to traditional methods for uncovering a model of dynamical systems.
A recently employed machine learning tool for studying dynamics is neural networks, which can be used for data-driven solution finding or discovery of differential equations.
We show that extending the model's generalizability beyond traditional statistical learning theory limits is feasible.
arXiv Detail & Related papers (2023-01-12T09:44:59Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations [58.731070632586594]
We provide the first formal analysis for feed-forward neural networks with non-negative monotone activation functions against weight perturbations.
We also design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations.
arXiv Detail & Related papers (2021-03-03T06:17:03Z) - Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures [15.946511512356878]
We show that a fully-learned neural network with the right inductive biases can perform substantially better than all previous neural-symbolic models.
Our model makes critical use of both self-attention and learned "soft" object-centric representations.
arXiv Detail & Related papers (2020-12-15T18:57:40Z)
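As noted in the Artificial Kuramoto Oscillatory Neurons entry above, those units build on the classical Kuramoto model of coupled phase oscillators. The sketch below implements only that classical update, not the paper's AKOrN layer; the coupling constant K, population size N, and frequency spread are arbitrary illustrative choices.

```python
# Classical Kuramoto phase-oscillator update (background for the AKOrN entry above):
#   d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
import numpy as np

rng = np.random.default_rng(1)
N, K, dt = 64, 1.5, 0.05               # oscillators, coupling strength, step size
theta = rng.uniform(0, 2 * np.pi, N)   # initial phases
omega = rng.normal(0.0, 0.5, N)        # natural frequencies

for _ in range(2000):
    # Pairwise phase differences pull each oscillator toward its neighbors.
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / N) * coupling)

# Order parameter r in [0, 1]: r close to 1 means the phases have synchronized.
r = np.abs(np.exp(1j * theta).mean())
print(f"synchronization order parameter r = {r:.3f}")
```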
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.