Reasoning Algorithmically in Graph Neural Networks
- URL: http://arxiv.org/abs/2402.13744v1
- Date: Wed, 21 Feb 2024 12:16:51 GMT
- Title: Reasoning Algorithmically in Graph Neural Networks
- Authors: Danilo Numeroso
- Abstract summary: We aim to integrate the structured and rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
- Score: 1.8130068086063336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of artificial intelligence systems with advanced reasoning
capabilities represents a persistent and long-standing research question.
Traditionally, the primary strategy to address this challenge involved the
adoption of symbolic approaches, where knowledge was explicitly represented by
means of symbols and explicitly programmed rules. However, with the advent of
machine learning, there has been a paradigm shift towards systems that can
autonomously learn from data, requiring minimal human guidance. In light of
this shift, in recent years there has been increasing interest in
endowing neural networks with the ability to reason, bridging the gap between
data-driven learning and logical reasoning. Within this context, Neural
Algorithmic Reasoning (NAR) stands out as a promising research field, aiming to
integrate the structured and rule-based reasoning of algorithms with the
adaptive learning capabilities of neural networks, typically by tasking neural
models to mimic classical algorithms. In this dissertation, we provide
theoretical and practical contributions to this area of research. We explore
the connections between neural networks and tropical algebra, deriving powerful
architectures that are aligned with algorithm execution. Furthermore, we
discuss and show the ability of such neural reasoners to learn and manipulate
complex algorithmic and combinatorial optimization concepts, such as the
principle of strong duality. Finally, in our empirical efforts, we validate the
real-world utility of NAR networks across different practical scenarios. This
includes tasks as diverse as planning, large-scale edge classification,
and the learning of polynomial-time approximate algorithms for NP-hard
combinatorial problems. Through this exploration, we aim to showcase the
potential of integrating algorithmic reasoning into machine learning models.
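The tropical-algebra connection mentioned in the abstract can be made concrete with a small sketch (our own illustration, not code from the dissertation): one message-passing step over the (min, +) tropical semiring is exactly one Bellman-Ford relaxation, which is why architectures that aggregate with min/max align naturally with algorithm execution.

```python
import numpy as np

def min_plus_layer(dist, weights):
    """One message-passing step over the (min, +) tropical semiring:
    tropical 'multiplication' is +, tropical 'addition' is min."""
    messages = dist[:, None] + weights        # messages[u, v] = dist[u] + w(u, v)
    return np.minimum(dist, messages.min(axis=0))

# Toy graph: 0 -> 1 (cost 4), 0 -> 2 (cost 1), 2 -> 1 (cost 2).
w = np.full((3, 3), np.inf)
w[0, 1], w[0, 2], w[2, 1] = 4.0, 1.0, 2.0

d = np.array([0.0, np.inf, np.inf])           # distances from source node 0
for _ in range(2):                            # n - 1 relaxation rounds
    d = min_plus_layer(d, w)                  # one Bellman-Ford iteration
print(d)                                      # [0. 3. 1.]
```

Stacking such layers for n - 1 rounds reproduces the full shortest-path computation, the kind of algorithmic alignment the dissertation formalizes.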
Related papers
- NAR-*ICP: Neural Execution of Classical ICP-based Pointcloud Registration Algorithms [7.542220697870245]
This study explores the intersection of neural networks and classical robotics algorithms through the Neural Algorithmic Reasoning framework.
We propose a Graph Neural Network (GNN)-based learning framework, NAR-*ICP, which learns the intermediate algorithmic steps of classical ICP-based pointcloud registration algorithms.
We evaluate our approach across diverse datasets, from real-world to synthetic, demonstrating its flexibility in handling complex and noisy inputs.
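For context, here is a minimal sketch of one classical ICP iteration, the sort of intermediate algorithmic step NAR-*ICP supervises a GNN to reproduce (2-D, noise-free, and entirely our own simplification):

```python
import numpy as np

def icp_step(src, dst):
    """One classical ICP iteration (2-D sketch): match points to nearest
    neighbours, then solve for the best rigid transform via Kabsch/SVD."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]          # step 1: correspondences
    mu_s, mu_m = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
    R = Vt.T @ U.T                            # step 2: optimal rotation
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s                       # step 2: optimal translation
    return src @ R.T + t                      # step 3: apply the transform

rng = np.random.default_rng(0)
theta = np.pi / 12                            # small initial misalignment
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = rng.uniform(size=(50, 2))
dst = src @ R_true.T + np.array([0.1, -0.05])

for _ in range(20):                           # iterate until convergence
    src = icp_step(src, dst)
print(np.abs(src - dst).max())                # residual shrinks toward 0
```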
arXiv Detail & Related papers (2024-10-14T19:33:46Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives.
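As a toy illustration of properties (i) and (ii) (our construction, not the paper's equations): casting learning as an ODE over the weights and integrating it with an explicit Euler step requires no external solver and recovers ordinary gradient descent as a special case.

```python
import numpy as np

# Toy gradient-flow ODE for linear regression weights:
#   dw/dt = -grad L(w),
# integrated with an explicit Euler step, so no external solver is needed.
def loss_grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w, dt = np.zeros(3), 0.1                      # ODE state and time step
for _ in range(200):
    w = w - dt * loss_grad(w, X, y)           # one explicit Euler step
print(w)                                      # converges toward w_true
```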
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
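The reduction that NeuRLP builds on can be sketched as follows (a deliberately simplified toy, not the paper's formulation): discretizing a linear ODE turns it into linear equality constraints, i.e. a feasibility problem a linear-programming solver can handle directly.

```python
import numpy as np
from scipy.optimize import linprog

# Discretize dy/dt = a*y with y(0) = 1 on a uniform grid; each
# forward-Euler step becomes one linear equality constraint.
a, dt, T = -0.5, 0.1, 20
n = T + 1                                     # unknowns y_0 .. y_T

A_eq = np.zeros((T + 1, n))
b_eq = np.zeros(T + 1)
A_eq[0, 0], b_eq[0] = 1.0, 1.0                # initial condition y_0 = 1
for k in range(T):
    A_eq[k + 1, k + 1] = 1.0
    A_eq[k + 1, k] = -(1.0 + a * dt)          # y_{k+1} = (1 + a*dt) * y_k

res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(None, None))
print(res.x[-1], np.exp(a * dt * T))          # LP solution vs exact e^{a t}
```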
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds [12.037840490243603]
We investigate the internal mechanisms of neural networks through the lens of neural population geometry.
We quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models.
These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry.
arXiv Detail & Related papers (2023-12-21T20:40:51Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than a single clean mechanistic story suggests.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
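A minimal sketch of the surrogate idea, with toy data and hyperparameters of our choosing: train a network, probe it, and fit a Gaussian process to the probes so the opaque model can be studied through an interpretable one.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()

# The "opaque" model: a small trained neural network.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)

# The surrogate: a GP fit to input/output probes of the black box.
X_probe = rng.uniform(-3, 3, size=(50, 1))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X_probe, net.predict(X_probe))

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(gp.predict(X_test) - net.predict(X_test))  # surrogate tracks the net
```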
arXiv Detail & Related papers (2022-08-11T20:17:02Z)
- What can linearized neural networks actually say about generalization? [67.83999394554621]
In certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
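For a concrete sense of the object the linearized analysis works with, here is a sketch of the empirical NTK for a toy two-layer network (our example; the paper's experiments use real architectures): the Gram matrix of parameter gradients whose spectrum governs the linearized learning dynamics.

```python
import numpy as np

# Empirical NTK for f(x) = v . tanh(W x):
#   K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>
rng = np.random.default_rng(0)
d, h, n = 2, 50, 4
W = rng.normal(size=(h, d)) / np.sqrt(d)
v = rng.normal(size=h) / np.sqrt(h)
X = rng.normal(size=(n, d))

def param_grad(x):
    z = np.tanh(W @ x)
    dW = np.outer(v * (1 - z**2), x)          # df/dW by the chain rule
    return np.concatenate([dW.ravel(), z])    # flattened [df/dW, df/dv]

J = np.stack([param_grad(x) for x in X])      # n x (num parameters)
K = J @ J.T                                   # empirical NTK Gram matrix
print(np.linalg.eigvalsh(K))                  # spectrum ranks task difficulty
```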
arXiv Detail & Related papers (2021-06-12T13:05:11Z)
- Brain-Inspired Learning on Neuromorphic Substrates [5.279475826661643]
This article provides a mathematical framework for the design of practical online learning algorithms for neuromorphic substrates.
Specifically, we show a direct connection between Real-Time Recurrent Learning (RTRL) and biologically plausible learning rules for training Spiking Neural Networks (SNNs).
We motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity.
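The flavor of the sparse approximation can be sketched in a few lines (toy sizes and dynamics of our choosing, not the paper's derivation): RTRL carries an influence matrix forward through the recurrent Jacobian, and restricting that Jacobian to diagonal blocks trades a controlled approximation error for a large cut in stored entries.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_blocks = 8, 4
bs = n // n_blocks

J = rng.normal(size=(n, n)) * 0.2             # recurrent Jacobian dh_t/dh_{t-1}
mask = np.kron(np.eye(n_blocks), np.ones((bs, bs)))
J_blk = J * mask                              # block-diagonal approximation

infl_full = infl_blk = np.eye(n)              # influence matrices at t = 0
for _ in range(10):                           # propagate through 10 steps
    infl_full = J @ infl_full
    infl_blk = J_blk @ infl_blk

print(np.abs(infl_full - infl_blk).mean())    # approximation error
print(int(mask.sum()), "of", mask.size, "Jacobian entries kept")
```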
arXiv Detail & Related papers (2020-10-22T17:56:59Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
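A toy rendering of the idea (our own, far smaller than the paper's search spaces): treat components as nodes of a complete graph, attach a learnable parameter to every edge, and aggregate features through a differentiable gate so connectivity is learned jointly with the weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_nodes, feat = 4, 8
alpha = rng.normal(size=(n_nodes, n_nodes))   # learnable edge parameters
h = rng.normal(size=(n_nodes, feat))          # per-node features

gates = sigmoid(alpha) * (1 - np.eye(n_nodes))  # gated edges, no self-loops
h_next = gates @ h                            # differentiable aggregation
print(h_next.shape)                           # (4, 8); gradients reach alpha
```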
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are brain-inspired algorithms that mimic the operational principles of neurons and synapses.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
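To make the event-driven character concrete, here is a textbook leaky integrate-and-fire neuron (a standard model, not taken from the survey): computation happens only at sparse threshold crossings, which is exactly what the hardware strategies discussed in the survey exploit.

```python
import numpy as np

# Leaky integrate-and-fire neuron: membrane potential v integrates input
# current, leaks with time constant tau, and emits a spike event when it
# crosses the threshold, after which it resets.
tau, v_th, v_reset, dt = 20.0, 1.0, 0.0, 1.0  # ms, threshold, reset, step
v, spikes = 0.0, []

current = np.concatenate([np.full(50, 0.08), np.zeros(50)])  # input burst
for t, i_in in enumerate(current):
    v += dt * (-v / tau + i_in)               # leaky integration
    if v >= v_th:                             # threshold crossing = event
        spikes.append(t)
        v = v_reset                           # reset after the spike
print(spikes)                                 # sparse spike times (events)
```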
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.