Combining Optimal Path Search With Task-Dependent Learning in a Neural
Network
- URL: http://arxiv.org/abs/2201.11104v6
- Date: Thu, 2 Nov 2023 09:14:45 GMT
- Title: Combining Optimal Path Search With Task-Dependent Learning in a Neural
Network
- Authors: Tomas Kulvicius, Minija Tamosiunaite and Florentin Wörgötter
- Abstract summary: We show that one can define a neural network representation of path finding problems by transforming cost values into synaptic weights.
We show that network learning mechanisms can adapt the weights in the network, augmenting the resulting paths according to the task at hand.
- Score: 4.1712273169097305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding optimal paths in connected graphs requires determining the smallest
total cost for traveling along the graph's edges. This problem can be solved by
several classical algorithms where, usually, costs are predefined for all
edges. Conventional planning methods therefore normally cannot be used when
one wants to change costs adaptively according to the requirements of a given
task. Here we show that one can define a neural network representation of path
finding problems by transforming cost values into synaptic weights, which
allows for online weight adaptation using network learning mechanisms. When
starting with an initial activity value of one, activity propagation in this
network leads to solutions identical to those found by the Bellman-Ford
algorithm. The neural network has the same algorithmic complexity as
Bellman-Ford and, in addition, we can show that network learning mechanisms
(such as Hebbian learning) can adapt the weights in the network, augmenting the
resulting paths according to the task at hand. We demonstrate this by learning
to navigate in an environment with obstacles as well as by learning to follow
certain sequences of path nodes. Hence, the novel algorithm presented here may
open up a different regime of applications in which path augmentation (by
learning) is directly coupled with path finding in a natural way.
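To make the construction concrete, the following minimal Python sketch illustrates the idea under explicit assumptions: edge costs are mapped to synaptic weights via w = exp(-cost), activity is propagated with a max-product update (so that multiplying weights along a path corresponds to adding costs, and the highest activity corresponds to the Bellman-Ford shortest-path cost), and a generic Hebbian-style rule strengthens edges along a preferred path. The cost-to-weight mapping, the update rule, and the learning rule are illustrative choices, not necessarily the exact formulation used in the paper.

import math

def costs_to_weights(edges):
    # Turn edge costs into "synaptic weights" so that multiplying weights
    # along a path corresponds to adding costs (smaller cost -> larger weight).
    # Illustrative assumption: w = exp(-cost).
    return {(u, v): math.exp(-c) for (u, v), c in edges.items()}

def propagate(weights, nodes, source):
    # Activity propagation: the source starts with activity 1 and every node
    # repeatedly takes the best (max) incoming weighted activity. After
    # |V| - 1 sweeps this mirrors Bellman-Ford relaxation.
    act = {n: 0.0 for n in nodes}
    act[source] = 1.0
    for _ in range(len(nodes) - 1):
        for (u, v), w in weights.items():
            act[v] = max(act[v], act[u] * w)
    return act

def path_costs(act):
    # Recover additive path costs from multiplicative activities.
    return {n: (-math.log(a) if a > 0 else float("inf")) for n, a in act.items()}

def hebbian_boost(weights, path, lr=0.1):
    # Generic Hebbian-style adaptation (illustrative, not the paper's exact rule):
    # strengthen synapses whose pre- and post-synaptic nodes are co-active on a
    # preferred path, so that future propagation favours this route.
    for u, v in zip(path, path[1:]):
        if (u, v) in weights:
            weights[(u, v)] *= (1.0 + lr)
    return weights

if __name__ == "__main__":
    nodes = ["A", "B", "C", "D"]
    edges = {("A", "B"): 1.0, ("B", "D"): 1.0, ("A", "C"): 0.5, ("C", "D"): 2.0}
    w = costs_to_weights(edges)
    print(path_costs(propagate(w, nodes, "A")))  # cost to D: min(1+1, 0.5+2) = 2.0
    w = hebbian_boost(w, ["A", "C", "D"])        # bias the network toward A-C-D

Running the example prints the same source-to-node costs that Bellman-Ford would compute from node A; the Hebbian boost then increases the weights along A-C-D so that subsequent propagation (and hence path read-out) is biased toward that route, which is the sense in which learning augments the found paths.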
Related papers
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of
Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than expected: even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Pathfinding Neural Cellular Automata [23.831530224401575]
Pathfinding is an important sub-component of a broad range of complex AI tasks, such as robot path planning, transport routing, and game playing.
We hand-code and learn models for Breadth-First Search (BFS), i.e., shortest-path finding.
We present a neural implementation of Depth-First Search (DFS), and outline how it can be combined with neural BFS to produce an NCA for computing the diameter of a graph.
We experiment with architectural modifications inspired by these hand-coded NCAs, training networks from scratch to solve the diameter problem on grid mazes while exhibiting strong generalization ability.
arXiv Detail & Related papers (2023-01-17T11:45:51Z) - Learning with Differentiable Algorithms [6.47243430672461]
This thesis explores combining classic algorithms and machine learning systems like neural networks.
The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm.
In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable sorting gates, and differentiable logic gate networks.
arXiv Detail & Related papers (2022-09-01T17:30:00Z) - Increasing Depth of Neural Networks for Life-long Learning [2.0305676256390934]
We propose a novel method for continual learning based on the increasing depth of neural networks.
This work explores whether extending neural network depth may be beneficial in a life-long learning setting.
arXiv Detail & Related papers (2022-02-22T11:21:41Z) - Multi-layered Network Exploration via Random Walks: From Offline
Optimization to Online Learning [33.450042829375086]
The multi-layered network exploration (MuLaNE) problem is an important problem abstracted from many applications.
In MuLaNE, there are multiple network layers where each node has an importance weight and each layer is explored by a random walk.
We systematically study this problem from offline optimization to online learning.
arXiv Detail & Related papers (2021-06-09T13:35:39Z) - Firefly Neural Architecture Descent: a General Approach for Growing
Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - Universality of Gradient Descent Neural Network Training [0.0]
We discuss the question of whether it is always possible to redesign a neural network so that it trains well with gradient descent.
The construction is not intended for practical computations, but it provides some orientation on the possibilities of meta-learning and related approaches.
arXiv Detail & Related papers (2020-07-27T16:17:19Z) - Side-Tuning: A Baseline for Network Adaptation via Additive Side
Networks [95.51368472949308]
Adaptation can be useful in cases when training data is scarce, or when one wishes to encode priors in the network.
In this paper, we propose a straightforward alternative: side-tuning.
arXiv Detail & Related papers (2019-12-31T18:52:32Z)