Principled Simplicial Neural Networks for Trajectory Prediction
- URL: http://arxiv.org/abs/2102.10058v1
- Date: Fri, 19 Feb 2021 17:37:43 GMT
- Title: Principled Simplicial Neural Networks for Trajectory Prediction
- Authors: Nicholas Glaze, T. Mitchell Roddenberry, Santiago Segarra
- Abstract summary: We consider the construction of neural network architectures for data on simplicial complexes.
Based on these properties, we propose a simple convolutional architecture for the problem of trajectory prediction.
We show that it obeys all three of these properties when an odd, nonlinear activation function is used.
- Score: 17.016397531234393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the construction of neural network architectures for data on
simplicial complexes. In studying maps on the chain complex of a simplicial
complex, we define three desirable properties of a simplicial neural network
architecture: namely, permutation equivariance, orientation equivariance, and
simplicial awareness. The first two properties respectively account for the
fact that the node indexing and the simplex orientations in a simplicial
complex are arbitrary. The last property encodes the desirable feature that the
output of the neural network depends on the entire simplicial complex and not
on a subset of its dimensions. Based on these properties, we propose a simple
convolutional architecture, rooted in tools from algebraic topology, for the
problem of trajectory prediction, and show that it obeys all three of these
properties when an odd, nonlinear activation function is used. We then
demonstrate the effectiveness of this architecture in extrapolating
trajectories on synthetic and real datasets, with particular emphasis on the
gains in generalizability to unseen trajectories.
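To make the abstract's architecture concrete, below is a minimal NumPy sketch of one convolutional layer on edge flows, built from the lower and upper Hodge Laplacians of a toy complex. The scalar weights, the single filled triangle, and the exact weight-sharing are illustrative assumptions, not the paper's precise parametrization. The final assertion shows why an odd activation (here tanh) gives orientation equivariance: reversing an edge's reference orientation flips the corresponding sign in the input and the output alike.

```python
import numpy as np

def simplicial_conv_layer(x, B1, B2, w_id, w_low, w_up):
    """One convolutional layer on edge (1-simplex) signals.

    x  : (E,) edge signal; signs follow the chosen edge orientations
    B1 : (N, E) node-to-edge incidence matrix
    B2 : (E, T) edge-to-triangle incidence matrix
    """
    L_low = B1.T @ B1   # lower Hodge Laplacian (edges sharing a node)
    L_up = B2 @ B2.T    # upper Hodge Laplacian (edges sharing a triangle)
    # tanh is odd, so per-edge sign flips pass through the nonlinearity.
    return np.tanh(w_id * x + w_low * (L_low @ x) + w_up * (L_up @ x))

# Toy complex: one filled triangle on nodes {0, 1, 2} with
# edges (0,1), (0,2), (1,2), each oriented low -> high.
B1 = np.array([[-1., -1.,  0.],
               [ 1.,  0., -1.],
               [ 0.,  1.,  1.]])
B2 = np.array([[1.], [-1.], [1.]])   # boundary of the filled triangle
x = np.array([0.3, -0.5, 0.2])       # a flow along the edges
y = simplicial_conv_layer(x, B1, B2, 0.5, 0.2, 0.1)

# Orientation equivariance: reversing edge 1 negates its coordinate in
# the signal and its column/row of the incidence matrices, and the
# output flips sign in exactly the same coordinate.
D = np.diag([1., -1., 1.])
y_flip = simplicial_conv_layer(D @ x, B1 @ D, D @ B2, 0.5, 0.2, 0.1)
assert np.allclose(D @ y, y_flip)
```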
Related papers
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects symbolically defined knowledge about output structure into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
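To make the semantic loss above concrete, here is a small sketch for one symbolic constraint, "exactly one output is true", following the standard closed-form definition: the negative log-probability that a sample drawn from the predicted distribution satisfies the constraint. The function name and example values are ours, not the paper's.

```python
import numpy as np

def semantic_loss_exactly_one(p):
    """Semantic loss for the constraint 'exactly one output is true'.

    p : (n,) predicted probabilities for n binary outputs.
    Returns -log P(a sample drawn from p satisfies the constraint),
    summing over the n satisfying assignments (the one-hot vectors).
    """
    n = len(p)
    satisfied = 0.0
    for i in range(n):
        # probability of the assignment where only output i is true
        satisfied += np.prod(np.where(np.arange(n) == i, p, 1.0 - p))
    return -np.log(satisfied)

# A near-one-hot prediction incurs a small loss; a prediction that
# violates "exactly one" (several outputs high) incurs a large one.
print(semantic_loss_exactly_one(np.array([0.9, 0.05, 0.05])))  # ~0.20
print(semantic_loss_exactly_one(np.array([0.9, 0.9, 0.9])))    # ~3.61
```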
- Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate the converse situation, where the polytope structure of a dataset is inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z)
- On Characterizing the Evolution of Embedding Space of Neural Networks using Algebraic Topology [9.537910170141467]
We study how the topology of feature embedding space changes as it passes through the layers of a well-trained deep neural network (DNN) through Betti numbers.
We demonstrate that as depth increases, a topologically complicated dataset is transformed into a simple one, resulting in Betti numbers attaining their lowest possible value.
arXiv Detail & Related papers (2023-11-08T10:45:12Z)
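As a minimal sketch of the kind of measurement this entry describes, assuming the ripser package for persistent homology (the paper's actual pipeline may differ): estimate Betti numbers of a point cloud at a chosen scale, then apply the same function to the embeddings after each layer of a trained network.

```python
import numpy as np
from ripser import ripser  # persistent homology package (assumed available)

def betti_numbers(X, eps, maxdim=1):
    """Estimate Betti numbers of point cloud X at scale eps by counting
    Vietoris-Rips persistence bars still alive at eps."""
    dgms = ripser(X, maxdim=maxdim)['dgms']
    return [int(np.sum((d[:, 0] <= eps) & (d[:, 1] > eps))) for d in dgms]

# A noisy circle: expect Betti_0 = 1 (one component), Betti_1 = 1 (one loop).
theta = np.random.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(200, 2)
print(betti_numbers(X, eps=0.5))
# Running this on each layer's embedding of the data traces how a
# topologically complicated input is simplified with depth.
```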
- Generalized Simplicial Attention Neural Networks [22.171364354867723]
We introduce Generalized Simplicial Attention Neural Networks (GSANs).
GSANs process data living on simplicial complexes using masked self-attentional layers.
These schemes learn how to combine data associated with neighbor simplices of consecutive order in a task-oriented fashion.
arXiv Detail & Related papers (2023-09-05T11:29:25Z)
- Data Topology-Dependent Upper Bounds of Neural Network Widths [52.58441144171022]
We first show that a three-layer neural network can be designed to approximate an indicator function over a compact set.
This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure.
We prove the universal approximation property of three-layer ReLU networks using our topological approach.
arXiv Detail & Related papers (2023-05-25T14:17:15Z)
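For intuition about the entry above, here is the standard one-dimensional version of such a construction: a trapezoid built from four ReLU units and a linear output that approximates the indicator of an interval. This is a sketch of the textbook idea, not the paper's general compact-set construction, where higher dimensions and the simplicial-complex structure come into play.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def indicator_approx(x, a, b, eps):
    """Trapezoid approximation of the indicator of [a, b]: equals 1 on
    [a, b], 0 outside [a - eps, b + eps], and is linear in between."""
    return (relu(x - a + eps) - relu(x - a)
            - relu(x - b) + relu(x - b - eps)) / eps

x = np.linspace(-1.0, 2.0, 7)
print(indicator_approx(x, a=0.0, b=1.0, eps=0.1))
# -> [0. 0. 1. 1. 1. 0. 0.]
```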
- Simplicial Attention Networks [0.0]
We introduce a proper self-attention mechanism able to process data components at different layers.
We learn how to weight both upper and lower neighborhoods of the given topological domain in a totally task-oriented fashion.
The proposed approach compares favorably with other methods when applied to different (inductive and transductive) tasks.
arXiv Detail & Related papers (2022-03-14T20:47:31Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Vector Neurons: A General Framework for SO(3)-Equivariant Networks [32.81671803104126]
In this paper, we introduce a general framework built on top of what we call Vector Neuron representations.
Our vector neurons enable a simple mapping of SO(3) actions to latent spaces.
We also show, for the first time, a rotation-equivariant reconstruction network.
arXiv Detail & Related papers (2021-04-25T18:48:15Z)
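A minimal sketch of the core idea behind the entry above, with a simplified ReLU variant modeled on the Vector Neurons construction (weight shapes are our assumptions): features are lists of 3-vectors, linear layers mix channels only, and the nonlinearity depends on rotation-invariant inner products, so every operation commutes with a rotation applied on the right.

```python
import numpy as np

def vn_linear(V, W):
    """Vector-neuron linear layer: V is (C, 3), one 3-vector per channel.
    Mixing channels only, it commutes with any rotation applied on the
    right: vn_linear(V @ R, W) == vn_linear(V, W) @ R."""
    return W @ V

def vn_relu(V, U):
    """Simplified vector ReLU in the spirit of Vector Neurons: a learned
    direction per channel; where a feature's inner product with its
    direction is negative, that component is projected out."""
    K = U @ V                                         # directions, (C, 3)
    k_hat = K / (np.linalg.norm(K, axis=1, keepdims=True) + 1e-9)
    dots = np.sum(V * k_hat, axis=1, keepdims=True)   # rotation-invariant
    return np.where(dots >= 0, V, V - dots * k_hat)

# Equivariance check under a rotation about the z-axis.
rng = np.random.default_rng(0)
V = rng.normal(size=(4, 3))
W = rng.normal(size=(5, 4))
U = rng.normal(size=(5, 5))
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rotate_after = vn_relu(vn_linear(V, W), U) @ R
rotate_before = vn_relu(vn_linear(V @ R, W), U)
assert np.allclose(rotate_after, rotate_before)
```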
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
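A simplified sketch of the one-hot propagation idea from the entry above (the tensor shapes and the single shared weight matrix are illustrative assumptions, not the paper's full parametrization): each node maintains a context matrix over all nodes, initialized with one-hot identifiers and updated by aggregating its neighbors' contexts.

```python
import numpy as np

def smp_layer(C, A, W):
    """One structural message-passing step (simplified sketch).

    C : (n, n, d) contexts; C[i, j] is node i's d-dim view of node j
    A : (n, n) adjacency matrix
    W : (d, d) weight matrix standing in for the learned update
    """
    # Node i sums its neighbors' context matrices, so row i gradually
    # accumulates a picture of the graph around node i.
    M = np.einsum('ik,kjd->ijd', A, C)
    return np.tanh(M @ W)

n, d = 5, 4
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)   # a 5-cycle

# One-hot initialization: node i's context marks only node i itself.
C = np.zeros((n, n, d))
C[np.arange(n), np.arange(n), 0] = 1.0
C = smp_layer(C, A, np.eye(d))
# Now C[i, j] is nonzero exactly for i's neighbors j; further layers
# extend this local context to larger neighborhoods.
```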
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.