Revisiting Transformation Invariant Geometric Deep Learning: Are Initial
Representations All You Need?
- URL: http://arxiv.org/abs/2112.12345v1
- Date: Thu, 23 Dec 2021 03:52:33 GMT
- Title: Revisiting Transformation Invariant Geometric Deep Learning: Are Initial
Representations All You Need?
- Authors: Ziwei Zhang, Xin Wang, Zeyang Zhang, Peng Cui, Wenwu Zhu
- Abstract summary: We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
- Score: 80.86819657126041
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Geometric deep learning, i.e., designing neural networks to handle
ubiquitous geometric data such as point clouds and graphs, has achieved great
success in the last decade. One critical inductive bias is that the model can
maintain invariance towards various transformations such as translation,
rotation, and scaling. The existing graph neural network (GNN) approaches can
only maintain permutation-invariance, failing to guarantee invariance with
respect to other transformations. Besides GNNs, other works design
sophisticated transformation-invariant layers, which are computationally
expensive and difficult to extend. To solve this problem, we revisit why
the existing neural networks cannot maintain transformation invariance when
handling geometric data. Our findings show that transformation-invariant and
distance-preserving initial representations are sufficient to achieve
transformation invariance rather than needing sophisticated neural layer
designs. Motivated by these findings, we propose Transformation Invariant
Neural Networks (TinvNN), a straightforward and general framework for geometric
data. Specifically, we realize transformation-invariant and distance-preserving
initial point representations by modifying multi-dimensional scaling before
feeding the representations into neural networks. We prove that TinvNN can
strictly guarantee transformation invariance, being general and flexible enough
to be combined with the existing neural networks. Extensive experimental
results on point cloud analysis and combinatorial optimization demonstrate the
effectiveness and general applicability of our proposed method. Based on the
experimental results, we advocate that TinvNN should be considered a new
starting point and an essential baseline for further studies of
transformation-invariant geometric deep learning.
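The core idea in the abstract can be illustrated with plain classical multi-dimensional scaling (MDS): pairwise Euclidean distances are unchanged by translation, rotation, and reflection, and normalizing them removes global scale, so embedding the normalized distance matrix yields initial coordinates that depend only on the shape of the point cloud. The sketch below is a minimal NumPy illustration of this principle, not the paper's exact modified-MDS algorithm; the function name and the max-distance normalization are assumptions, and classical MDS still leaves a sign/reflection ambiguity in the embedding that the paper's modification is designed to resolve.

```python
import numpy as np

def invariant_initial_representations(points, dim=2):
    """Sketch: transformation-invariant initial coordinates via classical MDS.

    `points` is an (n, d) array. The output depends only on the (scale-
    normalized) pairwise distances, so it is unchanged under translation,
    rotation, reflection, and uniform scaling of the input.
    """
    # Pairwise distance matrix: invariant to rigid transformations.
    diff = points[:, None, :] - points[None, :, :]
    D = np.linalg.norm(diff, axis=-1)
    # Normalize by the largest distance to remove global scale.
    D = D / D.max()
    # Classical MDS: double-center the squared distances...
    n = len(points)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # ...then embed with the top eigenvectors of the Gram matrix.
    w, v = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]
    return v[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

These coordinates (or features derived from them) would then be fed into any downstream network, which is what makes the framework composable with existing architectures.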
Related papers
- Deep Learning as Ricci Flow [38.27936710747996]
Deep neural networks (DNNs) are powerful tools for approximating the distribution of complex data.
We show that the transformations performed by DNNs during classification tasks have parallels to those expected under Hamilton's Ricci flow.
Our findings motivate the application of tools from differential and discrete geometry to the problem of explainability in deep learning.
arXiv Detail & Related papers (2024-04-22T15:12:47Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Affine Invariance in Continuous-Domain Convolutional Neural Networks [6.019182604573028]
This research studies affine invariance on continuous-domain convolutional neural networks.
We introduce a new criterion to assess the similarity of two input signals under affine transformations.
Our research could eventually extend the scope of geometrical transformations that practical deep-learning pipelines can handle.
arXiv Detail & Related papers (2023-11-13T14:17:57Z) - Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z) - Learning Invariant Representations for Equivariant Neural Networks Using
Orthogonal Moments [9.680414207552722]
The convolutional layers of standard convolutional neural networks (CNNs) are equivariant to translation.
Recently, a new class of CNNs has been proposed in which the conventional layers of CNNs are replaced with equivariant convolution, pooling, and batch-normalization layers.
arXiv Detail & Related papers (2022-09-22T11:48:39Z) - Leveraging Equivariant Features for Absolute Pose Regression [9.30597356471664]
We show that a translation and rotation equivariant Convolutional Neural Network directly induces representations of camera motions into the feature space.
We then show that this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations.
arXiv Detail & Related papers (2022-04-05T12:44:20Z) - Improving the Sample-Complexity of Deep Classification Networks with
Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Orthogonal Graph Neural Networks [53.466187667936026]
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
However, stacking more convolutional layers significantly decreases the performance of GNNs.
We propose a novel Ortho-GConv, which can generally augment existing GNN backbones to stabilize model training and improve generalization performance.
arXiv Detail & Related papers (2021-09-23T12:39:01Z) - Self-Supervised Graph Representation Learning via Topology
Transformations [61.870882736758624]
We present the Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to the downstream node and graph classification tasks, and results show that the proposed method outperforms the state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2021-05-25T06:11:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.