Permutation Invariant Representations with Applications to Graph Deep
Learning
- URL: http://arxiv.org/abs/2203.07546v1
- Date: Mon, 14 Mar 2022 23:13:59 GMT
- Title: Permutation Invariant Representations with Applications to Graph Deep
Learning
- Authors: Radu Balan, Naveed Haghani, Maneesh Singh
- Abstract summary: This paper presents two Euclidean embeddings of quotient space generated by matrices that are identified modulo arbitrary row permutations.
An almost everywhere injective scheme can be implemented with minimal redundancy and low computational cost.
- Score: 8.403227482145297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents primarily two Euclidean embeddings of the quotient space
generated by matrices that are identified modulo arbitrary row permutations.
The original application is in deep learning on graphs where the learning task
is invariant to node relabeling. Two embedding schemes are introduced, one
based on sorting and the other based on algebras of multivariate polynomials.
While both embeddings exhibit a computational complexity exponential in problem
size, the sorting based embedding is globally bi-Lipschitz and admits a low
dimensional target space. Additionally, an almost everywhere injective scheme
can be implemented with minimal redundancy and low computational cost. In turn,
this proves that almost any classifier can be implemented with an arbitrarily
small loss of performance. Numerical experiments are carried out on two data
sets, a chemical compound data set (QM9) and a proteins data set (PROTEINS).
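The core idea behind the sorting-based embedding can be illustrated with a minimal sketch: sorting the rows of a matrix produces a canonical representative of its orbit under row permutations, so any two matrices that differ only by a row permutation map to the same point. Note this is an illustration of the invariance principle only, not the paper's actual bi-Lipschitz construction; the function name and the flattening step are ours.

```python
import numpy as np

def sorted_rows_embedding(X: np.ndarray) -> np.ndarray:
    """Map a matrix to a vector that is invariant to row permutations
    by lexicographically sorting its rows and flattening the result."""
    # np.lexsort treats its last key as the primary sort key, so we feed
    # the columns in reverse order to sort rows lexicographically.
    order = np.lexsort(X.T[::-1])
    return X[order].ravel()

A = np.array([[3.0, 1.0], [0.0, 2.0], [3.0, 0.0]])
P = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])  # a row-permutation matrix
# The embedding agrees on A and P @ A, since they share the same row set.
assert np.array_equal(sorted_rows_embedding(A), sorted_rows_embedding(P @ A))
```

The same invariance is what makes such embeddings suitable for graph learning tasks where node relabeling permutes the rows of a node-feature matrix.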
Related papers
- From Exponential to Polynomial Complexity: Efficient Permutation Counting with Subword Constraints [0.0]
Counting distinct permutations with replacement, especially when involving multiple subwords, is a longstanding challenge in analysis.
This paper introduces a novel framework that presents closed-form formulas for calculating distinct permutations with replacement.
We then extend our foundational formula to handle multiple subwords through the development of an additional formula.
arXiv Detail & Related papers (2024-11-23T19:52:11Z)
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z)
- Covering Number of Real Algebraic Varieties and Beyond: Improved Bounds and Applications [8.438718130535296]
We prove upper bounds on the covering number of sets in Euclidean space.
We show that our bounds improve the best known general bound by Yomdin-Comte.
We illustrate the power of the result on three computational applications.
arXiv Detail & Related papers (2023-11-09T03:06:59Z)
- Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z)
- Discrete Graph Auto-Encoder [52.50288418639075]
We introduce a new framework named Discrete Graph Auto-Encoder (DGAE).
We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations.
In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model.
arXiv Detail & Related papers (2023-06-13T12:40:39Z)
- One-step Bipartite Graph Cut: A Normalized Formulation and Its Application to Scalable Subspace Clustering [56.81492360414741]
We show how to enforce a one-step normalized cut for bipartite graphs with linear-time complexity.
In this paper, we first characterize a novel one-step bipartite graph cut criterion with normalized constraints, and theoretically prove its equivalence to a trace problem.
We extend this cut criterion to a scalable subspace clustering approach, where adaptive anchor learning, bipartite graph learning, and one-step normalized bipartite graph partitioning are simultaneously modeled.
arXiv Detail & Related papers (2023-05-12T11:27:20Z)
- Fast computation of permutation equivariant layers with the partition algebra [0.0]
Linear neural network layers that are either equivariant or invariant to permutations of their inputs form core building blocks of modern deep learning architectures.
Examples include the layers of DeepSets, as well as linear layers occurring in attention blocks of transformers and some graph neural networks.
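The simplest permutation-equivariant linear layer is the two-parameter DeepSets form, which combines a per-element term with a pooled term. The sketch below (the function name and parameters are illustrative, and this is not the partition-algebra construction of the cited paper) demonstrates the equivariance property: permuting the input rows permutes the output rows identically.

```python
import numpy as np

def equivariant_linear(X: np.ndarray, lam: float, gam: float) -> np.ndarray:
    """DeepSets-style permutation-equivariant linear layer:
    f(X) = lam * X + gam * (row mean of X, broadcast to every row)."""
    return lam * X + gam * X.mean(axis=0, keepdims=True)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = equivariant_linear(X, 2.0, 1.0)
P = np.array([[0.0, 1.0], [1.0, 0.0]])  # swap the two rows
# Equivariance: f(P @ X) == P @ f(X)
assert np.allclose(equivariant_linear(P @ X, 2.0, 1.0), P @ Y)
```

The pooled (mean) term is what couples the set elements while preserving equivariance; dropping it recovers an ordinary elementwise linear map.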
arXiv Detail & Related papers (2023-03-10T21:13:12Z)
- Combining Varied Learners for Binary Classification using Stacked Generalization [3.1871776847712523]
This paper performs binary classification using Stacked Generalization on high dimensional Polycystic Ovary Syndrome dataset.
The paper reports various evaluation metrics and points out a subtle flaw in the Receiver Operating Characteristic curve analysis that was shown to be incorrect.
arXiv Detail & Related papers (2022-02-17T21:47:52Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
arXiv Detail & Related papers (2021-04-19T17:21:54Z)
- Solution Path Algorithm for Twin Multi-class Support Vector Machine [6.97711662470035]
The paper presents a fast regularization-parameter tuning algorithm for the twin multi-class support vector machine.
A new sample dataset division method is adopted, and the Lagrangian multipliers are proved to be piecewise linear.
The proposed method achieves good classification performance while reducing the computational cost of grid search from exponential to constant.
arXiv Detail & Related papers (2020-05-30T14:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.