Three iterations of $(d-1)$-WL test distinguish non isometric clouds of $d$-dimensional points
- URL: http://arxiv.org/abs/2303.12853v3
- Date: Sat, 19 Apr 2025 16:26:51 GMT
- Title: Three iterations of $(d-1)$-WL test distinguish non isometric clouds of $d$-dimensional points
- Authors: Valentino Delle Rose, Alexander Kozachinskiy, Cristóbal Rojas, Mircea Petrache, Pablo Barceló
- Abstract summary: We study when the Weisfeiler--Lehman test is complete for clouds of Euclidean points represented by complete distance graphs. How many dimensions are enough to distinguish any two non-isometric point clouds in $d$-dimensional Euclidean space? Our main result states that the $(d-1)$-dimensional WL test is complete for point clouds in $d$-dimensional Euclidean space, for any $d\ge 2$, and that only three iterations of the test suffice.
- Score: 45.15780579276503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Weisfeiler--Lehman (WL) test is a fundamental iterative algorithm for checking isomorphism of graphs. It has also been observed that it underlies the design of several graph neural network architectures, whose capabilities and performance can be understood in terms of the expressive power of this test. Motivated by recent developments in machine learning applications to datasets involving three-dimensional objects, we study when the WL test is {\em complete} for clouds of Euclidean points represented by complete distance graphs, i.e., when it can distinguish, up to isometry, any arbitrary such cloud. How many dimensions of the Weisfeiler--Lehman test are enough to distinguish any two non-isometric point clouds in $d$-dimensional Euclidean space, assuming that these point clouds are given as complete graphs labeled by distances between the points? This question is important for understanding which architectures of graph neural networks are capable of fully exploiting the spatial structure of a point cloud. Our main result states that the $(d-1)$-dimensional WL test is complete for point clouds in $d$-dimensional Euclidean space, for any $d\ge 2$, and that only three iterations of the test suffice. We also observe that the $d$-dimensional WL test requires only one iteration to achieve completeness. Our paper thus provides a complete understanding of the 3-dimensional case: it was shown in previous works that 1-WL is not complete in $\mathbb{R}^3$, and we show that 2-WL is complete there. We also strengthen the lower bound for 1-WL by showing that it is unable to recognize planar point clouds in $\mathbb{R}^3$. Finally, we show that 2-WL is not complete in $\mathbb{R}^6$, leaving as an open question whether it is complete in $\mathbb{R}^{d}$ for $d = 4, 5$.
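To illustrate how the WL test operates on a point cloud's complete distance graph, here is a minimal Python sketch of 1-WL color refinement: each point's color is iteratively refined by the multiset of (neighbor color, edge distance) pairs. The function name `wl1_colors` and the rounding of distances to sidestep floating-point noise are illustrative choices, not part of the paper's formalization.

```python
import numpy as np

def wl1_colors(points, iterations=3):
    """1-WL (color refinement) on the complete distance graph of a point
    cloud. Returns the sorted multiset of final colors; two clouds with
    different multisets are provably non-isometric (the converse can fail,
    since 1-WL is incomplete in general)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # pairwise Euclidean distances, rounded to tame floating-point noise
    dist = np.round(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1), 9)
    colors = [0] * n  # every point starts with the same color
    for _ in range(iterations):
        # signature = own color + multiset of (neighbor color, distance) pairs
        signatures = [
            (colors[i],
             tuple(sorted((colors[j], dist[i, j]) for j in range(n) if j != i)))
            for i in range(n)
        ]
        # relabel signatures injectively with small integers
        palette = {sig: c for c, sig in enumerate(sorted(set(signatures)))}
        colors = [palette[sig] for sig in signatures]
    return sorted(colors)

# Two congruent triangles receive the same color multiset ...
tri = [(0, 0), (1, 0), (0, 1)]
tri_rotated = [(0, 0), (0, 1), (-1, 0)]  # tri rotated by 90 degrees
assert wl1_colors(tri) == wl1_colors(tri_rotated)
# ... while a non-congruent triangle is distinguished.
assert wl1_colors(tri) != wl1_colors([(0, 0), (2, 0), (0, 1)])
```

The paper's result concerns the stronger $k$-WL hierarchy, which colors $k$-tuples of points rather than single points; this sketch shows only the $k=1$ base case that the paper proves incomplete in $\mathbb{R}^3$.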
Related papers
- Weisfeiler Leman for Euclidean Equivariant Machine Learning [3.0222726571099665]
We show that PPGN can simulate $2$-WL uniformly on all point clouds with low complexity.
Secondly, we show that $2$-WL tests can be extended to point clouds which include both positions and velocities.
arXiv Detail & Related papers (2024-02-04T13:25:18Z)
- On the Power of the Weisfeiler-Leman Test for Graph Motif Parameters [9.599347633285637]
The $k$-dimensional Weisfeiler-Leman ($k$WL) test is a widely-recognized method for verifying graph isomorphism.
This paper provides a precise characterization of the WL-dimension of labeled graph motif parameters.
arXiv Detail & Related papers (2023-09-29T08:26:44Z)
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering on the point embedding space.
Our algorithm shows notable improvements on famous point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- Weisfeiler and Leman Go Measurement Modeling: Probing the Validity of the WL Test [58.1543112860658]
We uncover misalignments between graph machine learning practitioners' conceptualizations of expressive power and $k$-WL.
We argue for extensional definitions and measurement of expressive power based on benchmarks.
arXiv Detail & Related papers (2023-07-11T20:06:12Z)
- Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman [66.23316415757456]
Message passing neural networks (MPNNs) have emerged as the most popular framework of graph neural networks (GNNs) in recent years.
However, their expressive power is limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test.
We propose an extension, $(k,t)$-FWL, which considers any equivariant set as neighbors instead of all nodes.
N$^2$-GNN achieves record-breaking results on ZINC-Subset (0.059), outperforming previous SOTA results by 10.6%.
arXiv Detail & Related papers (2023-06-05T21:35:32Z)
- Point2Vec for Self-Supervised Representation Learning on Point Clouds [66.53955515020053]
We extend data2vec to the point cloud domain and report encouraging results on several downstream tasks.
We propose point2vec, which unleashes the full potential of data2vec-like pre-training on point clouds.
arXiv Detail & Related papers (2023-03-29T10:08:29Z)
- Complete Neural Networks for Complete Euclidean Graphs [4.416503115535553]
We show that point clouds can be completely determined, up to permutation and rigid motion, by applying the 3-WL graph isomorphism test to the point cloud's centralized Gram matrix.
We then show how our complete Euclidean tests can be simulated by a Euclidean graph neural network of moderate size and demonstrate their separation capability on highly symmetrical point clouds.
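The centralized Gram matrix mentioned above is simple to compute: translate the cloud's centroid to the origin and take inner products of the centered points. Rotations and reflections leave the matrix unchanged, and relabeling the points only permutes its rows and columns. A minimal sketch (the function name `centralized_gram` is ours, not that paper's):

```python
import numpy as np

def centralized_gram(points):
    """Gram matrix of a point cloud after centering at its centroid.
    Invariant under rotations/reflections: (C R^T)(C R^T)^T = C C^T
    for any orthogonal R; translations are absorbed by the centering."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)  # move centroid to the origin
    return centered @ centered.T

# A rigid motion of the cloud leaves the centralized Gram matrix unchanged.
cloud = np.array([(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)])
rot = np.array([[0.6, -0.8],
                [0.8,  0.6]])  # a rotation matrix (det = 1)
moved = cloud @ rot.T + np.array([3.0, -1.0])  # rotate, then translate
assert np.allclose(centralized_gram(cloud), centralized_gram(moved))
```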
arXiv Detail & Related papers (2023-01-31T18:07:26Z)
- WL meet VC [9.468816647649104]
We study the expressive power of graph neural networks (GNNs) through the lens of Vapnik--Chervonenkis (VC) dimension theory.
We derive an upper bound for GNNs' VC dimension in terms of the number of colors produced by the $1$-WL test.
arXiv Detail & Related papers (2023-01-26T11:29:36Z)
- MATE: Masked Autoencoders are Online 3D Test-Time Learners [63.3907730920114]
MATE is the first Test-Time-Training (TTT) method designed for 3D data.
It makes deep networks trained for point cloud classification robust to distribution shifts occurring in test data.
arXiv Detail & Related papers (2022-11-21T13:19:08Z)
- MetaSets: Meta-Learning on Point Sets for Generalizable Representations [100.5981809166658]
We study a new problem of 3D Domain Generalization (3DDG) with the goal to generalize the model to other unseen domains of point clouds without access to them in the training process.
We propose to tackle this problem via MetaSets, which meta-learns point cloud representations from a group of classification tasks on carefully-designed transformed point sets.
We design two benchmarks for Sim-to-Real transfer of 3D point clouds. Experimental results show that MetaSets outperforms existing 3D deep learning methods by large margins.
arXiv Detail & Related papers (2022-04-15T03:24:39Z)
- On the emergence of tetrahedral symmetry in the final and penultimate layers of neural network classifiers [9.975163460952045]
We show that even the final output of the classifier $h$ is not uniform over data samples from a class $C_i$ if $h$ is a shallow network.
We explain this observation analytically in toy models for highly expressive deep neural networks.
arXiv Detail & Related papers (2020-12-10T02:32:52Z)
- PMP-Net: Point Cloud Completion by Learning Multi-step Point Moving Paths [54.459879603473034]
We design a novel neural network, named PMP-Net, to mimic the behavior of an earth mover.
It moves each point of the incomplete input to complete the point cloud, where the total distance of point moving paths should be shortest.
It learns a strict and unique correspondence on point-level, which can capture the detailed topology and structure relationships between the incomplete shape and the complete target.
arXiv Detail & Related papers (2020-12-07T01:34:38Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.