Generalised Implicit Neural Representations
- URL: http://arxiv.org/abs/2205.15674v1
- Date: Tue, 31 May 2022 10:32:56 GMT
- Title: Generalised Implicit Neural Representations
- Authors: Daniele Grattarola, Pierre Vandergheynst
- Abstract summary: We consider the problem of learning implicit neural representations (INRs) for signals on non-Euclidean domains.
In the Euclidean case, INRs are trained on a discrete sampling of a signal over a regular lattice.
We show experiments with our method on various real-world signals on non-Euclidean domains.
- Score: 10.579386545934108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning implicit neural representations (INRs)
for signals on non-Euclidean domains. In the Euclidean case, INRs are trained
on a discrete sampling of a signal over a regular lattice. Here, we assume that
the continuous signal exists on some unknown topological space from which we
sample a discrete graph. In the absence of a coordinate system to identify the
sampled nodes, we propose approximating their location with a spectral
embedding of the graph. This allows us to train INRs without knowing the
underlying continuous domain, which is the case for most graph signals in
nature, while also making the INRs equivariant under the symmetry group of the
domain. We show experiments with our method on various real-world signals on
non-Euclidean domains.
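The core idea above — use a spectral embedding of the sampled graph as surrogate coordinates, then fit an INR (an MLP) from those coordinates to the signal — can be sketched as follows. The ring graph, network sizes, and training loop are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-Euclidean domain: a ring graph on n nodes (adjacency A). The ring
# stands in for "a discrete graph sampled from an unknown topological space".
n = 32
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Spectral embedding: the first k non-trivial Laplacian eigenvectors act as
# surrogate coordinates for the sampled nodes (rescaled to unit variance).
L = np.diag(A.sum(1)) - A
_, eigvecs = np.linalg.eigh(L)
k = 4
coords = eigvecs[:, 1:k + 1] * np.sqrt(n)

# Target graph signal: a smooth function of the (hidden) ring angle.
theta = 2 * np.pi * np.arange(n) / n
y = np.sin(theta)[:, None]

# Tiny one-hidden-layer MLP trained with full-batch gradient descent.
h = 64
W1 = rng.normal(0, 1 / np.sqrt(k), (k, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 1 / np.sqrt(h), (h, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    z = np.tanh(coords @ W1 + b1)
    err = (z @ W2 + b2) - y
    dz = (err @ W2.T) * (1 - z ** 2)            # backprop through tanh
    W2 -= lr * (z.T @ err) / n;     b2 -= lr * err.mean(0)
    W1 -= lr * (coords.T @ dz) / n; b1 -= lr * dz.mean(0)

mse = float(np.mean(((np.tanh(coords @ W1 + b1) @ W2 + b2) - y) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

Note that the MLP never sees the ring angle: the Laplacian eigenvectors recover sinusoidal coordinates on the ring, which is why the embedding can stand in for a coordinate system on the unknown domain.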
Related papers
- Towards Inductive Robustness: Distilling and Fostering Wave-induced
Resonance in Transductive GCNs Against Graph Adversarial Attacks [56.56052273318443]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks, where slight perturbations in the graph structure can lead to erroneous predictions.
Here, we discover that transductive GCNs inherently possess a distillable robustness, achieved through a wave-induced resonance process.
We present Graph Resonance-fostering Network (GRN) to foster this resonance via learning node representations.
arXiv Detail & Related papers (2023-12-14T04:25:50Z)
- Non Commutative Convolutional Signal Models in Neural Networks: Stability to Small Deformations [111.27636893711055]
We study the filtering and stability properties of non-commutative convolutional filters.
Our results have direct implications for group neural networks, multigraph neural networks, and quaternion neural networks.
arXiv Detail & Related papers (2023-10-05T20:27:22Z)
- Implicit Neural Representations and the Algebra of Complex Wavelets [36.311212480600794]
Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains.
By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively represent signals in a way that couples the spatial and spectral features of the signal, a coupling that is not obvious in the usual discrete representation.
arXiv Detail & Related papers (2023-10-01T02:01:28Z)
- Signal Reconstruction from Samples at Unknown Locations with Application to 2D Unknown View Tomography [2.284915385433677]
We prove that reconstruction methods are resilient to a certain proportion of errors in the specification of the sample location ordering.
This is the first piece of work to perform such an analysis for 2D UVT and explicitly relate it to advances in sampling theory.
arXiv Detail & Related papers (2023-04-13T10:01:29Z)
- Distributional Signals for Node Classification in Graph Neural Networks [36.30743671968087]
In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP).
In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals.
We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks.
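A minimal sketch of what a smoothness measure on distributional graph signals might look like, assuming the combinatorial-Laplacian Dirichlet energy as the smoothness notion — the paper's exact definitions of smoothness and non-uniformity may differ:

```python
# Treat the model's per-node class distributions P (rows sum to 1) as a graph
# signal and penalize its Dirichlet energy tr(P^T L P), which equals the sum
# over edges of ||p_i - p_j||^2 and is small when neighbouring nodes carry
# similar distributions. Graph and distributions below are toy assumptions.
import numpy as np

def dirichlet_energy(P, A):
    """tr(P^T L P) for a label-distribution matrix P on a graph with adjacency A."""
    L = np.diag(A.sum(1)) - A
    return float(np.trace(P.T @ L @ P))

# Toy graph: two triangles joined by a single edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0,1),(1,2),(0,2),(3,4),(4,5),(3,5),(2,3)]:
    A[i, j] = A[j, i] = 1.0

# Distributions that agree within each triangle are smoother (lower energy)
# than distributions that alternate between neighbours.
P_smooth = np.array([[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 3)
P_rough  = np.array([[0.9, 0.1], [0.1, 0.9]] * 3)
print(dirichlet_energy(P_smooth, A), dirichlet_energy(P_rough, A))
```

In a semi-supervised setting such a term would simply be added to the classification loss, so that gradient descent trades label fit against distributional smoothness.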
arXiv Detail & Related papers (2023-04-07T06:54:42Z)
- DINER: Disorder-Invariant Implicit Neural Representation [33.10256713209207]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a hash-table to a traditional INR backbone.
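The hash-table idea can be illustrated with a minimal sketch: each sample index looks up a learned latent in a table, and a backbone network maps that latent to the signal value. Because the table entries are free per sample, the fit does not depend on any ordering of the input coordinates. Here the backbone is a frozen linear map and only the table is trained, purely for illustration; DINER trains both, and its architecture is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 8                                   # samples, latent width
y = np.sin(np.linspace(0, 3 * np.pi, n))        # toy 1D signal to represent

w = rng.normal(size=d)
w /= np.linalg.norm(w)                          # frozen unit-norm "backbone"
table = np.zeros((n, d))                        # learnable hash-table of latents

# Gradient descent on 0.5 * sum((table @ w - y)^2) w.r.t. the table only.
# With a unit-norm w, each step halves the residual at lr = 0.5.
lr = 0.5
for _ in range(60):
    err = table @ w - y
    table -= lr * np.outer(err, w)

mse = float(np.mean((table @ w - y) ** 2))
print(f"fit MSE: {mse:.3e}")
```

The per-sample latents do the heavy lifting here, which is the point: the table absorbs the "disorder" of the coordinates, leaving the backbone an easy, smooth mapping to learn.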
arXiv Detail & Related papers (2022-11-15T03:34:24Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Embedding Signals on Knowledge Graphs with Unbalanced Diffusion Earth Mover's Distance [63.203951161394265]
In modern machine learning it is common to encounter large graphs that arise via interactions or similarities between observations in many domains.
We propose to compare and organize such datasets of graph signals by using an earth mover's distance (EMD) with a geodesic cost over the underlying graph.
In each case, we show that UDEMD-based embeddings recover accurate distances while being far more efficient to compute than other methods.
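A hedged sketch of the general diffusion-EMD idea, not the paper's exact unbalanced construction: diffuse two node distributions at several scales with a lazy random-walk operator and sum scale-weighted L1 differences, so that mass which is close on the graph is cheap to match. The scales and weights below are illustrative assumptions:

```python
import numpy as np

def diffusion_emd(mu, nu, A, scales=(1, 2, 4, 8)):
    """Scale-weighted L1 distance between diffused node distributions."""
    n = len(mu)
    P = 0.5 * (np.eye(n) + A / A.sum(1)[:, None])   # lazy random-walk operator
    diff = mu - nu
    total = 0.0
    for t in scales:
        # Finer (smaller-t) scales get larger weight, mimicking multiscale EMD.
        total += (1.0 / t) * np.abs(diff @ np.linalg.matrix_power(P, t)).sum()
    return float(total)

# Deltas on a 5-node path graph: mass on adjacent nodes should come out
# closer than mass on opposite ends.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
e = np.eye(5)
near = diffusion_emd(e[0], e[1], A)
far = diffusion_emd(e[0], e[4], A)
print(near, far)
```

The appeal of this family of methods is that diffusion is a sparse matrix-vector operation, which is what makes such distances scale to large graphs.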
arXiv Detail & Related papers (2021-07-26T17:19:02Z)
- Spectral-Spatial Global Graph Reasoning for Hyperspectral Image Classification [50.899576891296235]
Convolutional neural networks have been widely applied to hyperspectral image classification.
Recent methods attempt to address this issue by performing graph convolutions on spatial topologies.
arXiv Detail & Related papers (2021-06-26T06:24:51Z)
- Offline detection of change-points in the mean for stationary graph signals [55.98760097296213]
We propose an offline method that relies on the concept of graph signal stationarity.
Our detector comes with a proof of a non-asymptotic oracle inequality.
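As a generic illustration, not the paper's detector: an offline scan for a change in the mean of a sequence of graph signals can use a CUSUM-type statistic on graph Fourier coefficients, picking the split point that maximizes the scaled gap between the two segment means. The graph, noise level, and statistic below are illustrative assumptions:

```python
import numpy as np

def detect_change(X, U):
    """X: (T, n) graph signals over time; U: (n, n) Laplacian eigenvectors.
    Returns the split point maximizing a CUSUM-type statistic."""
    Xf = X @ U                                  # graph Fourier transform per signal
    T = len(Xf)
    best_t, best_s = None, -np.inf
    for t in range(1, T):
        gap = Xf[:t].mean(0) - Xf[t:].mean(0)
        s = np.sqrt(t * (T - t) / T) * np.linalg.norm(gap)
        if s > best_s:
            best_t, best_s = t, s
    return best_t

# Toy example: 3-node complete graph; the signal mean jumps at time 30.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)
rng = np.random.default_rng(1)
X = rng.normal(0, 0.1, (60, 3))
X[30:] += 1.0
print(detect_change(X, U))
```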
arXiv Detail & Related papers (2020-06-18T15:51:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.