Discretization Invariant Networks for Learning Maps between Neural
Fields
- URL: http://arxiv.org/abs/2206.01178v4
- Date: Thu, 19 Oct 2023 19:55:39 GMT
- Title: Discretization Invariant Networks for Learning Maps between Neural
Fields
- Authors: Clinton J. Wang and Polina Golland
- Abstract summary: We present a new framework for understanding and designing discretization invariant neural networks (DI-Nets).
Our analysis establishes upper bounds on the deviation in model outputs under different finite discretizations.
We prove by construction that DI-Nets universally approximate a large class of maps between integrable function spaces.
- Score: 3.09125960098955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the emergence of powerful representations of continuous data in the form
of neural fields, there is a need for discretization invariant learning: an
approach for learning maps between functions on continuous domains without
being sensitive to how the function is sampled. We present a new framework for
understanding and designing discretization invariant neural networks (DI-Nets),
which generalizes many discrete networks such as convolutional neural networks
as well as continuous networks such as neural operators. Our analysis
establishes upper bounds on the deviation in model outputs under different
finite discretizations, and highlights the central role of point set
discrepancy in characterizing such bounds. This insight leads to the design of
a family of neural networks driven by numerical integration via quasi-Monte
Carlo sampling with discretizations of low discrepancy. We prove by
construction that DI-Nets universally approximate a large class of maps between
integrable function spaces, and show that discretization invariance also
describes backpropagation through such models. Applied to neural fields,
convolutional DI-Nets can learn to classify and segment visual data under
various discretizations, and sometimes generalize to new types of
discretizations at test time. Code: https://github.com/clintonjwang/DI-net.
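To make the quasi-Monte Carlo idea in the abstract concrete, below is a minimal sketch of an integral-style layer evaluated on two different discretizations of the same field. This is not the authors' DI-Net implementation and is independent of the linked repository; the names field_fn and integral_layer and the toy kernel are illustrative assumptions, with only NumPy and SciPy's scipy.stats.qmc module assumed as dependencies.
```python
# Minimal sketch (illustrative, not the authors' code): a layer that sees a
# continuous field only through numerical integration over whatever point set
# the field is sampled on, so its output is approximately discretization invariant.
import numpy as np
from scipy.stats import qmc

def field_fn(x):
    # Stand-in for a neural field on [0, 1]^2: any function queryable at arbitrary points.
    return np.sin(2 * np.pi * x[:, 0]) * np.cos(2 * np.pi * x[:, 1])

def integral_layer(points, values, kernel):
    # Equal-weight quadrature approximating  ∫ k(x) f(x) dx  over the given
    # discretization; the error depends on the point set through its discrepancy.
    return np.mean(kernel(points) * values)

# Toy "learned" kernel (illustrative assumption).
kernel = lambda x: np.exp(-np.sum((x - 0.5) ** 2, axis=1))

# Two different discretizations of the same field:
rng = np.random.default_rng(0)
uniform_pts = rng.random((1024, 2))                              # i.i.d. uniform samples
sobol_pts = qmc.Sobol(d=2, scramble=True, seed=0).random(1024)   # low-discrepancy QMC samples

out_uniform = integral_layer(uniform_pts, field_fn(uniform_pts), kernel)
out_sobol = integral_layer(sobol_pts, field_fn(sobol_pts), kernel)
print(out_uniform, out_sobol)  # both approximate the same integral; the
                               # low-discrepancy estimate typically deviates less
```
The sketch's point is that the layer depends on the sampling only through a quadrature rule, so its output changes little across discretizations, and using a low-discrepancy (Sobol') point set tightens that deviation, which is the role point set discrepancy plays in the paper's bounds.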
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis [7.742297876120561]
We introduce the k* distribution and its corresponding visualization technique.
This method uses local neighborhood analysis to guarantee the preservation of the structure of sample distributions.
Experiments show that the distribution of samples within the network's learned latent space significantly varies depending on the class.
arXiv Detail & Related papers (2023-12-07T03:42:48Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Mean-field neural networks: learning mappings on Wasserstein space [0.0]
We study the machine learning task for models with operators mapping between the Wasserstein space of probability measures and a space of functions.
Two classes of neural networks are proposed to learn so-called mean-field functions.
We present different algorithms relying on mean-field neural networks for solving time-dependent mean-field problems.
arXiv Detail & Related papers (2022-10-27T05:11:42Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set [0.0]
We suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology.
Using simplicial complex representations of neural networks, we study how the PH diagram distance evolves during the learning process.
Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy.
arXiv Detail & Related papers (2021-05-31T09:17:31Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network has the desired properties and shows competitive performance compared to state-of-the-art solvers.
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
- Mean-Field and Kinetic Descriptions of Neural Differential Equations [0.0]
In this work we focus on a particular class of neural networks, namely residual neural networks.
We analyze steady states and sensitivity with respect to the parameters of the network, namely the weights and the bias.
A modification of the microscopic dynamics, inspired by residual neural networks, leads to a Fokker-Planck formulation of the network.
arXiv Detail & Related papers (2020-01-07T13:41:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.