Voronoi Convolutional Neural Networks
- URL: http://arxiv.org/abs/2010.11339v1
- Date: Wed, 21 Oct 2020 22:42:19 GMT
- Title: Voronoi Convolutional Neural Networks
- Authors: Soroosh Yazdani and Andrea Tagliasacchi
- Abstract summary: We show that by treating the samples as the average of a function within a cell, we can find a natural equivalent of most layers used in CNNs.
We also present an algorithm for running inference for these models exactly using standard convex geometry algorithms.
- Score: 22.793216189458402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this technical report, we investigate extending convolutional neural
networks to the setting where functions are not sampled in a grid pattern. We
show that by treating the samples as the average of a function within a cell,
we can find a natural equivalent of most layers used in CNNs. We also present an
algorithm for running inference for these models exactly using standard convex
geometry algorithms.
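To make the cell-averaging idea concrete, here is a minimal sketch (my own construction, not the authors' code): scattered samples are treated as a piecewise-constant function over their Voronoi cells, and a convolution with a uniform disk kernel is estimated by Monte Carlo integration. The paper instead evaluates these cell/kernel integrals exactly with convex-geometry algorithms; the `radius` and sample counts below are arbitrary illustration choices.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
sites = rng.uniform(0, 1, size=(200, 2))   # sample locations (Voronoi sites)
values = np.sin(4 * sites[:, 0])           # value = average of f over each cell
tree = cKDTree(sites)                      # nearest-site query = Voronoi cell lookup

def voronoi_conv(query, radius=0.1, n_mc=2000):
    """Convolve the piecewise-constant Voronoi function with a disk kernel."""
    offsets = rng.uniform(-radius, radius, size=(n_mc, 2))
    offsets = offsets[np.hypot(offsets[:, 0], offsets[:, 1]) <= radius]
    _, idx = tree.query(query + offsets)   # which cell does each MC point hit?
    return values[idx].mean()              # uniform-kernel cell average

print(voronoi_conv(np.array([0.5, 0.5])))
```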
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are trained by minimizing a non-convex objective.
In this paper we examine convex Lasso-based recovery models for neural networks.
We show that all stationary points of the non-convex training objective can be characterized as global optima of subsampled convex programs.
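As a hedged illustration of the Lasso-model flavor only: the sketch below fits the unconstrained "gated ReLU" relaxation, a group-lasso problem over subsampled ReLU activation patterns, with proximal gradient descent. The pattern count `P` and regularizer `beta` are my choices, and the paper's exact program (including its constraints) differs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.maximum(X @ rng.normal(size=5), 0.0)      # data from one ReLU neuron

P = 20                                           # number of subsampled patterns
masks = (X @ rng.normal(size=(5, P)) >= 0)       # activation patterns, shape (n, P)
A = np.concatenate([masks[:, [i]] * X for i in range(P)], axis=1)  # (n, 5P)

beta, eta = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
z = np.zeros(A.shape[1])
for _ in range(500):                             # proximal gradient on group lasso
    z -= eta * A.T @ (A @ z - y)                 # gradient of 0.5 * ||Az - y||^2
    for i in range(P):                           # group soft-threshold per pattern
        g = z[5 * i: 5 * (i + 1)]                # view into z, modified in place
        norm = np.linalg.norm(g)
        g *= max(0.0, 1 - eta * beta / max(norm, 1e-12))

print("residual:", np.linalg.norm(A @ z - y))
```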
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity [54.01594785269913]
We show that optimal weights of deep ReLU neural networks are given by the wedge product of training samples when trained with a standard regularized loss.
The training problem reduces to convex optimization over wedge product features, which encode the geometric structure of the training dataset.
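A few lines can show the geometric core of the wedge-product claim. In this illustrative sketch (mine, not the paper's full construction), the Hodge dual of the wedge product of n-1 vectors in R^n, i.e. the generalized cross product computed from cofactor determinants, produces a vector orthogonal to all of the input samples.

```python
import numpy as np

def generalized_cross(vectors):
    """Hodge dual of v_1 ^ ... ^ v_{n-1} for n-1 vectors in R^n."""
    V = np.asarray(vectors)                   # shape (n-1, n)
    n = V.shape[1]
    return np.array([(-1) ** j * np.linalg.det(np.delete(V, j, axis=1))
                     for j in range(n)])

samples = np.array([[1.0, 0.0, 0.0, 2.0],
                    [0.0, 1.0, 0.0, 3.0],
                    [0.0, 0.0, 1.0, 4.0]])
w = generalized_cross(samples)
print(samples @ w)                            # ~0: w is orthogonal to every sample
```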
arXiv Detail & Related papers (2023-09-28T15:19:30Z)
- A max-affine spline approximation of neural networks using the Legendre transform of a convex-concave representation [0.3007949058551534]
This work presents a novel algorithm for transforming a neural network into a spline representation.
The only constraint is that the function be bounded and possess a well-defined second derivative.
It can also be performed over the whole network rather than on each layer independently.
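For intuition (a 1-D toy, not the paper's algorithm): a convex function is the supremum of its tangent lines, the fact underlying the Legendre transform, so sampling tangents at a few anchor points yields a max-affine spline approximation.

```python
import numpy as np

f = np.exp                                   # a convex function
df = np.exp                                  # its derivative
anchors = np.linspace(-2, 2, 8)              # where we take tangent lines
slopes = df(anchors)
intercepts = f(anchors) - slopes * anchors

def max_affine(x):
    """Pointwise maximum over the sampled tangent lines."""
    return np.max(slopes * x[:, None] + intercepts, axis=1)

x = np.linspace(-2, 2, 5)
print(np.abs(max_affine(x) - f(x)).max())    # small approximation error
```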
arXiv Detail & Related papers (2023-07-16T17:01:20Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
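The bandwidth claim can be checked numerically with random sinusoidal features. In this sketch (my construction, one possible instantiation), frequencies drawn uniformly from [-omega, omega] induce the kernel sinc(omega * (x - x')), an ideal low-pass filter whose bandwidth is set directly by the frequency scale omega.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, n_feat = 10.0, 20000
w = rng.uniform(-omega, omega, n_feat)       # first-layer frequencies
b = rng.uniform(0, 2 * np.pi, n_feat)        # random phases

def feats(x):
    return np.sqrt(2.0 / n_feat) * np.cos(w * x + b)

for d in [0.0, 0.2, 0.5]:                    # empirical kernel vs analytic sinc
    print(feats(0.0) @ feats(d), np.sinc(omega * d / np.pi))
```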
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- A Derivation of Feedforward Neural Network Gradients Using Fréchet Calculus [0.0]
We present a derivation of the gradients of feedforward neural networks using Fréchet calculus.
We show how our analysis extends to more general neural network architectures including, but not limited to, convolutional networks.
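A compact numerical rendering of the viewpoint (my minimal example, not the paper's derivation): each layer is a map between normed spaces, its Fréchet derivative is a linear map, and the loss gradient is the composition of those linear maps, which is exactly backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))
x, y = rng.normal(size=3), 1.0

h = np.tanh(W1 @ x)                          # forward pass
out = (W2 @ h)[0]
loss = 0.5 * (out - y) ** 2

d_out = out - y                              # derivative of the squared loss
d_h = W2.T.flatten() * d_out                 # compose with the linear map W2
d_pre = d_h * (1 - h ** 2)                   # compose with the derivative of tanh
grad_W1 = np.outer(d_pre, x)                 # gradient w.r.t. W1
grad_W2 = (d_out * h)[None, :]               # gradient w.r.t. W2

E = np.zeros_like(W1)                        # finite-difference check of one entry
E[0, 0] = 1e-6
l2 = 0.5 * ((W2 @ np.tanh((W1 + E) @ x))[0] - y) ** 2
print(grad_W1[0, 0], (l2 - loss) / 1e-6)     # the two values agree
```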
arXiv Detail & Related papers (2022-09-27T08:14:00Z)
- Lattice gauge equivariant convolutional neural networks [0.0]
We propose Lattice gauge equivariant Convolutional Neural Networks (L-CNNs) for generic machine learning applications.
We show that L-CNNs can learn and generalize gauge invariant quantities that traditional convolutional neural networks are incapable of finding.
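A toy U(1) example (my construction; the paper works with more general gauge groups and learned layers) illustrates the invariance involved: the plaquette, a product of link variables around an elementary square, is unchanged by local gauge transformations.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
U = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(2, L, L)))  # U(1) links, mu = 0, 1

def plaquette(U):
    """U_mu(x) U_nu(x+mu) U_mu(x+nu)^-1 U_nu(x)^-1 at every lattice site."""
    return (U[0] * np.roll(U[1], -1, axis=0)
            * np.conj(np.roll(U[0], -1, axis=1)) * np.conj(U[1]))

g = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(L, L)))     # local gauge transform
V = np.stack([g * U[0] * np.conj(np.roll(g, -1, axis=0)),   # transformed links
              g * U[1] * np.conj(np.roll(g, -1, axis=1))])

print(np.abs(plaquette(U) - plaquette(V)).max())            # ~0: gauge invariant
```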
arXiv Detail & Related papers (2020-12-23T19:00:01Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
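A structural sketch of the primal-dual pattern (my simplification, not the paper's layer): face features live on the dual graph, edge features on the primal graph, and one round of message passing averages across the face/edge incidence in both directions.

```python
import numpy as np

faces = [(0, 1, 2), (0, 2, 3)]                    # two triangles sharing edge (0, 2)
edge_to_faces = {}                                # primal edge -> incident dual faces
for f_id, (a, b, c) in enumerate(faces):
    for e in [(a, b), (b, c), (c, a)]:
        edge_to_faces.setdefault(tuple(sorted(e)), []).append(f_id)
edge_list = sorted(edge_to_faces)

face_feat = np.array([[1.0, 0.0], [0.0, 1.0]])    # one feature row per face
edge_feat = np.stack([np.mean([face_feat[f] for f in edge_to_faces[e]], axis=0)
                      for e in edge_list])        # dual -> primal aggregation
face_new = np.stack([np.mean([edge_feat[edge_list.index(tuple(sorted(e)))]
                              for e in [(a, b), (b, c), (c, a)]], axis=0)
                     for a, b, c in faces])       # primal -> dual aggregation
print(edge_feat)
print(face_new)
```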
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
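A 1-D sketch of the GLM predictive (my toy model, not the paper's experiments): given a Laplace posterior N(theta*, Sigma), predictions use the linearized model f(x, theta*) + J(x)(theta - theta*), giving a Gaussian predictive with mean f(x, theta*) and variance J(x) Sigma J(x)^T.

```python
import numpy as np

theta_star = np.array([1.2, -0.5])                # MAP weights of f(x) = a*tanh(b*x)
Sigma = np.array([[0.05, 0.01], [0.01, 0.08]])    # Laplace posterior covariance

def f(x, th):
    return th[0] * np.tanh(th[1] * x)

def jacobian(x, th):                              # d f / d theta at theta*
    a, b = th
    return np.array([np.tanh(b * x), a * x / np.cosh(b * x) ** 2])

for x in [0.0, 1.0, 3.0]:
    J = jacobian(x, theta_star)
    mean, var = f(x, theta_star), J @ Sigma @ J
    print(f"x={x}: predictive mean {mean:.3f}, variance {var:.4f}")
```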
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
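A structural sketch only (untrained weights, simplified architecture, my construction): one subdivision step splits each triangle 1-to-4 at edge midpoints, and a single shared network predicts a displacement for every new vertex from its local patch, so the same weights apply to any input mesh.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 6)) * 0.1          # shared across all local patches
W2 = rng.normal(size=(3, 16)) * 0.1

def displace(p, q):
    """Shared MLP: offset for the new vertex between parent vertices p and q."""
    return W2 @ np.maximum(W1 @ np.concatenate([p, q]), 0.0)

V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
F = [(0, 1, 2)]                              # one coarse triangle
mids, new_faces = {}, []
for a, b, c in F:
    idx = {}
    for e in [(a, b), (b, c), (c, a)]:
        key = tuple(sorted(e))
        if key not in mids:                  # create midpoint vertex once per edge
            mids[key] = len(V)
            V = np.vstack([V, 0.5 * (V[e[0]] + V[e[1]]) + displace(V[e[0]], V[e[1]])])
        idx[e] = mids[key]
    ab, bc, ca = idx[(a, b)], idx[(b, c)], idx[(c, a)]
    new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
print(V.shape, new_faces)                    # 6 vertices, 4 refined faces
```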
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
- A function space analysis of finite neural networks with insights from sampling theory [41.07083436560303]
We show that the function space generated by multi-layer networks with non-expansive activation functions is smooth.
Under the assumption that the input is band-limited, we provide novel error bounds.
We analyze both deterministic uniform and random sampling, showing the advantage of the former.
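A small experiment in the spirit of that comparison (my construction, not the paper's bounds): fit a band-limited function from noisy samples with a trigonometric least-squares model, using deterministic uniform versus i.i.d. random sample locations; uniform sampling typically yields the better-conditioned problem and lower test error.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 5, 24                                      # bandwidth, number of samples
coef = rng.normal(size=2 * K + 1)                 # ground-truth coefficients

def design(x):                                    # band-limited trigonometric basis
    k = np.arange(1, K + 1)
    return np.concatenate([np.ones((len(x), 1)),
                           np.cos(np.outer(x, k)),
                           np.sin(np.outer(x, k))], axis=1)

f = lambda x: design(x) @ coef
x_test = np.linspace(0, 2 * np.pi, 400, endpoint=False)

for name, x_s in [("uniform", np.linspace(0, 2 * np.pi, n, endpoint=False)),
                  ("random ", rng.uniform(0, 2 * np.pi, n))]:
    y_s = f(x_s) + 0.05 * rng.normal(size=n)      # noisy samples
    c_hat, *_ = np.linalg.lstsq(design(x_s), y_s, rcond=None)
    print(name, np.abs(design(x_test) @ c_hat - f(x_test)).max())
```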
arXiv Detail & Related papers (2020-04-15T10:25:18Z)
- Linearly Constrained Neural Networks [0.5735035463793007]
We present a novel approach to modelling and learning vector fields from physical systems using neural networks.
To achieve this, the target function is modelled as a linear transformation of an underlying potential field, which is in turn modelled by a neural network.
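A toy instantiation of the construction (assumptions mine: a divergence-free 2-D field and finite-difference derivatives): model a scalar potential psi with a small network and output the rotated gradient f = (dpsi/dy, -dpsi/dx), so the linear constraint div f = 0 holds by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(8, 2)), rng.normal(size=8), rng.normal(size=8)

def psi(x, y):                                   # scalar potential network
    return np.tanh(W1 @ np.array([x, y]) + b1) @ W2

def field(x, y, h=1e-5):                         # f = (dpsi/dy, -dpsi/dx)
    return np.array([(psi(x, y + h) - psi(x, y - h)) / (2 * h),
                     -(psi(x + h, y) - psi(x - h, y)) / (2 * h)])

def divergence(x, y, h=1e-4):                    # should vanish identically
    return ((field(x + h, y)[0] - field(x - h, y)[0]) / (2 * h)
            + (field(x, y + h)[1] - field(x, y - h)[1]) / (2 * h))

print(divergence(0.3, -0.7))                     # ~0 up to finite-difference error
```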
arXiv Detail & Related papers (2020-02-05T01:27:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.