Length Learning for Planar Euclidean Curves
- URL: http://arxiv.org/abs/2102.01895v1
- Date: Wed, 3 Feb 2021 06:30:03 GMT
- Title: Length Learning for Planar Euclidean Curves
- Authors: Barak Or and Liam Hazan
- Abstract summary: This work focuses on learning the length of planar sampled curves generated from a dataset of sine waves.
Robustness to additive noise and discretization errors was tested.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we used deep neural networks (DNNs) to solve a fundamental
problem in differential geometry. The literature offers many closed-form
expressions for calculating curvature, length, and other geometric properties.
Because these concepts are well understood, we are motivated to reconstruct
them with deep neural networks; in this framework, our goal is to learn
geometric properties from examples. The simplest geometric object is a curve,
so this work focuses on learning the length of planar sampled curves generated
from a dataset of sine waves. To that end, the fundamental length axioms were
reconstructed using a supervised learning approach. Following these axioms, a
simplified DNN model, which we call ArcLengthNet, was established. Its
robustness to additive noise and discretization errors was tested.
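The abstract gives no implementation details, but the setup it describes (sampled sine-wave curves labeled with their lengths, plus additive-noise and discretization robustness tests) can be sketched directly. Below is a minimal, hypothetical Python sketch; every function name and parameter choice is an assumption made for illustration, not the authors' code.

```python
# Minimal sketch (not the authors' code): planar sine-wave curves,
# ground-truth arc-length labels, and the two perturbations the
# abstract tests. All parameter choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_sine_curve(n_points=200, x_max=2 * np.pi):
    """Sample a planar curve y = a*sin(b*x + c) at n_points locations."""
    a = rng.uniform(0.5, 2.0)
    b = rng.uniform(0.5, 3.0)
    c = rng.uniform(0.0, 2 * np.pi)
    x = np.linspace(0.0, x_max, n_points)
    y = a * np.sin(b * x + c)
    return np.stack([x, y], axis=1)           # shape (n_points, 2)

def polyline_length(curve):
    """Ground-truth label: sum of Euclidean segment lengths."""
    return np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()

# A small supervised dataset: sampled curves -> scalar lengths.
curves = [make_sine_curve() for _ in range(1000)]
lengths = np.array([polyline_length(c) for c in curves])

# Robustness tests named in the abstract: additive noise and coarser sampling.
noisy = curves[0] + rng.normal(scale=0.01, size=curves[0].shape)
coarse = curves[0][::4]                       # discretization error: fewer samples
print(polyline_length(curves[0]), polyline_length(noisy), polyline_length(coarse))
```

Note that the polyline length used as the label already reflects the standard length axioms the abstract refers to: it is additive over subcurves and invariant under rotations and translations of the sample points.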
Related papers
- Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture [22.225213114532533]
We argue that when training a neural network, the input space curvature remains invariant under transformations determined by its architecture.
We show that in cases where the average geometry is low-rank (such as in a ResNet), the geometry only changes in a subset of the input space.
arXiv Detail & Related papers (2024-10-15T19:46:09Z)
- DeepRicci: Self-supervised Graph Structure-Feature Co-Refinement for Alleviating Over-squashing [72.70197960100677]
Graph Structure Learning (GSL) plays an important role in boosting Graph Neural Networks (GNNs) with a refined graph.
GSL solutions usually focus on structure refinement with task-specific supervision (i.e., node classification) or overlook the inherent weakness of GNNs themselves.
We propose to study self-supervised graph structure-feature co-refinement for effectively alleviating the issue of over-squashing in typical GNNs.
arXiv Detail & Related papers (2024-01-23T14:06:08Z)
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
- From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity [54.01594785269913]
We show that optimal weights of deep ReLU neural networks are given by the wedge product of training samples when trained with standard regularized loss.
The training problem reduces to convex optimization over wedge product features, which encode the geometric structure of the training dataset.
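As a concrete illustration of the object named here, the wedge (exterior) product of two sample vectors can be written out through its pairwise 2x2 minors. This is a hypothetical numpy sketch of that product alone; how the paper assembles such features into its convex training problem is not described in this summary.

```python
# Illustrative sketch: the wedge (exterior) product of two sample vectors
# in R^n, represented by its pairwise 2x2 minors. The mapping from these
# components to the paper's convex features is an assumed simplification.
import numpy as np
from itertools import combinations

def wedge(u, v):
    """Components (u ^ v)_{ij} = u_i*v_j - u_j*v_i for i < j."""
    return np.array([u[i] * v[j] - u[j] * v[i]
                     for i, j in combinations(range(len(u)), 2)])

x1 = np.array([1.0, 2.0, 0.5])
x2 = np.array([0.0, 1.0, 3.0])
print(wedge(x1, x2))   # 3 components for n = 3: index pairs (0,1), (0,2), (1,2)
```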
arXiv Detail & Related papers (2023-09-28T15:19:30Z)
- Exploring Data Geometry for Continual Learning [64.4358878435983]
We study continual learning from a novel perspective by exploring data geometry for the non-stationary stream of data.
Our method dynamically expands the geometry of the underlying space to match growing geometric structures induced by new data.
Experiments show that our method achieves better performance than baseline methods designed in Euclidean space.
arXiv Detail & Related papers (2023-04-08T06:35:25Z)
- Neural networks learn to magnify areas near decision boundaries [32.84188052937496]
We study how training shapes the geometry induced by unconstrained neural network feature maps.
We first show that at infinite width, neural networks with random parameters induce highly symmetric metrics on input space.
This symmetry is broken by feature learning: networks trained to perform classification tasks learn to magnify local areas along decision boundaries.
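The induced metric can be made concrete: a feature map f pulls back the Euclidean metric on its output to G(x) = J(x)^T J(x), where J is the Jacobian of f, and sqrt(det G) measures local area magnification. The toy numpy sketch below uses a random, untrained two-layer map; the map and its sizes are illustrative assumptions, not the paper's setup.

```python
# Sketch (assumed toy setup, not the paper's experiments): the metric a
# feature map induces on input space is the pullback G(x) = J(x)^T J(x),
# and sqrt(det G) is the local area magnification factor.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 2))
W2 = rng.normal(size=(3, 16))

def f(x):
    """Toy feature map R^2 -> R^3: one hidden tanh layer, random weights."""
    return W2 @ np.tanh(W1 @ x)

def magnification(x, eps=1e-5):
    """sqrt(det(J^T J)) with J estimated by central finite differences."""
    J = np.stack([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                  for e in np.eye(2)], axis=1)   # shape (3, 2)
    G = J.T @ J                                   # pullback metric, (2, 2)
    return np.sqrt(np.linalg.det(G))

print(magnification(np.array([0.3, -0.7])))
```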
arXiv Detail & Related papers (2023-01-26T19:43:16Z)
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In summary, we treat the manifold model as the limit of large graphs to construct MNNs, and we can recover graph neural networks by discretizing the MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
- Sheaf Neural Networks with Connection Laplacians [3.3414557160889076]
Sheaf Neural Network (SNN) is a type of Graph Neural Network (GNN) that operates on a sheaf, an object that equips a graph with vector spaces over its nodes and edges and linear maps between these spaces.
Previous works proposed two diametrically opposed approaches: manually constructing the sheaf based on domain knowledge and learning the sheaf end-to-end using gradient-based methods.
In this work, we propose a novel way of computing sheaves drawing inspiration from Riemannian geometry.
We show that this approach achieves promising results with less computational overhead when compared to previous SNN models.
arXiv Detail & Related papers (2022-06-17T11:39:52Z)
- Differential Geometry in Neural Implicits [0.6198237241838558]
We introduce a neural implicit framework that bridges discrete differential geometry of triangle meshes and continuous differential geometry of neural implicit surfaces.
It exploits the differentiable properties of neural networks and the discrete geometry of triangle meshes to approximate the meshes as zero-level sets of neural implicit functions.
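As a minimal illustration of the zero-level-set viewpoint, a surface {f = 0} carries normals given by the normalized gradient of f. The sketch below uses an analytic sphere function as a stand-in for a trained neural implicit; the setup is assumed, not taken from the paper.

```python
# Sketch under an assumed toy setup: a surface as the zero-level set of
# an implicit function f, with normals from the numerical gradient of f.
# A trained neural implicit would replace this analytic stand-in.
import numpy as np

def f(p):
    """Implicit function whose zero-level set is the unit sphere."""
    return np.linalg.norm(p) - 1.0

def normal(p, eps=1e-5):
    """Unit normal at p: normalized finite-difference gradient of f."""
    g = np.array([(f(p + eps * e) - f(p - eps * e)) / (2 * eps)
                  for e in np.eye(3)])
    return g / np.linalg.norm(g)

p = np.array([1.0, 0.0, 0.0])      # a point on the zero-level set
print(f(p), normal(p))             # ~0.0, normal ~ [1, 0, 0]
```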
arXiv Detail & Related papers (2022-01-23T13:40:45Z)
- The Geometric Occam's Razor Implicit in Deep Learning [7.056824589733872]
We show that neural networks trained with gradient descent are implicitly regularized by a Geometric Occam's Razor.
For one-dimensional regression, the geometric model complexity is simply given by the arc length of the function.
For higher-dimensional settings, the geometric model complexity depends on the Dirichlet energy of the function.
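Both complexity measures named here are easy to compute numerically for a toy one-dimensional function, as the numpy sketch below shows. The function and discretization are illustrative assumptions, not the paper's experiments.

```python
# Sketch of the two geometric complexity measures, computed numerically
# for a toy stand-in function rather than a trained network.
import numpy as np

x = np.linspace(0.0, 1.0, 1000)
y = np.sin(4 * np.pi * x)            # stand-in for a learned 1-D regressor

# Geometric complexity in 1-D: arc length of the graph of the function.
dy = np.gradient(y, x)
arc_length = np.trapz(np.sqrt(1.0 + dy**2), x)

# Higher-dimensional analogue: Dirichlet energy, here the integral of |f'|^2.
dirichlet_energy = np.trapz(dy**2, x)

print(arc_length, dirichlet_energy)
```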
arXiv Detail & Related papers (2021-11-30T03:05:11Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)