Learning Invariances with Generalised Input-Convex Neural Networks
- URL: http://arxiv.org/abs/2204.07009v1
- Date: Thu, 14 Apr 2022 15:03:30 GMT
- Title: Learning Invariances with Generalised Input-Convex Neural Networks
- Authors: Vitali Nesterov, Fabricio Arend Torres, Monika Nagy-Huber, Maxim
Samarin, Volker Roth
- Abstract summary: We introduce a novel class of flexible neural networks representing functions that are guaranteed to have connected level sets forming smooth manifolds on the input space.
We show that our novel technique for characterising invariances is a powerful generative data exploration tool in real-world applications, such as computational chemistry.
- Score: 3.5611181253285253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Considering smooth mappings from input vectors to continuous targets, our
goal is to characterise subspaces of the input domain, which are invariant
under such mappings. Thus, we want to characterise manifolds implicitly defined
by level sets. Specifically, this characterisation should be of a global
parametric form, which is especially useful for different informed data
exploration tasks, such as building grid-based approximations, sampling points
along the level curves, or finding trajectories on the manifold. However,
global parameterisations can only exist if the level sets are connected. For
this purpose, we introduce a novel and flexible class of neural networks that
generalise input-convex networks. These networks represent functions that are
guaranteed to have connected level sets forming smooth manifolds on the input
space. We further show that global parameterisations of these level sets can
always be found efficiently. Lastly, we demonstrate that our novel technique for
characterising invariances is a powerful generative data exploration tool in
real-world applications, such as computational chemistry.
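For concreteness, below is a minimal sketch of a standard fully input-convex neural network (ICNN) in the style of Amos et al. (2017), the class that this paper generalises. It is not the authors' generalised architecture: the layer sizes, the softplus activation, and the softplus reparameterisation that keeps the recurrent weights non-negative are all illustrative assumptions. Because the resulting function is convex in its input, every sublevel set is convex, so its level sets are connected, which is the property that makes a global parameterisation possible.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Fully input-convex network f(x): convex in x by construction."""

    def __init__(self, dim_in, dim_hidden=64, num_layers=3):
        super().__init__()
        # Affine skip connections from the input x are unconstrained.
        self.W_x = nn.ModuleList(
            [nn.Linear(dim_in, dim_hidden) for _ in range(num_layers - 1)]
            + [nn.Linear(dim_in, 1)]
        )
        # Raw weights on the previous activations; they are passed through
        # softplus in forward() so the effective weights are non-negative,
        # which preserves convexity under composition.
        self.W_z = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(dim_hidden, dim_hidden))
             for _ in range(num_layers - 2)]
            + [nn.Parameter(0.1 * torch.randn(1, dim_hidden))]
        )

    def forward(self, x):
        # Softplus is convex and non-decreasing, as convexity requires.
        z = F.softplus(self.W_x[0](x))
        for lin_x, w_z in zip(self.W_x[1:-1], self.W_z[:-1]):
            z = F.softplus(lin_x(x) + z @ F.softplus(w_z).t())
        return self.W_x[-1](x) + z @ F.softplus(self.W_z[-1]).t()

# Usage: evaluate a convex scalar field on a batch of 2-D inputs.
f = ICNN(dim_in=2)
y = f(torch.randn(8, 2))  # shape (8, 1)
```

A convex function of this form has convex (hence connected) sublevel sets; the paper's contribution is a broader class whose level sets remain connected and smooth without requiring full convexity, which this sketch does not reproduce.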
Related papers
- A rank decomposition for the topological classification of neural representations [0.0]
In this work, we leverage the fact that neural networks are equivalent to continuous piecewise-affine maps.
We study the homology groups of the quotient of a manifold $\mathcal{M}$ by a subset $A$, assuming some minimal properties on these spaces.
We show that in randomly narrow networks, there will be regions in which the (co)homology groups of a data manifold can change.
arXiv Detail & Related papers (2024-04-30T17:01:20Z) - On permutation-invariant neural networks [8.633259015417993]
The emergence of neural network architectures such as Deep Sets and Transformers has presented a significant advancement in the treatment of set-based data.
This comprehensive survey provides an overview of the diverse problem settings and ongoing research efforts pertaining to neural networks that approximate set functions; a minimal sketch of such a sum-pooling architecture appears after this list.
arXiv Detail & Related papers (2024-03-26T06:06:01Z) - FUNCK: Information Funnels and Bottlenecks for Invariant Representation
Learning [7.804994311050265]
We investigate a set of related information funnels and bottleneck problems that claim to learn invariant representations from the data.
We propose a new element to this family of information-theoretic objectives: The Conditional Privacy Funnel with Side Information.
Given the generally intractable objectives, we derive tractable approximations using amortized variational inference parameterized by neural networks.
arXiv Detail & Related papers (2022-11-02T19:37:55Z) - Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders [45.29194877564103]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the approximation power of such networks and derive a bound that essentially depends on the intrinsic dimension of the data manifold rather than the dimension of the ambient space.
arXiv Detail & Related papers (2022-08-22T19:58:03Z) - SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation [94.11915008006483]
We propose SemAffiNet for point cloud semantic segmentation.
We conduct extensive experiments on the ScanNetV2 and NYUv2 datasets.
arXiv Detail & Related papers (2022-05-26T17:00:23Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - Learning Prototype-oriented Set Representations for Meta-Learning [85.19407183975802]
Learning from set-structured data is a fundamental problem that has recently attracted increasing attention.
This paper provides a novel optimal transport based way to improve existing summary networks.
We further instantiate it to the cases of few-shot classification and implicit meta generative modeling.
arXiv Detail & Related papers (2021-10-18T09:49:05Z) - Deep Parametric Continuous Convolutional Neural Networks [92.87547731907176]
Parametric Continuous Convolution is a new learnable operator over non-grid-structured data.
Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes.
arXiv Detail & Related papers (2021-01-17T18:28:23Z) - Localized convolutional neural networks for geospatial wind forecasting [0.0]
Convolutional Neural Networks (CNNs) possess positive qualities when it comes to many kinds of spatial data.
In this work, we propose localized convolutional neural networks that enable CNNs to learn local features in addition to the global ones.
They can be added to any convolutional layers, easily end-to-end trained, introduce minimal additional complexity, and let CNNs retain most of their benefits to the extent that they are needed.
arXiv Detail & Related papers (2020-05-12T17:14:49Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z) - Global Context-Aware Progressive Aggregation Network for Salient Object
Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z)
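As a companion to the permutation-invariance entry above, here is a minimal sketch of a Deep Sets style sum-decomposition network. The layer sizes and the choice of sum pooling are illustrative assumptions, not a reconstruction of any surveyed model.

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    """Permutation-invariant set function: rho(sum_i phi(x_i))."""

    def __init__(self, dim_in, dim_hidden=128, dim_out=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, dim_hidden))
        self.rho = nn.Sequential(nn.Linear(dim_hidden, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, dim_out))

    def forward(self, x):  # x: (batch, set_size, dim_in)
        # Sum pooling over the set axis makes the output invariant to
        # any permutation of the set elements.
        return self.rho(self.phi(x).sum(dim=1))

# Usage: the output is unchanged when the set elements are shuffled.
net = DeepSets(dim_in=3)
x = torch.randn(4, 10, 3)
assert torch.allclose(net(x), net(x[:, torch.randperm(10)]), atol=1e-5)
```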
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.