Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions
- URL: http://arxiv.org/abs/2208.04055v1
- Date: Mon, 8 Aug 2022 10:58:02 GMT
- Title: Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions
- Authors: Nikolaos Karalias, Joshua Robinson, Andreas Loukas, Stefanie Jegelka
- Abstract summary: We develop a framework for extending set functions onto low-dimensional continuous domains.
Our framework subsumes many well-known extensions as special cases.
To avoid low-dimensional neural network bottlenecks, we convert low-dimensional extensions into representations in high-dimensional spaces.
- Score: 63.21838830509772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integrating functions on discrete domains into neural networks is key to
developing their capability to reason about discrete objects. But, discrete
domains are (1) not naturally amenable to gradient-based optimization, and (2)
incompatible with deep learning architectures that rely on representations in
high-dimensional vector spaces. In this work, we address both difficulties for
set functions, which capture many important discrete problems. First, we
develop a framework for extending set functions onto low-dimensional continuous
domains, where many extensions are naturally defined. Our framework subsumes
many well-known extensions as special cases. Second, to avoid undesirable
low-dimensional neural network bottlenecks, we convert low-dimensional
extensions into representations in high-dimensional spaces, taking inspiration
from the success of semidefinite programs for combinatorial optimization.
Empirically, we observe benefits of our extensions for unsupervised neural
combinatorial optimization, in particular with high-dimensional
representations.
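One canonical example of such a continuous extension, among the well-known special cases the framework subsumes, is the Lovász extension, which interpolates a set function from the hypercube vertices {0,1}^n onto [0,1]^n. The sketch below is a minimal, non-neural illustration of that underlying idea; it does not reproduce the paper's neural construction or its high-dimensional lifting.

```python
import numpy as np

def lovasz_extension(f, x):
    """Lovasz extension of a set function f: frozenset -> float,
    evaluated at x in [0, 1]^n. Agrees with f at binary points and
    is convex exactly when f is submodular."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)                       # coordinates, descending
    levels = np.concatenate(([1.0], x[order], [0.0]))
    value, members = 0.0, []
    for i in range(len(x) + 1):                  # nested sets S_0 = {} ... S_n
        value += (levels[i] - levels[i + 1]) * f(frozenset(members))
        if i < len(x):
            members.append(int(order[i]))
    return value

# Toy set function: cut value of the path graph 0 - 1 - 2.
edges = [(0, 1), (1, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)
print(lovasz_extension(cut, [0.9, 0.2, 0.7]))    # relaxed cut value
```

At binary inputs this recovers the exact cut value, which is what makes such extensions usable as differentiable training losses for discrete objectives.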
Related papers
- Embedding Knowledge Graph in Function Spaces [1.90894751866253]
We introduce a novel embedding method that diverges from conventional approaches by operating within function spaces of finite dimension rather than vector spaces.
We argue that employing functions for embedding allows for more degrees of freedom, enabling operations such as composition, differentiation, and the representation of primitive entities.
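The summary is high-level; the following is a purely hypothetical sketch of what "embedding in a function space" can mean. The polynomial parameterization and the composition-based triple score are invented here for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
degree, grid = 3, np.linspace(-1.0, 1.0, 32)   # shared evaluation grid

def as_function(coeffs):
    """Interpret an embedding vector as a polynomial evaluated on the grid."""
    return np.polyval(coeffs, grid)

head = rng.normal(size=degree + 1)             # hypothetical entity embedding
tail = rng.normal(size=degree + 1)             # hypothetical entity embedding
rel = rng.normal(size=degree + 1)              # hypothetical relation embedding

# Score the triple (head, rel, tail) by how closely rel(head(t)) matches
# tail(t): composition is a natural operation on functions that plain
# vector embeddings do not support.
composed = np.polyval(rel, as_function(head))
score = -np.mean((composed - as_function(tail)) ** 2)
print(score)
```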
arXiv Detail & Related papers (2024-09-23T09:49:57Z)
- Stochastic Optimal Control for Diffusion Bridges in Function Spaces [13.544676987441441]
We present a theory of optimal control tailored to infinite-dimensional spaces.
We show how Doob's $h$-transform can be derived from the SOC perspective and extended to infinite dimensions.
We propose two applications: learning bridges between two infinite-dimensional distributions and generative models for sampling from an infinite-dimensional distribution.
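For reference, the standard finite-dimensional statement of Doob's $h$-transform that the paper lifts to function spaces is the textbook form below (this is not the paper's infinite-dimensional version):

```latex
% Conditioning the diffusion dX_t = b(X_t) dt + \sigma(X_t) dW_t via a
% space-time harmonic function h adds a drift correction:
\mathrm{d}X_t = \Big[ b(X_t) + \sigma(X_t)\sigma(X_t)^{\top}
                \nabla_x \log h(X_t, t) \Big]\,\mathrm{d}t
              + \sigma(X_t)\,\mathrm{d}W_t
```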
arXiv Detail & Related papers (2024-05-31T05:42:47Z)
- Spectral Algorithms on Manifolds through Diffusion [1.7227952883644062]
We study the convergence performance of spectral algorithms in a reproducing kernel Hilbert space (RKHS).
We employ integral operator techniques to derive tight convergence upper bounds concerning generalized norms.
Our study confirms that the spectral algorithms are practically significant in the broader context of high-dimensional approximation.
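As a concrete instance of a "spectral algorithm" in an RKHS, kernel ridge regression applies the spectral filter $g_\lambda(s) = 1/(s + \lambda)$ to the eigenvalues of the empirical integral operator. The minimal sketch below uses a Gaussian kernel on synthetic data and is illustrative only; the paper's manifold/diffusion setting is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=60)

def gram(A, B, bw=0.3):
    """Gaussian (heat-kernel-like) Gram matrix between point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

# Kernel ridge regression = spectral filter 1/(s + lam) applied to the
# spectrum of the empirical integral operator K / n.
lam = 1e-2
K = gram(X, X)
alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

X_test = np.linspace(-1, 1, 5)[:, None]
print(gram(X_test, X) @ alpha)               # predictions at test points
```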
arXiv Detail & Related papers (2024-03-06T12:43:53Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
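The core building block that MG-TFNO tensorizes and decomposes across grids is the spectral convolution of Fourier neural operators. The sketch below is a minimal single-grid, untensorized 1D version; the multi-grid domain decomposition and the tensor factorization of the weights are not shown.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Minimal FNO-style spectral convolution: keep the lowest `modes`
    Fourier modes and mix channels with learned complex weights."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = torch.nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                 # x: (batch, in_ch, n_grid)
        x_ft = torch.fft.rfft(x)          # (batch, in_ch, n_grid//2 + 1)
        out_ft = torch.zeros(x.shape[0], self.weight.shape[1],
                             x_ft.shape[-1], dtype=torch.cfloat)
        out_ft[..., :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.shape[-1])

layer = SpectralConv1d(in_ch=2, out_ch=4, modes=8)
print(layer(torch.randn(3, 2, 64)).shape)    # -> torch.Size([3, 4, 64])
```

Truncating to a fixed number of modes is what makes the layer resolution-independent: the same weights apply at any grid size.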
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- Deep neural network approximation of composite functions without the curse of dimensionality [0.0]
In this article we identify a class of high-dimensional continuous functions that can be approximated by deep neural networks (DNNs) without the curse of dimensionality.
The functions in our class can be expressed as compositions of a potentially unbounded number of special functions, including products, maxima, and certain parallelized Lipschitz continuous functions.
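As a plausibly in-class example, constructed here for illustration and not taken from the paper, composing pairwise products with a maximum yields a high-dimensional function of the described type:

```latex
f(x_1, \dots, x_d) \;=\; \max_{1 \le i < d} \, x_i \, x_{i+1}
```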
arXiv Detail & Related papers (2023-04-12T12:08:59Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
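One standard way to get GP-style uncertainty from a tree ensemble, in the spirit of the entry above, is the leaf-agreement kernel: k(x, x') is the fraction of trees in which x and x' fall in the same leaf. The sketch below is a minimal illustration and may differ from the paper's exact construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(40, 3))
y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=40)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def tree_kernel(A, B):
    """k(a, b) = fraction of trees where a and b share a leaf -- a
    positive-definite kernel induced by the fitted ensemble."""
    la, lb = forest.apply(A), forest.apply(B)   # leaf ids, (n, n_trees)
    return (la[:, None, :] == lb[None, :, :]).mean(-1)

# GP-style posterior variance from the tree kernel supplies the
# exploration signal for Bayesian optimization.
K = tree_kernel(X, X) + 1e-6 * np.eye(len(X))
x_new = rng.uniform(size=(1, 3))
k_star = tree_kernel(x_new, X)
var = tree_kernel(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)
print(var.item())
```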
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint [48.25573695787407]
We prove that large ConvResNets can not only approximate a target function in terms of function value, but also exhibit sufficient first-order smoothness.
Our theory partially justifies the benefits of using deep and wide networks in practice.
arXiv Detail & Related papers (2022-06-09T15:35:22Z)
- Pure Exploration in Kernel and Neural Bandits [90.23165420559664]
We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms.
To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space.
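As a crude stand-in for the adaptive embedding idea (the projection choice below is hypothetical, not the paper's algorithm), one can project high-dimensional arm features onto a few leading directions of the current arm set, so estimation error scales with the embedding dimension rather than the ambient one.

```python
import numpy as np

rng = np.random.default_rng(3)
n_arms, dim, k = 50, 1000, 8                 # feature dim >> number of arms

Phi = rng.normal(size=(n_arms, dim))         # high-dimensional arm features

# Project onto the top-k right singular directions of the arm matrix;
# subsequent elimination rounds would re-embed the surviving arms.
_, _, Vt = np.linalg.svd(Phi, full_matrices=False)
Z = Phi @ Vt[:k].T                           # (n_arms, k) embedded features
print(Z.shape)
```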
arXiv Detail & Related papers (2021-06-22T19:51:59Z)
- Universal Online Convex Optimization Meets Second-order Bounds [74.0120666722487]
We propose a simple strategy for universal online convex optimization.
The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses.
In this way, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds.
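A minimal sketch of the expert/meta-algorithm pattern described above, using multiplicative weights over the linearized losses <grad_t, x>; the learning rate and the array-shape conventions are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def hedge_meta(expert_preds, grads, eta=0.5):
    """Meta-algorithm sketch: multiplicative weights over experts,
    fed the *linearized* loss <grad_t, x> of each expert's play."""
    n_rounds, n_experts, dim = expert_preds.shape
    w = np.ones(n_experts) / n_experts
    plays = []
    for t in range(n_rounds):
        x_t = w @ expert_preds[t]              # weighted combination of plays
        plays.append(x_t)
        lin_loss = expert_preds[t] @ grads[t]  # surrogate loss per expert
        w *= np.exp(-eta * lin_loss)
        w /= w.sum()
    return np.array(plays)

rng = np.random.default_rng(4)
T, E, d = 100, 3, 2                            # rounds, experts, dimension
print(hedge_meta(rng.normal(size=(T, E, d)),
                 rng.normal(size=(T, d))).shape)
```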
arXiv Detail & Related papers (2021-05-08T11:43:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.