Joint Continuous and Discrete Model Selection via Submodularity
- URL: http://arxiv.org/abs/2102.09029v1
- Date: Wed, 17 Feb 2021 21:14:47 GMT
- Title: Joint Continuous and Discrete Model Selection via Submodularity
- Authors: Jonathan Bunton and Paulo Tabuada
- Abstract summary: In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem.
In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult nonconvex optimization problems.
We show how simple continuous or discrete constraints can also be handled for certain problem classes, motivated by robust optimization.
- Score: 1.332560004325655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In model selection problems for machine learning, the desire for a
well-performing model with meaningful structure is typically expressed through
a regularized optimization problem. In many scenarios, however, the meaningful
structure is specified in some discrete space, leading to difficult nonconvex
optimization problems. In this paper, we relate the model selection problem
with structure-promoting regularizers to submodular function minimization
defined with continuous and discrete arguments. In particular, we leverage
submodularity theory to identify a class of these problems that can be solved
exactly and efficiently with an agnostic combination of discrete and continuous
optimization routines. We show how simple continuous or discrete constraints
can also be handled for certain problem classes, motivated by robust
optimization. Finally, we numerically validate our theoretical results with
several proof-of-concept examples, comparing against state-of-the-art
algorithms.
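To make the problem class concrete, the sketch below sets up the smallest nontrivial instance of joint continuous and discrete model selection: best-subset regression, where the discrete variable is the support set and the continuous variable is the weight vector. The least-squares loss, the cardinality penalty, and the brute-force enumeration are illustrative assumptions of mine; the paper's point is precisely that a submodular subclass of such problems can be solved exactly and efficiently without this enumeration.

```python
import itertools

import numpy as np

def best_subset(X, y, lam):
    """min over support S and weights w of ||y - X[:, S] @ w||^2 + lam * |S|.

    The inner problem (w given S) is continuous least squares; the outer
    problem (S) is discrete. Brute-force enumeration, for illustration only.
    """
    n, p = X.shape
    best = (float(y @ y), (), np.zeros(0))          # empty-support baseline
    for k in range(1, p + 1):
        for S in itertools.combinations(range(p), k):
            idx = list(S)
            w, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)  # continuous step
            cost = float(np.sum((y - X[:, idx] @ w) ** 2)) + lam * k
            if cost < best[0]:
                best = (cost, S, w)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
w_true = np.zeros(8)
w_true[[1, 4]] = [2.0, -3.0]
y = X @ w_true + 0.1 * rng.normal(size=50)
print(best_subset(X, y, lam=1.0))   # expect support (1, 4)
```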
Related papers
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
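Both unrolled-optimization entries above concern the same phenomenon: backpropagating through an iterative solver yields, in the limit, the gradient given by an implicit linear solve. The PyTorch sketch below checks this on a quadratic inner problem of my choosing; it illustrates the equivalence, not the papers' folded-optimization system.

```python
import torch

torch.manual_seed(0)
R = torch.randn(5, 5)
Q = R @ R.T + 5 * torch.eye(5)        # symmetric positive definite
lr = (0.5 / torch.linalg.eigvalsh(Q).max()).item()
c = torch.randn(5, requires_grad=True)

# (a) Unrolled solver: x*(c) = argmin_x 0.5 x'Qx - c'x via gradient descent,
# with autodiff recording every iteration of the loop.
x = torch.zeros(5)
for _ in range(500):
    x = x - lr * (Q @ x - c)          # one gradient step on the inner objective
loss = x.sum()                        # outer loss L(x*(c)) = 1' x*(c)
loss.backward()                       # backprop through all unrolled steps

# (b) Implicit gradient: x*(c) = Q^{-1} c, so dL/dc = Q^{-1} 1 (Q symmetric),
# i.e. the backward pass collapses to a single linear-system solve.
g_implicit = torch.linalg.solve(Q, torch.ones(5))
print(torch.allclose(c.grad, g_implicit, atol=1e-4))   # True, to tolerance
```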
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z)
- Discrete-Continuous Smoothing and Mapping [8.90077503980675]
We describe a general approach to smoothing and mapping with a class of discrete-continuous factor graphs commonly encountered in robotics applications.
We provide a library, DC-SAM, extending existing tools for optimization problems defined in terms of factor graphs to the setting of discrete-continuous models.
arXiv Detail & Related papers (2022-04-25T19:31:44Z)
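As a minimal illustration of the discrete-continuous factor graphs this entry targets, the sketch below alternates a discrete step (each measurement chooses an inlier/outlier hypothesis) with a continuous step (re-estimating the state given those choices). The robust-estimation toy model and the plain coordinate-descent scheme are my assumptions, not the DC-SAM algorithm.

```python
import numpy as np

def alternating_dc(y, outlier_cost=1.0, iters=20):
    """Alternate a discrete and a continuous step on a toy factor graph:
    scalar state x, one binary inlier/outlier variable per measurement."""
    x = np.median(y)                          # continuous initialization
    inlier = np.ones_like(y, dtype=bool)
    for _ in range(iters):
        # Discrete step: each variable picks the cheaper of its two factors.
        inlier = (y - x) ** 2 < outlier_cost
        # Continuous step: re-solve for x given the discrete assignment.
        if inlier.any():
            x = y[inlier].mean()
    return x, inlier

rng = np.random.default_rng(1)
y = np.concatenate([2.0 + 0.1 * rng.normal(size=30), rng.uniform(5, 10, size=5)])
print(alternating_dc(y))   # x near 2.0, the five large values flagged as outliers
```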
- Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms is not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z)
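For orientation, the classical unconstrained prototype of such dissipative Hamiltonian dynamics is the damped flow below, whose trajectories settle at minimizers of $f$; the paper's contribution is the constrained, manifold-aware generalization via Dirac's theory, which this prototype does not capture.

```latex
% Damped Hamiltonian (heavy-ball) flow with Hamiltonian
% H(q, p) = (1/2)||p||^2 + f(q) and friction coefficient gamma > 0:
\begin{aligned}
\dot{q} &= \nabla_p H(q, p) = p,\\
\dot{p} &= -\nabla_q H(q, p) - \gamma p = -\nabla f(q) - \gamma p.
\end{aligned}
```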
- Continuous surrogate-based optimization algorithms are well-suited for expensive discrete problems [9.655888042539495]
We present empirical evidence showing that the use of continuous surrogate models displays competitive performance against state-of-the-art discrete surrogate-based methods.
Our experiments on different discrete structures and time constraints also give more insight into which algorithms work well on which type of problem.
arXiv Detail & Related papers (2020-11-06T15:27:45Z)
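A minimal sketch of the continuous-surrogate idea, under assumptions of mine: a least-squares quadratic surrogate stands in for the expensive discrete objective, the surrogate is minimized over the continuous relaxation, and the minimizer is rounded back to the integer grid. The paper benchmarks far more capable surrogate models; this only shows the mechanics.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_discrete(z):                    # stand-in for a costly black box
    return (z[0] - 3) ** 2 + (z[1] + 1) ** 2 + 0.5 * z[0] * z[1]

rng = np.random.default_rng(0)
Z = rng.integers(-5, 6, size=(20, 2)).astype(float)      # initial design points
f = np.array([expensive_discrete(z) for z in Z])

for _ in range(15):
    # Fit a least-squares quadratic surrogate to all evaluations so far.
    A = np.column_stack([Z[:, 0] ** 2, Z[:, 1] ** 2, Z[:, 0] * Z[:, 1],
                         Z[:, 0], Z[:, 1], np.ones(len(Z))])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    surrogate = lambda x: coef @ np.array([x[0] ** 2, x[1] ** 2, x[0] * x[1],
                                           x[0], x[1], 1.0])
    # Continuous step: minimize the surrogate over the relaxed box.
    x = minimize(surrogate, Z[np.argmin(f)], bounds=[(-5, 5)] * 2).x
    # Discrete step: round back to the integer grid and evaluate there.
    z = np.clip(np.round(x), -5, 5)
    Z = np.vstack([Z, z])
    f = np.append(f, expensive_discrete(z))

print(Z[np.argmin(f)], f.min())
```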
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to general $p$-normed spaces, but also to general $p$-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
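The starting point being generalized here is the classical extragradient method, shown below on the bilinear saddle problem min_x max_y xy, where simultaneous gradient descent-ascent spirals outward but the extrapolation step restores convergence. The sketch implements only the textbook method, not the paper's generalization.

```python
def extragradient(steps=200, eta=0.5):
    """Extragradient on min_x max_y x*y; the saddle point is (0, 0)."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        x_h, y_h = x - eta * y, y + eta * x   # extrapolation (look-ahead) step
        x, y = x - eta * y_h, y + eta * x_h   # update with look-ahead gradients
    return x, y

print(extragradient())   # approaches (0.0, 0.0); plain descent-ascent diverges
```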
- Adaptive Discretization for Model-Based Reinforcement Learning [10.21634042036049]
We introduce the technique of adaptive discretization to design an efficient model-based episodic reinforcement learning algorithm.
Our algorithm is based on optimistic one-step value iteration extended to maintain an adaptive discretization of the space.
arXiv Detail & Related papers (2020-07-01T19:36:46Z)
- An Optimal Control Theory for the Traveling Salesman Problem and Its Variants [0.0]
We show that the traveling salesman problem (TSP) and its many variants may be modeled as functional optimization problems over a graph.
The main advantage of the new approach is that it facilitates the modeling of certain application-specific problems in their home space of measurable functions.
arXiv Detail & Related papers (2020-05-07T00:44:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.