Optimization Networks for Integrated Machine Learning
- URL: http://arxiv.org/abs/2110.00415v1
- Date: Wed, 1 Sep 2021 08:25:01 GMT
- Title: Optimization Networks for Integrated Machine Learning
- Authors: Michael Kommenda, Johannes Karder, Andreas Beham, Bogdan Burlacu,
Gabriel Kronberger, Stefan Wagner, Michael Affenzeller
- Abstract summary: We revisit the core principles of optimization networks and demonstrate their suitability for solving machine learning problems.
We justify the advantages of optimization networks by adapting the network to solve other machine learning problems.
- Score: 4.279210021862033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization networks are a new methodology, developed with
combinatorial optimization problems in mind, for holistically solving
interrelated problems. In this contribution we revisit the core principles of
optimization networks and demonstrate their suitability for solving machine
learning problems. We use feature selection in combination with linear model
creation as a benchmark application and compare the results of optimization
networks to ordinary least squares with optional elastic net regularization.
Based on this example we justify the advantages of optimization networks by
adapting the network to solve other machine learning problems. Finally,
optimization analysis is presented, where optimal input values of a system have
to be found to achieve desired output values. Optimization analysis can be
divided into three subproblems: model creation to describe the system, model
selection to choose the most appropriate one, and parameter optimization to
obtain the input values. Therefore, optimization networks are an obvious choice
for handling optimization analysis tasks.
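The benchmark application named in the abstract, feature selection combined with linear model creation, can be sketched in a few lines (this is an illustrative toy, not the authors' optimization-network implementation; the data and all names are hypothetical): exhaustively search feature subsets, fit ordinary least squares on each, and score on a holdout set.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression data: only features 0 and 2 truly matter.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=n)

X_train, X_val = X[:150], X[150:]
y_train, y_val = y[:150], y[150:]

def ols_fit(A, b):
    """Ordinary least squares via lstsq (numerically stabler than normal equations)."""
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

best_subset, best_mse = None, np.inf
# Exhaustive search is feasible for small p; an optimization network
# would explore this combinatorial space heuristically instead.
for k in range(1, p + 1):
    for subset in itertools.combinations(range(p), k):
        cols = list(subset)
        coef = ols_fit(X_train[:, cols], y_train)
        mse = np.mean((X_val[:, cols] @ coef - y_val) ** 2)
        if mse < best_mse:
            best_subset, best_mse = subset, mse

print(best_subset, best_mse)
```

The selected subset should contain the informative features 0 and 2; the paper's contribution is solving the subset search and the model fit jointly rather than in this nested fashion.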
Related papers
- Self-Supervised Learning of Iterative Solvers for Constrained Optimization [0.0]
We propose a learning-based iterative solver for constrained optimization.
It can obtain very fast and accurate solutions by customizing the solver to a specific parametric optimization problem.
A novel loss function based on the Karush-Kuhn-Tucker conditions of optimality is introduced, enabling fully self-supervised training of both neural networks.
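The idea of a KKT-based loss can be illustrated on a toy inequality-constrained quadratic program (a sketch of the loss concept only, not the paper's networks or training code; all values are hypothetical): the loss sums squared residuals of stationarity, primal/dual feasibility, and complementary slackness, so it vanishes exactly at a KKT point.

```python
import numpy as np

# Toy QP: min 0.5*x'Qx + c'x  subject to  A x <= b
Q = np.eye(2)
c = np.array([-2.0, -2.0])
A = np.array([[1.0, 1.0]])   # single constraint: x1 + x2 <= 1
b = np.array([1.0])

def kkt_loss(x, lam):
    """Sum of squared KKT residuals: zero exactly at an optimal primal-dual pair."""
    stationarity = Q @ x + c + A.T @ lam
    primal_viol  = np.maximum(A @ x - b, 0.0)   # primal feasibility: Ax <= b
    dual_viol    = np.maximum(-lam, 0.0)        # dual feasibility: lam >= 0
    complement   = lam * (A @ x - b)            # complementary slackness
    return (stationarity @ stationarity + primal_viol @ primal_viol
            + dual_viol @ dual_viol + complement @ complement)

# Constrained optimum (0.5, 0.5) with multiplier 1.5 gives zero loss;
# the unconstrained minimizer (2, 2) is infeasible and is penalized.
x_opt, lam_opt = np.array([0.5, 0.5]), np.array([1.5])
print(kkt_loss(x_opt, lam_opt))                          # 0.0
print(kkt_loss(np.array([2.0, 2.0]), np.array([0.0])))   # 9.0
```

Minimizing such a residual over network outputs needs no precomputed optimal solutions, which is what makes the training fully self-supervised.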
arXiv Detail & Related papers (2024-09-12T14:17:23Z)
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Neural Networks for Generating Better Local Optima in Topology Optimization [0.4543820534430522]
We show how neural network material discretizations can, under certain conditions, find better local optima in more challenging optimization problems.
We emphasize that the neural network material discretization's advantage comes from the interplay with its current limitations.
arXiv Detail & Related papers (2024-07-25T11:24:44Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
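The stated equivalence can be checked on a scalar toy problem (an illustrative sketch, not the Folded Optimization system itself; the objective is hypothetical): for a fixed-point iteration x_{k+1} = T(x_k, θ), backpropagating through the unrolled steps converges to the solution of the linear system (1 − ∂T/∂x) dx*/dθ = ∂T/∂θ.

```python
import numpy as np

alpha, theta = 0.1, 3.0   # step size and problem parameter

def T(x, th):
    """One gradient step on f(x, th) = 0.5 * (x - th**2)**2; fixed point x* = th**2."""
    return x - alpha * (x - th ** 2)

# Unrolled forward pass, tracking g = dx_k/dtheta by the chain rule:
# dx_{k+1}/dtheta = (dT/dx) * dx_k/dtheta + dT/dtheta
x, g = 0.0, 0.0
for _ in range(500):
    g = (1 - alpha) * g + 2 * alpha * theta
    x = T(x, theta)

# Backward pass as a (here scalar) linear system: (1 - dT/dx) dx* = dT/dtheta
dTdx, dTdth = 1 - alpha, 2 * alpha * theta
g_linear = dTdth / (1 - dTdx)

print(x, g, g_linear)   # x* -> theta**2 = 9; both gradients -> 2*theta = 6
```

The unrolled gradient and the linear-system gradient agree; the paper's point is that solving the system directly avoids storing and differentiating through every solver iteration.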
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
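The "learn solutions directly from features" idea can be sketched on a one-dimensional toy problem (hypothetical setup, not the paper's proxy architecture): a solver produces optimal solutions for training instances, and a simple regression model is then fit to map features straight to those solutions, with no optimization step at prediction time.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve(theta):
    """Solver for min_x 0.5*(x - theta)**2 s.t. x >= 0, i.e. x* = max(theta, 0)."""
    return np.maximum(theta, 0.0)

# Observable features c determine the (here linear) problem parameter theta = 2c - 1.
c_train = rng.uniform(0, 1, 500)
x_star = solve(2 * c_train - 1)   # training labels come from the solver

# Direct model: polynomial features -> optimal solution, fit by least squares.
def basis(c, degree=6):
    return np.vander(c, degree + 1)

w, *_ = np.linalg.lstsq(basis(c_train), x_star, rcond=None)

c_test = rng.uniform(0, 1, 200)
pred = basis(c_test) @ w
mae = np.mean(np.abs(pred - solve(2 * c_test - 1)))
print(mae)
```

At test time the model replaces both the parameter prediction and the solve, which is the efficiency argument the summary makes; accuracy depends on how well the model class captures the solution map.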
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
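The gossip primitive underlying such algorithms, where each agent averages only with its immediate neighbors, can be demonstrated on a ring network (a sketch of the communication pattern only, not the paper's multi-level algorithm; network size and weights are hypothetical).

```python
import numpy as np

n = 8                               # agents on a ring; each talks to its 2 neighbors
rng = np.random.default_rng(1)
x = rng.normal(size=n)              # each agent holds a private local value
target = x.mean()                   # consensus target: the network-wide average

# Symmetric doubly-stochastic gossip matrix (self-weight plus two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(200):
    x = W @ x                       # one gossip round: average with neighbors

print(np.max(np.abs(x - target)))  # all agents converge to the average
```

Because W is doubly stochastic, every round preserves the mean while shrinking disagreement, so repeated local exchanges reach global consensus without any central coordinator.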
arXiv Detail & Related papers (2023-10-10T00:21:10Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve training efficiency.
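A minimal Bayesian-optimization loop for such model selection can be sketched with a toy Gaussian process and a lower-confidence-bound acquisition (illustrative only, with fixed kernel hyperparameters; the objective is a hypothetical stand-in combining a validation-error term with a training-cost term, mirroring the joint effectiveness/efficiency criterion).

```python
import numpy as np

def objective(x):
    """Stand-in: validation error plus a weighted training-cost term in x."""
    error = (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)
    cost = 0.2 * x                    # pretend larger x means a slower model
    return error + cost

def rbf(a, b, length_scale=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)              # initial random configurations
y = objective(X)
grid = np.linspace(0, 1, 201)

for _ in range(12):
    # GP posterior (zero prior mean, unit prior variance, small jitter).
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    ks = rbf(grid, X)
    mu = ks @ (Kinv @ y)
    var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    acq = mu - 2.0 * sigma            # lower confidence bound (minimization)
    x_next = grid[np.argmin(acq)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best = X[np.argmin(y)]
print(best, y.min())
```

Each round fits the surrogate to all evaluations so far and queries where the acquisition is most promising, which is how a handful of expensive model trainings can steer the search.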
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.