Combinatorial Optimization for Panoptic Segmentation: An End-to-End
Trainable Approach
- URL: http://arxiv.org/abs/2106.03188v1
- Date: Sun, 6 Jun 2021 17:39:13 GMT
- Title: Combinatorial Optimization for Panoptic Segmentation: An End-to-End
Trainable Approach
- Authors: Ahmed Abbas, Paul Swoboda
- Abstract summary: We propose an end-to-end trainable architecture for simultaneous semantic and instance segmentation.
Our approach shows the utility of using optimization in tandem with deep learning in a challenging large scale real-world problem.
- Score: 23.281726932718232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an end-to-end trainable architecture for simultaneous semantic and
instance segmentation (a.k.a. panoptic segmentation) consisting of a
convolutional neural network and an asymmetric multiway cut problem solver. The
latter solves a combinatorial optimization problem that elegantly incorporates
semantic and boundary predictions to produce a panoptic labeling. Our
formulation allows us to directly maximize a smooth surrogate of the panoptic
quality metric by backpropagating the gradient through the optimization
problem. Experimental evaluation shows improvement of end-to-end learning
w.r.t. comparable approaches on Cityscapes and COCO datasets. Overall, our
approach shows the utility of using combinatorial optimization in tandem with
deep learning in a challenging large scale real-world problem and showcases
benefits and insights into training such an architecture end-to-end.
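As a rough illustration of the training pattern described in the abstract (a minimal sketch, not the authors' implementation: the SoftLabeling relaxation and the soft-IoU surrogate below are illustrative stand-ins for the asymmetric multiway cut solver and the panoptic quality surrogate), gradients of a smooth surrogate loss can be pushed back through a relaxed labeling step into the CNN:

```python
# Minimal sketch (not the authors' code): backpropagating a smooth surrogate
# loss through a relaxed labeling step. The relaxation below is a placeholder
# for the asymmetric multiway cut solver described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftLabeling(nn.Module):
    """Placeholder for the combinatorial solver: a temperature softmax over
    per-pixel class scores, so gradients can flow back to the CNN."""
    def __init__(self, temperature=1.0):
        super().__init__()
        self.temperature = temperature

    def forward(self, semantic_logits, boundary_logits):
        # Down-weight class evidence where a boundary is predicted (illustrative only).
        damped = semantic_logits * torch.sigmoid(-boundary_logits)
        return F.softmax(damped / self.temperature, dim=1)  # soft one-hot labeling

def soft_iou_surrogate(soft_labels, target_onehot, eps=1e-6):
    """Smooth stand-in for a panoptic-quality surrogate: mean soft IoU per class."""
    inter = (soft_labels * target_onehot).sum(dim=(2, 3))
    union = (soft_labels + target_onehot - soft_labels * target_onehot).sum(dim=(2, 3))
    return ((inter + eps) / (union + eps)).mean()

# Toy data: 2 images, 4 classes, 16x16 pixels.
backbone = nn.Conv2d(3, 5, kernel_size=3, padding=1)  # stand-in CNN: 4 class channels + 1 boundary channel
solver = SoftLabeling()
images = torch.randn(2, 3, 16, 16)
target = F.one_hot(torch.randint(0, 4, (2, 16, 16)), 4).permute(0, 3, 1, 2).float()

out = backbone(images)
soft_labels = solver(out[:, :4], out[:, 4:5])
loss = -soft_iou_surrogate(soft_labels, target)  # maximize the surrogate
loss.backward()                                  # gradients reach the CNN through the "solver"
```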
Related papers
- Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data [23.661713049508375]
We propose an algorithm that learns over a compact smooth submanifold in the federated setting, with heterogeneous data distributed across clients.
We show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis.
arXiv Detail & Related papers (2024-06-12T17:53:28Z) - SequentialAttention++ for Block Sparsification: Differentiable Pruning
Meets Combinatorial Optimization [24.55623897747344]
Neural network pruning is a key technique towards engineering large yet scalable, interpretable, and generalizable models.
We show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group sparse optimization.
We propose SequentialAttention++, which advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets.
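As a hedged illustration of the group-sparsity view mentioned above (not the SequentialAttention++ method itself; the block size and the exponent p are assumptions), a block-wise sparsity penalty looks as follows:

```python
# Illustrative sketch only: a group-sparsity penalty over weight blocks, the kind of
# regularizer the summary relates differentiable pruning to. Block size and exponent p
# (p=1 gives the convex group lasso, p<1 a nonconvex variant) are assumptions.
import torch

def group_sparsity_penalty(weight, block_size=4, p=0.5, eps=1e-8):
    out_dim, in_dim = weight.shape
    blocks = weight.reshape(out_dim // block_size, block_size, in_dim)
    group_norms = blocks.flatten(1).norm(dim=1)   # L2 norm per block of rows
    return ((group_norms + eps) ** p).sum()       # p < 1: nonconvex, sparsity-inducing

w = torch.randn(16, 32, requires_grad=True)
penalty = group_sparsity_penalty(w)
penalty.backward()   # differentiable, so it can be added to a training loss
```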
arXiv Detail & Related papers (2024-02-27T21:42:18Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
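A minimal sketch of the algorithm unrolling this analysis targets (the quadratic inner problem and all hyperparameters are assumptions; this is not the folded-optimization system itself):

```python
# Minimal sketch of algorithm unrolling: the inner problem min_x 0.5*||A x - b||^2
# is solved by T gradient steps, and autodiff backpropagates through those steps
# to the parameter b produced upstream.
import torch

def unrolled_solve(A, b, steps=50, lr=0.02):
    x = torch.zeros(A.shape[1])
    for _ in range(steps):                 # unrolled iterations stay on the autograd tape
        grad = A.T @ (A @ x - b)
        x = x - lr * grad
    return x

A = torch.randn(8, 4)
b = torch.randn(8, requires_grad=True)     # pretend b comes from a neural network
x_star = unrolled_solve(A, b)
outer_loss = (x_star ** 2).sum()           # any downstream loss on the inner solution
outer_loss.backward()                      # d(outer_loss)/db via the unrolled steps
print(b.grad.shape)
```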
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - End-to-End Learning for Stochastic Optimization: A Bayesian Perspective [9.356870107137093]
We develop a principled approach to end-to-end learning in optimization.
We show that the standard end-to-end learning algorithm admits a Bayesian interpretation and trains a posterior Bayes action map.
We then propose new end-to-end learning algorithms for training decision maps.
arXiv Detail & Related papers (2023-06-07T05:55:45Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - What can linear interpolation of neural network loss landscapes tell us? [11.753360538833139]
Loss landscapes are notoriously difficult to visualize in a human-comprehensible fashion.
One common way to address this problem is to plot linear slices of the landscape.
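A minimal sketch of such a linear slice (the toy quadratic loss and the two endpoints are assumptions purely for illustration):

```python
# Evaluate the loss along the straight line between two parameter vectors,
# e.g. initialization and the trained solution.
import numpy as np

def loss(theta):
    return float(np.sum((theta - 1.0) ** 2))   # stand-in for a network's training loss

theta_a = np.zeros(10)                          # e.g., parameters at initialization
theta_b = np.full(10, 1.5)                      # e.g., parameters after training
alphas = np.linspace(0.0, 1.0, 21)
slice_values = [loss((1 - a) * theta_a + a * theta_b) for a in alphas]
for a, v in zip(alphas, slice_values):
    print(f"alpha={a:.2f}  loss={v:.3f}")
```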
arXiv Detail & Related papers (2021-06-30T11:54:04Z) - A Bi-Level Framework for Learning to Solve Combinatorial Optimization on
Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piecewise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
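A generic predict+optimize illustration on a toy selection problem (not the paper's divide-and-conquer algorithm; the top-k problem and the coefficients are assumptions):

```python
# Predicted coefficients drive a downstream selection problem, and the decision is
# scored under the true coefficients, so the relevant loss is the induced regret.
import numpy as np

def solve_top_k(costs, k=2):
    """Toy optimization problem: pick the k items with the largest coefficients."""
    return np.argsort(costs)[-k:]

true_c = np.array([3.0, 1.0, 4.0, 1.5, 5.0])
pred_c = np.array([2.5, 1.2, 3.0, 4.0, 4.5])   # imperfect model predictions

chosen = solve_top_k(pred_c)                   # decision made from predictions
best = solve_top_k(true_c)                     # decision under the true coefficients
regret = true_c[best].sum() - true_c[chosen].sum()
print("decision regret:", regret)              # the quantity predict+optimize training targets
```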
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - A Flexible Framework for Designing Trainable Priors with Adaptive
Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
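A minimal sketch of an optimization-layer forward pass with a trainable non-smooth prior (a few ISTA iterations for an l1 prior; an assumption-laden stand-in, not the paper's game-encoded formulation):

```python
# The layer's forward pass solves a small non-smooth convex problem,
# min_z 0.5*||x - z||^2 + lam*||z||_1, by unrolled proximal-gradient (ISTA) steps,
# with the prior weight lam trainable end to end.
import torch
import torch.nn as nn

def soft_threshold(v, tau):
    # proximal operator of tau*||.||_1
    return torch.sign(v) * torch.clamp(v.abs() - tau, min=0.0)

class ISTALayer(nn.Module):
    def __init__(self, lam=0.1, steps=10, step_size=0.5):
        super().__init__()
        self.log_lam = nn.Parameter(torch.log(torch.tensor(lam)))  # trainable prior strength
        self.steps, self.step_size = steps, step_size

    def forward(self, x):
        lam = self.log_lam.exp()
        z = torch.zeros_like(x)
        for _ in range(self.steps):
            grad = z - x                                           # gradient of 0.5*||x - z||^2
            z = soft_threshold(z - self.step_size * grad, self.step_size * lam)
        return z

layer = ISTALayer()
x = torch.randn(4, 8)
out = layer(x)
out.sum().backward()            # gradient reaches log_lam through the unrolled iterations
print(layer.log_lam.grad)
```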
arXiv Detail & Related papers (2020-06-26T08:34:54Z)