Search for Concepts: Discovering Visual Concepts Using Direct Optimization
- URL: http://arxiv.org/abs/2210.14808v1
- Date: Tue, 25 Oct 2022 15:55:24 GMT
- Title: Search for Concepts: Discovering Visual Concepts Using Direct Optimization
- Authors: Pradyumna Reddy, Paul Guerrero, Niloy J. Mitra
- Abstract summary: We show that using direct optimization is more generalizable, misses fewer correct decompositions, and typically requires less data than methods based on amortized inference.
This highlights a weakness of the current prevalent practice of using amortized inference that can potentially be improved by integrating more direct optimization elements.
- Score: 48.51514897866221
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding an unsupervised decomposition of an image into individual objects is
a key step to leverage compositionality and to perform symbolic reasoning.
Traditionally, this problem is solved using amortized inference, which does not
generalize beyond the scope of the training data, may sometimes miss correct
decompositions, and requires large amounts of training data. We propose finding
a decomposition using direct, unamortized optimization, via a combination of a
gradient-based optimization for differentiable object properties and global
search for non-differentiable properties. We show that using direct
optimization is more generalizable, misses fewer correct decompositions, and
typically requires less data than methods based on amortized inference. This
highlights a weakness of the current prevalent practice of using amortized
inference that can potentially be improved by integrating more direct
optimization elements.
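As a concrete illustration of the proposed recipe, here is a minimal sketch of direct, per-image optimization on a toy problem: the "image" is a 1-D signal built from a few primitives, each primitive's centre, amplitude, and width are differentiable properties fitted by gradient descent, while its shape family (Gaussian bump vs. smooth box pulse) is a non-differentiable property chosen by global, exhaustive search. The toy model, the PyTorch implementation, and all names are assumptions made for illustration, not the authors' code.

```python
# Hedged sketch of direct (unamortized) decomposition of a single toy signal:
# gradient descent over continuous properties, exhaustive search over discrete ones.
import torch
from itertools import product

def render(shapes, centres, amps, widths, xs, tau=0.01):
    """Render a 1-D 'image' as a sum of components; `shapes` is a discrete choice."""
    out = torch.zeros_like(xs)
    for s, c, a, w in zip(shapes, centres, amps, widths):
        if s == "gaussian":
            out = out + a * torch.exp(-((xs - c) ** 2) / (2 * w ** 2))
        else:  # smooth box pulse, kept differentiable via a sigmoid edge
            out = out + a * torch.sigmoid((w / 2 - (xs - c).abs()) / tau)
    return out

def fit_continuous(target, shapes, xs, steps=400, lr=0.05):
    """Gradient-based optimization of the differentiable properties for one shape assignment."""
    k = len(shapes)
    centres = torch.rand(k, requires_grad=True)
    amps = torch.ones(k, requires_grad=True)
    widths = torch.full((k,), 0.1, requires_grad=True)
    opt = torch.optim.Adam([centres, amps, widths], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((render(shapes, centres, amps, widths, xs) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item(), (centres.detach(), amps.detach(), widths.detach())

# Toy target: one Gaussian bump plus one box pulse.
xs = torch.linspace(0, 1, 256)
target = render(["gaussian", "box"], torch.tensor([0.3, 0.7]),
                torch.tensor([1.0, 0.8]), torch.tensor([0.05, 0.2]), xs)

# Global (exhaustive) search over the non-differentiable property (shape type),
# with gradient-based fitting run inside every candidate assignment.
best_loss, best_decomp = float("inf"), None
for shapes in product(["gaussian", "box"], repeat=2):
    loss, params = fit_continuous(target, shapes, xs)
    if loss < best_loss:
        best_loss, best_decomp = loss, (shapes, params)
print("best shapes:", best_decomp[0], "reconstruction MSE:", best_loss)
```

The point of the sketch is that the same optimization loop runs on each new input at test time, so, unlike an amortized inference network, nothing has to generalize from a training set.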
Related papers
- Recommendations from Sparse Comparison Data: Provably Fast Convergence for Nonconvex Matrix Factorization [12.006706388840934]
This paper provides a theoretical analysis of a new learning problem for recommender systems where users provide feedback by comparing pairs of items instead of rating them individually.
arXiv Detail & Related papers (2025-02-27T12:17:34Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Obtaining Explainable Classification Models using Distributionally Robust Optimization [12.511155426574563]
We study generalized linear models constructed using sets of feature value rules.
An inherent trade-off exists between rule set sparsity and prediction accuracy.
We propose a new formulation to learn an ensemble of rule sets that simultaneously addresses these competing factors.
arXiv Detail & Related papers (2023-11-03T15:45:34Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver (see the sketch after this list).
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Domain Generalization via Domain-based Covariance Minimization [4.414778226415752]
We propose a novel variance measurement for multiple domains so as to minimize the difference between conditional distributions across domains.
We show that for small-scale datasets, we are able to achieve better quantitative results, indicating better generalization performance over unseen test datasets.
arXiv Detail & Related papers (2021-10-12T19:30:15Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm for semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
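For contrast with the entry on unrolled solvers listed above, the sketch below shows the standard algorithm-unrolling pattern that entry refers to: an inner quadratic problem is solved by a fixed number of explicit gradient steps, and autograd then differentiates an outer loss through those steps with respect to the inner problem's data. The toy problem and all names are illustrative assumptions, not the cited paper's implementation.

```python
# Hedged sketch of algorithm unrolling: backpropagation through the iterations
# of a simple gradient-descent solver for an inner quadratic problem.
import torch

def unrolled_solver(A, b, steps=50, lr=0.1):
    # Inner problem: minimize 0.5 * x^T A x - b^T x by plain gradient descent.
    x = torch.zeros_like(b)
    for _ in range(steps):
        grad = A @ x - b      # gradient of the inner objective
        x = x - lr * grad     # each update stays on the autograd tape
    return x

A = torch.eye(3) * 2.0
b = torch.tensor([1.0, -2.0, 0.5], requires_grad=True)

x_star = unrolled_solver(A, b)       # approximate inner solution
outer_loss = (x_star ** 2).sum()     # some downstream loss on that solution
outer_loss.backward()                # d(outer_loss)/db flows through all unrolled steps
print(b.grad)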
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.