How to Fill the Optimum Set? Population Gradient Descent with Harmless Diversity
- URL: http://arxiv.org/abs/2202.08376v1
- Date: Wed, 16 Feb 2022 23:40:18 GMT
- Title: How to Fill the Optimum Set? Population Gradient Descent with Harmless Diversity
- Authors: Chengyue Gong, Lemeng Wu, Qiang Liu
- Abstract summary: We propose a bi-level optimization problem of maximizing a diversity score inside the optimum set of the main loss function.
We show that our method can efficiently generate diverse solutions on a variety of applications, including text-to-image generation, text-to-mesh generation, molecular conformation generation and ensemble neural network training.
- Score: 34.790747999729284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although traditional optimization methods focus on finding a single optimal
solution, most objective functions in modern machine learning problems,
especially those in deep learning, have multiple or infinitely many optima.
It is therefore useful to consider the problem of finding a set of diverse
points in the optimum set of an objective function. In this work, we frame
this problem as a bi-level optimization problem: maximize a diversity score
inside the optimum set of the main loss function. We solve it with a simple
population gradient descent framework that iteratively updates the points to
maximize the diversity score without hurting the optimization of the main
loss. We demonstrate that our method efficiently generates diverse solutions
on a variety of applications, including text-to-image generation,
text-to-mesh generation, molecular conformation generation, and ensemble
neural network training.
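In symbols, the abstract's bi-level problem can be sketched as below; the notation (f for the main loss, D for the diversity score, x_1, ..., x_n for the population) is ours rather than the paper's.

```latex
% Bi-level problem from the abstract. Notation is assumed here:
% f is the main loss, D the diversity score, x_1, ..., x_n the population.
\begin{align*}
\max_{x_1, \dots, x_n} \quad & D(x_1, \dots, x_n) \\
\text{subject to} \quad & x_i \in \operatorname*{arg\,min}_{x} f(x), \qquad i = 1, \dots, n.
\end{align*}
```

The "harmless" in the title refers to the update rule: per the abstract, the diversity gradient is applied only in a fashion that does not hurt the optimization of the main loss f.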
Related papers
- Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization [14.355588194787073]
Multi-objective optimization arises in many real-world applications where conflicting objectives cannot all be optimized by a single solution.
We propose a novel Tchebycheff set scalarization method to find a few representative solutions to cover a large number of objectives.
In this way, each objective can be well addressed by at least one solution in the small solution set.
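For context, the classical Tchebycheff scalarization reduces m objectives f_1, ..., f_m to a single worst-case objective using weights λ_i and an ideal point z*. A set-level variant matching the description above lets each objective be covered by its best solution in a small set; this is a reconstruction from the summary, not necessarily the paper's exact formulation.

```latex
% Classical Tchebycheff scalarization for a single solution x:
\[
g^{\mathrm{tch}}(x \mid \lambda, z^{*}) \;=\; \max_{1 \le i \le m} \lambda_i \left( f_i(x) - z_i^{*} \right)
\]
% Set-level variant suggested by the summary: with a small set {x_1, ..., x_k},
% each objective is covered by its best member, and the worst-covered
% objective is minimized.
\[
\min_{x_1, \dots, x_k} \; \max_{1 \le i \le m} \; \min_{1 \le j \le k} \;
\lambda_i \left( f_i(x_j) - z_i^{*} \right)
\]
```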
arXiv Detail & Related papers (2024-05-30T03:04:57Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
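To make the claimed equivalence concrete: if the solver is written as a fixed-point iteration with parameters c, implicit differentiation at the converged point yields a linear system, and backpropagating through the unrolled iterations amounts to solving that system iteratively. The notation below is ours, not the paper's.

```latex
% Write the solver as a fixed-point iteration x_{k+1} = T(x_k, c), where c
% collects the parameters being differentiated. At a fixed point x* = T(x*, c),
% implicit differentiation gives
\[
\frac{d x^{*}}{d c} \;=\; \bigl( I - \partial_x T \bigr)^{-1} \, \partial_c T ,
\]
% so the backward pass must (implicitly) solve a linear system with matrix
% I - \partial_x T; backpropagation through the unrolled iterations performs
% such a solve by a particular iterative method, per the paper.
```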
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
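The single-timescale, multi-level details are the paper's contribution; the gossip primitive it builds on is standard. A minimal sketch, assuming a doubly-stochastic mixing matrix W and toy quadratic objectives (all names and constants are ours):

```python
import numpy as np

def gossip_step(xs, grads, W, lr=0.1):
    """One decentralized update: each agent averages its neighbors'
    iterates through a doubly-stochastic mixing matrix W, then takes
    a local gradient step. xs, grads: (n_agents, dim) arrays."""
    mixed = W @ xs                 # neighbor averaging (gossip)
    return mixed - lr * grads      # local first-order correction

# Toy usage: 3 agents, each minimizing ||x - target_i||^2 locally.
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
targets = np.array([[1.0], [2.0], [3.0]])
xs = np.zeros((3, 1))
for _ in range(200):
    xs = gossip_step(xs, 2 * (xs - targets), W)
# Each agent ends near the consensus optimum (mean target = 2.0),
# with a small bias that is an artifact of the constant step size.
print(xs.ravel())
```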
arXiv Detail & Related papers (2023-10-10T00:21:10Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
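The paper's regularizer is specific to its analysis. As a generic illustration of why vanilla policy gradient over a solution-sampler needs regularization, here is entropy-regularized REINFORCE on a toy problem: the entropy bonus keeps the policy from collapsing to a near-deterministic state where gradients vanish. All names and constants are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy solution-sampler: a categorical policy over 4 candidate solutions
# with fixed rewards. The entropy bonus (beta) is a generic stand-in for
# the paper's regularizer.
rewards = np.array([0.1, 0.2, 0.15, 1.0])
theta = np.zeros(4)
lr, beta = 0.5, 0.05

for step in range(500):
    p = softmax(theta)
    a = rng.choice(4, p=p)
    # REINFORCE gradient for a categorical policy: (onehot(a) - p) * reward
    grad_logp = -p.copy()
    grad_logp[a] += 1.0
    # Gradient of the entropy H(p) w.r.t. theta: -p * (log p - E_p[log p])
    grad_ent = -p * (np.log(p) - np.sum(p * np.log(p)))
    theta += lr * (rewards[a] * grad_logp + beta * grad_ent)

print(softmax(theta))  # most probability mass moves to the best arm (index 3)
```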
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Can the Problem-Solving Benefits of Quality Diversity Be Obtained Without Explicit Diversity Maintenance? [0.0]
We argue that the correct comparison should be made to multi-objective optimization frameworks.
We present a method that utilizes dimensionality reduction to automatically determine a set of behavioral descriptors for an individual.
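The summary does not say which dimensionality reduction is used; PCA below is one concrete stand-in, and the function name is ours. A minimal sketch of turning raw per-individual behavior traces into low-dimensional behavioral descriptors:

```python
import numpy as np

def pca_descriptors(behaviors, n_components=2):
    """Project raw behavior vectors onto their top principal
    components, yielding low-dimensional behavioral descriptors.
    PCA is one possible choice; the paper's method may differ."""
    X = behaviors - behaviors.mean(axis=0)          # center the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T                  # (n, n_components)

# Toy usage: 100 individuals, each with a 10-D raw behavior trace.
rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 10))
desc = pca_descriptors(raw)
print(desc.shape)  # (100, 2) -> usable as archive coordinates
```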
arXiv Detail & Related papers (2023-05-12T21:24:04Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various zeroth-order (ZO) optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
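A minimal sketch of ZO-signGD, assuming the usual two-point random gradient estimator; query counts and step sizes are illustrative, not the paper's settings:

```python
import numpy as np

def zo_sign_gd(f, x, steps=300, mu=1e-2, lr=2e-2, n_queries=8, seed=0):
    """Zeroth-order sign-based gradient descent: estimate the gradient
    from function evaluations only, via two-point random perturbations,
    then step along the sign of the estimate."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(n_queries):
            u = rng.normal(size=x.shape)
            # Two-point estimator of the directional derivative along u.
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x = x - lr * np.sign(g)
    return x

# Toy usage: minimize a quadratic using only function evaluations.
x0 = np.full(5, 3.0)
x_star = zo_sign_gd(lambda x: np.sum(x ** 2), x0)
print(np.round(x_star, 2))  # within about one step size of the origin
```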
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimum values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We propose an algorithm known as Enhanced Opposition Differential Evolution (EODE) to solve multimodal optimization problems (MMOPs).
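EODE's full machinery is not in the summary; the opposition operator it is named after is simple, though, and a sketch pairing it with plain DE/rand/1/bin looks like this (bounds, constants, and the benchmark function are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition(pop, lo, hi):
    """Opposition-based learning: reflect each candidate within the
    box bounds, x_opp = lo + hi - x."""
    return lo + hi - pop

def de_step(pop, fitness, f=0.5, cr=0.9, lo=-5.0, hi=5.0):
    """One generation of standard DE/rand/1/bin with greedy selection."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = np.clip(pop[a] + f * (pop[b] - pop[c]), lo, hi)
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True          # guarantee one crossover dim
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):
            new_pop[i] = trial
    return new_pop

# Toy usage on the multimodal Rastrigin function: before each DE step,
# keep the better of each candidate and its opposite point.
fit = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
pop = rng.uniform(-5, 5, size=(20, 2))
for _ in range(100):
    opp = opposition(pop, -5.0, 5.0)
    pop = np.where([[fit(o) < fit(p)] for o, p in zip(opp, pop)], opp, pop)
    pop = de_step(pop, fit)
print(min(fit(p) for p in pop))  # should approach 0, the global optimum
```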
arXiv Detail & Related papers (2022-08-23T16:18:27Z)
- Multi-Objective Quality Diversity Optimization [2.4608515808275455]
We propose an extension of the MAP-Elites algorithm to the multi-objective setting: Multi-Objective MAP-Elites (MOME).
It combines the diversity inherited from the MAP-Elites grid algorithm with the strength of multi-objective optimization.
We evaluate our method on several tasks, from standard optimization problems to robotics simulations.
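A minimal sketch of the MOME idea, assuming a discretized descriptor grid where each cell stores a Pareto front rather than a single elite; the insertion rule here is a simplification, not the paper's exact algorithm:

```python
import numpy as np

def dominates(fa, fb):
    """Pareto dominance, maximizing all objectives."""
    return all(a >= b for a, b in zip(fa, fb)) and any(a > b for a, b in zip(fa, fb))

class MOMEGrid:
    """MAP-Elites style archive keyed by a discretized behavior
    descriptor, where each cell keeps a Pareto front of (solution,
    objectives) pairs instead of a single elite."""
    def __init__(self, bins=10, lo=0.0, hi=1.0):
        self.bins, self.lo, self.hi = bins, lo, hi
        self.cells = {}  # cell index -> list of (solution, objectives)

    def _cell(self, descriptor):
        idx = (np.asarray(descriptor) - self.lo) / (self.hi - self.lo) * self.bins
        return tuple(np.clip(idx.astype(int), 0, self.bins - 1))

    def insert(self, solution, descriptor, objectives):
        front = self.cells.setdefault(self._cell(descriptor), [])
        if any(dominates(f, objectives) for _, f in front):
            return False  # dominated within this cell: reject
        # Drop members the newcomer dominates, then add it.
        front[:] = [(s, f) for s, f in front if not dominates(objectives, f)]
        front.append((solution, objectives))
        return True

# Toy usage: random solutions with 2-D descriptors and 2 objectives.
rng = np.random.default_rng(0)
grid = MOMEGrid()
for _ in range(1000):
    x = rng.random(4)
    grid.insert(x, descriptor=x[:2], objectives=(x[2], 1 - x[2] + 0.1 * x[3]))
print(len(grid.cells), "cells filled")
```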
arXiv Detail & Related papers (2022-02-07T10:48:28Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
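For context, the proximal operator being learned is the standard one below; its fixed points are stationary points of f, which is why iterating a learned proximal operator from many initial points can reveal multiple optima.

```latex
% Standard definition of the proximal operator of f with parameter lambda > 0:
\[
\operatorname{prox}_{\lambda f}(y)
  \;=\; \operatorname*{arg\,min}_{x}\;
  f(x) + \frac{1}{2\lambda}\,\lVert x - y \rVert^{2}.
\]
% A fixed point y = prox_{lambda f}(y) satisfies 0 \in \partial f(y), i.e. it
% is a stationary point of f, so iterating from diverse starting points can
% land in different optima.
```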
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- An Analysis of Phenotypic Diversity in Multi-Solution Optimization [118.97353274202749]
We show that multi-objective optimization does not always produce much diversity, that multimodal optimization produces higher-fitness solutions, and that quality diversity is not sensitive to genetic neutrality.
An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set with quality diversity.
arXiv Detail & Related papers (2021-05-10T10:39:03Z)
- A Framework to Handle Multi-modal Multi-objective Optimization in Decomposition-based Evolutionary Algorithms [7.81768535871051]
Decomposition-based evolutionary algorithms perform well for multi-objective optimization.
However, they are likely to perform poorly for multi-modal multi-objective optimization due to the lack of mechanisms to maintain solution-space diversity.
This paper proposes a framework to improve the performance of decomposition-based evolutionary algorithms for multi-modal multi-objective optimization.
arXiv Detail & Related papers (2020-09-30T14:32:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.