Optimizing NOTEARS Objectives via Topological Swaps
- URL: http://arxiv.org/abs/2305.17277v1
- Date: Fri, 26 May 2023 21:49:37 GMT
- Title: Optimizing NOTEARS Objectives via Topological Swaps
- Authors: Chang Deng, Kevin Bello, Bryon Aragam, Pradeep Ravikumar
- Abstract summary: In this paper, we develop a bi-level algorithm with an effective method for generating a set of candidate swapping pairs at each iteration.
At the inner level, given a topological order, we utilize off-the-shelf solvers that can handle linear constraints.
The proposed method significantly improves the score of other algorithms.
- Score: 41.18829644248979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, an intriguing class of non-convex optimization problems has emerged
in the context of learning directed acyclic graphs (DAGs). These problems
involve minimizing a given loss or score function, subject to a non-convex
continuous constraint that penalizes the presence of cycles in a graph. In this
work, we delve into the optimization challenges associated with this class of
non-convex programs. To address these challenges, we propose a bi-level
algorithm that leverages the non-convex constraint in a novel way. The outer
level of the algorithm optimizes over topological orders by iteratively
swapping pairs of nodes within the topological order of a DAG. A key innovation
of our approach is the development of an effective method for generating a set
of candidate swapping pairs for each iteration. At the inner level, given a
topological order, we utilize off-the-shelf solvers that can handle linear
constraints. The key advantage of our proposed algorithm is that it is
guaranteed to find a local minimum or a KKT point under weaker conditions
compared to previous work and finds solutions with lower scores. Extensive
experiments demonstrate that our method outperforms state-of-the-art approaches
in terms of achieving a better score. Additionally, our method can also be used
as a post-processing algorithm to significantly improve the score of other
algorithms. Code implementing the proposed method is available at
https://github.com/duntrain/topo.
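The bi-level scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repo for that): the inner level is simplified to per-node least squares rather than a general constrained solver, candidate pairs are enumerated exhaustively rather than generated by the paper's candidate-selection method, and all function names are hypothetical.

```python
import numpy as np

def score_given_order(X, order):
    """Inner level (simplified): least-squares score of the best linear
    SEM consistent with a topological order, obtained by regressing each
    node on its predecessors in the order."""
    n = X.shape[0]
    total = 0.0
    for i, j in enumerate(order):
        parents = order[:i]
        if parents:
            beta, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
            resid = X[:, j] - X[:, parents] @ beta
        else:
            resid = X[:, j]
        total += resid @ resid
    return total / n

def topo_swap_search(X, order):
    """Outer level: greedily swap pairs of nodes in the topological order,
    keeping any swap that lowers the score, until no swap improves it."""
    order = list(order)
    best = score_given_order(X, order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order)):
            for j in range(i + 1, len(order)):
                cand = order.copy()
                cand[i], cand[j] = cand[j], cand[i]
                s = score_given_order(X, cand)
                if s < best - 1e-9:
                    order, best, improved = cand, s, True
    return order, best
```

On data generated from a linear chain with unequal noise-to-signal structure, the search started from a wrong order tends to recover the data-generating order, since that order minimizes the residual score.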
Related papers
- A Penalty-Based Guardrail Algorithm for Non-Decreasing Optimization with Inequality Constraints [1.5498250598583487]
Traditional mathematical programming solvers require long computational times to solve constrained minimization problems.
We propose a penalty-based guardrail algorithm (PGA) to efficiently solve them.
arXiv Detail & Related papers (2024-05-03T10:37:34Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
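The cutting-plane approach this entry refers to can be sketched with a textbook one-dimensional Kelley's method (generic background only, not the paper's RL-accelerated variant; all names are illustrative):

```python
import numpy as np

def kelley_minimize(f, grad, lo, hi, iters=40):
    """Kelley's cutting-plane method: minimize a convex f on [lo, hi] by
    building a piecewise-linear lower model from gradient cuts and
    repeatedly minimizing that model."""
    cuts = []  # each cut is a line b + a*x lower-bounding f
    x = lo
    for _ in range(iters):
        fx, g = f(x), grad(x)
        cuts.append((fx - g * x, g))  # store (intercept, slope)
        # Minimize the current model max_k (b_k + a_k*x) over [lo, hi]:
        # in 1-D the optimum lies at an endpoint or a cut intersection.
        candidates = [lo, hi]
        for (b1, a1) in cuts:
            for (b2, a2) in cuts:
                if a1 != a2:
                    xc = (b2 - b1) / (a1 - a2)
                    if lo <= xc <= hi:
                        candidates.append(xc)
        model = lambda t: max(b + a * t for (b, a) in cuts)
        x = min(candidates, key=model)
    return x
```

Each iteration adds one cut, so the lower model tightens monotonically; the paper's theme is that generating *good* cuts cheaply is what keeps such methods tractable at scale.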
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- A Single-Loop Gradient Descent and Perturbed Ascent Algorithm for Nonconvex Functional Constrained Optimization [27.07082875330508]
Nonconvex functional constrained optimization problems can be used to model a number of machine learning problems, such as multi-class Neyman-Pearson classification.
Under such a mild regularity condition, it is difficult to balance the reduction of the loss value and the reduction of constraint violation.
In this paper, we propose a novel primal-dual algorithm for inequality-constrained problems.
arXiv Detail & Related papers (2022-07-12T16:30:34Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Accelerated Algorithms for Convex and Non-Convex Optimization on Manifolds [9.632674803757475]
We propose a scheme for solving convex and non-convex optimization problems on manifolds.
Our proposed algorithm adapts to the level of complexity in the objective function.
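Optimization on a manifold, as in this entry's setting, can be illustrated with plain (non-accelerated) Riemannian gradient descent on the unit sphere applied to the Rayleigh quotient. This is a standard textbook example, not the paper's accelerated scheme; names are illustrative.

```python
import numpy as np

def sphere_rgd(A, x0, lr=0.1, steps=500):
    """Riemannian gradient descent on the unit sphere minimizing the
    Rayleigh quotient f(x) = x^T A x; for a suitable step size it
    converges to an eigenvector of the smallest eigenvalue of A."""
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        egrad = 2.0 * A @ x                # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x    # project onto tangent space at x
        x = x - lr * rgrad                 # step in the tangent direction
        x = x / np.linalg.norm(x)          # retract back onto the sphere
    return x
```

The two manifold-specific ingredients are the tangent-space projection and the retraction; accelerated schemes like the one in this entry add momentum on top of this basic loop.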
arXiv Detail & Related papers (2020-10-18T02:48:22Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying non-convex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to efficiently escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
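The role of perturbations noted in this entry can be demonstrated with a toy perturbed gradient descent (a generic illustration of the escape-from-saddle-points idea, not any specific paper's algorithm; the test function and all names are made up): f(x, y) = (x² − 1)² + y² has a strict saddle at the origin, where plain gradient descent stalls, while a small random perturbation lets descent escape to a minimum at (±1, 0).

```python
import numpy as np

def grad_f(p):
    """Gradient of f(x, y) = (x**2 - 1)**2 + y**2, which has a strict
    saddle at (0, 0) and minima at (+-1, 0)."""
    x, y = p
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

def perturbed_gd(p0, lr=0.05, steps=500, noise=1e-2, seed=0):
    """Gradient descent that injects a small random perturbation whenever
    the gradient is nearly zero (i.e., near a stationary point)."""
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        g = grad_f(p)
        if np.linalg.norm(g) < 1e-3:            # stuck near a stationary point
            p = p + noise * rng.normal(size=2)  # perturb to break symmetry
            g = grad_f(p)
        p = p - lr * g
    return p
```

With `noise=0.0` the iterate started at the origin never moves (the gradient there is exactly zero); with the perturbation enabled it drifts off the saddle and descends to one of the two minima.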
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.