Polynomial Precision Dependence Solutions to Alignment Research Center
Matrix Completion Problems
- URL: http://arxiv.org/abs/2401.03999v1
- Date: Mon, 8 Jan 2024 16:25:45 GMT
- Title: Polynomial Precision Dependence Solutions to Alignment Research Center
Matrix Completion Problems
- Authors: Rico Angell
- Abstract summary: The motivation for these problems is to enable efficient dependence on $\varepsilon$. Our solutions involve reframing the matrix completion problems as a semidefinite program (SDP).
- Score: 3.796436257221663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present solutions to the matrix completion problems proposed by the
Alignment Research Center that have a polynomial dependence on the precision
$\varepsilon$. The motivation for these problems is to enable efficient
computation of heuristic estimators to formally evaluate and reason about
different quantities of deep neural networks in the interest of AI alignment.
Our solutions involve reframing the matrix completion problems as a
semidefinite program (SDP) and using recent advances in spectral bundle methods
for fast, efficient, and scalable SDP solving.
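The abstract describes recasting matrix completion as an SDP and solving it with spectral bundle methods. As a rough illustration of the underlying convex-relaxation idea only (not the paper's algorithm), the sketch below completes a synthetic low-rank matrix with singular value thresholding, a standard proxy for the nuclear-norm objective that such SDP liftings encode; all sizes and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-2 ground truth with roughly 60% of entries observed.
n, r = 20, 2
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6

def svt_complete(X_obs, mask, tau=100.0, step=1.5, iters=500):
    """Singular value thresholding: a simple proxy for
    nuclear-norm (SDP-representable) matrix completion."""
    Y = np.zeros_like(X_obs)
    M = np.zeros_like(X_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        M = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += step * mask * (X_obs - M)            # ascent step on observed entries
    return M

M_hat = svt_complete(X_true * mask, mask)
rel_err = np.linalg.norm(M_hat - X_true) / np.linalg.norm(X_true)
```

The paper's contribution is orthogonal to this toy scheme: it targets the SDP formulation itself and uses spectral bundle solvers to obtain polynomial rather than exponential dependence on the precision $\varepsilon$.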
Related papers
- Model-Based and Sample-Efficient AI-Assisted Math Discovery in Sphere Packing [51.30643063554434]
A leading technique for upper bounds, the three-point method, reduces the problem to solving large, high-precision semidefinite programs (SDPs). We formulate SDP construction as a sequential decision process, the SDP game, in which a policy assembles SDP formulations from a set of admissible components. We obtain new state-of-the-art upper bounds in dimensions $4$-$16$, showing that model-based search can advance computational progress in longstanding geometric problems.
arXiv Detail & Related papers (2025-12-04T14:11:52Z) - Provably data-driven projection method for quadratic programming [18.20955065779317]
We analyze the generalization guarantee of data-driven projection matrix learning for convex quadratic programs (QPs). We demonstrate that the solutions of convex QPs can be localized within a feasible region corresponding to a special active set. We then extend our analysis to other settings, including learning to match the optimal solution and the input-aware setting.
arXiv Detail & Related papers (2025-09-03T14:44:25Z) - High precision PINNs in unbounded domains: application to singularity formulation in PDEs [83.50980325611066]
We study the choices of neural network ansatz, sampling strategy, and optimization algorithm. For the 1D Burgers equation, our framework can lead to a solution with very high precision. For the 2D Boussinesq equation, we obtain a solution whose loss is $4$ digits smaller than that obtained in [wang2023asymptotic], with fewer training steps.
arXiv Detail & Related papers (2025-06-24T02:01:44Z) - Matrix Completion with Convex Optimization and Column Subset Selection [0.0]
We present two algorithms that implement our Columns Selected Matrix Completion (CSMC) method.
To study the influence of the matrix size, rank, and the proportion of missing elements on the quality of the solution and the computation time, we performed experiments on synthetic data.
Our thorough analysis shows that CSMC provides solutions of comparable quality to matrix completion algorithms that are based on convex optimization.
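As a loose sketch of the column-subset idea (under illustrative assumptions, not the authors' CSMC implementation): if a few columns of a low-rank matrix are fully observed, each remaining column can be filled by least-squares regression of its observed entries onto those selected columns.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 12, 3
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix

# Assume the first k columns were selected and are fully observed;
# the remaining columns are only half observed.
k = r + 1
C = X[:, :k]                                # selected (complete) columns
mask = np.ones((m, n), dtype=bool)
mask[:, k:] = rng.random((m, n - k)) < 0.5

X_hat = np.where(mask, X, 0.0)
for j in range(k, n):
    obs = mask[:, j]
    # Express column j in the span of the selected columns using
    # only its observed entries, then fill in the missing ones.
    w, *_ = np.linalg.lstsq(C[obs], X[obs, j], rcond=None)
    X_hat[~obs, j] = C[~obs] @ w

rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

Because the matrix here is exactly rank $r$ and the selected columns span its column space, this recovers the missing entries essentially exactly; the CSMC method handles the realistic setting where selected columns are themselves incomplete via convex optimization.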
arXiv Detail & Related papers (2024-03-04T10:36:06Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA).
It consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
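A hedged sketch of the decomposition this blurb describes, using a simple alternating scheme in the spirit of nonconvex RPCA (not the paper's unrolled network; all parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 50, 2

# Ground truth: low-rank L0 plus a sparse matrix S0 of large spikes.
L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S0 = np.zeros((n, n))
spikes = rng.random((n, n)) < 0.02
S0[spikes] = 10.0 * rng.choice([-1.0, 1.0], size=int(spikes.sum()))
X = L0 + S0

L = np.zeros_like(X)
for _ in range(20):
    # Sparse step: keep only large-magnitude residual entries.
    R = X - L
    S = np.where(np.abs(R) > 5.0, R, 0.0)
    # Low-rank step: best rank-r approximation of the remainder.
    U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
```

Deep unrolling replaces the hand-tuned threshold and rank choices of such iterations with learned, layer-wise parameters.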
arXiv Detail & Related papers (2023-07-12T03:48:26Z) - A Block-Coordinate Approach of Multi-level Optimization with an
Application to Physics-Informed Neural Networks [0.0]
We propose a multi-level algorithm for the solution of nonlinear optimization problems and analyze its evaluation complexity.
We apply it to the solution of partial differential equations using physics-informed neural networks (PINNs) and show on a few test problems that the approach results in better solutions and significant computational savings.
arXiv Detail & Related papers (2023-05-23T19:12:02Z) - On the Global Solution of Soft k-Means [159.23423824953412]
This paper presents an algorithm to solve the Soft k-Means problem globally.
A new model, named Minimal Volume Soft kMeans (MVSkM), is proposed to address the solution non-uniqueness issue.
arXiv Detail & Related papers (2022-12-07T12:06:55Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for TMDPs, obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - AI-enhanced iterative solvers for accelerating the solution of large
scale parametrized linear systems of equations [0.0]
This paper exploits up-to-date ML tools to deliver customized iterative solvers for linear systems of equations.
The results indicate their superiority over conventional iterative solution schemes.
arXiv Detail & Related papers (2022-07-06T09:47:14Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep
Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z) - A Hybrid Quantum-Classical Heuristic to solve large-scale Integer Linear
Programs [0.4925222726301578]
We present a method that integrates any quantum algorithm capable of finding solutions to integer linear programs into the Branch-and-Price algorithm.
The role of the quantum algorithm is to find integer solutions to subproblems appearing in Branch-and-Price.
arXiv Detail & Related papers (2021-03-29T08:59:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.