A User Manual for cuHALLaR: A GPU Accelerated Low-Rank Semidefinite Programming Solver
- URL: http://arxiv.org/abs/2508.15951v1
- Date: Thu, 21 Aug 2025 20:45:01 GMT
- Title: A User Manual for cuHALLaR: A GPU Accelerated Low-Rank Semidefinite Programming Solver
- Authors: Jacob Aguirre, Diego Cifuentes, Vincent Guigues, Renato D. C. Monteiro, Victor Hugo Nascimento, Arnesh Sujanani
- Abstract summary: We present a Julia-based interface to the precompiled HALLaR and cuHALLaR binaries for large-scale semidefinite programs (SDPs). Both solvers are established as fast and numerically stable, and accept problem data in formats compatible with SDPA. A collection of example problems is included, among them the SDP relaxations of the Matrix Completion and Maximum Stable Set problems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a Julia-based interface to the precompiled HALLaR and cuHALLaR binaries for large-scale semidefinite programs (SDPs). Both solvers are established as fast and numerically stable, and accept problem data in formats compatible with SDPA, as well as a new enhanced data format that takes advantage of Hybrid Sparse Low-Rank (HSLR) structure. The interface allows users to load custom data files, configure solver options, and execute experiments directly from Julia. A collection of example problems is included, among them the SDP relaxations of the Matrix Completion and Maximum Stable Set problems.
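As background on why Hybrid Sparse Low-Rank structure matters, the sketch below is illustrative only: the storage layout and function names are assumptions, not cuHALLaR's actual data format. It shows the core payoff of representing a matrix as C = S + B Bᵀ (sparse S plus a tall-thin low-rank factor B): matrix-vector products cost O(nnz(S) + n·r) instead of O(n²), without ever materializing the dense matrix.

```python
import numpy as np
from scipy.sparse import random as sprandom

# Hypothetical sketch of a Hybrid Sparse Low-Rank (HSLR) representation:
# the matrix C = S + B @ B.T is kept as a sparse part S and a low-rank
# factor B (n-by-r), so C itself is never formed densely.
rng = np.random.default_rng(0)
n, r = 1000, 3
S = sprandom(n, n, density=0.01, random_state=0, format="csr")
B = rng.standard_normal((n, r))

def hslr_matvec(S, B, v):
    """Apply C = S + B B^T to v in O(nnz(S) + n*r) time."""
    return S @ v + B @ (B.T @ v)

v = rng.standard_normal(n)
dense = S.toarray() + B @ B.T          # only for verification
assert np.allclose(hslr_matvec(S, B, v), dense @ v)
```

The same trick extends to the constraint maps of an SDP: each operation touches only the sparse entries and the low-rank factors, which is what makes GPU acceleration of large instances feasible.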
Related papers
- Fast and Exact Least Absolute Deviations Line Fitting via Piecewise Affine Lower-Bounding [0.509780930114934]
Least-absolute-deviations (LAD) line fitting is robust to outliers but computationally more involved than least-squares regression. We propose the Piecewise Affine Lower-Bounding (PALB) method, an exact algorithm for LAD line fitting. It is consistently faster than publicly available implementations of LP-based and IRLS-based solvers.
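For context, LAD line fitting admits a standard linear-programming formulation, the kind of LP-based baseline the paper compares against. The sketch below uses SciPy's `linprog` and is not the PALB algorithm itself; the data is made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# LP formulation of LAD line fitting:
#   minimize sum_i t_i   s.t.   |a*x_i + b - y_i| <= t_i
# The last data point is an off-line outlier in y.
x = np.array([0.0, 1.0, 2.0, 3.0, 1.5])
y = np.array([1.0, 3.0, 5.0, 7.0, 100.0])

m = len(x)
# variable order: [a, b, t_1, ..., t_m]
c = np.concatenate([[0.0, 0.0], np.ones(m)])
A_ub = np.block([
    [x[:, None],  np.ones((m, 1)), -np.eye(m)],   #   a*x + b - y <= t
    [-x[:, None], -np.ones((m, 1)), -np.eye(m)],  # -(a*x + b - y) <= t
])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None)] + [(0, None)] * m)
a_fit, b_fit = res.x[0], res.x[1]
# the four collinear points dominate: the fit is y = 2x + 1, ignoring the outlier
```

Interior-point or simplex solvers handle this LP exactly, but at a cost that motivates specialized exact methods such as PALB for large point sets.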
arXiv Detail & Related papers (2025-12-22T10:18:38Z) - PrediPrune: Reducing Verification Overhead in Souper with Machine Learning Driven Pruning [0.8295385180806493]
Souper is a powerful enumerative superoptimizer that enhances the runtime performance of programs. Its verification process relies on a computationally expensive SMT solver to validate optimization candidates. We propose PrediPrune, a candidate pruning strategy that effectively reduces the number of invalid candidates passed to the solver.
arXiv Detail & Related papers (2025-09-20T02:00:49Z) - Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights [75.83625828306839]
Drag-and-Drop LLMs (DnD) eliminates per-task training by mapping a handful of unlabeled task prompts directly to LoRA weight updates. A lightweight text encoder distills each prompt batch into condition embeddings, which are then transformed by a cascaded hyper-convolutional decoder into the full set of LoRA matrices.
arXiv Detail & Related papers (2025-06-19T15:38:21Z) - Efficient Multi-Instance Generation with Janus-Pro-Dirven Prompt Parsing [53.295515505026096]
Janus-Pro-driven Prompt Parsing is a prompt-parsing module that bridges text understanding and layout generation. MIGLoRA is a parameter-efficient plug-in integrating Low-Rank Adaptation into UNet (SD1.5) and DiT (SD3) backbones. The proposed method achieves state-of-the-art performance on COCO and LVIS benchmarks while maintaining parameter efficiency.
arXiv Detail & Related papers (2025-03-27T00:59:14Z) - LLMs for Cold-Start Cutting Plane Separator Configuration [19.931643536607737]
Mixed-integer linear programming solvers ship with a staggering number of parameters that are challenging to select a priori for all but expert optimization users. Existing machine learning approaches to configuring solvers require training ML models by solving thousands of related MILP instances, generalize poorly to new problem sizes, and often require implementing complex ML pipelines and custom solver interfaces. We present a new LLM-based framework that configures which cutting plane separators to use for a given MILP problem, with little to no training data, based on characteristics of the instance.
arXiv Detail & Related papers (2024-12-16T18:03:57Z) - Suboptimality bounds for trace-bounded SDPs enable a faster and scalable low-rank SDP solver SDPLR+ [3.7507283158673212]
Semidefinite programs (SDPs) are powerful tools with many applications in machine learning and data science.
Solving SDPs is challenging because, by default, the positive semidefinite decision variable is a dense $n \times n$ matrix.
Two decades ago, Burer and Monteiro developed an SDP solver that optimized over a low-rank factorization instead of the full matrix.
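The Burer-Monteiro substitution can be illustrated on the MaxCut SDP relaxation, max ⟨L/4, X⟩ subject to diag(X) = 1 and X ⪰ 0: writing X = R Rᵀ with R of width r ≪ n replaces the semidefinite constraint by unit-norm rows of R. The toy projected-gradient loop below is a sketch of that idea, not the SDPLR+ algorithm.

```python
import numpy as np

# Toy Burer-Monteiro factorization for the MaxCut SDP relaxation.
rng = np.random.default_rng(1)
n, r = 20, 4
A = rng.random((n, n)) < 0.3
W = np.triu(A, 1).astype(float)
W = W + W.T                       # random graph adjacency matrix
L = np.diag(W.sum(axis=1)) - W    # graph Laplacian (positive semidefinite)

R = rng.standard_normal((n, r))
R /= np.linalg.norm(R, axis=1, keepdims=True)   # enforce diag(R R^T) = 1
for _ in range(500):
    G = 2 * (L / 4) @ R           # gradient of <L/4, R R^T> with respect to R
    R += 0.01 * G                 # ascent step on the low-rank factor
    R /= np.linalg.norm(R, axis=1, keepdims=True)  # project rows back

X = R @ R.T                       # feasible: psd by construction, unit diagonal
objective = np.sum(L / 4 * X)     # = trace(L X) / 4 >= 0
```

The point of the substitution is that only the n·r entries of R are ever stored or updated, which is what makes low-rank SDP solvers scale.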
arXiv Detail & Related papers (2024-06-14T20:31:22Z) - Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations [92.1840862558718]
In practical distributed systems, workers are typically not homogeneous and can have highly varying processing times.
We introduce a new parallel method, Freya, to handle arbitrarily slow computations.
We show that Freya offers significantly improved complexity guarantees compared to all previous methods.
arXiv Detail & Related papers (2024-05-24T13:33:30Z) - BosonSampling.jl: A Julia package for quantum multi-photon interferometry [0.0]
We present a free open source package for high performance simulation and numerical investigation of boson samplers and, more generally, multi-photon interferometry.
Our package is written in Julia, allowing C-like performance with easy notations and fast, high-level coding.
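For background, boson-sampling outcome probabilities involve matrix permanents of submatrices of the interferometer's unitary. The minimal Python sketch of Ryser's formula below is illustrative only and is unrelated to BosonSampling.jl's actual Julia implementation.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Permanent of a square matrix via Ryser's formula:
    Perm(A) = (-1)^n * sum over nonempty S of (-1)^|S| * prod_i sum_{j in S} a_ij.
    Exponential-time, but far cheaper than the naive n! expansion."""
    n = A.shape[0]
    total = 0.0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            rowsums = A[:, list(S)].sum(axis=1)
            total += (-1) ** size * np.prod(rowsums)
    return (-1) ** n * total

# sanity checks: permanent of the all-ones 3x3 matrix is 3! = 6,
# and of the identity is 1
assert np.isclose(permanent(np.ones((3, 3))), 6.0)
assert np.isclose(permanent(np.eye(3)), 1.0)
```

Because exact permanents are #P-hard, high-performance implementations of exactly this kernel are the computational heart of boson-sampler simulation.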
arXiv Detail & Related papers (2022-12-19T15:28:23Z) - Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piecewise-linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
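The piecewise-linear value functions that SDDP builds are maxima over collections of affine cuts, V(x) ≈ max_k (α_k + β_kᵀx). The sketch below uses made-up cut data purely to illustrate that object, not the paper's neural model.

```python
import numpy as np

# A value function represented from below by affine cuts, as in SDDP.
cuts = [
    (0.0, np.array([1.0])),    # V(x) >= 0 + 1*x
    (2.0, np.array([-1.0])),   # V(x) >= 2 - 1*x
]

def value_lower_bound(x):
    """Pointwise maximum over the stored cuts."""
    return max(alpha + beta @ x for alpha, beta in cuts)

# the two cuts cross at x = 1, where the bound equals 1
```

Classical SDDP grows this cut collection by solving many subproblems per instance; the neural variant instead learns to predict the piecewise-linear function directly.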
arXiv Detail & Related papers (2021-12-01T22:55:23Z) - Square Root Bundle Adjustment for Large-Scale Reconstruction [56.44094187152862]
We propose a new formulation for the bundle adjustment problem which relies on nullspace marginalization of landmark variables by QR decomposition.
Our approach, which we call square root bundle adjustment, is algebraically equivalent to the commonly used Schur complement trick.
We show in real-world experiments with the BAL datasets that even in single precision the proposed solver achieves on average equally accurate solutions.
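Nullspace marginalization by QR can be sketched on a toy linearized system J_l Δl + J_p Δp ≈ r: a full QR of the landmark Jacobian J_l splits the residual space, and the bottom block yields a reduced least-squares problem in the pose variables alone. The dimensions and variable names below are illustrative, not the paper's implementation.

```python
import numpy as np

# Toy nullspace marginalization of landmark variables via QR.
rng = np.random.default_rng(2)
m, nl, npose = 12, 3, 2              # residuals, landmark dims, pose dims
Jl = rng.standard_normal((m, nl))    # landmark Jacobian block
Jp = rng.standard_normal((m, npose)) # pose Jacobian block
r = rng.standard_normal(m)

Q, _ = np.linalg.qr(Jl, mode="complete")
Q2 = Q[:, nl:]                       # orthonormal basis of nullspace of Jl^T
# reduced least-squares problem involving only the pose increment:
dp, *_ = np.linalg.lstsq(Q2.T @ Jp, Q2.T @ r, rcond=None)

# the pose solution matches the full joint least-squares solve
full, *_ = np.linalg.lstsq(np.hstack([Jl, Jp]), r, rcond=None)
assert np.allclose(dp, full[nl:])
```

Working with orthogonal factors rather than the normal equations (the Schur complement) is exactly what buys the extra numerical headroom in single precision.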
arXiv Detail & Related papers (2021-03-02T16:26:20Z) - HeAT -- a Distributed and GPU-accelerated Tensor Framework for Data Analytics [0.0]
HeAT is an array-based numerical programming framework for large-scale parallel processing with an easy-to-use NumPy-like API.
HeAT utilizes PyTorch as a node-local eager execution engine and distributes the workload on arbitrarily large high-performance computing systems via MPI.
When compared to similar frameworks, HeAT achieves speedups of up to two orders of magnitude.
arXiv Detail & Related papers (2020-07-27T13:33:17Z) - Picasso: A Sparse Learning Library for High Dimensional Data Analysis in R and Python [77.33905890197269]
We describe a new library which implements a unified pathwise coordinate optimization for a variety of sparse learning problems.
The library is coded in C++ and provides user-friendly R and Python interfaces for sparse learning experiments.
arXiv Detail & Related papers (2020-06-27T02:39:24Z)
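Pathwise coordinate optimization can be illustrated with a plain coordinate-descent lasso solver; this standalone Python sketch is a generic instance of the algorithmic family, not Picasso's actual API.

```python
import numpy as np

# Coordinate descent for the lasso:
#   min_w  (0.5 / n) * ||y - X w||^2 + lam * ||w||_1
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iters=200):
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n        # per-coordinate curvature
    for _ in range(n_iters):
        for j in range(d):
            # residual with feature j's contribution removed
            resid = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ resid / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
true_w = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ true_w
w = lasso_cd(X, y, lam=0.1)
# the estimate is sparse, with large weights concentrated on features 0 and 3
```

A pathwise solver, as in Picasso, would warm-start this routine along a decreasing sequence of `lam` values, which is both faster and more stable than solving each regularization level from scratch.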
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.