Dataless Quadratic Neural Networks for the Maximum Independent Set Problem
- URL: http://arxiv.org/abs/2406.19532v1
- Date: Thu, 27 Jun 2024 21:12:48 GMT
- Title: Dataless Quadratic Neural Networks for the Maximum Independent Set Problem
- Authors: Ismail Alkhouri, Cedric Le Denmat, Yingjie Li, Cunxi Yu, Jia Liu, Rongrong Wang, Alvaro Velasquez
- Abstract summary: This paper introduces a novel dataless quadratic neural network formulation, featuring a continuous quadratic relaxation for the Maximum Independent Set (MIS) problem.
Our method eliminates the need for training data by treating the given MIS instance as a trainable entity.
By employing a gradient-based optimization algorithm like ADAM and leveraging an efficient off-the-shelf GPU parallel implementation, our approach demonstrates competitive or superior performance compared to state-of-the-art learning-based methods.
- Score: 23.643727259409744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combinatorial Optimization (CO) plays a crucial role in addressing various significant problems, among them the challenging Maximum Independent Set (MIS) problem. In light of recent advancements in deep learning methods, efforts have been directed towards leveraging data-driven learning approaches, typically rooted in supervised learning and reinforcement learning, to tackle the NP-hard MIS problem. However, these approaches rely on labeled datasets, exhibit weak generalization, and often depend on problem-specific heuristics. Recently, ReLU-based dataless neural networks were introduced to address combinatorial optimization problems. This paper introduces a novel dataless quadratic neural network formulation, featuring a continuous quadratic relaxation for the MIS problem. Notably, our method eliminates the need for training data by treating the given MIS instance as a trainable entity. More specifically, the graph structure and constraints of the MIS instance are used to define the structure and parameters of the neural network such that training it on a fixed input provides a solution to the problem, thereby setting it apart from traditional supervised or reinforcement learning approaches. By employing a gradient-based optimization algorithm like ADAM and leveraging an efficient off-the-shelf GPU parallel implementation, our straightforward yet effective approach demonstrates competitive or superior performance compared to state-of-the-art learning-based methods. Another significant advantage of our approach is that, unlike exact and heuristic solvers, the running time of our method scales only with the number of nodes in the graph, not the number of edges.
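The abstract does not include code, so the sketch below only illustrates the general idea of treating a single MIS instance as the trainable object: a standard quadratic-penalty relaxation of MIS, minimize -&sum;_i x_i + &gamma; &sum;_{(i,j) in E} x_i x_j over x in [0,1]^n, optimized with ADAM and then rounded to a binary set. The exact objective, the sigmoid parameterization, the penalty weight gamma, the greedy repair step, and the helper name `mis_quadratic_relaxation` are illustrative assumptions, not the authors' quadratic network or released implementation.

```python
# A minimal sketch, assuming a generic quadratic-penalty relaxation of MIS
# (not the paper's exact dataless quadratic network or released code):
#   minimize  -sum_i x_i + gamma * sum_{(i,j) in E} x_i * x_j   over x in [0,1]^n,
# trained with ADAM on a single fixed instance and rounded to a binary set.
import torch


def mis_quadratic_relaxation(n, edges, gamma=2.0, steps=2000, lr=0.1, seed=0):
    """Hypothetical helper: returns an independent set of an n-node graph."""
    torch.manual_seed(seed)
    theta = torch.nn.Parameter(0.01 * torch.randn(n))  # trainable logits
    src = torch.tensor([i for i, j in edges])
    dst = torch.tensor([j for i, j in edges])
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        x = torch.sigmoid(theta)                 # relaxed assignment in [0, 1]
        loss = -x.sum() + gamma * (x[src] * x[dst]).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    x = torch.sigmoid(theta) > 0.5               # round to a binary assignment
    for i, j in edges:                           # greedy repair of violations
        if x[i] and x[j]:
            x[j] = False
    return sorted(x.nonzero(as_tuple=True)[0].tolist())


# Example: the 4-cycle 0-1-2-3-0 has a maximum independent set of size 2.
print(mis_quadratic_relaxation(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```

Note that this sketch sums the penalty over edges, so its per-step cost grows with the number of edges; the node-only runtime scaling claimed in the abstract comes from the paper's specific formulation and GPU implementation, which are not reproduced here.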
Related papers
- QAOA Parameter Transferability for Maximum Independent Set using Graph Attention Networks [1.4660766608996791]
The quantum approximate optimization algorithm (QAOA) is one of the promising hybrid variational approaches in quantum computing for solving combinatorial optimization problems.
In this work, we propose a QAOA parameter transfer scheme using Graph Attention Networks (GAT) to solve Maximum Independent Set (MIS) problems.
We also design a distributed resource-aware algorithm for MIS (HyDRAMIS) which decomposes large problems into smaller problems.
arXiv Detail & Related papers (2025-04-29T19:41:05Z) - A Linearized Alternating Direction Multiplier Method for Federated Matrix Completion Problems [2.2217927229805032]
Matrix completion is fundamental for predicting missing data, with a wide range of applications in personalized healthcare, e-commerce, recommendation systems, and social network analysis.
Traditional matrix completion approaches typically assume centralized data storage.
We propose FedMC-ADMM for solving federated matrix completion problems.
arXiv Detail & Related papers (2025-03-17T01:57:06Z) - Learning to Match Unpaired Data with Minimum Entropy Coupling [7.399561232927219]
Minimum Entropy Coupling (MEC) seeks to minimize the joint entropy while satisfying constraints on the marginals.
We propose a novel method to solve the continuous MEC problem, using well-known generative diffusion models.
We empirically demonstrate that our method, DDMEC, is general and can be easily used to address challenging tasks.
arXiv Detail & Related papers (2025-03-11T14:54:14Z) - On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z) - Preference-Based Multi-Agent Reinforcement Learning: Data Coverage and Algorithmic Techniques [65.55451717632317]
We study Preference-Based Multi-Agent Reinforcement Learning (PbMARL).
We identify the Nash equilibrium from a preference-only offline dataset in general-sum games.
Our findings underscore the multifaceted approach required for PbMARL.
arXiv Detail & Related papers (2024-09-01T13:14:41Z) - From Maximum Cut to Maximum Independent Set [7.250073177017239]
It has long been known that the Maximum Independent Set (MIS) problem could also be related to a specific Ising model.
It turns out that this strategy greatly improves the approximation for the independence number of random Erdős-Rényi graphs.
It also exhibits perfect performance on a benchmark arising from coding theory.
arXiv Detail & Related papers (2024-08-13T09:33:41Z) - Efficient k-means with Individual Fairness via Exponential Tilting [13.72284501663574]
In location-based resource allocation scenarios, individually fair clustering is often employed to achieve the principle of treating all points equally.
This paper proposes a novel algorithm, tilted k-means (TKM), aiming to achieve individual fairness in clustering.
arXiv Detail & Related papers (2024-06-24T11:50:31Z) - An Efficient Difference-of-Convex Solver for Privacy Funnel [3.069335774032178]
We propose an efficient solver for the privacy funnel (PF) method.
The proposed DC separation results in a closed-form update equation.
We evaluate the proposed solver on the MNIST and Fashion-MNIST datasets.
arXiv Detail & Related papers (2024-03-02T01:05:25Z) - Maximum Independent Set: Self-Training through Dynamic Programming [56.670639478539485]
This work presents a graph neural network (GNN) framework for solving the maximum independent set (MIS) problem, inspired by dynamic programming (DP).
We propose a DP-like recursive algorithm based on GNNs that first constructs two smaller sub-graphs, predicts the one with the larger MIS, and then uses it in the next recursive call.
Annotating comparisons of different graphs with respect to their MIS size yields a self-training process in which better predictions produce more accurate self-annotated comparisons, and vice versa.
arXiv Detail & Related papers (2023-10-28T10:58:25Z) - Solving optimization problems with local light shift encoding on Rydberg quantum annealers [0.0]
We provide a non-unit disk framework to solve optimization problems on a Rydberg quantum annealer.
Our setup consists of a many-body interacting Rydberg system where locally controllable light shifts are applied to individual qubits.
Our numerical simulations implement the local-detuning protocol while globally driving the Rydberg annealer to the desired many-body ground state.
arXiv Detail & Related papers (2023-08-15T14:24:45Z) - On the Global Solution of Soft k-Means [159.23423824953412]
This paper presents an algorithm to solve the Soft k-Means problem globally.
A new model, named Minimal Volume Soft k-Means (MVSkM), is proposed to address the non-uniqueness issue of solutions.
arXiv Detail & Related papers (2022-12-07T12:06:55Z) - Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z) - Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets [101.5329678997916]
We study episodic two-player zero-sum Markov games (MGs) in the offline setting.
The goal is to find an approximate Nash equilibrium (NE) policy pair based on a dataset collected a priori.
arXiv Detail & Related papers (2022-02-15T15:39:30Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Learning What to Defer for Maximum Independent Sets [84.00112106334655]
We propose a novel DRL scheme, coined learning what to defer (LwD), where the agent adaptively shrinks or stretches the number of stages by learning to distribute the element-wise decisions of the solution at each stage.
We apply the proposed framework to the maximum independent set (MIS) problem, and demonstrate its significant improvement over the current state-of-the-art DRL scheme.
arXiv Detail & Related papers (2020-06-17T02:19:31Z) - Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information theoretic minimax lower bounds for this problem in terms of $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay (a popular reinforcement learning technique) that achieves a significantly better error rate.
arXiv Detail & Related papers (2020-06-16T04:26:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.