Linear Regression without Correspondences via Concave Minimization
- URL: http://arxiv.org/abs/2003.07706v2
- Date: Sat, 12 Sep 2020 05:35:45 GMT
- Title: Linear Regression without Correspondences via Concave Minimization
- Authors: Liangzu Peng and Manolis C. Tsakiris
- Abstract summary: A signal is recovered in a linear regression setting without correspondences.
The associated maximum likelihood function is NP-hard to compute when the signal has dimension larger than one.
We reformulate it as a concave minimization problem, which we solve via branch-and-bound.
The resulting algorithm outperforms state-of-the-art methods for fully shuffled data and remains tractable for up to $8$-dimensional signals.
- Score: 24.823689223437917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear regression without correspondences concerns the recovery of a signal
in the linear regression setting, where the correspondences between the
observations and the linear functionals are unknown. The associated maximum
likelihood function is NP-hard to compute when the signal has dimension larger
than one. To optimize this objective function we reformulate it as a concave
minimization problem, which we solve via branch-and-bound. This is supported by
a computable search space to branch, an effective lower bounding scheme via
convex envelope minimization and a refined upper bound, all naturally arising
from the concave minimization reformulation. The resulting algorithm
outperforms state-of-the-art methods for fully shuffled data and remains
tractable for up to $8$-dimensional signals, an untouched regime in prior work.
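To make the objective concrete, here is a minimal brute-force sketch of the maximum-likelihood problem $\min_{\Pi, x} \|y - \Pi A x\|^2$ over permutation matrices $\Pi$; this factorial search is exactly what the paper's branch-and-bound avoids, and all names below are illustrative, not the authors' code.

```python
import itertools
import numpy as np

def brute_force_unlabeled_ls(A, y):
    """Exhaustively minimize ||y - P A x||^2 over all n! permutations P.
    Tiny n only: this is the factorial search that branch-and-bound avoids."""
    best_val, best_x = np.inf, None
    for perm in itertools.permutations(range(A.shape[0])):
        PA = A[list(perm), :]                       # rows of A reordered by P
        x, *_ = np.linalg.lstsq(PA, y, rcond=None)  # best x for this matching
        val = np.sum((y - PA @ x) ** 2)
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# Tiny demo: shuffle noiseless observations, then recover the signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
x_true = rng.standard_normal(2)
y = rng.permutation(A @ x_true)      # fully shuffled data
val, x_hat = brute_force_unlabeled_ls(A, y)
print(val, x_hat, x_true)            # val ~ 0 and x_hat ~ x_true
```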
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed method avoids the need for computationally expensive spectral initialization, using a simple gradient step that is robust to outliers.
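A hedged sketch of one such gradient step, assuming the standard amplitude-based loss $f(x) = \frac{1}{2m}\sum_i (|a_i^\top x| - y_i)^2$; the paper's exact robust update may differ.

```python
import numpy as np

def amplitude_flow_step(x, A, y, lr=0.1):
    """One gradient step on f(x) = (1/2m) * sum_i (|a_i^T x| - y_i)^2,
    a common amplitude-based phase retrieval loss (illustrative only)."""
    m = len(y)
    z = A @ x
    grad = A.T @ ((np.abs(z) - y) * np.sign(z)) / m
    return x - lr * grad
```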
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
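A minimal sketch of a lookahead-style linear interpolation update, assuming an inner optimizer exposed as a hypothetical `inner_step` callable; the paper's precise scheme may differ.

```python
def linear_interpolation_update(theta_slow, inner_step, alpha=0.5, k=5):
    """Lookahead-style update: run k inner optimizer steps, then move only
    part of the way toward the result. The convex combination acts like an
    averaged (nonexpansive) operator, which is what stabilizes training.
    `inner_step` is a hypothetical callable mapping parameters to parameters."""
    theta_fast = theta_slow
    for _ in range(k):
        theta_fast = inner_step(theta_fast)
    return (1 - alpha) * theta_slow + alpha * theta_fast
```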
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning [53.97335841137496]
We propose an oracle-efficient algorithm, dubbed Pessimistic Nonlinear Least-Squares Value Iteration (PNLSVI), for offline RL with non-linear function approximation.
Our algorithm enjoys a regret bound that has a tight dependency on the function class complexity and achieves minimax optimal instance-dependent regret when specialized to linear function approximation.
arXiv Detail & Related papers (2023-10-02T17:42:01Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
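For context, the classical exact Newton/SQP step that such methods approximate solves the KKT system sketched below; the cited paper's contribution is to replace this exact solve with randomized iterative sketching. Function names here are illustrative.

```python
import numpy as np

def newton_kkt_step(grad_f, hess_lagr, c, jac_c, x, lam):
    """Exact Newton/SQP step for: min f(x) subject to c(x) = 0.
    Solves the KKT system
        [H  J^T] [dx  ]     [grad_f(x) + J^T lam]
        [J   0 ] [dlam] = - [c(x)               ]
    The cited paper replaces this exact solve with randomized iterative
    sketching; this is only the classical baseline it starts from."""
    H, J = hess_lagr(x, lam), jac_c(x)
    m = J.shape[0]
    KKT = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, c(x)])
    step = np.linalg.solve(KKT, rhs)
    return x + step[:len(x)], lam + step[len(x):]
```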
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Shuffled linear regression through graduated convex relaxation [12.614901374282868]
The shuffled linear regression problem aims to recover linear relationships in datasets where the correspondence between input and output is unknown.
This problem arises in a wide range of applications including survey data.
We propose a novel optimization algorithm for shuffled linear regression based on a posterior-maximizing objective function.
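A hedged sketch of one graduated-relaxation strategy, where the unknown permutation is relaxed to a doubly stochastic matrix via Sinkhorn normalization and hardened as a temperature parameter shrinks; the cited paper's objective and schedule may differ.

```python
import numpy as np

def sinkhorn(C, tau, iters=50):
    """Soft matching P proportional to exp(-C/tau), normalized toward
    a doubly stochastic matrix by alternating row/column scaling."""
    P = np.exp(-C / tau)
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)
        P /= P.sum(axis=0, keepdims=True)
    return P

def graduated_shuffled_ls(A, y, taus=(1.0, 0.3, 0.1, 0.03), inner=20):
    """Alternate least-squares fits with a soft correspondence matrix that
    hardens toward a permutation as the temperature tau decreases."""
    n = A.shape[0]
    P = np.full((n, n), 1.0 / n)          # start from the uniform matching
    x = np.zeros(A.shape[1])
    for tau in taus:
        for _ in range(inner):
            x, *_ = np.linalg.lstsq(P @ A, y, rcond=None)
            C = (y[:, None] - (A @ x)[None, :]) ** 2   # assignment costs
            P = sinkhorn(C, tau)
    return x, P
```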
arXiv Detail & Related papers (2022-09-30T17:33:48Z) - Hardness and Algorithms for Robust and Sparse Optimization [17.842787715567436]
We explore algorithms and limitations for sparse optimization problems such as sparse linear regression and robust linear regression.
Specifically, the sparse linear regression problem seeks a $k$-sparse vector $x \in \mathbb{R}^d$ to minimize $\|Ax-b\|$.
The robust linear regression problem seeks a set $S$ that ignores at most $k$ rows and a vector $x$ to minimize $\|(Ax-b)_S\|$.
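A small sketch evaluating the two objectives exactly as defined above (exhaustive over supports for the sparse problem, so tiny dimensions only; names are illustrative).

```python
import itertools
import numpy as np

def sparse_lr_objective(A, b, k):
    """min over supports |S| = k of ||A_S x - b||, by exhaustive search
    (NP-hard in general, so this is feasible only for tiny d)."""
    best = np.inf
    for S in itertools.combinations(range(A.shape[1]), k):
        x, *_ = np.linalg.lstsq(A[:, list(S)], b, rcond=None)
        best = min(best, np.linalg.norm(A[:, list(S)] @ x - b))
    return best

def robust_lr_objective(A, b, x, k):
    """For a fixed x, the optimal set S keeps the n - k smallest residuals;
    returns ||(Ax - b)_S|| on that kept set."""
    r = A @ x - b
    keep = np.argsort(np.abs(r))[: len(b) - k]
    return np.linalg.norm(r[keep])
```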
arXiv Detail & Related papers (2022-06-29T01:40:38Z) - On Optimal Interpolation In Linear Regression [22.310861786709538]
We show that the optimal way to interpolate in linear regression is to use functions that are linear in the response variable.
We identify a regime where the minimum-norm interpolator provably generalizes arbitrarily worse than the optimal response-linear achievable interpolator.
We extend the notion of optimal response-linear interpolation to random features regression under a linear data-generating model.
arXiv Detail & Related papers (2021-10-21T16:37:10Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms that can align partially overlapping point sets and are invariant to the corresponding transformations, as in the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
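For the bilinear case, the convex envelope over a box is given by the classical McCormick underestimators; a minimal sketch of the resulting lower bound (illustrative, not the paper's full trilinear construction) follows.

```python
def mccormick_lower(x, y, xl, xu, yl, yu):
    """McCormick underestimators of the bilinear monomial w = x * y on the
    box [xl, xu] x [yl, yu]; their pointwise max is the convex envelope,
    the kind of lower bound used inside branch-and-bound."""
    return max(xl * y + x * yl - xl * yl,
               xu * y + x * yu - xu * yu)
```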
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of regression problem, where the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z) - Persistent Reductions in Regularized Loss Minimization for Variable
Selection [3.3504365823045035]
We show that for a broad class of loss functions it is possible to efficiently identify a subset of variables which are guaranteed to have zero coefficients.
We employ an existing algorithm for extreme ray identification, making our guarantee applicable in ultra-high-dimensional problems.
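A related classical example of such a zero-coefficient guarantee is the SAFE screening rule for the lasso (El Ghaoui et al.), sketched below; it is related in spirit to, but not identical with, the cited paper's persistent reductions.

```python
import numpy as np

def safe_screen_lasso(X, y, lam):
    """Classical SAFE rule for the lasso (1/2)||y - X b||^2 + lam * ||b||_1:
    feature j is guaranteed a zero coefficient whenever
        |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max,
    where lam_max = max_j |x_j^T y|. Returns a mask of discardable features."""
    scores = np.abs(X.T @ y)
    lam_max = scores.max()
    thresh = lam - (np.linalg.norm(X, axis=0) * np.linalg.norm(y)
                    * (lam_max - lam) / lam_max)
    return scores < thresh
```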
arXiv Detail & Related papers (2020-11-30T04:59:44Z) - Sparse Signal Reconstruction for Nonlinear Models via Piecewise Rational
Optimization [27.080837460030583]
We propose a method to reconstruct signals degraded by a nonlinear distortion and acquired at a limited sampling rate.
Our method formulates the reconstruction as the minimization of a non-convex data-fitting term plus a penalization term.
Simulations illustrate the benefits of the proposed approach.
arXiv Detail & Related papers (2020-10-29T09:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.