Blind Polynomial Regression
- URL: http://arxiv.org/abs/2210.11874v2
- Date: Mon, 24 Oct 2022 07:25:50 GMT
- Title: Blind Polynomial Regression
- Authors: Alberto Natali and Geert Leus
- Abstract summary: We formally state the (potentially partial) blind regression problem, illustrate some of its theoretical properties, and propose algorithmic approaches to solve it.
As a case-study, we apply our methods to a jitter-correction problem and corroborate its performance.
- Score: 28.35687689204994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fitting a polynomial to observed data is a ubiquitous task in many signal
processing and machine learning applications, such as interpolation and prediction. In
that context, input and output pairs are available and the goal is to find the
coefficients of the polynomial. However, in many applications, the input may be
partially known or not known at all, rendering conventional regression
approaches not applicable. In this paper, we formally state the (potentially
partial) blind regression problem, illustrate some of its theoretical
properties, and propose algorithmic approaches to solve it. As a case-study, we
apply our methods to a jitter-correction problem and corroborate its
performance.
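The abstract does not spell out the algorithms, so the following is only a hedged sketch of one natural approach to the partially blind setting: alternate between least-squares fitting of the coefficients (inputs fixed) and gradient refinement of the inputs (coefficients fixed), with a penalty tying the inputs to their jittered observations. The degree, penalty weight `lam`, and step sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a cubic polynomial sampled at jittered inputs.
coef_true = np.array([1.0, -2.0, 0.5, 0.3])            # c0 + c1 x + c2 x^2 + c3 x^3
x_true = np.sort(rng.uniform(-2, 2, 40))
x_noisy = x_true + 0.1 * rng.normal(size=x_true.size)  # jittered input observations
y = np.polynomial.polynomial.polyval(x_true, coef_true) + 0.01 * rng.normal(size=x_true.size)

def fit_coeffs(x, y, deg):
    """Ordinary least squares on the Vandermonde matrix, inputs held fixed."""
    V = np.vander(x, deg + 1, increasing=True)
    return np.linalg.lstsq(V, y, rcond=None)[0]

def refine_inputs(x, x_obs, y, coef, lam=1.0, n_grad=50, lr=1e-2):
    """Gradient steps on sum_i (y_i - p(x_i))^2 + lam * (x_i - x_obs_i)^2."""
    dcoef = np.polynomial.polynomial.polyder(coef)
    for _ in range(n_grad):
        p = np.polynomial.polynomial.polyval(x, coef)
        dp = np.polynomial.polynomial.polyval(x, dcoef)
        x = x - lr * (2 * (p - y) * dp + 2 * lam * (x - x_obs))
    return x

# Alternating minimization: coefficients <-> inputs.
x_est = x_noisy.copy()
for _ in range(20):
    coef_est = fit_coeffs(x_est, y, deg=3)
    x_est = refine_inputs(x_est, x_noisy, y, coef_est)

print("true coeffs:", coef_true)
print("est. coeffs:", np.round(coef_est, 3))
```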
Related papers
- Automated Proof of Polynomial Inequalities via Reinforcement Learning [4.246350085706678]
This paper proposes an approach based on reinforcement learning to find a Krivine-basis representation for proving polynomial inequalities.
We also implement a tool called APPIRL (Automated Proof of Polynomial Inequalities via Reinforcement Learning).
arXiv Detail & Related papers (2025-03-09T12:50:28Z)
- Learning with Exact Invariances in Polynomial Time [43.81087760363805]
We study the statistical-computational trade-offs for learning with exact invariances (or symmetries) using kernel regression.
Our approach achieves the same excess population risk as the original kernel regression problem.
arXiv Detail & Related papers (2025-02-27T04:49:52Z)
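The snippet below is not that paper's algorithm (which concerns statistical-computational trade-offs); it only sketches the standard way exact invariance is obtained in kernel regression, by averaging the kernel over the group. The sign-flip group, RBF kernel, and ridge parameter are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, gamma=0.5):
    """Gaussian RBF kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def rbf_sign_invariant(a, b, gamma=0.5):
    """Group average over G = {+1, -1}: k_G(x, y) = (k(x, y) + k(-x, y)) / 2.
    Since the RBF is isometry-invariant, k_G is exactly sign-flip invariant."""
    return 0.5 * (rbf(a, b, gamma) + rbf(-a, b, gamma))

# Target is even, i.e. invariant under the sign-flip group: f(-x) = f(x).
x = rng.uniform(-3, 3, 60)
y = x ** 2 + np.sin(x) ** 2 + 0.05 * rng.normal(size=x.size)

# Kernel ridge regression with the invariant kernel.
lam = 1e-3
K = rbf_sign_invariant(x, x)
alpha = np.linalg.solve(K + lam * np.eye(x.size), y)

x_test = np.linspace(-3, 3, 7)
pred = rbf_sign_invariant(x_test, x) @ alpha
pred_flip = rbf_sign_invariant(-x_test, x) @ alpha
print(np.allclose(pred, pred_flip))   # True: the invariance holds exactly
```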
- Learning Linear Attention in Polynomial Time [115.68795790532289]
We provide the first results on the learnability of single-layer Transformers with linear attention.
We show that linear attention may be viewed as a linear predictor in a suitably defined RKHS.
We show how to efficiently identify training datasets for which every empirical risk minimizer is equivalent to the linear Transformer.
arXiv Detail & Related papers (2024-10-14T02:41:01Z)
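A small numerical check of that linearity claim, under the usual softmax-free definition of single-layer linear attention: the output is linear in a fixed cubic feature map of the input, hence a linear predictor in the corresponding RKHS. The dimensions and the factored parameter tensor `Theta` below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 4                      # tokens, embedding dimension
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# Single-layer linear attention (no softmax).
Y = (X @ Wq) @ (X @ Wk).T @ (X @ Wv)

# The same output as a *linear* function of cubic features of X:
# Y[i, :] = sum_{a,c,e} X[i,a] * (X^T X)[c,e] * Theta[a,c,e,:],
# with Theta[a,c,e,:] = (Wq Wk^T)[a,c] * Wv[e,:].
Theta = np.einsum('ac,ed->aced', Wq @ Wk.T, Wv)
phi = np.einsum('ia,ce->iace', X, X.T @ X)   # fixed, parameter-free feature map
Y_linear = np.einsum('iace,aced->id', phi, Theta)

print(np.allclose(Y, Y_linear))  # True: attention is linear in phi(X)
```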
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
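The paper's regularizer is its own construction; as a hedged stand-in, the sketch below adds a Bernoulli entropy bonus to vanilla REINFORCE on a toy Max-Cut instance, which plays the same role of keeping the sampler away from near-deterministic (vanishing-gradient) regions. Graph size, learning rate, and bonus weight `lam` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy Max-Cut instance: a random weighted graph on 6 nodes.
n = 6
W = np.triu(rng.uniform(0, 1, (n, n)), 1)
W = W + W.T

def cut_value(s):
    """Weight of the cut induced by a +/-1 node assignment s."""
    return 0.25 * np.sum(W * (1 - np.outer(s, s)))

# Product-of-Bernoullis solution sampler, parametrized by logits theta.
theta = np.zeros(n)
lr, lam = 0.1, 0.05          # lam weights the entropy bonus (the stand-in regularizer)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-theta))
    x = (rng.uniform(size=n) < p).astype(float)   # sample a 0/1 assignment
    reward = cut_value(2 * x - 1)
    grad = reward * (x - p)                       # REINFORCE estimator for independent Bernoullis
    grad += lam * (-theta) * p * (1 - p)          # d/dtheta of Bernoulli entropy (logit(p) = theta)
    theta += lr * grad

print("sampler marginals:", np.round(p, 2))
```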
- Learning Functional Transduction [9.926231893220063]
We show that transductive regression principles can be meta-learned through gradient descent to form efficient in-context neural approximators.
We demonstrate the benefit of our meta-learned transductive approach to model complex physical systems influenced by varying external factors with little data.
arXiv Detail & Related papers (2023-02-01T09:14:28Z)
- SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems, the proposed SARAH-based method achieves linear convergence to a solution.
arXiv Detail & Related papers (2022-10-12T08:04:48Z)
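A minimal sketch of the SARAH recursion applied to a finite-sum affine operator whose average is strongly monotone, so the solution of F(x*) = 0 is available in closed form for checking. The operator family, step size, and epoch length are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20, 5

# Finite sum of affine operators F_i(x) = A_i x + b_i, each A_i = B_i B_i^T/d + I.
B = rng.normal(size=(n, d, d))
A = np.einsum('nij,nkj->nik', B, B) / d + np.eye(d)
b = rng.normal(size=(n, d))

A_bar, b_bar = A.mean(0), b.mean(0)
x_star = np.linalg.solve(A_bar, -b_bar)   # unique point with F(x*) = 0

def F_i(i, x):
    return A[i] @ x + b[i]

def F_full(x):
    return A_bar @ x + b_bar

# SARAH-style recursion, applied to the operator instead of a gradient.
x = np.zeros(d)
eta = 0.05
for epoch in range(30):
    v = F_full(x)                 # full operator at the epoch start
    x_prev = x.copy()
    x = x - eta * v
    for _ in range(n):            # inner loop with the recursive estimator
        i = rng.integers(n)
        v = F_i(i, x) - F_i(i, x_prev) + v
        x_prev = x.copy()
        x = x - eta * v

print("distance to solution:", np.linalg.norm(x - x_star))
```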
- Sparse Polynomial Optimization: Theory and Practice [5.27013884159732]
The book presents several efforts to tackle the scalability challenge of polynomial optimization, with important scientific implications.
It provides alternative optimization schemes that scale well in terms of computational complexity.
We present sparsity-exploiting hierarchies of relaxations, for either unconstrained or constrained problems.
arXiv Detail & Related papers (2022-08-23T18:56:05Z)
- Towards a mathematical framework to inform Neural Network modelling via Polynomial Regression [0.0]
It is shown that almost identical predictions can be made when certain conditions are met locally.
When learning from generated data, the proposed method produces polynomials that correctly approximate the data locally.
arXiv Detail & Related papers (2021-02-07T17:56:16Z)
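A hedged sketch of that local-agreement idea, not the paper's construction: probe a stand-in network around a point, fit a quadratic by least squares, and observe that agreement holds near the point but degrades away from it. The random tanh MLP, radius, and degree are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in "trained" network: a small random tanh MLP, R -> R.
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=8), rng.normal()

def net(x):
    return np.tanh(x[:, None] * W1.T + b1) @ W2 + b2

# Fit a local quadratic surrogate around x0 by least squares.
x0, radius, deg = 0.5, 0.2, 2
xs = x0 + radius * rng.uniform(-1, 1, 200)
V = np.vander(xs, deg + 1, increasing=True)
coef = np.linalg.lstsq(V, net(xs), rcond=None)[0]

# Inside the neighbourhood, the surrogate tracks the network closely...
x_in = np.linspace(x0 - radius, x0 + radius, 50)
err_in = np.max(np.abs(net(x_in) - np.vander(x_in, deg + 1, increasing=True) @ coef))
# ...but the agreement is only local: it degrades away from x0.
x_out = np.linspace(x0 + 5 * radius, x0 + 10 * radius, 50)
err_out = np.max(np.abs(net(x_out) - np.vander(x_out, deg + 1, increasing=True) @ coef))
print(f"max error inside: {err_in:.4f}, outside: {err_out:.4f}")
```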
- Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any degree-$D$ polynomial.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
arXiv Detail & Related papers (2020-08-05T17:52:10Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Marginal likelihood computation for model selection and hypothesis testing: an extensive review [66.37504201165159]
This article provides a comprehensive study of the state-of-the-art of the topic.
We highlight limitations, benefits, connections and differences among the different techniques.
Problems and possible solutions with the use of improper priors are also described.
arXiv Detail & Related papers (2020-05-17T18:31:58Z)
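One of the simplest estimators such reviews cover is naive Monte Carlo over the prior; the sketch below compares it against the closed-form marginal likelihood of a conjugate Gaussian model, where the exact answer is available. The model and sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(5)

# Conjugate Gaussian model: theta ~ N(0, tau^2), y_i | theta ~ N(theta, sigma^2).
sigma, tau, n = 1.0, 2.0, 10
y = rng.normal(0.7, sigma, n)

# Exact marginal likelihood: y ~ N(0, sigma^2 I + tau^2 11^T).
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
log_Z_exact = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)

# Naive Monte Carlo: Z = E_{theta ~ prior}[ p(y | theta) ], in log space.
thetas = rng.normal(0.0, tau, 100_000)
log_liks = norm.logpdf(y[:, None], loc=thetas[None, :], scale=sigma).sum(0)
log_Z_mc = np.logaddexp.reduce(log_liks) - np.log(thetas.size)

print(f"exact: {log_Z_exact:.3f}, prior MC: {log_Z_mc:.3f}")
```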
- An implicit function learning approach for parametric modal regression [36.568208312835196]
We use the implicit function theorem to develop an objective for learning a joint function over inputs and targets.
We empirically demonstrate on several synthetic problems that our method can learn multi-valued functions and produce the conditional modes.
arXiv Detail & Related papers (2020-02-14T00:37:41Z)
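The paper trains a network with an objective derived from the implicit function theorem; the sketch below swaps in a much cruder instantiation of the same implicit, multi-valued idea: a linear-in-features g(x, y) fitted to vanish on the data, with a norm constraint to exclude the trivial solution, whose zero set over y yields the conditional modes. The two-branch data and degree-2 features are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-branch data: each x maps to one of two modes, y = 0.5x + 1 or y = -x.
x = rng.uniform(-2, 2, 300)
branch = rng.integers(2, size=x.size)
y = np.where(branch == 0, 0.5 * x + 1, -x) + 0.02 * rng.normal(size=x.size)

def features(x, y):
    """Degree-2 polynomial features of the joint input (x, y)."""
    return np.stack([np.ones_like(x), x, y, x**2, x*y, y**2], axis=-1)

# Learn g(x, y) ~ 0 on the data with ||w|| = 1 to rule out the trivial
# solution: the minimizer is the smallest right singular vector of Phi.
Phi = features(x, y)
w = np.linalg.svd(Phi)[2][-1]

# Predict modes at a query x0: zero crossings of g(x0, .) over a y-grid.
x0, ys = 1.0, np.linspace(-4, 4, 4001)
g = features(np.full_like(ys, x0), ys) @ w
modes = ys[:-1][np.sign(g[:-1]) != np.sign(g[1:])]
print("recovered modes:", np.round(modes, 2), "| true:", [-1.0, 1.5])
```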
This list is automatically generated from the titles and abstracts of the papers on this site.