Sampling Theorems for Unsupervised Learning in Linear Inverse Problems
- URL: http://arxiv.org/abs/2203.12513v1
- Date: Wed, 23 Mar 2022 16:17:22 GMT
- Title: Sampling Theorems for Unsupervised Learning in Linear Inverse Problems
- Authors: Julián Tachella, Dongdong Chen and Mike Davies
- Abstract summary: This paper presents necessary and sufficient sampling conditions for learning the signal model from partial measurements.
As our results are agnostic to the learning algorithm, they shed light on the fundamental limitations of learning from incomplete data.
- Score: 11.54982866872911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving a linear inverse problem requires knowledge about the underlying
signal model. In many applications, this model is a priori unknown and has to
be learned from data. However, it is impossible to learn the model using
observations obtained via a single incomplete measurement operator, as there is
no information outside the range of the inverse operator, resulting in a
chicken-and-egg problem: to learn the model we need reconstructed signals, but
to reconstruct the signals we need to know the model. Two ways to overcome this
limitation are using multiple measurement operators or assuming that the signal
model is invariant to a certain group action. In this paper, we present
necessary and sufficient sampling conditions for learning the signal model from
partial measurements which only depend on the dimension of the model, and the
number of operators or properties of the group action that the model is
invariant to. As our results are agnostic to the learning algorithm, they shed
light on the fundamental limitations of learning from incomplete data and
have implications for a wide range of practical algorithms, such as
dictionary learning, matrix completion, and deep neural networks.
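The chicken-and-egg ambiguity described in the abstract can be seen in a few lines of NumPy. The sketch below is purely illustrative (the matrix sizes and variable names are ours, not from the paper): two different signals that differ by a nullspace vector of a single incomplete operator produce identical measurements, while a second, independent operator generically separates them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2  # ambient signal dimension and measurements per operator

# A single incomplete operator (m < n) has a nontrivial nullspace.
A1 = rng.standard_normal((m, n))

# A nullspace vector of A1, taken from the trailing right singular vectors.
_, _, Vt = np.linalg.svd(A1)
null_vec = Vt[m]

# Two signals differing by that vector yield identical measurements,
# so no algorithm can tell them apart from A1's observations alone.
x = rng.standard_normal(n)
x_alt = x + null_vec
print(np.allclose(A1 @ x, A1 @ x_alt))  # True: indistinguishable

# A second operator with a different nullspace resolves the ambiguity.
A2 = rng.standard_normal((m, n))
print(np.allclose(A2 @ x, A2 @ x_alt))  # generically False
```

This is exactly the "no information outside the range" problem: any model component lying in the operator's nullspace is invisible, which is why the paper turns to multiple operators or group invariance.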
Related papers
- Bayesian Model Parameter Learning in Linear Inverse Problems with Application in EEG Focal Source Imaging [49.1574468325115]
Inverse problems can be described as limited-data problems in which the signal of interest cannot be observed directly.
We studied a linear inverse problem that included an unknown non-linear model parameter.
We utilized a Bayesian model-based learning approach that allowed signal recovery and subsequently estimation of the model parameter.
arXiv Detail & Related papers (2025-01-07T18:14:24Z) - Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
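As a toy analogue of the per-instance residual idea above (a drastically simplified linear version; the operator sizes, learning rate, and names are our own assumptions, not the paper's architecture), one can fit an instance-specific residual correction to a mismatched forward operator so that the corrected model reproduces the measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 6
A_true = rng.standard_normal((m, n))                   # unknown true forward model
A_model = A_true + 0.1 * rng.standard_normal((m, n))   # mismatched assumed model
x = rng.standard_normal(n)
y = A_true @ x                                         # observed measurements

# Fit a per-instance residual Delta by gradient descent on the
# data-consistency loss 0.5 * ||(A_model + Delta) @ x - y||^2.
Delta = np.zeros((m, n))
lr = 1.0 / (x @ x)  # for this rank-one problem, this step size is exact
for _ in range(10):
    r = (A_model + Delta) @ x - y
    Delta -= lr * np.outer(r, x)  # gradient of the loss w.r.t. Delta

print(np.linalg.norm((A_model + Delta) @ x - y) < 1e-8)  # True
```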
arXiv Detail & Related papers (2024-03-07T19:02:13Z) - Toward Physically Plausible Data-Driven Models: A Novel Neural Network Approach to Symbolic Regression [2.7071541526963805]
This paper proposes a novel neural network-based symbolic regression method.
It constructs physically plausible models based on even very small training data sets and prior knowledge about the system.
We experimentally evaluate the approach on four test systems: the TurtleBot 2 mobile robot, the magnetic manipulation system, the equivalent resistance of two resistors in parallel, and the longitudinal force of the anti-lock braking system.
arXiv Detail & Related papers (2023-02-01T22:05:04Z) - Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z) - Sampling Theorems for Learning from Incomplete Measurements [11.54982866872911]
In many real-world settings, only incomplete measurement data are available which can pose a problem for learning.
We show that unsupervised learning is generically possible if each operator obtains at least $m > k + n/G$ measurements.
Our results have implications in a wide range of practical algorithms, from low-rank matrix recovery to deep neural networks.
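The bound quoted above can be turned into a small helper. This is our own illustrative reading of the condition, where $k$ is the model dimension, $n$ the ambient signal dimension, and $G$ the number of measurement operators; the function name is ours:

```python
import math

def min_measurements_per_operator(k: int, n: int, G: int) -> int:
    """Smallest integer m satisfying the strict bound m > k + n / G."""
    return math.floor(k + n / G) + 1

# Example: a 10-dimensional model in R^100 seen through G = 5 operators
# needs m > 10 + 100/5 = 30, i.e. at least 31 measurements per operator.
print(min_measurements_per_operator(10, 100, 5))  # 31
```

Note how the per-operator burden shrinks as the number of operators $G$ grows, approaching the model dimension $k$ in the limit.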
arXiv Detail & Related papers (2022-01-28T14:36:47Z) - Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning [13.149070833843133]
Machine unlearning, i.e. having a model forget about some of its training data, has become increasingly important as privacy legislation promotes variants of the right-to-be-forgotten.
We first show that the definition that underlies approximate unlearning, which seeks to prove the approximately unlearned model is close to an exactly retrained model, is incorrect because one can obtain the same model using different datasets.
We then turn to exact unlearning approaches and ask how to verify their claims of unlearning.
arXiv Detail & Related papers (2021-10-22T16:16:56Z) - Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - The Generalized Lasso with Nonlinear Observations and Generative Priors [63.541900026673055]
We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models.
We show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property.
arXiv Detail & Related papers (2020-06-22T16:43:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.