Mediated Uncoupled Learning: Learning Functions without Direct
Input-output Correspondences
- URL: http://arxiv.org/abs/2107.08135v1
- Date: Fri, 16 Jul 2021 22:13:29 GMT
- Title: Mediated Uncoupled Learning: Learning Functions without Direct
Input-output Correspondences
- Authors: Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama
- Abstract summary: We consider the task of predicting $Y$ from $X$ when we have no paired data of them.
A naive approach is to predict $U$ from $X$ using $S_X$ and then $Y$ from $U$ using $S_Y$.
We propose a new method that avoids predicting $U$ but directly learns $Y = f(X)$ by training $f(X)$ with $S_X$ to predict $h(U)$.
- Score: 80.95776331769899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ordinary supervised learning is useful when we have paired training data of
input $X$ and output $Y$. However, such paired data can be difficult to collect
in practice. In this paper, we consider the task of predicting $Y$ from $X$
when we have no paired data of them, but we have two separate, independent
datasets of $X$ and $Y$ each observed with some mediating variable $U$, that
is, we have two datasets $S_X = \{(X_i, U_i)\}$ and $S_Y = \{(U'_j, Y'_j)\}$. A
naive approach is to predict $U$ from $X$ using $S_X$ and then $Y$ from $U$
using $S_Y$, but we show that this is not statistically consistent. Moreover,
predicting $U$ can be more difficult than predicting $Y$ in practice, e.g.,
when $U$ has higher dimensionality. To circumvent the difficulty, we propose a
new method that avoids predicting $U$ but directly learns $Y = f(X)$ by
training $f(X)$ with $S_{X}$ to predict $h(U)$ which is trained with $S_{Y}$ to
approximate $Y$. We prove statistical consistency and error bounds of our
method and experimentally confirm its practical usefulness.
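The abstract describes a two-stage structure: a bridge model $h$ is fit on $S_Y$ so that $h(U) \approx Y$, and the final predictor $f$ is fit on $S_X$ so that $f(X) \approx h(U)$, which avoids ever regressing onto the (possibly high-dimensional) $U$. The sketch below is a hypothetical, minimal illustration of that reading, not the authors' implementation: the synthetic $X \to U \to Y$ data, the scikit-learn MLPRegressor models, the squared-loss objectives, and the purely sequential training of $h$ then $f$ are all assumptions made for this example.

```python
# Hypothetical sketch: naive two-step baseline vs. the mediated idea from the
# abstract, on an assumed synthetic X -> U -> Y process (not the authors' code).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d_x, d_u = 5, 20                      # the mediator U is higher-dimensional than Y
A = rng.normal(size=(d_x, d_u))
w = rng.normal(size=d_u)

def sample(n):
    """Draw (X, U, Y) where U mediates the effect of X on Y."""
    X = rng.normal(size=(n, d_x))
    U = X @ A + 0.1 * rng.normal(size=(n, d_u))
    Y = U @ w + 0.1 * rng.normal(size=n)
    return X, U, Y

X1, U1, _ = sample(500)               # S_X = {(X_i, U_i)}: Y never observed here
_, U2, Y2 = sample(500)               # S_Y = {(U'_j, Y'_j)}: X never observed here
X_te, _, Y_te = sample(1000)          # held-out test set, for illustration only

def mlp():
    return MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)

# Naive baseline: learn g: X -> U on S_X and h: U -> Y on S_Y, predict h(g(X)).
# The paper shows this composition is not statistically consistent.
g = mlp().fit(X1, U1)
h = mlp().fit(U2, Y2)
naive_mse = np.mean((h.predict(g.predict(X_te)) - Y_te) ** 2)

# Mediated idea: use h(U) as a pseudo-label on S_X and fit f: X -> h(U), so f
# targets a scalar prediction instead of reconstructing the 20-dimensional U.
f = mlp().fit(X1, h.predict(U1))
mediated_mse = np.mean((f.predict(X_te) - Y_te) ** 2)

print(f"naive two-step MSE: {naive_mse:.4f}")
print(f"mediated MSE:       {mediated_mse:.4f}")
```

In this toy setting the mediated predictor only ever fits a one-dimensional target on $S_X$, whereas the naive baseline must first reconstruct the 20-dimensional $U$; the paper's actual estimator and its consistency and error-bound analysis may couple the training of $f$ and $h$ differently than this sequential sketch.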
Related papers
- Guarantees for Nonlinear Representation Learning: Non-identical Covariates, Dependent Data, Fewer Samples [24.45016514352055]
We study the sample complexity of learning $T+1$ functions $f_\star^{(t)} \circ g_\star$ from a function class $\mathcal{F} \times \mathcal{G}$.
We show that as the number of tasks $T$ increases, both the sample requirement and risk bound converge to that of $r$-dimensional regression.
arXiv Detail & Related papers (2024-10-15T03:20:19Z) - LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions [54.54897832889028]
We show that for any $K$, there is a "universal set" $U \subset [n]$ of size independent of $n$, such that for any $Q$ and any row $i$, the large attention scores $A_{i,j}$ in row $i$ of $A$ all have $j \in U$.
We empirically show the benefits of our scheme for vision transformers, including how to train new models that use the universal set during training as well.
arXiv Detail & Related papers (2024-10-07T19:47:13Z) - IT$^3$: Idempotent Test-Time Training [95.78053599609044]
This paper introduces Idempotent Test-Time Training (IT$^3$), a novel approach to addressing the challenge of distribution shift.
IT$^3$ is based on the universal property of idempotence.
We demonstrate the versatility of our approach across various tasks, including corrupted image classification.
arXiv Detail & Related papers (2024-10-05T15:39:51Z) - Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization [65.8915778873691]
Learning conditional distributions is a central problem in machine learning.
We propose a new learning paradigm that integrates both paired and unpaired data.
Our approach also connects intriguingly with inverse entropic optimal transport (OT).
arXiv Detail & Related papers (2024-10-03T16:12:59Z) - Phase Transitions in the Detection of Correlated Databases [12.010807505655238]
We study the problem of detecting the correlation between two Gaussian databases $\mathsf{X} \in \mathbb{R}^{n \times d}$ and $\mathsf{Y} \in \mathbb{R}^{n \times d}$, each composed of $n$ users with $d$ features.
This problem is relevant in the analysis of social media, computational biology, etc.
arXiv Detail & Related papers (2023-02-07T10:39:44Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Agnostic learning with unknown utilities [70.14742836006042]
In many real-world problems, the utility of a decision depends on the underlying context $x$ and decision $y$.
We study this as agnostic learning with unknown utilities.
We show that estimating the utilities of only the sampled points $S$ suffices to learn a decision function which generalizes well.
arXiv Detail & Related papers (2021-04-17T08:22:04Z) - Faster Uncertainty Quantification for Inverse Problems with Conditional
Normalizing Flows [0.9176056742068814]
In inverse problems, we often have data consisting of paired samples $(x, y) \sim p_{X,Y}(x, y)$ where $y$ are partial observations of a physical system.
We propose a two-step scheme, which makes use of normalizing flows and joint data to train a conditional generator $q_\theta(x|y)$.
arXiv Detail & Related papers (2020-07-15T20:36:30Z) - Learning and Testing Variable Partitions [13.575794982844222]
We show that $\mathcal{O}(k n^2)(\delta + \epsilon)$ can be learned in time $\tilde{\mathcal{O}}(n^2\,\mathrm{poly}(1/\epsilon))$ for any $\epsilon > 0$.
We also show that even two-sided testers require $\Omega(n)$ queries when $k = 2$.
arXiv Detail & Related papers (2020-03-29T10:12:32Z) - Sample Amplification: Increasing Dataset Size even when Learning is Impossible [15.864702679819544]
Given data drawn from an unknown distribution, $D$, to what extent is it possible to "amplify" this dataset and output an even larger set of samples that appear to have been drawn from $D$?
We formalize this question as follows: an $\left(n, n + \Theta\left(\frac{n}{\sqrt{k}}\right)\right)$ amplifier exists, even though learning the distribution to small constant total variation distance requires $\Theta(d)$ samples.
arXiv Detail & Related papers (2019-04-26T21:42:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.