Meta Learning for High-dimensional Ising Model Selection Using
$\ell_1$-regularized Logistic Regression
- URL: http://arxiv.org/abs/2208.09539v1
- Date: Fri, 19 Aug 2022 20:28:39 GMT
- Title: Meta Learning for High-dimensional Ising Model Selection Using
$\ell_1$-regularized Logistic Regression
- Authors: Huiming Xie, Jean Honorio
- Abstract summary: We consider the meta learning problem for estimating the graphs associated with high-dimensional Ising models.
Our goal is to use the information learned from the auxiliary tasks in the learning of the novel task to reduce its sufficient sample complexity.
- Score: 28.776950569604026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the meta learning problem for estimating the
graphs associated with high-dimensional Ising models, using the method of
$\ell_1$-regularized logistic regression for neighborhood selection of each
node. Our goal is to use the information learned from the auxiliary tasks in
the learning of the novel task to reduce its sufficient sample complexity. To
this end, we propose a novel generative model as well as an improper estimation
method. In our setting, all the tasks are \emph{similar} in their \emph{random}
model parameters and supports. By pooling all the samples from the auxiliary
tasks to \emph{improperly} estimate a single parameter vector, we can recover
the true support union, assumed to be small in size, with high probability,
given a sufficient sample complexity of $\Omega(1)$ samples per task, for $K =
\Omega(d^3 \log p)$ tasks of Ising models with $p$ nodes and a maximum
neighborhood size $d$.
Then, with the support for the novel task restricted to the estimated support
union, we prove that consistent neighborhood selection for the novel task can
be obtained with a reduced sufficient sample complexity of $\Omega(d^3 \log
d)$.
Related papers
- On the Sample Complexity of Representation Learning in Multi-task
Bandits with Global and Local structure [77.60508571062958]
We investigate the sample complexity of learning the optimal arm for multi-task bandit problems.
Arms consist of two components: one that is shared across tasks (which we call the representation) and one that is task-specific (which we call the predictor).
We devise an algorithm OSRL-SC whose sample complexity approaches the lower bound and scales at most as $H(G\log(\delta_G) + X\log(\delta_H))$, with $X, G, H$ being, respectively, the number of tasks, representations, and predictors.
arXiv Detail & Related papers (2022-11-28T08:40:12Z)
- Meta Sparse Principal Component Analysis [31.403997435274604]
We study meta-learning for support (i.e., the set of non-zero entries) recovery in high-dimensional Principal Component Analysis.
We reduce the sufficient sample complexity in a novel task with the information that is learned from auxiliary tasks.
arXiv Detail & Related papers (2022-08-18T16:28:31Z)
- Sample Efficient Linear Meta-Learning by Alternating Minimization [74.40553081646995]
We study a simple alternating minimization method (MLLAM) which alternately learns the low-dimensional subspace and the regressors.
We show that for a constant subspace dimension, MLLAM obtains nearly optimal estimation error despite requiring only $\Omega(\log d)$ samples per task.
We propose a novel task subset selection scheme that ensures the same strong statistical guarantee as MLLAM; a minimal sketch of the alternating scheme appears after this entry.
arXiv Detail & Related papers (2021-05-18T06:46:48Z)
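As a rough illustration of the alternating scheme summarized above, here is a generic alternating least-squares sketch for the shared-subspace linear model $y_t \approx X_t U w_t$; it shows the structure of the iteration only, not MLLAM's exact update rules.

```python
import numpy as np

def alternating_min(Xs, ys, k, iters=50, seed=0):
    """Generic alternating least squares for y_t ~ X_t @ U @ w_t, with a
    shared subspace U (d x k, d >= k) and per-task regressors w_t."""
    rng = np.random.default_rng(seed)
    d = Xs[0].shape[1]
    U = np.linalg.qr(rng.standard_normal((d, k)))[0]   # random orthonormal init
    for _ in range(iters):
        # (a) fix U: each task solves a small k-dimensional least squares
        ws = [np.linalg.lstsq(X @ U, y, rcond=None)[0] for X, y in zip(Xs, ys)]
        # (b) fix {w_t}: solve for U jointly via X_t U w_t = kron(w_t, X_t) vec(U)
        A = np.vstack([np.kron(w, X) for X, w in zip(Xs, ws)])
        b = np.concatenate(ys)
        vec_u = np.linalg.lstsq(A, b, rcond=None)[0]
        # re-orthonormalize; the next pass refits each w_t to the new basis
        U = np.linalg.qr(vec_u.reshape(d, k, order="F"))[0]
    return U, ws
```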
- Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
- A Provably Efficient Sample Collection Strategy for Reinforcement Learning [123.69175280309226]
One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior.
We propose to tackle the exploration-exploitation problem with a decoupled approach composed of: 1) an "objective-specific" algorithm that prescribes how many samples to collect at which states, as if it had access to a generative model (i.e., a sparse simulator of the environment); 2) an "objective-agnostic" sample collection strategy responsible for generating the prescribed samples as fast as possible.
arXiv Detail & Related papers (2020-07-13T15:17:35Z)
- Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation [31.41834200409171]
We study meta learning for support (i.e., the set of non-zero entries) recovery in high-dimensional precision matrix estimation.
In our setup, each task has a different random true precision matrix, each with a possibly different support.
We prove a matching information-theoretic lower bound for the necessary number of samples, which is $n \in \Omega((\log N)/K)$; thus, our algorithm is minimax optimal. A pooled-estimation sketch in the spirit of this setup follows this entry.
arXiv Detail & Related papers (2020-06-22T20:24:52Z)
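For intuition only, a pooled variant of precision-matrix support recovery could be sketched with scikit-learn's `GraphicalLasso`; the penalty `alpha` and threshold `tol` are illustrative, and this heuristic is not the paper's estimator.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def pooled_precision_support(X_tasks, alpha=0.1, tol=1e-3):
    """Center each task's samples, pool them, fit a single l1-penalized
    precision matrix, and threshold it to get a support-union estimate."""
    X_pool = np.vstack([X - X.mean(axis=0) for X in X_tasks])
    est = GraphicalLasso(alpha=alpha).fit(X_pool)
    return np.abs(est.precision_) > tol   # boolean (p, p) support mask
```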
- On the Theory of Transfer Learning: The Importance of Task Diversity [114.656572506859]
We consider $t+1$ tasks parameterized by functions of the form $f_j \circ h$ in a general function class $\mathcal{F} \circ \mathcal{H}$.
We show that for diverse training tasks, the sample complexity needed to learn the shared representation across the first $t$ training tasks scales as $C(\mathcal{H}) + t\,C(\mathcal{F})$.
arXiv Detail & Related papers (2020-06-20T20:33:59Z)
- Robust Meta-learning for Mixed Linear Regression with Small Batches [34.94138630547603]
We study a fundamental question: can abundant small-data tasks compensate for the lack of big-data tasks?
Existing approaches show that such a trade-off is efficiently achievable, with the help of medium-sized tasks with $\Omega(k^{1/2})$ examples each.
We introduce a spectral approach that is simultaneously robust under both scenarios.
arXiv Detail & Related papers (2020-06-17T07:59:05Z)
- A Randomized Algorithm to Reduce the Support of Discrete Measures [79.55586575988292]
Given a discrete probability measure supported on $N$ atoms and a set of $n$ real-valued functions, there exists a probability measure, with the same integrals against those functions, that is supported on a subset of $n+1$ of the original $N$ atoms.
We give a simple geometric characterization of barycenters via negative cones and derive a randomized algorithm that computes this new measure by "greedy geometric sampling".
We then study its properties and benchmark it on synthetic and real-world data to show that it can be very beneficial in the $N \gg n$ regime; a generic reduction sketch follows this entry.
arXiv Detail & Related papers (2020-06-02T16:38:36Z)
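The reduction itself can be illustrated with a deterministic Carathéodory-style elimination; the paper's contribution is a randomized "greedy geometric sampling" variant, so this generic sketch only shows why a support of size $n+1$ suffices.

```python
import numpy as np

def reduce_support(atoms, w, fs, tol=1e-12):
    """Given weights w (summing to 1) on N atoms and n functions fs, return
    weights with at most n+1 nonzeros matching every integral
    sum_i w_i f_j(atoms[i]). Deterministic Caratheodory-style sketch."""
    w = np.asarray(w, dtype=float).copy()
    n = len(fs)
    while True:
        active = np.flatnonzero(w > tol)
        if len(active) <= n + 1:
            return w
        # (n+1) x m matrix: one row per function, plus a row of ones (total mass)
        A = np.vstack([[f(atoms[i]) for i in active] for f in fs]
                      + [np.ones(len(active))])
        # a null vector of A is a direction that preserves every integral
        v = np.linalg.svd(A)[2][-1]        # rank(A) <= n+1 < m, so A @ v ~ 0
        if v.max() <= 0:
            v = -v                         # ensure a positive entry exists
        # largest step keeping weights nonnegative zeroes out >= 1 atom
        step = np.min(w[active][v > 0] / v[v > 0])
        w[active] = w[active] - step * v
        w[w <= tol] = 0.0
```

Each pass preserves all $n$ integrals and the total mass while removing at least one atom, so the loop terminates with a support of size at most $n+1$.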