Outlier-Robust Learning of Ising Models Under Dobrushin's Condition
- URL: http://arxiv.org/abs/2102.02171v1
- Date: Wed, 3 Feb 2021 18:00:57 GMT
- Title: Outlier-Robust Learning of Ising Models Under Dobrushin's Condition
- Authors: Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart and Yuxin Sun
- Abstract summary: We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
Our main result is to provide the first computationally efficient robust learning algorithm for this problem with near-optimal error guarantees.
- Score: 57.89518300699042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning Ising models satisfying Dobrushin's
condition in the outlier-robust setting where a constant fraction of the
samples are adversarially corrupted. Our main result is to provide the first
computationally efficient robust learning algorithm for this problem with
near-optimal error guarantees. Our algorithm can be seen as a special case of
an algorithm for robustly learning a distribution from a general exponential
family. To prove its correctness for Ising models, we establish new
anti-concentration results for degree-$2$ polynomials of Ising models that may
be of independent interest.
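For context, the setting can be stated in the standard notation of this literature (an illustrative sketch; the exact normalization and constants below are assumptions, not quoted from the paper). An Ising model over the hypercube is the distribution
$$ p_{J,h}(x) \;\propto\; \exp\!\Big(\tfrac{1}{2}\, x^{\top} J x + h^{\top} x\Big), \qquad x \in \{-1,+1\}^n, $$
where the interaction matrix $J$ is symmetric with zero diagonal and $h$ is an external field. Dobrushin's (high-temperature) condition requires that, for some constant $\eta > 0$,
$$ \max_{i \in [n]} \sum_{j \neq i} |J_{ij}| \;\le\; 1 - \eta. $$
In the strong-contamination model, an adversary may replace an arbitrary $\epsilon$-fraction of the i.i.d. samples from $p_{J,h}$ before the learner sees them, and the goal is to recover $J$ (and $h$) to small error despite the corruptions.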
Related papers
- Optimal Robust Estimation under Local and Global Corruptions: Stronger Adversary and Smaller Error [10.266928164137635]
Algorithmic robust statistics has traditionally focused on the contamination model where a small fraction of the samples are arbitrarily corrupted.
We consider a recent contamination model that combines two kinds of corruptions: (i) a small fraction of arbitrary outliers, as in classical robust statistics, and (ii) local perturbations, where samples may undergo bounded shifts on average.
We show that information-theoretically optimal error can indeed be achieved in polynomial time, under an even stronger local perturbation model.
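One natural way to write down this combined contamination model (an illustrative formalization; the specific norm and parameters below are assumptions, not taken from the abstract): starting from i.i.d. clean samples $x_1, \dots, x_m$, each point is first shifted to some nearby $\tilde{x}_i$ under an average-movement budget
$$ \frac{1}{m} \sum_{i=1}^{m} \| \tilde{x}_i - x_i \| \le \rho, $$
and an adversary then replaces an arbitrary $\epsilon$-fraction of the $\tilde{x}_i$ with arbitrary points before the estimator sees the data.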
arXiv Detail & Related papers (2024-10-22T17:51:23Z)
- Integer Programming for Learning Directed Acyclic Graphs from Non-identifiable Gaussian Models [6.54203362045253]
We study the problem of learning directed acyclic graphs from continuous observational data.
We develop a mixed-integer programming framework for learning medium-sized problems.
Our method outperforms state-of-the-art algorithms and is robust to noise heteroscedasticity.
arXiv Detail & Related papers (2024-04-19T02:42:13Z)
- Supervised learning with probabilistic morphisms and kernel mean embeddings [0.0]
I propose a generative model of supervised learning that unifies two approaches to supervised learning.
I address two measurability problems, which have been ignored in statistical learning theory.
I present a variant of Vapnik-Stefanyuk's regularization method for solving ill-posed problems.
arXiv Detail & Related papers (2023-05-10T17:54:21Z)
- Oracle Inequalities for Model Selection in Offline Reinforcement Learning [105.74139523696284]
We study the problem of model selection in offline RL with value function approximation.
We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal inequalities up to logarithmic factors.
We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
arXiv Detail & Related papers (2022-11-03T17:32:34Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Agnostic Proper Learning of Halfspaces under Gaussian Marginals [56.01192577666607]
We study the problem of agnostically learning halfspaces under the Gaussian distribution.
Our main result is the first proper learning algorithm for this problem.
arXiv Detail & Related papers (2021-02-10T18:40:44Z)
- Robust Unsupervised Learning via L-Statistic Minimization [38.49191945141759]
We present a general approach to this problem focusing on unsupervised learning.
The key assumption is that the perturbing distribution is characterized by larger losses relative to a given class of admissible models.
We prove uniform convergence bounds with respect to the proposed criterion for several popular models in unsupervised learning.
arXiv Detail & Related papers (2020-12-14T10:36:06Z)
- Sample-Optimal and Efficient Learning of Tree Ising models [24.201827888085944]
We show that $n$-variable tree-structured Ising models can be learned computationally-efficiently to within total variation distance $\epsilon$ from an optimal $O(n \ln n/\epsilon^2)$ samples.
Our guarantees do not follow from known results for the Chow-Liu algorithm.
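Since this entry contrasts its guarantee with the Chow-Liu algorithm, a minimal sketch of the classical Chow-Liu structure-learning step may help (illustrative Python, assuming samples are given as an $m \times n$ array of $\pm 1$ values; this is the textbook procedure, not the paper's algorithm):

```python
import numpy as np
from itertools import combinations

def chow_liu_tree(samples: np.ndarray) -> list[tuple[int, int]]:
    """Classical Chow-Liu skeleton: estimate pairwise mutual information
    from +/-1 samples (shape m x n) and return the edges of a maximum
    spanning tree over those weights (Prim's algorithm on a dense matrix)."""
    n = samples.shape[1]
    mi = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        # Empirical joint distribution of the binary pair (x_i, x_j).
        joint = np.zeros((2, 2))
        for a, xa in enumerate((-1, 1)):
            for b, xb in enumerate((-1, 1)):
                joint[a, b] = np.mean((samples[:, i] == xa) & (samples[:, j] == xb))
        joint = np.clip(joint, 1e-12, None)  # avoid log(0) in the sketch
        pi, pj = joint.sum(axis=1), joint.sum(axis=0)
        mi[i, j] = mi[j, i] = np.sum(joint * np.log(joint / np.outer(pi, pj)))
    # Prim's algorithm for a maximum-weight spanning tree over the MI weights.
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v, _ = max(((u, v, mi[u, v]) for u in in_tree
                       for v in range(n) if v not in in_tree),
                      key=lambda t: t[2])
        edges.append((u, v))
        in_tree.add(v)
    return edges

# Example: edges = chow_liu_tree(np.random.choice([-1, 1], size=(5000, 6)))
```

Pairwise mutual information followed by a maximum spanning tree recovers the tree skeleton given enough samples; the entry's point is that its $O(n \ln n/\epsilon^2)$ total-variation guarantee does not follow from standard analyses of this procedure.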
arXiv Detail & Related papers (2020-10-28T10:17:48Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.