Generalization Bounds for Dependent Data using Online-to-Batch Conversion
- URL: http://arxiv.org/abs/2405.13666v2
- Date: Wed, 19 Feb 2025 13:27:10 GMT
- Title: Generalization Bounds for Dependent Data using Online-to-Batch Conversion
- Authors: Sagnik Chatterjee, Manuj Mukherjee, Alhad Sethi
- Abstract summary: We upper bound the generalization error of batch learning algorithms trained on samples drawn from a mixing process. Our work does not require any stability assumptions on the batch learner. We prove that the EWA algorithm, a textbook family of online learning algorithms, satisfies our new notion of stability.
- Score: 0.6144680854063935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we upper bound the generalization error of batch learning algorithms trained on samples drawn from a mixing stochastic process (i.e., a dependent data source) both in expectation and with high probability. Unlike previous results by Mohri et al. (2010) and Fu et al. (2023), our work does not require any stability assumptions on the batch learner, which allows us to derive upper bounds for any batch learning algorithm trained on dependent data. This is made possible due to our use of the Online-to-Batch (OTB) conversion framework, which allows us to shift the burden of stability from the batch learner to an artificially constructed online learner. We show that our bounds are equal to the bounds in the i.i.d. setting up to a term that depends on the decay rate of the underlying mixing stochastic process. Central to our analysis is a new notion of algorithmic stability for online learning algorithms based on Wasserstein distances of order one. Furthermore, we prove that the EWA algorithm, a textbook family of online learning algorithms, satisfies our new notion of stability. Following this, we instantiate our bounds using the EWA algorithm.
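As a concrete illustration, below is a minimal sketch of the exponentially weighted average (EWA) forecaster over a finite hypothesis class, followed by the standard online-to-batch averaging of the per-round plays. The loss matrix, learning rate `eta`, and the averaging scheme are illustrative assumptions for exposition, not the paper's exact construction.

```python
import numpy as np

def ewa_online_to_batch(losses, eta=0.5):
    """Sketch: EWA over a finite hypothesis class + online-to-batch averaging.

    losses: (T, K) array where losses[t, k] is the loss of hypothesis k on
    the t-th sample of the stream. Returns the averaged distribution over
    hypotheses, which serves as the batch predictor.
    """
    T, K = losses.shape
    cum_loss = np.zeros(K)
    avg_play = np.zeros(K)
    for t in range(T):
        w = np.exp(-eta * cum_loss)   # EWA weights: exponential in cumulative loss
        w /= w.sum()
        avg_play += w / T             # online-to-batch: average the per-round plays
        cum_loss += losses[t]         # reveal the round-t losses and update
    return avg_play
```

In the i.i.d. setting the averaged play inherits the forecaster's regret guarantee; the paper's contribution is extending this style of conversion to mixing processes via a Wasserstein-based stability notion for the online learner.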
Related papers
- Small Loss Bounds for Online Learning Separated Function Classes: A Gaussian Process Perspective [9.867914513513453]
We present an oracle-efficient algorithm capable of achieving small-loss bounds with improved rates in greater generality than previous work.
We also present a variant for differentially private learning that attains optimal rates, again under our separation condition.
arXiv Detail & Related papers (2025-02-14T16:52:50Z) - Data-dependent and Oracle Bounds on Forgetting in Continual Learning [7.903539618132858]
In continual learning, knowledge must be preserved and re-used between tasks.
We provide both data-dependent and oracle upper bounds that apply regardless of model and algorithm choice.
We derive an algorithm based on our bounds and demonstrate empirically that our approach yields tight bounds on forgetting for several continual learning problems.
arXiv Detail & Related papers (2024-06-13T17:50:51Z) - Geometry-Aware Instrumental Variable Regression [56.16884466478886]
We propose a transport-based IV estimator that takes into account the geometry of the data manifold through data-derivative information.
We provide a simple plug-and-play implementation of our method that performs on par with related estimators in standard settings.
arXiv Detail & Related papers (2024-05-19T17:49:33Z) - Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z) - Beyond Normal: On the Evaluation of Mutual Information Estimators [52.85079110699378]
We show how to construct a diverse family of distributions with known ground-truth mutual information.
We provide guidelines for practitioners on how to select an appropriate estimator adapted to the difficulty of the problem considered.
arXiv Detail & Related papers (2023-06-19T17:26:34Z) - Online-to-PAC Conversions: Generalization Bounds via Regret Analysis [13.620177497267791]
We construct an online learning game called the "generalization game".
We show that the existence of an online learning algorithm with bounded regret in this game implies a bound on the generalization error of the statistical learning algorithm.
arXiv Detail & Related papers (2023-05-31T09:15:39Z) - Training Normalizing Flows from Dependent Data [31.42053454078623]
We propose a likelihood objective for normalizing flows that incorporates dependencies between the data points.
We show that respecting dependencies between observations can improve empirical results on both synthetic and real-world data.
arXiv Detail & Related papers (2022-09-29T16:50:34Z) - Smoothed Online Learning is as Easy as Statistical Learning [77.00766067963195]
We provide the first oracle-efficient, no-regret algorithms in this setting.
We show that if a function class is learnable in the classical setting, then there is an oracle-efficient, no-regret algorithm for contextual bandits.
arXiv Detail & Related papers (2022-02-09T19:22:34Z) - Federated Learning with Heterogeneous Data: A Superquantile Optimization Approach [0.0]
We present a federated learning framework that is designed to robustly deliver good performance across individual clients with heterogeneous data.
The proposed approach hinges upon a superquantile-based learning objective that captures the tail statistics of the error.
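For intuition, here is a minimal sketch of an empirical superquantile (conditional value-at-risk) of per-client losses; the level `theta` and the loss vector are hypothetical and only illustrate the tail statistic such an objective targets.

```python
import numpy as np

def superquantile(client_losses, theta=0.9):
    """Empirical superquantile (CVaR): the mean of the worst (1 - theta)
    fraction of per-client losses, i.e. the tail above the theta-quantile."""
    losses = np.asarray(client_losses, dtype=float)
    q = np.quantile(losses, theta)
    return losses[losses >= q].mean()
```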
arXiv Detail & Related papers (2021-12-17T11:00:23Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
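As a rough illustration, the following sketches an online gradient descent update for a pairwise hinge loss (an AUC-style ranking surrogate); the loss choice, step size, and streaming interface are assumptions for exposition, not the algorithms proposed in the paper.

```python
import numpy as np

def pairwise_ogd(pair_stream, dim, eta=0.1):
    """OGD on a pairwise hinge loss: each example is a pair (x_pos, x_neg)
    where x_pos should be scored above x_neg by the linear model w."""
    w = np.zeros(dim)
    for x_pos, x_neg in pair_stream:
        margin = w @ (x_pos - x_neg)
        if margin < 1.0:                 # hinge loss max(0, 1 - margin) is active
            w += eta * (x_pos - x_neg)   # subgradient step on the active pair
    return w
```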
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient algorithms for reinforcement learning in decision processes whose complexity is independent of the number of states.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z) - Learning Expected Emphatic Traces for Deep RL [32.984880782688535]
Off-policy sampling and experience replay are key for improving sample efficiency and scaling model-free temporal difference learning methods.
We develop a multi-step emphatic weighting that can be combined with replay, and a time-reversed $n$-step TD learning algorithm to learn the required emphatic weighting.
arXiv Detail & Related papers (2021-07-12T13:14:03Z) - Task-agnostic Continual Learning with Hybrid Probabilistic Models [75.01205414507243]
We propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification.
The flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting.
We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST.
arXiv Detail & Related papers (2021-06-24T05:19:26Z) - Statistical Inference for High-Dimensional Linear Regression with Blockwise Missing Data [13.48481978963297]
Blockwise missing data occurs when we integrate multisource or multimodality data where different sources or modalities contain complementary information.
We propose a computationally efficient estimator for the regression coefficient vector based on carefully constructed unbiased estimating equations.
Numerical studies and application analysis of the Alzheimer's Disease Neuroimaging Initiative data show that the proposed method performs better and benefits more from unsupervised samples than existing methods.
arXiv Detail & Related papers (2021-06-07T05:12:42Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Causal learning with sufficient statistics: an information bottleneck approach [3.720546514089338]
Methods extracting causal information from conditional independencies between variables of a system are common.
We capitalize on the fact that the laws governing the generative mechanisms of a system often result in substructures embodied in the generative functional equation of a variable.
We propose to use the Information Bottleneck method, a technique commonly applied for dimensionality reduction, to find underlying sufficient sets of statistics.
arXiv Detail & Related papers (2020-10-12T00:20:01Z) - Batch Value-function Approximation with Only Realizability [17.692408242465763]
We make progress in a long-standing problem of batch reinforcement learning (RL): learning $Q^\star$ from an exploratory dataset.
Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure.
We also discuss how BVFT can be applied to model selection among other extensions and open problems.
arXiv Detail & Related papers (2020-08-11T20:09:37Z) - Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z) - Stable Prediction via Leveraging Seed Variable [73.9770220107874]
Previous machine learning methods might exploit subtly spurious correlations in training data induced by non-causal variables for prediction.
We propose a conditional independence test based algorithm to separate causal variables with a seed variable as priori, and adopt them for stable prediction.
Our algorithm outperforms state-of-the-art methods for stable prediction.
arXiv Detail & Related papers (2020-06-09T06:56:31Z) - Tracking Performance of Online Stochastic Learners [57.14673504239551]
Online algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy.
We establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models.
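A small sketch of the tracking behaviour described above: a constant step-size LMS learner following a random-walk drift in the optimal parameter. The drift and noise levels are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, mu, drift = 5, 5000, 0.05, 0.01
w_true = np.zeros(d)                 # drifting optimum (random-walk model)
w_hat = np.zeros(d)                  # constant step-size online learner
for _ in range(T):
    w_true += drift * rng.standard_normal(d)          # parameter drift
    x = rng.standard_normal(d)
    y = x @ w_true + 0.1 * rng.standard_normal()      # noisy linear observation
    w_hat += mu * (y - x @ w_hat) * x                 # constant step-size LMS update
print(float(np.linalg.norm(w_hat - w_true)))          # final tracking error
```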
arXiv Detail & Related papers (2020-04-04T14:16:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.