Heteroscedasticity-aware residuals-based contextual stochastic
optimization
- URL: http://arxiv.org/abs/2101.03139v1
- Date: Fri, 8 Jan 2021 18:11:21 GMT
- Title: Heteroscedasticity-aware residuals-based contextual stochastic
optimization
- Authors: Rohit Kannan, Güzin Bayraksan, and James Luedtke
- Abstract summary: We explore generalizations of some integrated learning and optimization frameworks for data-driven contextual optimization.
We identify conditions on the stochastic program, data generation process, and the prediction setup under which these generalizations possess asymptotic and finite sample guarantees.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore generalizations of some integrated learning and optimization
frameworks for data-driven contextual stochastic optimization that can adapt to
heteroscedasticity. We identify conditions on the stochastic program, data
generation process, and the prediction setup under which these generalizations
possess asymptotic and finite sample guarantees for a class of stochastic
programs, including two-stage stochastic mixed-integer programs with continuous
recourse. We verify that our assumptions hold for popular parametric and
nonparametric regression methods.
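To make the residuals-based idea concrete, below is a minimal sketch, not the authors' implementation, of a heteroscedasticity-aware residuals-based sample average approximation on a simple newsvendor problem. The scikit-learn estimators, the absolute-residual scale model, the cost parameters, and the synthetic data are all illustrative assumptions layered on top of the abstract's description.

```python
# Hedged sketch: heteroscedasticity-aware residuals-based SAA for a newsvendor
# problem. Model choices and cost parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data with covariate-dependent noise scale (heteroscedasticity).
n = 500
X = rng.uniform(0.0, 2.0, size=(n, 1))
Y = 10.0 + 5.0 * X[:, 0] + (1.0 + 2.0 * X[:, 0]) * rng.standard_normal(n)

# 1) Fit a mean model and compute raw residuals.
mean_model = LinearRegression().fit(X, Y)
resid = Y - mean_model.predict(X)

# 2) Fit a scale model to the absolute residuals (a simple proxy for the
#    conditional standard deviation) and standardize the residuals.
scale_model = LinearRegression().fit(X, np.abs(resid))
scale = np.clip(scale_model.predict(X), 1e-3, None)
std_resid = resid / scale

# 3) For a new covariate x0, build scenarios by rescaling the standardized
#    residuals with the predicted scale at x0, then solve the SAA problem.
x0 = np.array([[1.5]])
scenarios = mean_model.predict(x0) + scale_model.predict(x0) * std_resid

# Newsvendor SAA: order quantity minimizing expected over/underage cost.
c_over, c_under = 1.0, 3.0  # illustrative cost parameters
q_grid = np.linspace(scenarios.min(), scenarios.max(), 200)
cost = [np.mean(c_over * np.maximum(q - scenarios, 0.0)
                + c_under * np.maximum(scenarios - q, 0.0)) for q in q_grid]
q_star = q_grid[int(np.argmin(cost))]
print(f"heteroscedasticity-aware order quantity at x0: {q_star:.2f}")
```

A purely homoscedastic residuals-based approach would skip the scale model and add the raw residuals directly to the point prediction; rescaling the standardized residuals by the predicted scale at the new covariate is what lets the scenario set adapt to covariate-dependent noise.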
Related papers
- Asymptotic regularity of a generalised stochastic Halpern scheme with applications [0.0]
We provide highly uniform rates of regularity for a general Halpern-style iteration, which incorporates a second mapping in the style of a Krasnoselskii-Mann iteration together with Tikhonov regularization terms.
We sketch how the schemes presented here can be instantiated in the context of learning to yield novel methods for Q-learning.
arXiv Detail & Related papers (2024-11-07T16:32:50Z) - Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z) - Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization [29.24821214671497]
Training machine learning and statistical models often involves optimizing a data-driven risk criterion.
We propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences.
For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations.
arXiv Detail & Related papers (2024-01-28T21:19:15Z) - Likelihood-based inference and forecasting for trawl processes: a
stochastic optimization approach [0.0]
We develop the first likelihood-based methodology for the inference of real-valued trawl processes.
We introduce novel deterministic and probabilistic forecasting methods.
We release a Python library which can be used to fit a large class of trawl processes.
arXiv Detail & Related papers (2023-08-30T15:37:48Z) - Stochastic Learning Rate Optimization in the Stochastic Approximation
and Online Learning Settings [0.0]
In this work, multiplicative stochasticity is applied to the learning rate of stochastic optimization algorithms, giving rise to stochastic learning-rate schemes.
Theoretical convergence results for Stochastic Gradient Descent equipped with this novel learning-rate scheme are also presented.
arXiv Detail & Related papers (2021-10-20T18:10:03Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a stochastic Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate stochastic gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Variance Regularization for Accelerating Stochastic Optimization [14.545770519120898]
We propose a universal principle which reduces the random error accumulation by exploiting statistical information hidden in mini-batch gradients.
This is achieved by regularizing the learning-rate according to mini-batch variances.
arXiv Detail & Related papers (2020-08-13T15:34:01Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We exploit the structure of the problem to obtain a convex-concave saddle-point reformulation.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - Distributed Sketching Methods for Privacy Preserving Regression [54.51566432934556]
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
We derive novel approximation guarantees for classical sketching methods and analyze the accuracy of parameter averaging for distributed sketches.
We illustrate the performance of distributed sketches in a serverless computing platform with large scale experiments.
arXiv Detail & Related papers (2020-02-16T08:35:48Z)
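As a rough illustration of the sketch-and-average idea in the last entry, the following is a minimal sketch, under stated assumptions, of distributed least squares where each worker applies an independent Gaussian sketch to the full problem, solves the reduced system, and the coordinator averages the resulting parameter vectors. The Gaussian sketch, the problem sizes, and the averaging scheme are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch of sketch-and-average distributed least squares.
# Sketch type, sizes, and averaging scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d, m, q = 2000, 20, 8, 200  # rows, cols, workers, sketch size per worker

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

def worker_solve(seed: int) -> np.ndarray:
    """One worker: sketch the full problem and solve the reduced LS system."""
    local_rng = np.random.default_rng(seed)
    S = local_rng.standard_normal((q, n)) / np.sqrt(q)  # Gaussian sketch
    x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x_hat

# Parameter averaging over the workers' sketched solutions.
x_avg = np.mean([worker_solve(s) for s in range(m)], axis=0)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print("relative error of averaged sketch:",
      np.linalg.norm(x_avg - x_exact) / np.linalg.norm(x_exact))
```

Each worker only ever operates on sketched data of much smaller dimension, which is the source of both the computational savings and the privacy angle discussed in that entry; the averaging step reduces the variance introduced by the random sketches.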