A General Stochastic Optimization Framework for Convergence Bidding
- URL: http://arxiv.org/abs/2210.06543v1
- Date: Wed, 12 Oct 2022 19:14:07 GMT
- Title: A General Stochastic Optimization Framework for Convergence Bidding
- Authors: Letif Mones and Sean Lovett
- Abstract summary: We introduce a general optimization framework to obtain optimal convergence (virtual) bid curves.
Within this framework, we develop a linear programming-based optimization model, which produces bid prices and volumes simultaneously.
We show that different approximations and simplifications in the general model lead naturally to well-known convergence bidding approaches.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a general stochastic optimization framework to obtain optimal
convergence (virtual) bid curves. Within this framework, we develop a
computationally tractable linear programming-based optimization model, which
produces bid prices and volumes simultaneously. We also show that different
approximations and simplifications in the general model lead naturally to
well-known convergence bidding approaches, such as self-scheduling and
opportunistic approaches.
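To make the setup concrete, here is a minimal sketch of a scenario-based LP in this spirit, using scipy.optimize.linprog. Everything in it is an illustrative assumption rather than the authors' model: the synthetic day-ahead/real-time price scenarios, the candidate bid-price grid, the single clearing rule (an INC bid clears when its price is at or below the day-ahead price), and the volume budget V_max are all made up for the example.

```python
# A minimal sketch (not the paper's exact model): a scenario-based LP that
# chooses volumes on a grid of candidate bid prices for virtual supply (INC)
# bids, so prices and volumes come out of one linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

S = 200                                  # number of price scenarios (assumed)
da = rng.normal(40.0, 8.0, size=S)       # day-ahead price scenarios ($/MWh)
rt = da + rng.normal(0.0, 12.0, size=S)  # real-time price scenarios ($/MWh)
prob = np.full(S, 1.0 / S)               # scenario probabilities

bid_prices = np.linspace(20.0, 60.0, 9)  # candidate bid price levels ($/MWh)
K = len(bid_prices)

# Assumed clearing rule: an INC bid at price p clears in scenario s iff
# p <= da[s]; each cleared MWh earns the day-ahead/real-time spread.
clears = bid_prices[None, :] <= da[:, None]        # (S, K) clearing matrix
exp_profit = prob @ (clears * (da - rt)[:, None])  # (K,) expected $/MWh

V_max = 100.0                            # total volume budget (MWh, assumed)

# LP: maximize expected profit <=> minimize its negative, subject to a
# total-volume budget and nonnegative volumes at every price level.
res = linprog(
    c=-exp_profit,
    A_ub=np.ones((1, K)),
    b_ub=[V_max],
    bounds=[(0, None)] * K,
    method="highs",
)

volumes = res.x                          # optimal volume at each price level
for p, v in zip(bid_prices, volumes):
    if v > 1e-6:
        print(f"bid {v:6.1f} MWh at ${p:5.2f}/MWh")
print(f"expected profit: ${-res.fun:,.2f}")
```

As a loose illustration of the simplifications the abstract mentions: collapsing the price grid to a single always-clearing price level behaves like a self-scheduling bid, while keeping only price levels that clear in favorable scenarios resembles an opportunistic strategy.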
Related papers
- Evaluation of Prosumer Networks for Peak Load Management in Iran: A Distributed Contextual Stochastic Optimization Approach [0.0]
This paper introduces a novel prosumer network framework aimed at mitigating peak loads in Iran.
A cost-oriented integrated prediction and optimization approach is proposed, empowering prosumers to make informed decisions.
Numerical results highlight that integrating prediction with optimization and implementing a contextual information-sharing network among prosumers significantly reduces both peak loads and total costs.
arXiv Detail & Related papers (2024-08-31T16:09:38Z) - Pseudo-Bayesian Optimization [7.556071491014536]
We study an axiomatic framework that elicits the minimal requirements to guarantee black-box optimization convergence.
We show how using simple local regression, and a suitable "randomized prior" construction to quantify uncertainty, not only guarantees convergence but also consistently outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2023-10-15T07:55:28Z) - PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates [17.777466668123886]
We introduce PROMISE ($\textbf{Pr}$econditioned $\textbf{O}$ptimization $\textbf{M}$ethods by $\textbf{I}$ncorporating $\textbf{S}$calable Curvature $\textbf{E}$stimates), a suite of sketching-based preconditioned gradient algorithms.
PROMISE includes preconditioned versions of SVRG, SAGA, and Katyusha.
arXiv Detail & Related papers (2023-09-05T07:49:10Z) - Model-based Causal Bayesian Optimization [74.78486244786083]
We introduce the first algorithm for Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
We derive regret bounds for CBO-MW that naturally depend on graph-related quantities.
Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system.
arXiv Detail & Related papers (2023-07-31T13:02:36Z) - Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the \textit{PR-divergences}.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z) - Accelerated Federated Learning with Decoupled Adaptive Optimization [53.230515878096426]
The federated learning (FL) framework enables clients to collaboratively learn a shared model while keeping the privacy of training data on clients.
Recently, many efforts have been made to generalize centralized adaptive optimization methods, such as SGDM, Adam, AdaGrad, etc., to federated settings.
This work aims to develop novel adaptive optimization methods for FL from the perspective of dynamics of ordinary differential equations (ODEs).
arXiv Detail & Related papers (2022-07-14T22:46:43Z) - A Unified Convergence Theorem for Stochastic Optimization Methods [4.94128206910124]
We provide a fundamental unified convergence theorem that can be used to derive convergence results for a series of stochastic optimization methods.
As a direct application, we recover almost sure convergence results under general settings.
arXiv Detail & Related papers (2022-06-08T14:01:42Z) - Posterior Differential Regularization with f-divergence for Improving Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with an $f$-divergence can noticeably improve model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common stochastic optimization methods lead to poor variational approximations when the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - MINA: Convex Mixed-Integer Programming for Non-Rigid Shape Alignment [77.38594866794429]
We present a convex mixed-integer programming formulation for non-rigid shape matching.
We propose a novel shape deformation model based on an efficient low-dimensional discrete model.
arXiv Detail & Related papers (2020-02-28T09:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.