Effective End-to-End Learning Framework for Economic Dispatch
- URL: http://arxiv.org/abs/2002.12755v1
- Date: Sat, 22 Feb 2020 08:04:27 GMT
- Title: Effective End-to-End Learning Framework for Economic Dispatch
- Authors: Chenbei Lu, Kui Wang, Chenye Wu
- Abstract summary: We adopt the notion of end-to-end machine learning and propose a task-specific learning criterion to conduct economic dispatch.
We provide both theoretical analysis and empirical insights to highlight the effectiveness and efficiency of the proposed learning framework.
- Score: 3.034038412630808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional wisdom for improving the effectiveness of economic dispatch is to
design the load forecasting method to be as accurate as possible. However, this
approach can be problematic due to the temporal and spatial correlations
between system cost and load prediction errors. This motivates us to adopt the
notion of end-to-end machine learning and to propose a task-specific learning
criterion for conducting economic dispatch. Specifically, to maximize data
utilization, we design an efficient optimization kernel for the learning
process. We provide both theoretical analysis and empirical insights to
highlight the effectiveness and efficiency of the proposed learning framework.
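To make the abstract's idea concrete, here is a minimal sketch of a task-specific training criterion for load forecasting, assuming a simple asymmetric dispatch-cost model; the DispatchAwareLoss class, its cost weights, and the toy forecaster are illustrative inventions, not the paper's optimization kernel.

```python
# A minimal sketch of the task-specific training idea, not the paper's exact
# optimization kernel. It assumes a simple setting where under-forecasting
# load (requiring expensive spinning reserves) costs more per MW than
# over-forecasting (wasted committed capacity); c_under and c_over are
# illustrative placeholders.
import torch
import torch.nn as nn

class DispatchAwareLoss(nn.Module):
    """Penalize forecast errors by their (asymmetric) dispatch cost."""
    def __init__(self, c_under: float = 2.0, c_over: float = 1.0):
        super().__init__()
        self.c_under = c_under  # cost weight for under-forecasting
        self.c_over = c_over    # cost weight for over-forecasting

    def forward(self, forecast: torch.Tensor, actual: torch.Tensor) -> torch.Tensor:
        shortfall = torch.clamp(actual - forecast, min=0.0)  # under-forecast part
        surplus = torch.clamp(forecast - actual, min=0.0)    # over-forecast part
        return (self.c_under * shortfall + self.c_over * surplus).mean()

# Usage: train any forecaster against dispatch cost instead of plain MSE.
model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))
criterion = DispatchAwareLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

history = torch.randn(32, 24)  # past 24 hourly loads (toy data)
target = torch.randn(32, 1)    # next-hour load (toy data)
optimizer.zero_grad()
loss = criterion(model(history), target)
loss.backward()
optimizer.step()
```

The point of the substitution is that the training signal now reflects what the dispatch task actually pays for forecast errors, rather than treating all errors symmetrically.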
Related papers
- RLER-TTE: An Efficient and Effective Framework for En Route Travel Time Estimation with Reinforcement Learning [5.4674463400564886]
En Route Travel Time Estimation aims to learn driving patterns from traveled routes to achieve rapid and accurate real-time predictions.
Existing methods ignore the complexity and dynamism of real-world traffic systems, resulting in significant gaps in efficiency and accuracy in real-time scenarios.
This paper proposes a novel framework that redefines the implementation path of ER-TTE to achieve highly efficient and effective predictions.
arXiv Detail & Related papers (2025-01-26T11:49:34Z)
- Efficiency optimization of large-scale language models based on deep learning in natural language processing tasks [6.596361762662328]
The internal structure and operating mechanisms of large-scale language models are analyzed theoretically.
We evaluate the contributions of adaptive optimization algorithms (such as AdamW), massively parallel computing techniques, and mixed-precision training strategies (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-05-20T00:10:00Z)
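A minimal sketch of two of the techniques evaluated in the entry above, AdamW and mixed-precision training, using standard PyTorch AMP APIs; the toy model, data, and hyperparameters are assumptions for illustration, not the paper's settings.

```python
# AdamW plus mixed-precision training with PyTorch AMP (illustrative only).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 512, device=device)
y = torch.randn(8, 512, device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device,
                    dtype=torch.float16 if device == "cuda" else torch.bfloat16):
    loss = nn.functional.mse_loss(model(x), y)  # forward pass in reduced precision
scaler.scale(loss).backward()  # scale loss to avoid fp16 gradient underflow
scaler.step(optimizer)         # unscale gradients, then apply the AdamW update
scaler.update()
```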
- Compute-Efficient Active Learning [0.0]
Active learning aims to reduce labeling costs by selecting the most informative samples from an unlabeled dataset.
The traditional active learning process often demands extensive computational resources, hindering scalability and efficiency.
We present a novel method designed to alleviate the computational burden associated with active learning on massive datasets (the baseline selection loop is sketched after this entry).
arXiv Detail & Related papers (2024-01-15T12:32:07Z)
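For context, a minimal sketch of the core loop the entry above accelerates: query the pool samples the current model is most uncertain about. Entropy sampling and the scikit-learn classifier are illustrative choices, not the paper's method.

```python
# Baseline uncertainty-sampling active learning (illustrative stand-in).
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy_sampling(model, X_pool: np.ndarray, batch_size: int) -> np.ndarray:
    """Return indices of the batch_size most uncertain pool samples."""
    proba = model.predict_proba(X_pool)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(-entropy)[:batch_size]

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 5))            # small seed set (toy data)
y_labeled = rng.integers(0, 2, size=20)
X_pool = rng.normal(size=(1000, 5))             # large unlabeled pool

model = LogisticRegression().fit(X_labeled, y_labeled)
query = entropy_sampling(model, X_pool, batch_size=10)  # send these for labeling
```

The compute cost driving the paper's concern is visible here: every query round re-scores the entire pool with the current model.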
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper reviews existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Towards a General Framework for Continual Learning with Pre-training [55.88910947643436]
We present a general framework for continual learning of sequentially arriving tasks with the use of pre-training.
We decompose its objective into three hierarchical components: within-task prediction, task-identity inference, and task-adaptive prediction (one probabilistic reading is sketched below).
We propose an innovative approach to explicitly optimize these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics.
arXiv Detail & Related papers (2023-10-21T02:03:38Z)
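One probabilistic reading of the three components named in the entry above, offered as a sketch only; this is the standard decomposition of class-incremental prediction over task identity t, not necessarily the paper's exact objective.

```latex
% Three components of continual-learning prediction (illustrative reading):
% within-task prediction p(y|x,t), task-identity inference p(t|x), and the
% task-adaptive prediction obtained by combining them.
\[
  p(y \mid x) \;=\; \sum_{t}
  \underbrace{p(y \mid x, t)}_{\text{within-task prediction}}
  \cdot
  \underbrace{p(t \mid x)}_{\text{task-identity inference}}
\]
```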
- Online Learning and Optimization for Queues with Unknown Demand Curve and Service Distribution [26.720986177499338]
We investigate an optimization problem in a queueing system where the service provider selects the optimal service fee p and service capacity μ.
We develop an online learning framework that automatically incorporates the parameter estimation errors into the solution prescription process (a toy version of this loop follows).
arXiv Detail & Related papers (2023-03-06T08:47:40Z)
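A toy sketch of the decision loop in the entry above: adjust the fee p and capacity μ from noisy profit observations alone, without knowing the demand curve. The two-point gradient estimator and the hidden profit oracle are illustrative stand-ins, not the paper's algorithm.

```python
# Online tuning of (p, mu) from noisy profit feedback (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def observed_profit(p: float, mu: float) -> float:
    """Toy stand-in for one period of operation (unknown to the learner)."""
    demand = max(0.0, 10.0 - 2.0 * p)                    # hidden demand curve
    served = min(demand, mu)
    return p * served - 0.5 * mu + rng.normal(0, 0.1)    # revenue - capacity cost + noise

p, mu, delta, step = 2.0, 5.0, 0.5, 0.05
for t in range(1, 1001):
    # Two-point finite-difference estimates of the profit gradient.
    grad_p = (observed_profit(p + delta, mu) - observed_profit(p - delta, mu)) / (2 * delta)
    grad_mu = (observed_profit(p, mu + delta) - observed_profit(p, mu - delta)) / (2 * delta)
    p += step / np.sqrt(t) * grad_p    # diminishing steps average out the
    mu += step / np.sqrt(t) * grad_mu  # estimation noise over time
    p, mu = max(p, 0.0), max(mu, 0.1)  # keep decisions feasible
```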
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG); the standard EIG definition is given after this entry.
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
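For reference, the quantity whose bounds the variational model optimizes; this is the standard definition of expected information gain, not notation specific to the paper.

```latex
% Expected information gain (EIG) of a design d: the mutual information
% between the parameters theta and the outcome y under that design.
\[
  \mathrm{EIG}(d)
  = \mathbb{E}_{p(\theta)\, p(y \mid \theta, d)}
    \!\left[ \log \frac{p(y \mid \theta, d)}{p(y \mid d)} \right],
  \qquad
  p(y \mid d) = \int p(y \mid \theta, d)\, p(\theta)\, \mathrm{d}\theta .
\]
```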
- TRAIL: Near-Optimal Imitation Learning with Suboptimal Data [100.83688818427915]
We present training objectives that use offline datasets to learn a factored transition model.
Our theoretical analysis shows that the learned latent action space can boost the sample efficiency of downstream imitation learning.
To learn the latent action space in practice, we propose TRAIL (Transition-Reparametrized Actions for Imitation Learning), an algorithm that learns an energy-based transition model.
arXiv Detail & Related papers (2021-10-27T21:05:00Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating, and analyzing federated optimization algorithms (the canonical FedAvg pattern is sketched below).
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
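As context for the entry above, a minimal sketch of the canonical federated optimization pattern (FedAvg-style local updates plus server averaging); the quadratic local objectives and all constants are toy assumptions, not anything prescribed by the field guide.

```python
# FedAvg-style federated optimization on toy local objectives.
import numpy as np

def local_update(w: np.ndarray, data_mean: np.ndarray,
                 steps: int = 5, lr: float = 0.1) -> np.ndarray:
    """Client-side SGD on a toy local objective ||w - data_mean||^2."""
    for _ in range(steps):
        w = w - lr * 2 * (w - data_mean)
    return w

rng = np.random.default_rng(0)
client_means = [rng.normal(size=3) for _ in range(10)]  # decentralized data
w_global = np.zeros(3)

for round_ in range(20):
    # Each round: broadcast the global model, run local steps, average back.
    client_models = [local_update(w_global.copy(), m) for m in client_means]
    w_global = np.mean(client_models, axis=0)
```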
- Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning [100.73223416589596]
We propose a cost-sensitive portfolio selection method with deep reinforcement learning.
Specifically, a novel two-stream portfolio policy network is devised to extract both price series patterns and asset correlations.
A new cost-sensitive reward function is developed to maximize the accumulated return and constrain both costs via reinforcement learning (such a reward is sketched below).
arXiv Detail & Related papers (2020-03-06T06:28:17Z)
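A minimal sketch of a cost-sensitive reward of the kind the entry above describes: log return on the rebalanced portfolio net of a transaction-cost penalty on turnover. The cost rate and toy numbers are assumptions, not the paper's reward function.

```python
# Log-return reward with a proportional transaction-cost penalty (illustrative).
import numpy as np

def cost_sensitive_reward(w_new: np.ndarray, w_old: np.ndarray,
                          price_relatives: np.ndarray,
                          cost_rate: float = 0.0025) -> float:
    """Log portfolio return net of proportional transaction costs."""
    turnover = np.abs(w_new - w_old).sum()         # fraction of wealth traded
    gross_return = float(w_new @ price_relatives)  # p_t / p_{t-1} per asset
    return np.log(gross_return * (1.0 - cost_rate * turnover))

w_old = np.array([0.5, 0.3, 0.2])
w_new = np.array([0.4, 0.4, 0.2])               # agent's new allocation
price_relatives = np.array([1.01, 0.99, 1.02])  # today's price ratios
r = cost_sensitive_reward(w_new, w_old, price_relatives)
```

Because the penalty enters the reward itself, an RL agent trained on it learns to trade off return against rebalancing frequency rather than churning the portfolio every step.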
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.