Bucketized Active Sampling for Learning ACOPF
- URL: http://arxiv.org/abs/2208.07497v3
- Date: Mon, 8 Jul 2024 21:00:14 GMT
- Title: Bucketized Active Sampling for Learning ACOPF
- Authors: Michael Klamkin, Mathieu Tanneau, Terrence W. K. Mak, Pascal Van Hentenryck
- Abstract summary: This paper proposes Bucketized Active Sampling (BAS) to meet the requirements of market-clearing applications.
BAS partitions the input domain into buckets and uses an acquisition function to determine where to sample next.
BAS also relies on an adaptive learning rate that increases and decreases over time.
- Score: 15.509961352249434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers optimization proxies for Optimal Power Flow (OPF), i.e., machine-learning models that approximate the input/output relationship of OPF. Recent work has focused on showing that such proxies can be of high fidelity. However, their training requires significant data, each instance necessitating the (offline) solving of an OPF. To meet the requirements of market-clearing applications, this paper proposes Bucketized Active Sampling (BAS), a novel active learning framework that aims at training the best possible OPF proxy within a time limit. BAS partitions the input domain into buckets and uses an acquisition function to determine where to sample next. By applying the same partitioning to the validation set, BAS leverages labeled validation samples in the selection of unlabeled samples. BAS also relies on an adaptive learning rate that increases and decreases over time. Experimental results demonstrate the benefits of BAS.
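The bucketized sampling loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the equal-width 1-D buckets, the validation-loss-based acquisition rule, and the toy pool below are all assumptions made for the sake of the example.

```python
def bucket_of(x, n_buckets=4, lo=0.0, hi=1.0):
    """Map an input x in [lo, hi) to one of n_buckets equal-width buckets."""
    idx = int((x - lo) / (hi - lo) * n_buckets)
    return min(max(idx, 0), n_buckets - 1)

def select_batch(unlabeled, val_loss_per_bucket, batch_size):
    """Acquisition step: pick unlabeled points from the buckets with the
    highest validation loss, i.e. the input regions the proxy fits worst.
    The same bucketing is applied to the labeled validation set to compute
    val_loss_per_bucket, so labeled samples guide the unlabeled selection."""
    ranked = sorted(unlabeled,
                    key=lambda x: -val_loss_per_bucket[bucket_of(x)])
    return ranked[:batch_size]

# Toy usage: validation loss says bucket 3 (x >= 0.75) is fit worst,
# so the selected batch concentrates there.
pool = [0.1, 0.3, 0.5, 0.8, 0.9, 0.2, 0.95]
losses = {0: 0.05, 1: 0.02, 2: 0.10, 3: 0.40}
batch = select_batch(pool, losses, batch_size=3)
print(batch)  # -> [0.8, 0.9, 0.95]
```

In the full framework, each selected point would then be labeled by solving its OPF instance offline before the proxy is retrained; the adaptive learning rate mentioned in the abstract governs that retraining phase and is omitted here.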
Related papers
- Training-Free Unsupervised Prompt for Vision-Language Models [27.13778811871694]
We propose Training-Free Unsupervised Prompts (TFUP) to preserve inherent representation capabilities and enhance them with a residual connection to similarity-based prediction probabilities.
TFUP achieves surprising performance, even surpassing training-based methods on multiple classification datasets.
Our TFUP-T achieves new state-of-the-art classification performance compared to unsupervised and few-shot adaptation approaches on multiple benchmarks.
arXiv Detail & Related papers (2024-04-25T05:07:50Z) - Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on quantized 8-bit ViT, outperforms gradient-based TENT on full-precision 32-bit ViT, and achieves an up to 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z) - Exploiting Counter-Examples for Active Learning with Partial labels [45.665996618836516]
This paper studies a new problem, active learning with partial labels (ALPL).
In this setting, an oracle annotates the query samples with partial labels, relaxing the oracle from the demanding accurate labeling process.
We propose a simple but effective WorseNet to directly learn from this pattern.
arXiv Detail & Related papers (2023-07-14T15:41:53Z) - BatchGFN: Generative Flow Networks for Batch Active Learning [80.73649229919454]
BatchGFN is a novel approach for pool-based active learning that uses generative flow networks to sample sets of data points proportional to a batch reward.
We show our approach enables principled sampling of near-optimal-utility batches at inference time, with a single forward pass per point in the batch, on toy regression problems.
arXiv Detail & Related papers (2023-06-26T20:41:36Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shifts by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - A Novel Approach for Optimum-Path Forest Classification Using Fuzzy Logic [13.313728527879306]
Fuzzy Optimum-Path Forest is an improved version of the standard OPF classifier.
It learns the samples' memberships in an unsupervised fashion, which are then incorporated during supervised training.
Experiments conducted over twelve public datasets highlight the robustness of the proposed approach.
arXiv Detail & Related papers (2022-04-13T20:55:30Z) - Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z) - A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z) - OPF-Learn: An Open-Source Framework for Creating Representative AC Optimal Power Flow Datasets [0.0]
This paper develops the OPF-Learn package for Julia and Python, which uses a computationally efficient approach to create representative datasets.
The framework is shown to generate datasets that are more representative of the entire feasible space than those produced by traditional techniques seen in the literature.
arXiv Detail & Related papers (2021-11-01T19:35:09Z) - Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z) - Data-driven Optimal Power Flow: A Physics-Informed Machine Learning Approach [6.5382276424254995]
This paper proposes a data-driven approach for optimal power flow (OPF) based on the stacked extreme learning machine (SELM) framework.
A data-driven OPF regression framework is developed that decomposes the OPF model features into three stages.
Numerical experiments on IEEE and Polish benchmark systems demonstrate that the proposed method outperforms other alternatives.
arXiv Detail & Related papers (2020-05-31T15:41:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.