Active Learning in the Predict-then-Optimize Framework: A Margin-Based Approach
- URL: http://arxiv.org/abs/2305.06584v1
- Date: Thu, 11 May 2023 05:44:36 GMT
- Title: Active Learning in the Predict-then-Optimize Framework: A Margin-Based Approach
- Authors: Mo Liu, Paul Grigas, Heyuan Liu, Zuo-Jun Max Shen
- Abstract summary: We develop a learning method that sequentially decides whether to request the "labels" of feature samples from an unlabeled data stream.
Our active learning method is the first to be directly informed by the decision error induced by the predicted parameters.
- Score: 5.371816551086118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop the first active learning method in the predict-then-optimize
framework. Specifically, we develop a learning method that sequentially decides
whether to request the "labels" of feature samples from an unlabeled data
stream, where the labels correspond to the parameters of an optimization model
for decision-making. Our active learning method is the first to be directly
informed by the decision error induced by the predicted parameters, which is
referred to as the Smart Predict-then-Optimize (SPO) loss. Motivated by the
structure of the SPO loss, our algorithm adopts a margin-based criterion
utilizing the concept of distance to degeneracy and minimizes a tractable
surrogate of the SPO loss on the collected data. In particular, we develop an
efficient active learning algorithm with both hard and soft rejection variants,
each with theoretical excess risk (i.e., generalization) guarantees. We further
derive bounds on the label complexity, which refers to the number of samples
whose labels are acquired to achieve a desired small level of SPO risk. Under
some natural low-noise conditions, we show that these bounds can be better than
the naive supervised learning approach that labels all samples. Furthermore,
when using the SPO+ loss function, a specialized surrogate of the SPO loss, we
derive a significantly smaller label complexity under separability conditions.
We also present numerical evidence showing the practical value of our proposed
algorithms in the settings of personalized pricing and the shortest path
problem.
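The margin-based criterion described in the abstract can be illustrated in a toy setting. The sketch below is illustrative only: the feasible region (a small finite set of decisions), the linear model, the margin threshold, and the least-squares-style update are all stand-in assumptions, and the gradient step replaces the tractable SPO-loss surrogate (e.g., SPO+) that the paper actually minimizes. It only shows the mechanics of querying a label when the predicted cost vector is close to degeneracy, i.e., when a small perturbation could flip the optimal decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feasible region: three fixed decisions (the columns of W).
W = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])

def solve(c):
    """Optimization oracle: the feasible decision minimizing c @ w."""
    return W[:, np.argmin(c @ W)]

def spo_loss(c_hat, c_true):
    """Decision regret incurred by acting on predicted costs c_hat."""
    return c_true @ solve(c_hat) - c_true @ solve(c_true)

def distance_to_degeneracy(c_hat):
    """Gap between the best and second-best objective values; a small
    gap means a tiny perturbation of c_hat could change the decision."""
    vals = np.sort(c_hat @ W)
    return vals[1] - vals[0]

theta_star = rng.normal(size=(2, 3))   # unknown true model (simulation only)
theta_hat = np.zeros((2, 3))           # current estimate
margin, queried = 0.3, 0               # query threshold (tuning parameter)

for t in range(500):
    x = rng.normal(size=3)
    c_hat = theta_hat @ x
    # Margin-based criterion (hard-rejection flavor): request the label
    # only when the predicted costs are within `margin` of degeneracy.
    if distance_to_degeneracy(c_hat) < margin:
        queried += 1
        c_true = theta_star @ x + 0.05 * rng.normal(size=2)
        # Crude least-squares-style update toward the observed costs;
        # the paper instead minimizes a surrogate of the SPO loss (SPO+).
        theta_hat += 0.1 * np.outer(c_true - c_hat, x)

print("labels requested:", queried, "of", 500)
```

As the estimate improves, most incoming samples have a large distance to degeneracy and are rejected without labeling, which is the mechanism behind the label-complexity savings over labeling every sample.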
Related papers
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- Easy Learning from Label Proportions [17.71834385754893]
EasyLLP is a flexible, simple-to-implement debiasing approach based on aggregate labels.
Our technique allows us to accurately estimate the expected loss of an arbitrary model at an individual level.
arXiv Detail & Related papers (2023-02-06T20:41:38Z)
- Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning [43.51182049644767]
Semi-supervised learning (SSL) has long been proved to be an effective technique to construct powerful models with limited labels.
Regularization-based methods, which force perturbed samples to have predictions similar to the original ones, have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
arXiv Detail & Related papers (2022-02-24T06:00:05Z)
- Knowledge-driven Active Learning [70.37119719069499]
Active learning strategies aim at minimizing the amount of labelled data required to train a Deep Learning model.
Most active learning strategies are based on uncertainty-driven sample selection, and are often restricted to samples lying close to the decision boundary.
Here we propose to take common domain knowledge into consideration, enabling non-expert users to train a model with fewer samples.
arXiv Detail & Related papers (2021-10-15T06:11:53Z)
- Neural Active Learning with Performance Guarantees [37.16062387461106]
We investigate the problem of active learning in the streaming setting in non-parametric regimes, where the labels are generated from a class of functions on which we make no assumptions whatsoever.
We rely on recently proposed Neural Tangent Kernel (NTK) approximation tools to construct a suitable neural embedding that determines the feature space the algorithm operates on and the model learned on top of it.
arXiv Detail & Related papers (2021-06-06T20:44:23Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific ways, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel classifier framework that is flexible in both the model and the optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.