Active Level Set Estimation for Continuous Search Space with Theoretical Guarantee
- URL: http://arxiv.org/abs/2402.16237v1
- Date: Mon, 26 Feb 2024 01:46:56 GMT
- Title: Active Level Set Estimation for Continuous Search Space with Theoretical Guarantee
- Authors: Giang Ngo, Dang Nguyen, Dat Phan-Trong, Sunil Gupta
- Abstract summary: We propose a novel algorithm that does not need any discretization and can directly work in continuous search spaces.
Our method suggests points by constructing an acquisition function defined as a measure of confidence that the function is above or below the given threshold.
On multiple synthetic and real-world datasets, our algorithm successfully outperforms state-of-the-art methods.
- Score: 10.609848119555975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common problem encountered in many real-world applications is level set
estimation where the goal is to determine the region in the function domain
where the function is above or below a given threshold. When the function is
black-box and expensive to evaluate, the level sets need to be found with a
minimal number of function evaluations. Existing methods often assume a
discrete search space with a finite set of data points for function
evaluations and level set estimation. When applied to a continuous search
space, these methods often need to first discretize the space, which leads to
poor results while requiring high computational time. While some methods
cater to the continuous setting, they still lack a proper guarantee of
theoretical convergence. To address this problem, we propose a novel
algorithm that does not need any discretization and can work directly in
continuous search spaces.
Our method suggests points by constructing an acquisition function defined as
a measure of confidence that the function is above or below the given
threshold. A theoretical analysis for the convergence of the
algorithm to an accurate solution is provided. On multiple synthetic and
real-world datasets, our algorithm successfully outperforms state-of-the-art
methods.
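For concreteness, below is a minimal sketch of this style of method in Python: a Gaussian process surrogate with a straddle-style confidence acquisition that is maximized by a multi-start continuous optimizer instead of over a discretized grid. The acquisition rule, the exploration parameter beta, and all names are illustrative assumptions based on the abstract, not the paper's exact construction.

import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def estimate_level_set(f, bounds, threshold, n_init=5, n_iter=30, beta=3.0, seed=0):
    """Sequentially query f to locate {x : f(x) >= threshold} on a box domain."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = rng.uniform(lo, hi, size=(n_init, dim))   # initial random design
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        def neg_acq(x):
            mu, sigma = gp.predict(x.reshape(1, -1), return_std=True)
            # High where the confidence interval straddles the threshold,
            # i.e., where we are least sure which side of the level set x is on.
            return -(beta * sigma[0] - abs(mu[0] - threshold))
        # Multi-start continuous maximization: no grid over the search space.
        best_x, best_val = None, np.inf
        for x0 in rng.uniform(lo, hi, size=(10, dim)):
            res = minimize(neg_acq, x0, bounds=list(zip(lo, hi)))
            if res.fun < best_val:
                best_x, best_val = res.x, res.fun
        X = np.vstack([X, best_x])
        y = np.append(y, f(best_x))
    gp.fit(X, y)
    # Classify by the posterior mean: predicted superlevel set membership.
    return lambda x: gp.predict(np.atleast_2d(np.asarray(x, dtype=float)))[0] >= threshold

As a usage example, estimate_level_set(lambda x: -np.sum(x**2), [(-2.0, 2.0), (-2.0, 2.0)], threshold=-1.0) returns a classifier that should label the unit disk as the superlevel region.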
Related papers
- An $(\epsilon,\delta)$-accurate level set estimation with a stopping criterion [2.5124069610074895]
This paper introduces an acquisition strategy for level set estimation that incorporates a stopping criterion.
We show that our method satisfies $\epsilon$-accuracy with a confidence level of $1 - \delta$, addressing a key gap in existing approaches.
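One common way to formalize such a guarantee (a hedged reading based on the summary, with $\hat{H}$ and $\hat{L}$ denoting the estimated super- and sublevel sets at threshold $h$; the paper's exact definition may differ): with probability at least $1 - \delta$, every point is classified correctly up to an $\epsilon$ margin around the threshold,
$P\big(\hat{H} \subseteq \{x : f(x) > h - \epsilon\} \text{ and } \hat{L} \subseteq \{x : f(x) < h + \epsilon\}\big) \ge 1 - \delta.$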
arXiv Detail & Related papers (2025-03-26T06:41:07Z)
- Single-Loop Deterministic and Stochastic Interior-Point Algorithms for Nonlinearly Constrained Optimization [16.356481969865175]
An interior-point algorithm is proposed, analyzed, and tested for solving nonlinearly constrained continuous optimization problems.
The algorithm is intended for the setting in which gradient estimates are available and employed in place of exact gradients, and in which no objective function values are used.
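For background, a generic interior-point method (illustrative textbook form, not necessarily the paper's exact formulation) handles constraints $c_i(x) \ge 0$ by solving a sequence of log-barrier subproblems
$\min_x \; f(x) - \mu \sum_i \log c_i(x)$
while driving the barrier parameter $\mu \downarrow 0$; here the inner solves would use the available gradient estimates.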
arXiv Detail & Related papers (2024-08-29T00:50:35Z)
- Inference for an Algorithmic Fairness-Accuracy Frontier [0.9147443443422864]
We provide a consistent estimator for a theoretical fairness-accuracy frontier put forward by Liang, Lu and Mu (2023).
We propose inference methods to test hypotheses that have received much attention in the fairness literature.
We show that the estimated support function converges to a tight process as the sample size increases.
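For context, the support function of a set $K \subseteq \mathbb{R}^d$ is the standard object $h_K(u) = \sup_{x \in K} \langle u, x \rangle$; since it fully characterizes a closed convex set, inference on the frontier can be carried out through inference on $h_K$.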
arXiv Detail & Related papers (2024-02-14T00:56:09Z)
- The Reinforce Policy Gradient Algorithm Revisited [7.894349646617293]
We revisit the Reinforce policy gradient algorithm from the literature.
We propose a major enhancement to the basic algorithm.
We provide a proof of convergence for this new algorithm.
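For reference, the basic Reinforce estimator being revisited is the standard score-function form of the policy gradient,
$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\big[\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t\big],$
where $G_t$ is the return from step $t$ onward (the paper's enhancement builds on top of this baseline).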
arXiv Detail & Related papers (2023-10-08T04:05:13Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
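Here "low-rank" refers to the standard factorization assumption (stated generically for orientation): the transition kernel decomposes as $P(s' \mid s, a) = \langle \phi(s, a), \mu(s') \rangle$ for $d$-dimensional feature maps $\phi$ and $\mu$, so that sample complexity can scale with $d$ rather than with the number of states.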
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
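Schematically, and consistent with the summary (the paper may also treat the reverse ordering), such problems take the form
$\min_{x \in \mathcal{X}} \max_{S \subseteq V} f(x, S),$
where $f(\cdot, S)$ is convex for each fixed discrete set $S$ and $f(x, \cdot)$ is submodular for each fixed continuous point $x$.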
arXiv Detail & Related papers (2021-11-01T21:06:35Z)
- A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient reinforcement learning algorithms whose complexity is independent of the number of states of the decision process.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z)
- A Granular Sieving Algorithm for Deterministic Global Optimization [6.01919376499018]
A gradient-free deterministic method is developed to solve global optimization problems for Lipschitz continuous functions.
The method can be regarded as granular sieving with synchronous analysis in both the domain and range of the objective function.
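A generic mechanism behind such methods (an illustrative Lipschitz bound, not necessarily the paper's exact sieving rule): if $f$ is $L$-Lipschitz, then on a granule $B$ with center $c$,
$\min_{x \in B} f(x) \ge f(c) - L \cdot \mathrm{rad}(B),$
so any granule whose lower bound exceeds the best value found so far can be discarded and the remainder refined.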
arXiv Detail & Related papers (2021-07-14T10:03:03Z)
- Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems [11.498089180181365]
We study a continuous-time decentralized mirror descent that uses purely local gradient information to converge to the global optimal solution.
We prove that the algorithm indeed achieves (local) exponential convergence.
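For orientation, the single-agent continuous-time mirror descent underlying this setting (the distributed, integral-feedback variant adds consensus coupling between agents) evolves a dual state through a strongly convex generator $\psi$:
$\dot{z}(t) = -\nabla f(x(t)), \qquad x(t) = \nabla \psi^*(z(t)).$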
arXiv Detail & Related papers (2020-11-24T17:27:27Z)
- Aligning Partially Overlapping Point Sets: an Inner Approximation Algorithm [80.15123031136564]
We propose a robust method to align point sets when there is no prior information about the transformation between them.
Because our algorithm does not need regularization on the transformation, it handles this setting directly.
Experimental results demonstrate the better robustness of the proposed method over state-of-the-art algorithms.
arXiv Detail & Related papers (2020-07-05T15:23:33Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
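The standard ingredient in this line of work is a randomized two-point gradient estimate (generic form; the paper's version additionally exploits higher-order smoothness through a smoothing kernel):
$\hat{g} = \frac{d}{2h}\big(f(x + h\zeta) - f(x - h\zeta)\big)\zeta, \qquad \zeta \sim \mathrm{Unif}(\mathbb{S}^{d-1}),$
which approximates $\nabla f(x)$ in expectation for small $h$ and is plugged into projected gradient descent.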
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
- Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to deep learning applications (e.g., deep AUC maximization).
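The PL (Polyak-Łojasiewicz) condition in the title is the standard inequality, stated generically: a differentiable function $g$ satisfies PL with constant $\mu > 0$ if
$\frac{1}{2}\|\nabla g(x)\|^2 \ge \mu\,\big(g(x) - \min_{x'} g(x')\big),$
which yields linear convergence rates without requiring convexity.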
arXiv Detail & Related papers (2020-06-12T00:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.