OASIS: An Active Framework for Set Inversion
- URL: http://arxiv.org/abs/2105.15024v1
- Date: Mon, 31 May 2021 15:04:43 GMT
- Title: OASIS: An Active Framework for Set Inversion
- Authors: Binh T. Nguyen, Duy M. Nguyen, Lam Si Tung Ho, Vu Dinh
- Abstract summary: We introduce a novel method for solving the set inversion problem by formulating it as a binary classification problem.
We focus on active learning, a family of new and powerful techniques which can achieve the same level of accuracy with fewer data points compared to traditional learning methods.
- Score: 4.014524824655106
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we introduce a novel method for solving the set inversion
problem by formulating it as a binary classification problem. Aiming to develop
a fast algorithm that can work effectively with high-dimensional and
computationally expensive nonlinear models, we focus on active learning, a
family of new and powerful techniques which can achieve the same level of
accuracy with fewer data points compared to traditional learning methods.
Specifically, we propose OASIS, an active learning framework using Support
Vector Machine algorithms for solving the problem of set inversion. Our method
works well in high dimensions, and its computational cost is relatively robust
to increases in dimension. We illustrate the performance of OASIS through several
simulation studies and show that our algorithm outperforms VISIA, the
state-of-the-art method.
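To make the formulation concrete, the following is a minimal, hypothetical Python sketch of set inversion cast as binary classification with SVM-based, uncertainty-driven active learning. The toy model f, the threshold defining the target set, the pool size, and the query-near-the-boundary heuristic are illustrative assumptions, not the authors' OASIS implementation:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical expensive nonlinear model mapping R^d to R (illustrative only).
    return np.sum(x ** 2, axis=-1)

def in_target_set(x, threshold=5.0):
    # Membership label: 1 if f(x) lies in the target set (here, f(x) <= threshold).
    return (f(x) <= threshold).astype(int)

d = 4                                          # input dimension
pool = rng.uniform(-2.0, 2.0, size=(2000, d))  # unlabeled candidate pool

# Seed the classifier with a small random labeled sample.
idx = rng.choice(len(pool), size=20, replace=False)
X_lab, y_lab = pool[idx], in_target_set(pool[idx])

for _ in range(30):                            # active-learning iterations
    clf = SVC(kernel="rbf", C=10.0).fit(X_lab, y_lab)
    # Query the pool point closest to the current decision boundary
    # (smallest |decision_function| value), a standard uncertainty heuristic.
    q = int(np.argmin(np.abs(clf.decision_function(pool))))
    X_lab = np.vstack([X_lab, pool[q]])
    y_lab = np.append(y_lab, in_target_set(pool[q:q + 1]))

# clf.predict now approximates membership in the inverse set {x : f(x) <= threshold}.

In this uncertainty-sampling scheme, each iteration spends only one evaluation of the expensive model on the point the current SVM is least sure about, which is why active learning can reach a given accuracy with far fewer labeled points than uniform sampling.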
Related papers
- Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Neural Algorithmic Reasoning Without Intermediate Supervision [21.852775399735005]
We focus on learning neural algorithmic reasoning only from input-output pairs, without appealing to intermediate supervision.
We build a self-supervised objective that can regularise intermediate computations of the model without access to the algorithm trajectory.
We demonstrate that our approach is competitive with its trajectory-supervised counterpart on tasks from the CLRS Algorithmic Reasoning Benchmark.
arXiv Detail & Related papers (2023-06-23T09:57:44Z) - Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning
Problems [0.7614628596146599]
We propose a novel algorithm for adaptive step length selection in the classical SGD framework.
Under reasonable conditions, the algorithm produces step lengths in line with well-established theoretical requirements.
We show that the algorithm can generate step lengths comparable to the best step length obtained from manual tuning.
arXiv Detail & Related papers (2023-05-17T06:22:11Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
By minimizing the population loss, we introduce two regret metrics that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit
Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - Fast Instrument Learning with Faster Rates [34.271656281468175]
We propose a simple algorithm which combines kernelized IV methods and an arbitrary, adaptive regression algorithm, accessed as a black box.
Our algorithm enjoys faster-rate convergence and adapts to the dimensionality of informative latent features, while avoiding an expensive minimax optimization procedure.
arXiv Detail & Related papers (2022-05-22T08:06:54Z) - Learning for Spatial Branching: An Algorithm Selection Approach [0.0]
We develop a learning framework for branching and show its efficacy in the context of non-linear optimization problems.
The proposed learning is performed offline, based on instance-specific features and with no computational overhead when solving new instances.
Experiments on different benchmark instances from the literature show that the learning-based branching rule significantly outperforms the standard rules.
arXiv Detail & Related papers (2022-04-22T17:23:43Z) - ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
arXiv Detail & Related papers (2021-10-13T19:36:50Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.