SOCKS: A Stochastic Optimal Control and Reachability Toolbox Using
Kernel Methods
- URL: http://arxiv.org/abs/2203.06290v1
- Date: Sat, 12 Mar 2022 00:09:08 GMT
- Title: SOCKS: A Stochastic Optimal Control and Reachability Toolbox Using
Kernel Methods
- Authors: Adam J. Thorpe, Meeko M. K. Oishi
- Abstract summary: SOCKS is a data-driven optimal control toolbox based in kernel methods.
We present the main features of SOCKS and demonstrate its capabilities on several benchmarks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SOCKS, a data-driven stochastic optimal control toolbox based in
kernel methods. SOCKS is a collection of data-driven algorithms that compute
approximate solutions to stochastic optimal control problems with arbitrary
cost and constraint functions, including stochastic reachability, which seeks
to determine the likelihood that a system will reach a desired target set while
respecting a set of pre-defined safety constraints. Our approach relies upon a
class of machine learning algorithms based in kernel methods, a nonparametric
technique which can be used to represent probability distributions in a
high-dimensional space of functions known as a reproducing kernel Hilbert
space. As a nonparametric technique, kernel methods are inherently data-driven,
meaning that they do not place prior assumptions on the system dynamics or the
structure of the uncertainty. This makes the toolbox amenable to a wide variety
of systems, including those with nonlinear dynamics, black-box elements, and
poorly characterized stochastic disturbances. We present the main features of
SOCKS and demonstrate its capabilities on several benchmarks.
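The core idea the abstract describes, representing a probability distribution by its empirical mean embedding in a reproducing kernel Hilbert space, can be illustrated with a minimal NumPy sketch. This is not SOCKS itself; the function names, the Gaussian kernel, and the maximum mean discrepancy (MMD) comparison are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Squared maximum mean discrepancy between the empirical kernel mean
    # embeddings of the samples X and Y (biased V-statistic estimator).
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))   # samples from distribution P
Y = rng.normal(3.0, 1.0, size=(200, 1))   # samples from Q (shifted mean)
print(mmd2(X, X))  # zero: identical samples embed to the same RKHS point
print(mmd2(X, Y))  # clearly positive: the embeddings are far apart
```

Because the embedding is built purely from samples, no model of the underlying dynamics or disturbance is required, which is the sense in which such kernel methods are "inherently data-driven."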
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to the coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- Kernel Alignment for Unsupervised Feature Selection via Matrix Factorization [8.020732438595905]
Unsupervised feature selection has been proven effective in alleviating the so-called curse of dimensionality.
Most existing matrix factorization-based unsupervised feature selection methods are built upon subspace learning.
In this paper, we construct a model by integrating kernel functions and kernel alignment, which can be equivalently characterized as a matrix factorization problem.
By doing so, our model can learn both linear and nonlinear similarity information and automatically generate the most appropriate kernel.
arXiv Detail & Related papers (2024-03-13T20:35:44Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
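The kernel-regression ingredient mentioned above can be sketched on its own. This is a plain kernel ridge regressor, not the KBASS estimator; the function name and hyperparameters are illustrative assumptions:

```python
import numpy as np

def krr_fit_predict(X, y, X_test, sigma=1.0, lam=1e-3):
    # Kernel ridge regression: solve (K + lam * I) alpha = y,
    # then predict f(x) = k(x, X) @ alpha with a Gaussian (RBF) kernel.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    alpha = np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)
    return k(X_test, X) @ alpha

# Recover a smooth target function from noiseless samples.
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X[:, 0])
X_test = np.linspace(0, 2 * np.pi, 100)[:, None]
pred = krr_fit_predict(X, y, X_test)
print(np.max(np.abs(pred - np.sin(X_test[:, 0]))))  # small interpolation error
```

The regularizer `lam` trades off data fit against smoothness, which is what makes such estimators comparatively robust to sparse or noisy data.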
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Multistage Stochastic Optimization via Kernels [3.7565501074323224]
We develop a non-parametric, data-driven, tractable approach for solving multistage stochastic optimization problems.
We show that the proposed method produces decision rules with near-optimal average performance.
arXiv Detail & Related papers (2023-03-11T23:19:32Z)
- RFFNet: Large-Scale Interpretable Kernel Methods via Random Fourier Features [3.0079490585515347]
We introduce RFFNet, a scalable method that learns the kernel relevances on the fly via first-order optimization.
We show that our approach has a small memory footprint and run-time, low prediction error, and effectively identifies relevant features.
We supply users with an efficient PyTorch-based library that adheres to the scikit-learn standard API, along with code for fully reproducing our results.
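The random Fourier feature approximation underlying this line of work can be sketched as follows. This is a generic Rahimi–Recht-style feature map for the Gaussian kernel, not RFFNet's learned-relevance variant; names and parameters are illustrative assumptions:

```python
import numpy as np

def rff_features(X, n_features=5000, sigma=1.0, seed=None):
    # Random Fourier feature map z(x) with z(x) . z(y) ~= exp(-||x - y||^2 / (2 sigma^2)).
    # Frequencies are drawn from the kernel's spectral density N(0, I / sigma^2).
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Z = rff_features(X, n_features=5000, sigma=1.0, seed=2)
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
K_approx = Z @ Z.T  # a finite-dimensional linear model now approximates the kernel
print(np.abs(K_exact - K_approx).max())  # small approximation error
```

Replacing the n-by-n Gram matrix with explicit features of fixed dimension is what makes such methods scale to large datasets.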
arXiv Detail & Related papers (2022-11-11T18:50:34Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Random features for adaptive nonlinear control and prediction [15.354147587211031]
We propose a tractable algorithm for both adaptive control and adaptive prediction.
We approximate the unknown dynamics with a finite expansion in $\textit{random}$ basis functions.
Remarkably, our explicit bounds only depend $\textit{polynomially}$ on the underlying parameters of the system.
arXiv Detail & Related papers (2021-06-07T13:15:40Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Kernel k-Means, By All Means: Algorithms and Strong Consistency [21.013169939337583]
Kernel $k$-means clustering is a powerful tool for unsupervised learning of non-linear data.
In this paper, we generalize results leveraging a general family of means to combat sub-optimal local solutions.
Our algorithm makes use of majorization-minimization (MM) to better solve this non-linear separation problem.
arXiv Detail & Related papers (2020-11-12T16:07:18Z)
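The baseline the paper above generalizes, plain Lloyd-style kernel $k$-means operating on the Gram matrix alone, can be sketched as follows. This is not the paper's MM algorithm or its general family of means; the anchor-based initialization and RBF kernel are illustrative assumptions:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gaussian (RBF) Gram matrix for the sample X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def kernel_kmeans(K, anchors, n_iter=50):
    # Lloyd-style kernel k-means using only the Gram matrix K (no explicit
    # feature vectors). Each point starts in the cluster of its nearest
    # anchor point, measured by feature-space distance.
    n, k = K.shape[0], len(anchors)
    d0 = K.diagonal()[:, None] - 2 * K[:, anchors] + K.diagonal()[anchors][None, :]
    labels = d0.argmin(axis=1)
    for _ in range(n_iter):
        D = np.full((n, k), np.inf)
        for c in range(k):
            idx = labels == c
            m = idx.sum()
            if m:
                # ||phi(x_i) - mean_c||^2, dropping the constant K_ii term
                D[:, c] = -2 * K[:, idx].sum(axis=1) / m + K[np.ix_(idx, idx)].sum() / m**2
        new = D.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two well-separated blobs: the first 30 points and the last 30 points.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
labels = kernel_kmeans(rbf_gram(X), anchors=[0, 30])
print(labels[:30], labels[30:])  # two pure clusters
```

The non-convexity visible in the alternating assignment step is exactly the source of the sub-optimal local solutions the MM approach is designed to combat.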
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.