Constructing Segmented Differentiable Quadratics to Determine
Algorithmic Run Times and Model Non-Polynomial Functions
- URL: http://arxiv.org/abs/2012.01420v1
- Date: Thu, 3 Dec 2020 00:22:49 GMT
- Title: Constructing Segmented Differentiable Quadratics to Determine
Algorithmic Run Times and Model Non-Polynomial Functions
- Authors: Ananth Goyal
- Abstract summary: We propose an approach to determine the continual progression of algorithmic efficiency.
The proposed method can effectively determine the run time behavior $F$ at any given index $x$.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an approach to determine the continual progression of algorithmic
efficiency, as an alternative to standard calculations of time complexity,
particularly, but not exclusively, when dealing with data structures with unknown
maximum indexes and with algorithms that are dependent on multiple variables
apart from just input size. The proposed method can effectively determine the
run time behavior $F$ at any given index $x$, as well as $\frac{\partial
F}{\partial x}$, as a function of only one or multiple arguments, by combining
$\frac{n}{2}$ quadratic segments, based upon the principles of Lagrangian
Polynomials and their respective secant lines. Although the approach used is
designed for analyzing the efficacy of computational algorithms, the proposed
method can be used within the pure mathematical field as a novel way to
construct non-polynomial functions, such as $\log_2{n}$ or $\frac{n+1}{n-2}$,
as a series of segmented differentiable quadratics to model functional behavior
and recurring natural patterns. After testing, our method achieved an average
accuracy above 99\% with regard to functional resemblance.
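As a rough illustration of the idea behind the method (not the paper's exact construction), the sketch below stitches together degree-2 Lagrange interpolants over consecutive triples of sampled points, roughly $\frac{n}{2}$ quadratic segments in total, and reads off both the modeled behavior $F(x)$ and $\frac{\partial F}{\partial x}$ from the segment containing $x$; the helper names, the use of NumPy's `polyfit`, and the $\log_2{n}$ target are illustrative assumptions.

```python
import numpy as np

def segmented_quadratics(xs, ys):
    """Fit one quadratic per triple of consecutive samples (about n/2 segments).

    Fitting a degree-2 polynomial through exactly three points reproduces the
    Lagrange quadratic through those points.
    """
    segments = []
    for i in range(0, len(xs) - 2, 2):
        a, b, c = np.polyfit(xs[i:i + 3], ys[i:i + 3], deg=2)
        segments.append((xs[i], xs[i + 2], (a, b, c)))
    return segments

def evaluate(segments, x):
    """Return F(x) and dF/dx from the segment whose interval contains x."""
    for lo, hi, (a, b, c) in segments:
        if lo <= x <= hi:
            return a * x**2 + b * x + c, 2 * a * x + b
    raise ValueError("x lies outside the sampled range")

# Example: model the non-polynomial target log2(n) from sampled values.
xs = np.arange(2, 33, dtype=float)   # indices n = 2..32
ys = np.log2(xs)                     # observed behavior at each index
segs = segmented_quadratics(xs, ys)
F, dF = evaluate(segs, 10.0)
print(F, dF)                         # ~log2(10) = 3.32 and ~1/(10 ln 2) = 0.144
```

Within each segment the derivative $2ax + b$ is exact, which is what makes the combined model differentiable piecewise; matching slopes across segment joins, as the paper does via secant lines of the Lagrangian polynomials, is not attempted in this sketch.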
Related papers
- Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z) - Private graphon estimation via sum-of-squares [10.00024942014117]
We develop the first pure node-differentially private algorithms for learning block models and for graphon estimation with constant running time for any number of blocks.
The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems.
arXiv Detail & Related papers (2024-03-18T19:54:59Z) - Learning distributed representations with efficient SoftMax normalization [3.8673630752805437]
We propose a linear-time approximation to compute the normalization constants of $\mathrm{SoftMax}(XY^T)$ for embedding vectors with bounded norms. We show on some pre-trained embedding datasets that the proposed estimation method achieves accuracy higher than or comparable to that of competing methods. The proposed algorithm is interpretable and easily adapted to arbitrary embedding problems.
arXiv Detail & Related papers (2023-03-30T15:48:26Z) - Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$ additive error for an input dataset $V$.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Efficient One Sided Kolmogorov Approximation [7.657378889055477]
The main application that we examine is estimation of the probability of missing deadlines in series-parallel schedules.
Since exact computation of these probabilities is NP-hard, we propose to use the algorithms described in this paper to obtain an approximation.
arXiv Detail & Related papers (2022-07-14T10:03:02Z) - Nearly Optimal Regret for Learning Adversarial MDPs with Linear Function
Approximation [92.3161051419884]
We study the reinforcement learning for finite-horizon episodic Markov decision processes with adversarial reward and full information feedback.
We show that it can achieve $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of the episode.
We also prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$ up to logarithmic factors.
arXiv Detail & Related papers (2021-02-17T18:54:08Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - DiffPrune: Neural Network Pruning with Deterministic Approximate Binary
Gates and $L_0$ Regularization [0.0]
Modern neural network architectures typically have many millions of parameters and can be pruned significantly without substantial loss in effectiveness.
The contribution of this work is two-fold.
The first is a method for approximating a multivariate Bernoulli random variable by means of a deterministic and differentiable transformation of any real-valued random variable.
The second is a method for model selection by multiplying parameters element-wise with approximate binary gates that may be computed deterministically or stochastically and take on exact zero values.
arXiv Detail & Related papers (2020-12-07T13:08:56Z) - Empirical Risk Minimization in the Non-interactive Local Model of
Differential Privacy [26.69391745812235]
We study the Empirical Risk Minimization (ERM) problem in the noninteractive Local Differential Privacy (LDP) model.
Previous research indicates that the sample complexity to achieve error $\alpha$ needs to depend exponentially on the dimensionality $p$ for general loss functions.
arXiv Detail & Related papers (2020-11-11T17:48:00Z) - Reinforcement Learning with General Value Function Approximation:
Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$, where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z) - Learning Sparse Classifiers: Continuous and Mixed Integer Optimization
Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50{,}000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z) - Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query
Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order alternating direction method of multipliers (ADMM) methods to solve nonconvex finite-sum problems.
We show that the proposed methods achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.