Chaotic Hedging with Iterated Integrals and Neural Networks
- URL: http://arxiv.org/abs/2209.10166v3
- Date: Wed, 17 Jul 2024 16:16:15 GMT
- Title: Chaotic Hedging with Iterated Integrals and Neural Networks
- Authors: Ariel Neufeld, Philipp Schmocker
- Abstract summary: We show that every $p$-integrable functional of the semimartingale, for $p \in [1,\infty)$, can be represented as a sum of iterated integrals thereof.
We also show that every financial derivative can be approximated arbitrarily well in the $L^p$-sense by finitely many terms of this expansion with (possibly random) neural networks as integrands.
- Score: 3.3379026542599934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we extend the Wiener-Itô chaos decomposition to the class of continuous semimartingales that are exponentially integrable, which includes in particular affine and some polynomial diffusion processes. By omitting the orthogonality in the expansion, we are able to show that every $p$-integrable functional of the semimartingale, for $p \in [1,\infty)$, can be represented as a sum of iterated integrals thereof. Using finitely many terms of this expansion and (possibly random) neural networks for the integrands, whose parameters are learned in a machine learning setting, we show that every financial derivative can be approximated arbitrarily well in the $L^p$-sense. In particular, for $p = 2$, we recover the optimal hedging strategy in the sense of quadratic hedging. Moreover, since the hedging strategy of the approximating option can be computed in closed form, we obtain an efficient algorithm to approximately replicate any sufficiently integrable financial derivative within short runtime.
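To make the scheme concrete, here is a minimal sketch (not the authors' implementation) of the order-one truncation of the expansion: the payoff is regressed onto a constant plus a single stochastic integral whose integrand is a small neural network, trained by minimizing the $L^2$ replication error. The Brownian driver, the toy call-style payoff, the network `eta`, and all training hyperparameters below are illustrative assumptions.

```python
# Hedged sketch of a first-order chaotic-hedging approximation:
#   F  ≈  c + ∫_0^T eta_theta(t, X_t) dX_t
# with eta_theta a small neural network, trained by L^2 regression.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_paths, n_steps, T = 20_000, 50, 1.0
dt = T / n_steps

# Simulated driver: Brownian motion as a stand-in for the semimartingale.
dX = torch.randn(n_paths, n_steps) * dt ** 0.5
X = torch.cat([torch.zeros(n_paths, 1), dX.cumsum(dim=1)], dim=1)

payoff = torch.clamp(X[:, -1], min=0.0)          # toy call-style payoff f(X_T)

eta = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
c = nn.Parameter(torch.zeros(1))                 # zeroth-order (constant) chaos term
opt = torch.optim.Adam(list(eta.parameters()) + [c], lr=1e-2)

t_grid = torch.linspace(0.0, T, n_steps + 1)[:-1].expand(n_paths, -1)

for step in range(500):
    inp = torch.stack([t_grid, X[:, :-1]], dim=-1)   # features (t, X_t)
    integrand = eta(inp).squeeze(-1)                 # eta_theta(t, X_t)
    stoch_int = (integrand * dX).sum(dim=1)          # Euler approximation of ∫ eta dX
    loss = ((payoff - c - stoch_int) ** 2).mean()    # L^2 replication error (p = 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, eta(t, X_t) plays the role of the approximate hedge ratio.
```

For $p = 2$ this recovers a quadratic-hedging-style strategy; higher-order iterated integrals would be added as further regression terms in the same fashion.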
Related papers
- Robust learning of halfspaces under log-concave marginals [6.852292115526837]
We give an algorithm that learns linear threshold functions and returns a classifier with boundary volume $O(r+\varepsilon)$ at perturbation radius $r$.
The time and sample complexity of $d^{\tilde{O}(1/\varepsilon^2)}$ matches the complexity of Boolean regression.
arXiv Detail & Related papers (2025-05-19T20:12:16Z) - Kolmogorov-Arnold Networks: Approximation and Learning Guarantees for Functions and their Derivatives [8.517406772939292]
Kolmogorov-Arnold Networks (KANs) have emerged as an improved backbone for most deep learning frameworks.
We show that KANs can optimally approximate any Besov function in $B^s_{p,q}(\mathcal{X})$ on a bounded open, or even fractal, domain.
We complement our approximation guarantee with a dimension-free estimate on the sample complexity of a residual KAN model.
arXiv Detail & Related papers (2025-04-21T14:02:59Z) - Towards a Sharp Analysis of Offline Policy Learning for $f$-Divergence-Regularized Contextual Bandits [49.96531901205305]
We analyze $f$-divergence-regularized offline policy learning.
For reverse Kullback-Leibler (KL) divergence, we give the first $\tilde{O}(\epsilon^{-1})$ sample complexity under single-policy concentrability.
We extend our analysis to dueling bandits, and we believe these results take a significant step toward a comprehensive understanding of $f$-divergence-regularized policy learning.
arXiv Detail & Related papers (2025-02-09T22:14:45Z) - More on the Operator Space Entanglement (OSE): Rényi OSE, revivals, and integrability breaking [0.0]
We investigate the dynamics of the Rényi Operator Space Entanglement (OSE) entropies $S_n$ across several one-dimensional integrable and chaotic models.
Our numerical results reveal that the Rényi OSE entropies of diagonal operators with nonzero trace saturate at long times.
In finite-size integrable systems, $S_n$ exhibit strong revivals, which are washed out when integrability is broken.
arXiv Detail & Related papers (2024-10-24T17:17:29Z) - Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension [17.485243410774814]
In traditional models of supervised learning, the goal of a learner is to output a hypothesis that is competitive (to within $\epsilon$) with the best-fitting concept from some class.
We introduce a smoothed-analysis framework that requires a learner to compete only with the best agnostic hypothesis.
We obtain the first algorithm for agnostically learning intersections of $k$-halfspaces in this framework.
arXiv Detail & Related papers (2024-07-01T04:58:36Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Adversarial Contextual Bandits Go Kernelized [21.007410990554522]
We study a generalization of the problem of online learning in adversarial linear contextual bandits by incorporating loss functions that belong to a reproducing kernel Hilbert space.
We propose a new optimistically biased estimator for the loss functions and establish near-optimal regret guarantees.
arXiv Detail & Related papers (2023-10-02T19:59:39Z) - A note on $L^1$-Convergence of the Empiric Minimizer for unbounded
functions with fast growth [0.0]
For $V : \mathbb{R}^d \to \mathbb{R}$ coercive, we study the convergence rate for the $L^1$-distance of the empiric minimizer.
We show that in general, for unbounded functions with fast growth, the convergence rate is bounded above by $a_n n^{-1/q}$, where $q$ is the dimension of the latent random variable.
arXiv Detail & Related papers (2023-03-08T08:46:13Z) - Universality in the tripartite information after global quenches:
(generalised) quantum XY models [0.0]
We consider the Rényi-$\alpha$ tripartite information $I_3(\alpha)$ of three adjacent subsystems in the stationary state emerging after global quenches in noninteracting spin chains from both homogeneous and bipartite states.
We identify settings in which $I_3(\alpha)$ remains nonzero also in the limit of infinite lengths and develop an effective quantum field theory description of free fermionic fields on a ladder.
arXiv Detail & Related papers (2023-02-02T18:50:42Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, this runtime guarantee holds.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Computing Anti-Derivatives using Deep Neural Networks [3.42658286826597]
This paper presents a novel algorithm to obtain the closed-form anti-derivative of a function using Deep Neural Network architecture.
We claim that, using a single method for all integrals, our algorithm can approximate anti-derivatives to any required accuracy.
This paper also shows the applications of our method to get the closed-form expressions of elliptic integrals, Fermi-Dirac integrals, and cumulative distribution functions.
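One simple way to realize this idea (a hedged sketch under assumed architecture and training choices, not the paper's algorithm) is to train a network $g_\phi$ so that its autodiff derivative matches the target integrand $f$, pinning $g_\phi(0) = 0$ to fix the integration constant:

```python
# Illustrative anti-derivative learning: fit g so that g'(x) ≈ f(x).
# The integrand, architecture, domain, and hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
f = torch.sin                                    # target integrand; true F(x) = 1 - cos(x)
g = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1) * 2 * torch.pi        # sample the domain [0, 2π]
    x.requires_grad_(True)
    y = g(x)
    dy_dx = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    # Match g'(x) to f(x); the extra term pins g(0) = 0 (integration constant).
    loss = ((dy_dx - f(x)) ** 2).mean() + g(torch.zeros(1, 1)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The trained `g` then serves as a closed-form (network-parameterized) approximate anti-derivative that can be evaluated at arbitrary points without re-quadrature.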
arXiv Detail & Related papers (2022-09-19T15:16:47Z) - Near-Optimal No-Regret Learning for General Convex Games [121.50979258049135]
We show that near-optimal regret can be obtained for general convex and compact strategy sets.
Our dynamics are an instantiation of optimistic follow-the-regularized-leader over an appropriately lifted space.
Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret.
arXiv Detail & Related papers (2022-06-17T12:58:58Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n = \mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n = \exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Last iterate convergence of SGD for Least-Squares in the Interpolation
regime [19.05750582096579]
We study the noiseless model in the fundamental least-squares setup.
We assume that an optimum predictor fits inputs and outputs perfectly: $\langle \theta_*, \phi(X) \rangle = Y$, where $\phi(X)$ stands for a possibly infinite-dimensional non-linear feature map.
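As a toy illustration of this setup (under the simplifying assumptions of a finite-dimensional linear feature map and noiseless labels, which are my choices rather than the paper's), the following runs constant-step-size SGD on least squares in the interpolation regime and reports the error of the last iterate:

```python
# Noiseless least squares: <theta_*, phi(X)> = Y exactly, so SGD can interpolate.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 1000
theta_star = rng.normal(size=d)
Phi = rng.normal(size=(n, d))                 # feature map phi(X), linear here
Y = Phi @ theta_star                          # noiseless labels

theta = np.zeros(d)
for t in range(50_000):
    i = rng.integers(n)                       # sample one data point
    g = (Phi[i] @ theta - Y[i]) * Phi[i]      # stochastic gradient of squared loss
    theta -= 0.01 * g                         # constant step size
print("last-iterate parameter error:", np.linalg.norm(theta - theta_star))
```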
arXiv Detail & Related papers (2021-02-05T14:02:20Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate.
Our algorithm is based on a new concentration technique and comes with sample complexity guarantees.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to be approximated and finds its global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - $k$-Forrelation Optimally Separates Quantum and Classical Query
Complexity [3.4984289152418753]
We bound the classical query complexity of any partial function on $N$ bits that can be computed with an advantage $\delta$ over a random guess by making $q$ quantum queries.
We also study the $k$-Forrelation problem -- a partial function that can be computed with $q = \lceil k/2 \rceil$ quantum queries -- and show that it optimally separates quantum and classical query complexity.
arXiv Detail & Related papers (2020-08-16T21:26:46Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
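The summary above does not specify the authors' update rule, so the following is only a generic illustration of optimization constrained to $O(d)$: a Euclidean gradient step mapped back onto the group via the Cayley retraction (a standard device in this area, not necessarily the paper's method). The toy objective and step size are assumptions.

```python
# One gradient step on O(d) via the Cayley retraction: for skew-symmetric A,
# Q = (I + t/2 A)^{-1} (I - t/2 A) is orthogonal, so Q @ W stays on O(d).
import numpy as np

def cayley_step(W, G, lr=0.1):
    """Move W against Euclidean gradient G while staying on O(d)."""
    A = G @ W.T - W @ G.T                     # skew-symmetric descent direction
    I = np.eye(W.shape[0])
    Q = np.linalg.solve(I + (lr / 2) * A, I - (lr / 2) * A)
    return Q @ W

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random start on O(5)
target = np.eye(5)
for _ in range(200):
    G = W - target                            # gradient of 0.5 * ||W - I||_F^2
    W = cayley_step(W, G)
print("orthogonality defect:", np.linalg.norm(W.T @ W - np.eye(5)))
```

The printed defect stays at machine precision, which is the point of retraction-based methods: no re-orthogonalization step is ever needed.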
arXiv Detail & Related papers (2020-03-30T15:37:50Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.