Nonlinear Granger Causality using Kernel Ridge Regression
- URL: http://arxiv.org/abs/2309.05107v1
- Date: Sun, 10 Sep 2023 18:28:48 GMT
- Title: Nonlinear Granger Causality using Kernel Ridge Regression
- Authors: Wojciech "Victor" Fulmyk
- Abstract summary: I introduce a novel algorithm and accompanying Python library, named mlcausality, designed for the identification of nonlinear Granger causal relationships.
I conduct a comprehensive performance analysis of mlcausality when the prediction regressor is the kernel ridge regressor with the radial basis function kernel.
Results demonstrate that mlcausality employing kernel ridge regression achieves competitive AUC scores across a diverse set of simulated data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: I introduce a novel algorithm and accompanying Python library, named
mlcausality, designed for the identification of nonlinear Granger causal
relationships. This novel algorithm uses a flexible plug-in architecture that
enables researchers to employ any nonlinear regressor as the base prediction
model. Subsequently, I conduct a comprehensive performance analysis of
mlcausality when the prediction regressor is the kernel ridge regressor with
the radial basis function kernel. The results demonstrate that mlcausality
employing kernel ridge regression achieves competitive AUC scores across a
diverse set of simulated data. Furthermore, mlcausality with kernel ridge
regression yields more finely calibrated $p$-values in comparison to rival
algorithms. This enhancement enables mlcausality to attain superior accuracy
scores when using intuitive $p$-value-based thresholding criteria. Finally,
mlcausality with the kernel ridge regression exhibits significantly reduced
computation times compared to existing nonlinear Granger causality algorithms.
In fact, in numerous instances, this innovative approach achieves superior
solutions within computational timeframes that are an order of magnitude
shorter than those required by competing algorithms.
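The restricted-versus-full comparison at the heart of such a Granger test can be sketched with off-the-shelf tools. The following is an illustrative reconstruction, not mlcausality's actual API: the helper names (`make_lags`, `granger_krr`), the lag order, the train/test split, and the Wilcoxon signed-rank comparison of out-of-sample squared errors are all assumptions made for this sketch.

```python
# Illustrative sketch of a nonlinear Granger-causality test in the spirit of
# mlcausality, NOT its actual API. A kernel ridge model that sees only the
# target's own lags (restricted) is compared against one that also sees the
# candidate cause's lags (full); if the full model's out-of-sample squared
# errors are significantly smaller, the candidate Granger-causes the target.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.kernel_ridge import KernelRidge

def make_lags(s, p):
    # Rows t = p .. len(s)-1; columns s[t-1], ..., s[t-p].
    return np.column_stack([s[p - k - 1:len(s) - k - 1] for k in range(p)])

def granger_krr(x, y, p=2, gamma=0.5, alpha=0.5):
    """One-sided p-value for 'x Granger-causes y' (hypothetical helper)."""
    X_restricted = make_lags(y, p)                       # y's own lags only
    X_full = np.hstack([X_restricted, make_lags(x, p)])  # plus x's lags
    target = y[p:]
    cut = len(target) // 2                               # train / test split

    def test_sq_errors(X):
        model = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
        model.fit(X[:cut], target[:cut])
        return (target[cut:] - model.predict(X[cut:])) ** 2

    e_restricted = test_sq_errors(X_restricted)
    e_full = test_sq_errors(X_full)
    # Paired one-sided test: are the restricted model's errors larger?
    return wilcoxon(e_restricted, e_full, alternative="greater").pvalue

# Toy demonstration: y is driven nonlinearly by lagged x, but not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * np.tanh(x[t - 1]) + 0.1 * rng.normal()

p_causal = granger_krr(x, y)   # expected small: x drives y
p_reverse = granger_krr(y, x)  # expected large: y carries no news about x
```

mlcausality's actual procedure (its plug-in regressor interface, $p$-value construction, and data handling) differs; the sketch only conveys the restricted-versus-full model comparison that underlies any Granger-style test.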
Related papers
- Calibrated Computation-Aware Gaussian Processes [1.1470070927586018]
We propose a new computation-aware Gaussian process (CAGP) framework, CAGP-GS, based on using Gauss-Seidel iterations for the underlying probabilistic linear solver.
We test calibration on a synthetic problem, and compare the performance to existing approaches on a large-scale global temperature regression problem.
arXiv Detail & Related papers (2024-10-11T13:30:08Z)
- Agnostic Learning of Mixed Linear Regressions with EM and AM Algorithms [22.79595679373698]
Mixed linear regression is a well-studied problem in statistics and machine learning.
In this paper, we consider the more general problem of learning mixed linear regressions from samples.
We show that the AM and EM algorithms lead to learning in mixed linear regression by converging to the population loss minimizers.
arXiv Detail & Related papers (2024-06-03T09:43:24Z)
- Robust Capped $\ell_p$-Norm Support Vector Ordinal Regression [85.84718111830752]
Ordinal regression is a specialized supervised problem where the labels show an inherent order.
Support Vector Ordinal Regression, as an outstanding ordinal regression model, is widely used in many ordinal regression tasks.
We introduce a new model, Capped $\ell_p$-Norm Support Vector Ordinal Regression (CSVOR), that is robust to outliers.
arXiv Detail & Related papers (2024-04-25T13:56:05Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning [48.08663378234329]
Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications.
Existing sparse approximation methods can yield a significant reduction in the computational cost.
We provide novel confidence intervals for the Nyström method and the sparse variational Gaussian processes approximation method.
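As a concrete illustration of the cost reduction such sparse approximations buy, here is a minimal Nyström-style kernel ridge sketch using scikit-learn's `Nystroem` feature map. The data, kernel parameters, and landmark count are assumptions for the example; the paper's confidence intervals are not reproduced.

```python
# Nystrom-approximated kernel ridge regression: map inputs through an
# m-landmark approximate RBF feature map, then solve an ordinary linear
# ridge problem in that feature space. This avoids forming (and inverting)
# the full n x n kernel matrix.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(2000, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=2000)

# 100 landmarks stand in for the full 2000 x 2000 kernel matrix.
model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.5, n_components=100, random_state=0),
    Ridge(alpha=1.0),
)
model.fit(X, y)
r2 = model.score(X, y)  # high on this smooth one-dimensional target
```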
arXiv Detail & Related papers (2022-02-08T17:22:09Z)
- Robust Regression Revisited: Acceleration and Improved Estimation Rates [25.54653340884806]
We study fast algorithms for statistical regression problems under the strong contamination model.
The goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples.
We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees.
arXiv Detail & Related papers (2021-06-22T17:21:56Z)
- Sparse Bayesian Learning via Stepwise Regression [1.2691047660244335]
We propose a coordinate ascent algorithm for sparse Bayesian learning (SBL) termed Relevance Matching Pursuit (RMP).
As its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression.
We derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP.
arXiv Detail & Related papers (2021-06-11T00:20:27Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z)
- Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a regression analysis approach for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z)
- Multiscale Non-stationary Stochastic Bandits [83.48992319018147]
We propose a novel multiscale changepoint detection method for the non-stationary linear bandit problems, called Multiscale-LinUCB.
Experimental results show that our proposed Multiscale-LinUCB algorithm outperforms other state-of-the-art algorithms in non-stationary contextual environments.
arXiv Detail & Related papers (2020-02-13T00:24:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.