Estimation methods of Matrix-valued AR model
- URL: http://arxiv.org/abs/2505.15220v1
- Date: Wed, 21 May 2025 07:47:05 GMT
- Title: Estimation methods of Matrix-valued AR model
- Authors: Kamil Kołodziejski
- Abstract summary: This article proposes novel estimation methods for the Matrix Autoregressive (MAR) model. The MAR model offers a parsimonious, yet effective, alternative for high-dimensional time series. Empirical results demonstrate that MAR models estimated via the proposed methods achieve a comparable fit to VAR models across metrics such as MAE and RMSE.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article proposes novel estimation methods for the Matrix Autoregressive (MAR) model, specifically adaptations of the Yule-Walker equations and Burg's method, addressing limitations in existing techniques. The MAR model, by maintaining a matrix structure and requiring significantly fewer parameters than vector autoregressive (VAR) models, offers a parsimonious, yet effective, alternative for high-dimensional time series. Empirical results demonstrate that MAR models estimated via the proposed methods achieve a comparable fit to VAR models across metrics such as MAE and RMSE. These findings underscore the utility of Yule-Walker and Burg-type estimators in constructing efficient and interpretable models for complex temporal data.
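For concreteness: in the MAR(1) model each observation is a matrix, X_t = A X_{t-1} B^T + E_t, so vec(X_t) follows a VAR(1) with coefficient B ⊗ A; the MAR form needs m² + n² coefficient parameters where an unconstrained VAR(1) on vec(X_t) needs (mn)². The paper's exact Yule-Walker and Burg adaptations are not reproduced here; the sketch below shows one plausible Yule-Walker-style route (fit the implied VAR(1) coefficient from lag-0/lag-1 autocovariances, then project it onto the nearest Kronecker product). Function names are illustrative.

```python
# Minimal sketch, NOT the paper's exact estimator: a Yule-Walker-style
# route for MAR(1). Fit the implied VAR(1) coefficient Phi = B (x) A,
# then project Phi onto the nearest Kronecker product via the
# Van Loan-Pitsianis rearrangement and a rank-1 SVD.
import numpy as np

def nearest_kron(Phi, m, n):
    """Nearest (n x n) B kron (m x m) A to the (mn x mn) matrix Phi."""
    # Rearranged, B kron A becomes the rank-1 matrix vec(B) vec(A)^T;
    # its leading singular pair recovers A and B (only up to the
    # inherent scale ambiguity c*A, B/c of the MAR model).
    R = np.empty((n * n, m * m))
    for i in range(n):
        for j in range(n):
            R[i * n + j] = Phi[i * m:(i + 1) * m, j * m:(j + 1) * m].ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = (np.sqrt(s[0]) * U[:, 0]).reshape(n, n)
    A = (np.sqrt(s[0]) * Vt[0]).reshape(m, m)
    return A, B

def mar1_yule_walker(X):
    """X: (T, m, n) array of matrix observations X_1, ..., X_T."""
    T, m, n = X.shape
    V = X.transpose(0, 2, 1).reshape(T, m * n)   # column-major vec(X_t)
    V = V - V.mean(axis=0)                       # center the series
    G0 = V.T @ V / T                             # lag-0 autocovariance
    G1 = V[1:].T @ V[:-1] / (T - 1)              # lag-1 autocovariance
    Phi = np.linalg.solve(G0.T, G1.T).T          # Yule-Walker: Phi G0 = G1
    return nearest_kron(Phi, m, n)
```

A Burg-type estimator would instead build the fit from forward and backward prediction errors; that variant is not sketched here.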
Related papers
- Reinforced Model Merging [53.84354455400038]
We present an innovative framework termed Reinforced Model Merging (RMM), which encompasses an environment and agent tailored for merging tasks. By utilizing data subsets during the evaluation process, we addressed the bottleneck in the reward feedback phase, thereby accelerating RMM by up to 100 times.
arXiv Detail & Related papers (2025-03-27T08:52:41Z) - Exploring Patterns Behind Sports [3.2838877620203935]
This paper presents a comprehensive framework for time series prediction using a hybrid model that combines ARIMA and LSTM. The model incorporates feature engineering techniques, including embedding and PCA, to transform raw data into a lower-dimensional representation.
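The paper's exact pipeline is not given here; the sketch below shows a common two-stage ARIMA + LSTM hybrid pattern (ARIMA captures the linear component, an LSTM models its residuals). The embedding/PCA feature-engineering step is omitted, statsmodels and PyTorch are assumed as dependencies, and all names are illustrative.

```python
# Hedged sketch of a generic ARIMA + LSTM hybrid, not the paper's
# architecture: fit ARIMA for the linear part, then train an LSTM
# on sliding windows of the ARIMA residuals.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

def fit_hybrid(y, window=12, order=(2, 1, 2), epochs=200):
    # Stage 1: linear component via ARIMA.
    arima = ARIMA(y, order=order).fit()
    resid = np.asarray(arima.resid, dtype=np.float32)

    # Stage 2: LSTM on sliding windows of the residuals.
    xs = np.stack([resid[i:i + window] for i in range(len(resid) - window)])
    X = torch.from_numpy(xs).unsqueeze(-1)              # (N, window, 1)
    t = torch.from_numpy(resid[window:]).unsqueeze(-1)  # (N, 1)

    lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
    head = nn.Linear(16, 1)
    opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(epochs):
        out, _ = lstm(X)
        pred = head(out[:, -1, :])       # predict next residual
        loss = nn.functional.mse_loss(pred, t)
        opt.zero_grad(); loss.backward(); opt.step()
    return arima, lstm, head
```

The final forecast would add the LSTM's residual prediction to the ARIMA forecast.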
arXiv Detail & Related papers (2025-02-11T11:51:07Z) - AI/ML-Based Automatic Modulation Recognition: Recent Trends and Future Possibilities [0.0]
We present a review of high-performance automatic modulation recognition (AMR) models proposed in the literature to classify various Radio Frequency (RF) modulation schemes. We replicated these models and compared their performance in terms of accuracy across a range of signal-to-noise ratios.
arXiv Detail & Related papers (2025-02-07T20:34:04Z) - Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation [54.50526986788175]
Recent advances in efficient sequence modeling have led to attention-free layers, such as Mamba, RWKV, and various gated RNNs.
We present a unified view of these models, formulating such layers as implicit causal self-attention layers.
Our framework compares the underlying mechanisms on similar grounds for different layers and provides a direct means for applying explainability methods.
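The unified view can be made concrete with a toy example. A gated linear recurrence h_t = a_t · h_{t-1} + b_t · x_t unrolls into h_t = Σ_{s≤t} α_{t,s} x_s with α_{t,s} = b_s · Π_{r=s+1}^{t} a_r, i.e., an implicit causal attention matrix. The demo below (my illustration, not the paper's code) materializes that matrix for a scalar recurrence and checks it reproduces the recurrent computation.

```python
# Hedged illustration: a scalar gated linear recurrence expressed as
# implicit causal attention, alpha[t, s] = b_s * prod(a[s+1..t]).
import numpy as np

rng = np.random.default_rng(0)
T = 8
a = rng.uniform(0.5, 1.0, T)   # gate values
b = rng.uniform(0.5, 1.0, T)   # input scalings
x = rng.normal(size=T)

# Run the recurrence directly.
h = np.zeros(T)
h[0] = b[0] * x[0]
for t in range(1, T):
    h[t] = a[t] * h[t - 1] + b[t] * x[t]

# Materialize the implicit (lower-triangular) attention matrix.
alpha = np.zeros((T, T))
for t in range(T):
    for s in range(t + 1):
        alpha[t, s] = b[s] * np.prod(a[s + 1:t + 1])

assert np.allclose(h, alpha @ x)   # same computation, attention form
```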
arXiv Detail & Related papers (2024-05-26T09:57:45Z) - A Two-Scale Complexity Measure for Deep Learning Models [2.512406961007489]
We introduce a novel capacity measure 2sED for statistical models based on the effective dimension. The new quantity provably bounds the generalization error under mild assumptions on the model. Simulations on standard data sets and popular model architectures show that 2sED correlates well with the training error.
arXiv Detail & Related papers (2024-01-17T12:50:50Z) - Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z) - Generalized generalized linear models: Convex estimation and online bounds [11.295523372922533]
We introduce a new class of models, generalized generalized linear models (GGLM), which extends GLM-based models.
The proposed approach uses an operator-based variational inequality method to overcome non-convexity in parameter estimation.
We demonstrate the performance using numerical simulations and a real-data example on incident data.
arXiv Detail & Related papers (2023-04-26T19:19:42Z) - An Interpretable and Efficient Infinite-Order Vector Autoregressive Model for High-Dimensional Time Series [1.4939176102916187]
This paper proposes a novel sparse infinite-order VAR model for high-dimensional time series.
The temporal and cross-sectional structures of the VARMA-type dynamics captured by this model can be interpreted separately.
Greater statistical efficiency and interpretability can be achieved with little loss of temporal information.
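For context, a hedged sketch of the standard setup such models build on (the paper's specific sparse parameterization is not reproduced here): an invertible VARMA process admits a VAR(∞) representation,

```latex
% VAR(infinity) representation (standard form; notation illustrative).
% Estimation is feasible only when the A_j are tied together or decay,
% which is the role of the paper's sparse VARMA-type structure.
y_t = \sum_{j=1}^{\infty} A_j \, y_{t-j} + \varepsilon_t,
\qquad \varepsilon_t \sim \mathrm{WN}(0, \Sigma_\varepsilon).
```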
arXiv Detail & Related papers (2022-09-02T17:14:24Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
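The quantities behind this summary can be written out; the following is a hedged sketch using standard identities (notation mine, not necessarily the paper's):

```latex
% Likelihood-to-evidence ratio and its link to mutual information
% (standard identities; notation illustrative).
r(\theta, x) = \frac{p(x \mid \theta)}{p(x)}
             = \frac{p(\theta, x)}{p(\theta)\, p(x)},
\qquad
I(\Theta; X) = \mathbb{E}_{p(\theta, x)}\bigl[\log r(\theta, x)\bigr].
```

A classifier d(θ, x) trained to separate joint samples (θ, x) ~ p(θ, x) from marginal pairs (θ, x) ~ p(θ)p(x) recovers the ratio as r = d/(1 − d), which is the amortized likelihood-to-evidence estimator the summary refers to.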
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
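As background on what "efficient exact inference" costs here, a hedged sketch (standard HMM forward algorithm, not the paper's implementation): exact inference is O(T·K²) for K states, which is why massive state spaces need structured parameterizations.

```python
# Standard log-space forward algorithm for an HMM; function name and
# argument layout are illustrative.
import numpy as np
from scipy.special import logsumexp

def log_likelihood(log_pi, log_T, log_E, obs):
    """log_pi: (K,) initial, log_T: (K, K) transitions (row -> col),
    log_E: (K, V) emissions, obs: (T,) int token ids."""
    alpha = log_pi + log_E[:, obs[0]]
    for o in obs[1:]:
        # alpha[j] = logsum_i (alpha[i] + log_T[i, j]), then emit o:
        # the K x K sum inside the loop is the scaling bottleneck.
        alpha = logsumexp(alpha[:, None] + log_T, axis=0) + log_E[:, o]
    return logsumexp(alpha)
```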
arXiv Detail & Related papers (2020-11-09T18:51:55Z)