Global Convergence of Online Identification for Mixed Linear Regression
- URL: http://arxiv.org/abs/2311.18506v1
- Date: Thu, 30 Nov 2023 12:30:42 GMT
- Title: Global Convergence of Online Identification for Mixed Linear Regression
- Authors: Yujing Liu, Zhixin Liu, and Lei Guo
- Abstract summary: Mixed linear regression (MLR) is a powerful model for characterizing nonlinear relationships.
This paper investigates the online identification and data clustering problems for two basic classes of MLRs.
It introduces two corresponding new online identification algorithms based on the expectation-maximization principle.
- Score: 1.9295130374196499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mixed linear regression (MLR) is a powerful model for characterizing
nonlinear relationships by utilizing a mixture of linear regression sub-models.
The identification of MLR is a fundamental problem, where most of the existing
results focus on offline algorithms, rely on independent and identically
distributed (i.i.d.) data assumptions, and provide local convergence results
only. This paper investigates the online identification and data clustering
problems for two basic classes of MLRs, by introducing two corresponding new
online identification algorithms based on the expectation-maximization (EM)
principle. It is shown that both algorithms will converge globally without
resorting to the traditional i.i.d. data assumptions. The main challenge in our
investigation lies in the fact that the gradient of the likelihood function
does not have a unique zero, and a key step in our analysis is to
establish the stability of the corresponding differential equation in order to
apply the celebrated Ljung's ODE method. It is also shown that the
within-cluster error and the probability that the new data is categorized into
the correct cluster are asymptotically the same as those in the case of known
parameters. Finally, numerical simulations are provided to verify the
effectiveness of our online algorithms.
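The online EM idea described in the abstract can be sketched in its simplest symmetric special case. The snippet below is a minimal stochastic-approximation illustration, assuming a hypothetical symmetric two-component model y_t = s_t * theta_star^T x_t + noise with hidden labels s_t in {+1, -1}; the model, gain sequence, and variable names are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric two-component MLR (an illustrative special case):
# y_t = s_t * theta_star^T x_t + noise, with hidden label s_t in {+1, -1}.
d, sigma = 3, 0.1
theta_star = np.array([1.0, -2.0, 0.5])

theta = rng.normal(size=d)  # online estimate of theta_star (up to sign)
for t in range(1, 20001):
    x = rng.normal(size=d)
    s = rng.choice([-1.0, 1.0])
    y = s * (theta_star @ x) + sigma * rng.normal()

    # E-step: E[s_t | x_t, y_t] under the current estimate.  The posterior
    # log-odds of s_t = +1 equal 2*y*theta^T x / sigma^2, so the conditional
    # mean of the label is tanh(y * theta^T x / sigma^2).
    s_hat = np.tanh(y * (theta @ x) / sigma**2)

    # M-step as a stochastic-approximation update with decreasing gain;
    # the gain 1/(t + 10) is a common Ljung-style choice, not the paper's.
    theta += (s_hat * y - theta @ x) * x / (t + 10)

# In the symmetric model theta_star is identifiable only up to sign.
err = min(np.linalg.norm(theta - theta_star), np.linalg.norm(theta + theta_star))
```

Because the two components are sign-symmetric, the estimation error is measured against both +theta_star and -theta_star; the decreasing gain is what connects the stochastic recursion to the ODE analysis mentioned in the abstract.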
Related papers
- Agnostic Learning of Mixed Linear Regressions with EM and AM Algorithms [22.79595679373698]
Mixed linear regression is a well-studied problem in statistics and machine learning.
In this paper, we consider the more general problem of learning of mixed linear regression from samples.
We show that the AM and EM algorithms lead to learning in mixed linear regression by converging to the population loss minimizers.
arXiv Detail & Related papers (2024-06-03T09:43:24Z)
- Unveiling the Cycloid Trajectory of EM Iterations in Mixed Linear Regression [5.883916678819683]
We study the trajectory of iterations and the convergence rates of the Expectation-Maximization (EM) algorithm for two-component Mixed Linear Regression (2MLR).
Recent results have established the super-linear convergence of EM for 2MLR in the noiseless and high SNR settings.
arXiv Detail & Related papers (2024-05-28T14:46:20Z)
- Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models [2.0738462952016232]
Methods based on functional causal models can identify a unique graph, but suffer from the curse of dimensionality or impose strong parametric assumptions.
We propose a novel hybrid approach for global causal discovery in observational data that leverages local causal substructures.
We provide theoretical guarantees for correctness and worst-case time complexities, with empirical validation on synthetic data.
arXiv Detail & Related papers (2024-05-23T12:28:16Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and properties of statistical estimation, remain obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
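This local-versus-global contrast can be reproduced on a toy signal. The sketch below compares a moving average (local, a small neighbourhood of each sample) with a least-squares fit of a truncated Fourier basis to the entire record (global); the signal, noise level, window size, and basis are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 2000)
clean = np.sin(3.0 * t)                        # illustrative "state measurement"
noisy = clean + 0.3 * rng.normal(size=t.size)  # additive measurement noise

# Local smoothing: moving average over a small neighbourhood of each sample.
k = 11
local = np.convolve(noisy, np.ones(k) / k, mode="same")

# Global smoothing: least-squares fit of a truncated Fourier basis using the
# ENTIRE measurement record at once.
n_modes = 8
basis = [np.ones_like(t)]
for m in range(1, n_modes + 1):
    basis += [np.cos(m * t), np.sin(m * t)]
A = np.stack(basis, axis=1)                    # 2000 x 17 design matrix
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
global_fit = A @ coef

def rmse(x):
    return np.sqrt(np.mean((x - clean) ** 2))
```

On this toy example the global Fourier fit typically achieves a lower RMSE than the local moving average, consistent with the paper's overall finding; neither the window size nor the basis has been tuned.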
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Fundamental limits and algorithms for sparse linear regression with sublinear sparsity [16.3460693863947]
We establish exact expressions for the normalized mutual information and minimum mean-square-error (MMSE) of sparse linear regression.
We show how to modify the existing well-known AMP algorithms for linear regimes to sub-linear ones.
arXiv Detail & Related papers (2021-01-27T01:27:03Z)
- A connection between the pattern classification problem and the General Linear Model for statistical inference [0.2320417845168326]
The two approaches, i.e. the GLM and the LRM, operate on different domains: the observation domain and the label domain.
We derive a statistical test based on a more refined predictive algorithm.
The MLE-based inference employs a residual score and includes the upper bound to compute a better estimation of the actual (real) error.
arXiv Detail & Related papers (2020-12-16T12:26:26Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
- Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.