From Classical Probabilistic Latent Variable Models to Modern Generative AI: A Unified Perspective
- URL: http://arxiv.org/abs/2508.16643v1
- Date: Mon, 18 Aug 2025 11:02:32 GMT
- Title: From Classical Probabilistic Latent Variable Models to Modern Generative AI: A Unified Perspective
- Authors: Tianhua Chen
- Abstract summary: Generative Artificial Intelligence (AI) now underpins state-of-the-art systems. Despite their varied architectures, many share a common foundation in probabilistic latent variable models (PLVMs). This paper presents a unified perspective by framing both classical and modern generative methods within the PLVM paradigm.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: From large language models to multi-modal agents, Generative Artificial Intelligence (AI) now underpins state-of-the-art systems. Despite their varied architectures, many share a common foundation in probabilistic latent variable models (PLVMs), where hidden variables explain observed data for density estimation, latent reasoning, and structured inference. This paper presents a unified perspective by framing both classical and modern generative methods within the PLVM paradigm. We trace the progression from classical flat models such as probabilistic PCA, Gaussian mixture models, latent class analysis, item response theory, and latent Dirichlet allocation, through their sequential extensions including Hidden Markov Models, Gaussian HMMs, and Linear Dynamical Systems, to contemporary deep architectures: Variational Autoencoders as Deep PLVMs, Normalizing Flows as Tractable PLVMs, Diffusion Models as Sequential PLVMs, Autoregressive Models as Explicit Generative Models, and Generative Adversarial Networks as Implicit PLVMs. Viewing these architectures under a common probabilistic taxonomy reveals shared principles, distinct inference strategies, and the representational trade-offs that shape their strengths. We offer a conceptual roadmap that consolidates generative AI's theoretical foundations, clarifies methodological lineages, and guides future innovation by grounding emerging architectures in their probabilistic heritage.
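The abstract describes PLVMs as models where hidden variables explain observed data. A minimal, self-contained sketch of that idea, using one of the classical flat models the paper lists (a Gaussian mixture fit with EM on toy data; the latent variable is each point's component assignment), could look like the following. This is an illustrative sketch, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: two Gaussian clusters (the observed variables x).
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 200)])

# 2-component GMM fit with EM; the latent z (component assignment)
# "explains" each observation, as in the PLVM framing.
mu = np.array([-1.0, 1.0])      # initial means
sigma = np.array([1.0, 1.0])    # initial standard deviations
weights = np.array([0.5, 0.5])  # initial mixing weights

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibilities p(z = k | x) for each point.
    resp = weights * normal_pdf(x[:, None], mu, sigma)  # shape (N, K)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    Nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    weights = Nk / len(x)

print(np.sort(mu))  # recovered component means, near -2 and 3
```

The same E-step/M-step structure recurs, with richer encoders and decoders, in the deep PLVMs the paper surveys (e.g. the VAE's amortized inference network plays the role of the E-step).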
Related papers
- The Trinity of Consistency as a Defining Principle for General World Models [106.16462830681452]
General World Models are capable of learning, simulating, and reasoning about objective physical laws. We propose a principled theoretical framework that defines the essential properties requisite for a General World Model. Our work establishes a principled pathway toward general world models, clarifying both the limitations of current systems and the architectural requirements for future progress.
arXiv Detail & Related papers (2026-02-26T16:15:55Z) - On the Relation of State Space Models and Hidden Markov Models [0.07646713951724009]
State Space Models (SSMs) and Hidden Markov Models (HMMs) are foundational frameworks for modeling sequential data with latent variables. Recent deterministic state space models have re-emerged in natural language processing through architectures such as S4 and Mamba.
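The deterministic recurrence shared by linear dynamical systems and S4/Mamba-style layers can be sketched in a few lines. The matrices below are arbitrary illustrative values, not parameters from any of the cited papers:

```python
import numpy as np

# Discrete-time linear state space model: the hidden state h_t evolves
# linearly and the observation y_t is a linear readout of it.
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # state transition
B = np.array([[1.0], [0.5]])            # input map
C = np.array([[1.0, -1.0]])             # emission / readout

h = np.zeros((2, 1))
ys = []
for t in range(5):
    u = np.array([[1.0]])    # constant input, for the sketch
    h = A @ h + B @ u        # latent state update
    ys.append(float(C @ h))  # observed output

print([round(y, 3) for y in ys])
```

An HMM replaces this continuous linear state with a discrete latent state and a stochastic transition matrix; the relation between the two is exactly what the cited paper examines.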
arXiv Detail & Related papers (2026-01-19T19:51:05Z) - Deep generative models as the probability transformation functions [0.0]
This paper introduces a unified theoretical perspective that views deep generative models as probability transformation functions. We demonstrate that they all fundamentally operate by transforming simple predefined distributions into complex target data distributions.
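The "probability transformation" view can be illustrated with the simplest possible instance: a fixed deterministic map that turns uniform base samples into Gaussian target samples (the Box-Muller transform). This is a hand-picked classical example, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simple predefined base distribution: u1, u2 ~ Uniform(0, 1).
u1 = rng.uniform(size=100_000)
u2 = rng.uniform(size=100_000)

# Box-Muller: a deterministic transformation mapping the uniform base
# samples to samples from a standard Gaussian target distribution.
z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

print(round(z.mean(), 2), round(z.std(), 2))  # close to 0.0 and 1.0
```

Normalizing flows generalize this idea by learning the transformation while keeping it invertible, so the change-of-variables formula gives an exact density.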
arXiv Detail & Related papers (2025-06-20T17:22:23Z) - Continual Learning for Generative AI: From LLMs to MLLMs and Beyond [56.29231194002407]
We present a comprehensive survey of continual learning methods for mainstream generative AI models. We categorize these approaches into three paradigms: architecture-based, regularization-based, and replay-based. We analyze continual learning setups for different generative models, including training objectives, benchmarks, and core backbones.
arXiv Detail & Related papers (2025-06-16T02:27:25Z) - Multi-Scale Probabilistic Generation Theory: A Unified Information-Theoretic Framework for Hierarchical Structure in Large Language Models [1.0117553823134735]
Large Language Models (LLMs) exhibit remarkable emergent abilities but remain poorly understood at a mechanistic level. This paper introduces the Multi-Scale Probabilistic Generation Theory (MSPGT). MSPGT posits that standard language modeling objectives implicitly optimize multi-scale information compression.
arXiv Detail & Related papers (2025-05-23T16:55:35Z) - A Survey of Model Architectures in Information Retrieval [59.61734783818073]
The period from 2019 to the present has represented one of the biggest paradigm shifts in information retrieval (IR) and natural language processing (NLP). We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs). We conclude with a forward-looking discussion of emerging challenges and future directions.
arXiv Detail & Related papers (2025-02-20T18:42:58Z) - Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks [50.29356570858905]
We introduce the Dynamical Systems Framework (DSF), which allows a principled investigation of all these architectures in a common representation. We provide principled comparisons between softmax attention and other model classes, discussing the theoretical conditions under which softmax attention can be approximated. This shows the DSF's potential to guide the systematic development of future more efficient and scalable foundation models.
arXiv Detail & Related papers (2024-05-24T17:19:57Z) - Simulation of emergence in artificial societies: a practical model-based approach with the EB-DEVS formalism [0.11470070927586014]
We apply EB-DEVS, a novel formalism tailored for the modelling, simulation and live identification of emergent properties.
This work provides case study-driven evidence for the neatness and compactness of the approach to modelling communication structures.
arXiv Detail & Related papers (2021-10-15T15:55:16Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
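The rows-are-rules, columns-are-features metaphor can be made concrete with a toy rule set. The rules and feature names below are entirely hypothetical, invented only to illustrate the matrix layout, and do not come from the ExMatrix paper:

```python
# Hypothetical toy rule set: each rule constrains a subset of features.
features = ["petal_len", "petal_wid", "sepal_len"]
rules = [
    {"petal_len": "<= 2.5"},                        # rule 1
    {"petal_len": "> 2.5", "petal_wid": "<= 1.7"},  # rule 2
    {"petal_wid": "> 1.7", "sepal_len": "> 6.0"},   # rule 3
]

# Build the matrix: rows = rules, columns = features,
# cells = the predicate a rule places on a feature ("" if unused).
matrix = [[rule.get(f, "") for f in features] for rule in rules]

for i, row in enumerate(matrix, 1):
    print(f"rule {i}:", dict(zip(features, row)))
```

In the actual visualization, each non-empty cell is rendered as a glyph encoding the predicate's range, so a whole forest's rule set can be scanned at a glance.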
arXiv Detail & Related papers (2020-05-08T21:03:48Z) - Struct-MMSB: Mixed Membership Stochastic Blockmodels with Interpretable Structured Priors [13.712395104755783]
The mixed membership stochastic blockmodel (MMSB) is a popular framework for community detection and network generation.
We present a flexible MMSB model, Struct-MMSB, that uses a recently developed statistical relational learning model, hinge-loss Markov random fields (HL-MRFs).
Our model is capable of learning latent characteristics in real-world networks via meaningful latent variables encoded as a complex combination of observed features and membership distributions.
arXiv Detail & Related papers (2020-02-21T19:32:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.