Tractable Representation Learning with Probabilistic Circuits
- URL: http://arxiv.org/abs/2507.04385v2
- Date: Sat, 26 Jul 2025 11:51:01 GMT
- Title: Tractable Representation Learning with Probabilistic Circuits
- Authors: Steven Braun, Sahil Sidheekh, Antonio Vergari, Martin Mundt, Sriraam Natarajan, Kristian Kersting
- Abstract summary: Probabilistic circuits (PCs) are powerful probabilistic models that enable exact and tractable inference. While representation learning is a dominant paradigm in neural networks, it remains underexplored for PCs. We introduce autoencoding probabilistic circuits (APCs), a novel framework leveraging the tractability of PCs to model probabilistic embeddings explicitly.
- Score: 30.116247936061395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic circuits (PCs) are powerful probabilistic models that enable exact and tractable inference, making them highly suitable for probabilistic reasoning and inference tasks. While representation learning is a dominant paradigm in neural networks, it remains underexplored for PCs, with prior approaches relying on external neural embeddings or activation-based encodings. To address this gap, we introduce autoencoding probabilistic circuits (APCs), a novel framework leveraging the tractability of PCs to model probabilistic embeddings explicitly. APCs extend PCs by jointly modeling data and embeddings, obtaining embedding representations through tractable probabilistic inference. The PC encoder allows the framework to natively handle arbitrary missing data and is seamlessly integrated with a neural decoder in a hybrid, end-to-end trainable architecture enabled by differentiable sampling. Our empirical evaluation demonstrates that APCs outperform existing PC-based autoencoding methods in reconstruction quality, generate embeddings competitive with neural autoencoders, and exhibit superior robustness when handling missing data. These results highlight APCs as a powerful and flexible representation learning method that exploits the probabilistic inference capabilities of PCs, showing promising directions for robust inference, out-of-distribution detection, and knowledge distillation.
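The abstract's hybrid architecture — a probabilistic encoder producing an embedding distribution, a differentiable sampling step, and a neural decoder — can be sketched minimally as follows. This is a hypothetical illustration, not the APC model: a plain linear map stands in for the PC encoder, and a Gaussian with reparameterized sampling stands in for the probabilistic embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_EMB = 8, 4

# Hypothetical parameters; in a real APC the encoder is a trained
# probabilistic circuit, not a linear map.
W_mu = rng.standard_normal((D_EMB, D_IN)) * 0.1
W_sig = rng.standard_normal((D_EMB, D_IN)) * 0.1
W_dec = rng.standard_normal((D_IN, D_EMB)) * 0.1

def encode(x):
    """Map input to the parameters of a probabilistic embedding."""
    mu = W_mu @ x
    sigma = np.log1p(np.exp(W_sig @ x))  # softplus keeps scales positive
    return mu, sigma

def sample(mu, sigma):
    """Reparameterized sample: noise is drawn independently of the
    parameters, so gradients could flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def decode(z):
    """Stand-in for the neural decoder."""
    return W_dec @ z

x = rng.standard_normal(D_IN)
mu, sigma = encode(x)
x_hat = decode(sample(mu, sigma))
print(x_hat.shape)  # (8,)
```

The reparameterization step is what makes the sampling differentiable, which is the property the abstract relies on for end-to-end training.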
Related papers
- Probabilistic Circuits with Constraints via Convex Optimization [2.6436521007616114]
The proposed approach takes both a PC and constraints as inputs, and outputs a new PC that satisfies the constraints.
Empirical evaluations indicate that the combination of constraints and PCs can have multiple use cases.
arXiv Detail & Related papers (2024-03-19T19:55:38Z) - Probabilistic Neural Circuits [4.724177741282789]
Probabilistic neural circuits (PNCs) strike a balance between PCs and neural nets in terms of tractability and expressive power.
We show that PNCs can be interpreted as deep mixtures of Bayesian networks.
arXiv Detail & Related papers (2024-03-10T15:25:49Z) - Structured Probabilistic Coding [28.46046583495838]
This paper presents a new supervised representation learning framework, namely structured probabilistic coding (SPC).
SPC is an encoder-only probabilistic coding technique with structured regularization from the target space.
It can enhance the generalization ability of pre-trained language models for better language understanding.
arXiv Detail & Related papers (2023-12-21T15:28:02Z) - Pruning-Based Extraction of Descriptions from Probabilistic Circuits [5.322838001065884]
We use a probabilistic circuit to learn a concept from positively labelled and unlabelled examples.
These circuits form an attractive tractable model for this task, but it is challenging for a domain expert to inspect and analyse them.
We propose to resolve this by converting a learned probabilistic circuit into a logic-based discriminative model.
arXiv Detail & Related papers (2023-11-22T13:19:45Z) - Synaptic Sampling of Neural Networks [0.14732811715354452]
This paper describes the scANN technique -- sampling (by coinflips) artificial neural networks -- which enables neural networks to be sampled directly by treating the weights as Bernoulli coin flips.
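The weight-as-coin-flip idea can be sketched in a few lines. This is a hypothetical illustration of sampling binary weights from per-weight probabilities and averaging the outputs, not the scANN implementation; `coinflip_forward` and its parameters are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def coinflip_forward(x, weight_probs, n_samples=200):
    """Average outputs over networks whose weights are Bernoulli draws
    from per-weight probabilities (toy sketch, single linear layer)."""
    outs = []
    for _ in range(n_samples):
        w = rng.random(weight_probs.shape) < weight_probs  # one coin flip per weight
        outs.append(w.astype(float) @ x)
    return np.mean(outs, axis=0)

x = np.ones(3)
probs = np.full((2, 3), 0.5)   # each weight is 1 with probability 0.5
y = coinflip_forward(x, probs)
print(y)  # each entry ≈ 1.5, since E[w] = 0.5 and sum(x) = 3
```

Averaging over many sampled networks approximates the expected output of the stochastic network, which is the quantity a deterministic weight matrix of probabilities would compute directly here.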
arXiv Detail & Related papers (2023-11-21T22:56:13Z) - ConvBKI: Real-Time Probabilistic Semantic Mapping Network with Quantifiable Uncertainty [7.537718151195062]
We develop a modular neural network for real-time (> 10 Hz) semantic mapping in uncertain environments.
Our approach combines the reliability of classical probabilistic algorithms with the performance and efficiency of modern neural networks.
arXiv Detail & Related papers (2023-10-24T17:30:26Z) - The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
arXiv Detail & Related papers (2023-01-04T05:34:48Z) - Online Learning Probabilistic Event Calculus Theories in Answer Set Programming [70.06301658267125]
Complex Event Recognition (CER) systems detect occurrences in streaming time-stamped datasets using predefined event patterns.
We present a system based on Answer Set Programming (ASP), capable of probabilistic reasoning with complex event patterns in the form of rules weighted in the Event Calculus.
Our results demonstrate the superiority of our novel approach, both in terms of efficiency and predictive accuracy.
arXiv Detail & Related papers (2021-03-31T23:16:29Z) - Probabilistic Generating Circuits [50.98473654244851]
We propose probabilistic generating circuits (PGCs) for their efficient representation.
PGCs are not just a theoretical framework that unifies vastly different existing models, but also show huge potential in modeling realistic data.
We exhibit a simple class of PGCs that are not trivially subsumed by simple combinations of PCs and DPPs, and obtain competitive performance on a suite of density estimation benchmarks.
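The generating-polynomial idea behind PGCs can be illustrated in a simplified univariate setting: the distribution of the number of successes among independent Bernoulli variables is encoded by the coefficients of a product of linear polynomials, computable by convolution. Real PGCs encode multivariate generating polynomials as circuits; this toy is only the one-variable analogue.

```python
import numpy as np

def count_distribution(ps):
    """Coefficients of prod_i ((1 - p_i) + p_i * z): coefficient k is
    P(exactly k successes). Univariate toy of the generating-polynomial
    idea; PGCs represent multivariate polynomials as circuits."""
    coeffs = np.array([1.0])  # the constant polynomial "1"
    for p in ps:
        coeffs = np.convolve(coeffs, [1 - p, p])  # multiply by (1-p) + p*z
    return coeffs

dist = count_distribution([0.5, 0.5])
print(dist)  # [0.25 0.5  0.25]
```

The polynomial product stays tractable because each factor is linear, mirroring how circuit structure keeps inference on the full generating polynomial efficient.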
arXiv Detail & Related papers (2021-02-19T07:06:53Z) - General stochastic separation theorems with optimal bounds [68.8204255655161]
Phenomenon of separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and analyze AI instabilities.
Errors or clusters of errors can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same separability.
arXiv Detail & Related papers (2020-10-11T13:12:41Z) - Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
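The approximation claimed above can be checked on a tiny deep linear network: relaxing the latent activity to the minimum of the prediction-error energy yields layer errors that satisfy the predictive-coding fixed point and align with the backprop deltas when the output error is small. This is a minimal numerical sketch under those assumptions, not the paper's general translation strategy.

```python
import numpy as np

rng = np.random.default_rng(2)
x0 = rng.standard_normal(5)
W1 = rng.standard_normal((4, 5)) * 0.3
W2 = rng.standard_normal((3, 4)) * 0.3
t = W2 @ (W1 @ x0) + 0.01 * rng.standard_normal(3)  # target near the prediction

# Backprop deltas for L = 0.5 * ||t - W2 W1 x0||^2
y = W2 @ (W1 @ x0)
delta2 = t - y           # output error
delta1 = W2.T @ delta2   # backpropagated error at the hidden layer

# Predictive coding: relax the latent x1 to equilibrium, then read off errors
x1 = W1 @ x0             # initialize at the feedforward prediction
for _ in range(500):
    e1 = x1 - W1 @ x0            # prediction error at the hidden layer
    e2 = t - W2 @ x1             # prediction error at the clamped output
    x1 += 0.1 * (-e1 + W2.T @ e2)  # gradient descent on the error energy

e1 = x1 - W1 @ x0
e2 = t - W2 @ x1
print(np.allclose(e1, W2.T @ e2, atol=1e-6))  # True: equilibrium condition holds
```

At equilibrium `e1 = W2.T @ e2` exactly, and because the target sits close to the feedforward prediction, `e1` points nearly along the backprop delta `delta1` -- a forward-relaxation route to credit assignment.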
arXiv Detail & Related papers (2020-06-07T15:35:47Z) - Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits [99.59941892183454]
We propose Einsum Networks (EiNets), a novel implementation design for PCs.
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum-operation.
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation.
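The monolithic einsum idea can be shown concretely: a PC's product layer (outer product of child densities) and sum layer (weighted mixture) fuse into a single `einsum` call. This is a minimal sketch of that core operation with made-up shapes, not the EiNets library API.

```python
import numpy as np

rng = np.random.default_rng(3)
B, I, J, K = 2, 3, 3, 5   # batch, left/right child densities, mixture components

left = rng.random((B, I))      # densities from the left child partition
right = rng.random((B, J))     # densities from the right child partition
w = rng.random((K, I, J))
w /= w.sum(axis=(1, 2), keepdims=True)  # normalized sum-node weights

# Product layer (outer product of children) and sum layer (weighted mixture)
# fused into one einsum call -- the core operation behind EiNets.
out = np.einsum('bi,bj,kij->bk', left, right, w)
print(out.shape)  # (2, 5)
```

Fusing both layers into one contraction is what lets the whole PC layer run as a single vectorized operation on modern hardware.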
arXiv Detail & Related papers (2020-04-13T23:09:15Z) - Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems [2.1320960069210484]
The paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP).
We train a Triplet Network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set.
Then, these representations are used to estimate the confidence of set predictions from a classifier that is based on the neural network architecture used in the triplet.
arXiv Detail & Related papers (2020-03-11T04:31:10Z)
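The ICP step described above reduces to comparing a test example's nonconformity score against a held-out calibration set. The sketch below is a hypothetical simplification: distance to the origin stands in for the triplet-network-based nonconformity measure, and `icp_p_value` is an invented name.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical learned embeddings (e.g., from a triplet network)
calib = rng.standard_normal((50, 2))          # calibration embeddings
calib_scores = np.linalg.norm(calib, axis=1)  # toy nonconformity score

def icp_p_value(test_embedding):
    """Inductive conformal p-value: the fraction of calibration
    nonconformity scores at least as large as the test score."""
    score = np.linalg.norm(test_embedding)
    return (np.sum(calib_scores >= score) + 1) / (len(calib_scores) + 1)

print(icp_p_value(np.zeros(2)))            # typical point -> p-value 1.0
print(icp_p_value(np.array([100., 100.]))) # outlier -> small p-value
```

A prediction is kept in the conformal set exactly when its p-value exceeds the chosen significance level, which is what yields the calibrated confidence bounds.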
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.