Structured Optimal Variational Inference for Dynamic Latent Space Models
- URL: http://arxiv.org/abs/2209.15117v1
- Date: Thu, 29 Sep 2022 22:10:42 GMT
- Title: Structured Optimal Variational Inference for Dynamic Latent Space Models
- Authors: Peng Zhao, Anirban Bhattacharya, Debdeep Pati and Bani K. Mallick
- Abstract summary: We consider a latent space model for dynamic networks, where our objective is to estimate the pairwise inner products of the latent positions.
We present a structured mean-field variational inference framework, where the time-dependent properties of the dynamic networks are exploited.
Our proposed methodology can be readily extended to the case where the scales of the latent nodes are learned in a nodewise manner.
- Score: 14.87073454889439
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We consider a latent space model for dynamic networks, where our objective is
to estimate the pairwise inner products of the latent positions. To balance
posterior inference and computational scalability, we present a structured
mean-field variational inference framework, where the time-dependent properties
of the dynamic networks are exploited to facilitate computation and inference.
Additionally, an easy-to-implement block coordinate ascent algorithm is
developed with message-passing-type updates in each block, and the complexity
per iteration is linear in the number of nodes and time points. To
facilitate learning of the pairwise latent distances, we adopt a Gamma prior
for the transition variance, a departure from the usual choices in the
literature. To certify the
optimality, we demonstrate that the variational risk of the proposed
variational inference approach attains the minimax optimal rate under certain
conditions. En route, we derive the minimax lower bound, which might be of
independent interest. To the best of our knowledge, this is the first such exercise
for dynamic latent space models. Simulations and real data analysis demonstrate
the efficacy of our methodology and the efficiency of our algorithm. Finally,
our proposed methodology can be readily extended to the case where the scales
of the latent nodes are learned in a nodewise manner.
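The computational pattern described above, block coordinate ascent in which each per-node block is updated by a forward-backward (Kalman-style) message pass over time, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it uses a Gaussian-observation surrogate for the inner-product likelihood, holds the other nodes' variational factors at their means during each block update, and fixes the transition variance instead of learning it under the Gamma prior; all function names are ours.

```python
import numpy as np

def simulate(n=10, T=8, d=2, tau=0.3, sigma=0.5, seed=0):
    """Random-walk latent positions and noisy inner-product observations."""
    rng = np.random.default_rng(seed)
    X = np.zeros((T, n, d))
    X[0] = rng.normal(size=(n, d))
    for t in range(1, T):
        X[t] = X[t - 1] + tau * rng.normal(size=(n, d))
    A = np.einsum('tid,tjd->tij', X, X) + sigma * rng.normal(size=(T, n, n))
    return X, A

def update_node(i, mean, A, tau2, sigma2):
    """One block update: with the other nodes' means fixed, node i's
    trajectory is a linear-Gaussian state-space model, solved by a forward
    Kalman filter plus a backward RTS smoother (the message-passing step).
    Cost is O(T * n * d^2) per node, hence linear in n and T overall."""
    T, n, d = mean.shape
    others = np.array([j for j in range(n) if j != i])
    I = np.eye(d)
    m_pred, P_pred = np.zeros(d), 1e4 * I           # diffuse prior at t = 0
    ms, Ps = [], []
    for t in range(T):                              # forward (filter) pass
        H = mean[t, others]                         # (n-1, d) regression rows
        y = A[t, i, others]
        prec = np.linalg.inv(P_pred) + H.T @ H / sigma2
        P = np.linalg.inv(prec)
        m = P @ (np.linalg.solve(P_pred, m_pred) + H.T @ y / sigma2)
        ms.append(m); Ps.append(P)
        m_pred, P_pred = m, P + tau2 * I            # random-walk prediction
    sm = [None] * T                                 # backward (smoother) pass
    sm[-1] = ms[-1]
    for t in range(T - 2, -1, -1):
        G = Ps[t] @ np.linalg.inv(Ps[t] + tau2 * I)
        sm[t] = ms[t] + G @ (sm[t + 1] - ms[t])
    return np.stack(sm)

def structured_mfvi(A, d=2, tau2=0.09, sigma2=0.25, iters=20, seed=1):
    """Block coordinate ascent over nodewise variational factors."""
    T, n, _ = A.shape
    rng = np.random.default_rng(seed)
    mean = rng.normal(size=(T, n, d))
    for _ in range(iters):
        for i in range(n):
            mean[:, i] = update_node(i, mean, A, tau2, sigma2)
    return mean
```

Note that the fitted inner products `M_t M_t^T` are the natural output here: as in the abstract, the pairwise inner products are identifiable even though the latent positions themselves are only determined up to rotation.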
Related papers
- Distributionally Robust Model-based Reinforcement Learning with Large
State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from those of the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
arXiv Detail & Related papers (2023-09-05T13:42:11Z) - Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image
Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information within a content-conditioned range and aid the transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which enables both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Probabilistic partition of unity networks for high-dimensional
regression problems [1.0227479910430863]
We explore the partition of unity network (PPOU-Net) model in the context of high-dimensional regression problems.
We propose a general framework focusing on adaptive dimensionality reduction.
The PPOU-Nets consistently outperform the baseline fully-connected neural networks of comparable sizes in numerical experiments.
arXiv Detail & Related papers (2022-10-06T06:01:36Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z) - A Deep Learning approach to Reduced Order Modelling of Parameter
Dependent Partial Differential Equations [0.2148535041822524]
We develop a constructive approach based on Deep Neural Networks for the efficient approximation of the parameter-to-solution map.
In particular, we consider parametrized advection-diffusion PDEs, and we test the methodology in the presence of strong transport fields.
arXiv Detail & Related papers (2021-03-10T17:01:42Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - SODEN: A Scalable Continuous-Time Survival Model through Ordinary
Differential Equation Networks [14.564168076456822]
We propose a flexible model for survival analysis using neural networks along with scalable optimization algorithms.
We demonstrate the effectiveness of the proposed method in comparison to existing state-of-the-art deep learning survival analysis models.
arXiv Detail & Related papers (2020-08-19T19:11:25Z) - Adaptive Discretization for Model-Based Reinforcement Learning [10.21634042036049]
We introduce the technique of adaptive discretization to design an efficient model-based episodic reinforcement learning algorithm.
Our algorithm is based on optimistic one-step value iteration extended to maintain an adaptive discretization of the space.
arXiv Detail & Related papers (2020-07-01T19:36:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.