Structure learning for CTBN's via penalized maximum likelihood methods
- URL: http://arxiv.org/abs/2006.07648v1
- Date: Sat, 13 Jun 2020 14:28:19 GMT
- Title: Structure learning for CTBN's via penalized maximum likelihood methods
- Authors: Maryia Shpak, Błażej Miasojedow, Wojciech Rejchel
- Abstract summary: We study the structure learning problem, a more challenging task on which the existing research is limited.
We prove that our algorithm, under mild regularity conditions, recognizes the dependence structure of the graph with high probability.
- Score: 2.997206383342421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous-time Bayesian networks (CTBNs) represent a class of
stochastic processes that can be used to model complex phenomena; for
instance, they can describe interactions occurring in living processes, in
social science models, or in medicine. The literature on this topic usually
focuses on the case where the dependence structure of the system is known and
the goal is to determine the conditional transition intensities (the
parameters of the network). In this paper, we study the structure learning
problem, a more challenging task on which the existing research is limited.
The approach we propose is based on a penalized likelihood method. We prove
that our algorithm, under
mild regularity conditions, recognizes the dependence structure of the graph
with high probability. We also investigate the properties of the procedure in
numerical studies to demonstrate its effectiveness.
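The abstract describes the approach only at a high level. As a rough illustration of what a penalized-likelihood edge test for a CTBN can look like, the Python sketch below considers a single binary node with one candidate parent and shrinks the node's transition intensities across parent states together via an L1 penalty on log-intensities; the function name `penalized_mle`, the toy data, and the fused-lasso penalty are illustrative choices of ours, not the authors' algorithm.

```python
# Minimal sketch (our construction, not the paper's algorithm): penalized
# maximum likelihood for one binary CTBN node, deciding whether a candidate
# parent influences its 0 -> 1 transition intensity.
#
# Sufficient statistics of an observed trajectory:
#   T[u] = total time the node spent in state 0 while the candidate
#          parent was in state u (u = 0, 1)
#   N[u] = number of 0 -> 1 transitions observed while the parent
#          was in state u
# The log-likelihood of an intensity q[u] > 0 is N[u]*log(q[u]) - q[u]*T[u].
# Penalizing |log q[1] - log q[0]| pulls the two intensities together; if
# the penalized estimates coincide, the edge parent -> node is dropped.

import numpy as np
from scipy.optimize import minimize

def penalized_mle(N, T, lam):
    """Return penalized MLEs of the intensities (q[0], q[1])."""
    N, T = np.asarray(N, float), np.asarray(T, float)

    def objective(theta):                             # theta = log intensities
        nll = np.sum(np.exp(theta) * T - N * theta)   # neg. log-likelihood
        return nll + lam * abs(theta[1] - theta[0])   # fused-lasso penalty

    theta0 = np.log((N + 1.0) / (T + 1.0))            # crude initial point
    res = minimize(objective, theta0, method="Nelder-Mead")
    return np.exp(res.x)

# Toy data: intensities look nearly identical across parent states, so a
# moderate penalty should fuse them and remove the candidate edge.
q = penalized_mle(N=[12, 14], T=[100.0, 110.0], lam=5.0)
print("estimated intensities:", q,
      "edge kept:", not np.isclose(q[0], q[1], rtol=1e-3))
```

Sweeping `lam` traces out a regularization path in which edges enter or leave the estimated structure, loosely analogous to how the penalty level governs sparsity in a penalized-likelihood method.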
Related papers
- Temporal Graph Memory Networks For Knowledge Tracing [0.40964539027092906]
We propose a novel method that jointly models the relational and temporal dynamics of the knowledge state using a deep temporal graph memory network.
We also propose a generic technique for representing a student's forgetting behavior using temporal decay constraints on the graph memory module.
arXiv Detail & Related papers (2024-09-23T07:47:02Z) - On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z) - On the Role of Information Structure in Reinforcement Learning for Partially-Observable Sequential Teams and Games [55.2480439325792]
In a sequential decision-making problem, the information structure is the description of how events in the system occurring at different points in time affect each other.
Classical models of reinforcement learning assume a simple and highly regular information structure; by contrast, real-world sequential decision-making problems typically involve a complex and time-varying interdependence of system variables.
We formalize a novel reinforcement learning model which explicitly represents the information structure.
arXiv Detail & Related papers (2024-03-01T21:28:19Z) - Neural Structure Learning with Stochastic Differential Equations [9.076396370870423]
We introduce a novel structure learning method, SCOTCH, which combines neural differential equations with variational inference to infer a posterior distribution over possible structures.
This continuous-time approach can naturally handle both learning from and predicting observations at arbitrary time points.
arXiv Detail & Related papers (2023-11-06T17:58:47Z) - Structure Learning and Parameter Estimation for Graphical Models via Penalized Maximum Likelihood Methods [0.0]
In the thesis, we consider two different types of PGMs: Bayesian networks (BNs), which are static, and continuous-time Bayesian networks, which, as the name suggests, have a temporal component.
We are interested in recovering their true structure, which is the first step in learning any PGM.
arXiv Detail & Related papers (2023-01-30T20:26:13Z) - Provable Reinforcement Learning with a Short-Term Memory [68.00677878812908]
We study a new subclass of POMDPs, whose latent states can be decoded from the most recent history of a short length $m$.
In particular, in the rich-observation setting, we develop new algorithms using a novel "moment matching" approach with a sample complexity that scales exponentially with the short length $m$.
Our results show that a short-term memory suffices for reinforcement learning in these environments.
arXiv Detail & Related papers (2022-02-08T16:39:57Z) - A survey of Bayesian Network structure learning [8.411014222942168]
This paper provides a review of 61 algorithms proposed for learning BN structure from data.
The basic approach of each algorithm is described in consistent terms, and the similarities and differences between them are highlighted.
Approaches for dealing with data noise in real-world datasets and incorporating expert knowledge into the learning process are also covered.
arXiv Detail & Related papers (2021-09-23T14:54:00Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z) - A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z) - Latent Network Structure Learning from High Dimensional Multivariate Point Processes [5.079425170410857]
We propose a new class of nonstationary Hawkes processes to characterize the complex processes underlying the observed data.
We estimate the latent network structure using an efficient sparse least squares estimation approach.
We demonstrate the efficacy of our proposed method through simulation studies and an application to a neuron spike train data set.
arXiv Detail & Related papers (2020-04-07T17:48:01Z)
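The last entry above estimates a latent network from multivariate point processes by sparse least squares. As a generic, hedged sketch of that idea (not the paper's nonstationary Hawkes estimator), the Python snippet below bins synthetic events, regresses each unit's counts on the lagged counts of all units with an L1 penalty, and reads nonzero coefficients as directed edges; the binning, penalty level, and synthetic data are illustrative assumptions.

```python
# Generic sketch (assumptions, not the paper's method): recover a directed
# influence network from event data by sparse least squares. Discretize
# time into bins, count events per unit, then lasso-regress each unit's
# counts on the lagged counts of all units.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic binned counts for 5 "neurons" over 2000 time bins, where
# neuron 0 drives neuron 1 with a one-bin lag.
n_units, n_bins = 5, 2000
counts = rng.poisson(0.2, size=(n_bins, n_units)).astype(float)
counts[1:, 1] += rng.binomial(1, 0.7, size=n_bins - 1) * counts[:-1, 0]

X, Y = counts[:-1], counts[1:]          # lagged design and targets
adjacency = np.zeros((n_units, n_units))
for j in range(n_units):
    model = Lasso(alpha=0.05).fit(X, Y[:, j])
    adjacency[:, j] = model.coef_       # column j: incoming edges of unit j

# Nonzero (thresholded) coefficients are interpreted as directed edges.
print("recovered edges (source, target):",
      np.argwhere(np.abs(adjacency) > 1e-2))
```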
This list is automatically generated from the titles and abstracts of the papers in this site.