Fast and Flexible Temporal Point Processes with Triangular Maps
- URL: http://arxiv.org/abs/2006.12631v2
- Date: Tue, 10 Nov 2020 16:44:49 GMT
- Title: Fast and Flexible Temporal Point Processes with Triangular Maps
- Authors: Oleksandr Shchur, Nicholas Gao, Marin Biloš, Stephan Günnemann
- Abstract summary: We propose a new class of non-recurrent TPP models, where both sampling and likelihood computation can be done in parallel.
We demonstrate the advantages of the proposed framework on synthetic and real-world datasets.
- Score: 24.099464487795274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal point process (TPP) models combined with recurrent neural networks
provide a powerful framework for modeling continuous-time event data. While
such models are flexible, they are inherently sequential and therefore cannot
benefit from the parallelism of modern hardware. By exploiting the recent
developments in the field of normalizing flows, we design TriTPP -- a new class
of non-recurrent TPP models, where both sampling and likelihood computation can
be done in parallel. TriTPP matches the flexibility of RNN-based methods but
permits orders of magnitude faster sampling. This enables us to use the new
model for variational inference in continuous-time discrete-state systems. We
demonstrate the advantages of the proposed framework on synthetic and
real-world datasets.
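To make the parallelism claim concrete: a TPP with compensator Lambda can be sampled by pushing a unit-rate Poisson process through the inverse map Lambda^{-1}, and its log-likelihood is sum_i log lambda(t_i) - Lambda(T), so with a non-recurrent increasing map both operations vectorize over all events. The sketch below illustrates this with a fixed closed-form compensator Lambda(t) = t^2; it is a toy stand-in for TriTPP's learned compositions of triangular maps, not the paper's actual parametrization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy increasing map: compensator Lambda(t) = t**2 on [0, T].
# A closed-form stand-in for a learned triangular map, chosen only
# so that its inverse and derivative are explicit.
Lam     = lambda t: t ** 2          # compensator Lambda
Lam_inv = lambda s: np.sqrt(s)      # inverse compensator
lam     = lambda t: 2.0 * t         # intensity lambda = dLambda/dt

T = 10.0

# Parallel sampling via inverse time-rescaling: a unit-rate Poisson
# process on [0, Lambda(T)] pushed through Lambda^{-1} has compensator
# Lambda. Every step below is vectorized -- no sequential recurrence.
n = rng.poisson(Lam(T))
s = np.sort(rng.uniform(0.0, Lam(T), size=n))
t = Lam_inv(s)                      # all event times at once

# Parallel log-likelihood via the change-of-variables formula:
# log p(t_1..t_n) = sum_i log lambda(t_i) - Lambda(T).
log_lik = np.log(lam(t)).sum() - Lam(T)
print(f"{n} events, log-likelihood {log_lik:.2f}")
```

Replacing `Lam` and `Lam_inv` with a learned composition of increasing triangular transforms yields a flexible model while preserving this fully parallel structure.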
Related papers
- Not-So-Optimal Transport Flows for 3D Point Cloud Generation [58.164908756416615]
Learning generative models of 3D point clouds is one of the fundamental problems in 3D generative learning.
In this paper, we analyze the recently proposed equivariant OT flows that learn permutation-invariant generative models for point-based molecular data.
We show that our proposed model outperforms prior diffusion- and flow-based approaches on a wide range of unconditional generation and shape completion tasks.
arXiv Detail & Related papers (2025-02-18T02:37:34Z)
- A Temporal Linear Network for Time Series Forecasting [0.0]
We introduce the Temporal Linear Net (TLN), which extends the capabilities of linear models while maintaining interpretability and computational efficiency.
Our approach is a variant of TSMixer that maintains strict linearity throughout its architecture.
A key innovation of TLN is its ability to compute an equivalent linear model, offering a level of interpretability not found in more complex architectures such as TSMixer (a toy sketch of this collapsing appears after this list).
arXiv Detail & Related papers (2024-10-28T18:51:19Z)
- ALERT-Transformer: Bridging Asynchronous and Synchronous Machine Learning for Real-Time Event-based Spatio-Temporal Data [8.660721666999718]
We propose a hybrid pipeline composed of asynchronous sensing and synchronous processing.
We achieve state-of-the-art performance with lower latency than competing methods.
arXiv Detail & Related papers (2024-02-02T13:17:19Z)
- Cumulative Distribution Function based General Temporal Point Processes [49.758080415846884]
The CuFun model represents a novel approach to TPPs that revolves around the cumulative distribution function (CDF).
Our approach addresses several critical issues inherent in traditional TPP modeling.
Our contributions encompass the introduction of a pioneering CDF-based TPP model and the development of a methodology for incorporating past event information into future event prediction (a minimal inverse-CDF sketch follows this list).
arXiv Detail & Related papers (2024-02-01T07:21:30Z)
- Intensity-free Convolutional Temporal Point Process: Incorporating Local and Global Event Contexts [30.534921874640585]
We propose a novel TPP modelling approach that combines local and global contexts by integrating a continuous-time convolutional event encoder with an RNN.
The presented framework is flexible and scalable to handle large datasets with long sequences and complex latent patterns.
To the best of our knowledge, this is the first work that applies convolutional neural networks to TPP modelling.
arXiv Detail & Related papers (2023-06-24T22:57:40Z)
- Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity thanks to their self-attention mechanism, which, however, carries a high computational cost.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z)
- Exemplar-based Pattern Synthesis with Implicit Periodic Field Network [21.432274505770394]
We propose an exemplar-based visual pattern synthesis framework that aims to model inner statistics of visual patterns and generate new, versatile patterns.
The generator is an implicit network built on a generative adversarial network (GAN) with periodic encoding, hence the name Implicit Periodic Field Network (IPFN).
arXiv Detail & Related papers (2022-04-04T17:36:16Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster than their ODE-solver-based counterparts.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models [60.23234205219347]
TeraPipe is a high-performance token-level pipeline parallel algorithm for synchronous model-parallel training of Transformer-based language models.
We show that TeraPipe can speed up the training by 5.0x for the largest GPT-3 model with 175 billion parameters on an AWS cluster.
arXiv Detail & Related papers (2021-02-16T07:34:32Z)
- Synergetic Learning of Heterogeneous Temporal Sequences for Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine the advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z)
- Multi-Temporal Convolutions for Human Action Recognition in Videos [83.43682368129072]
We present a novel multi-temporal convolution block that is capable of extracting features at multiple temporal resolutions.
The proposed blocks are lightweight and can be integrated into any 3D-CNN architecture.
arXiv Detail & Related papers (2020-11-08T10:40:26Z)
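As noted in the Temporal Linear Net entry above, a model that stays strictly linear can always be collapsed into a single equivalent linear map whose weights are directly inspectable. Below is a minimal generic sketch of that collapsing; the shapes and random weights are stand-ins for illustration, not TLN's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two stacked linear layers with no nonlinearity in between
# (random weights as stand-ins for trained parameters).
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)

# Collapse the stack into one equivalent linear map:
# y = W2 @ (W1 @ x + b1) + b2 = (W2 @ W1) @ x + (W2 @ b1 + b2)
W_eq, b_eq = W2 @ W1, W2 @ b1 + b2

x = rng.normal(size=8)
y_stacked = W2 @ (W1 @ x + b1) + b2
y_equiv   = W_eq @ x + b_eq
assert np.allclose(y_stacked, y_equiv)
print("equivalent weights shape:", W_eq.shape)  # (4, 8), directly inspectable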
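And as noted in the CuFun entry, a CDF-based TPP parameterizes the distribution of the next inter-event gap through its CDF F, so the density falls out by differentiation and sampling by inversion. The sketch below uses a closed-form exponential CDF with a hypothetical history-dependent rate purely for illustration; a real CDF-based model would output F through a monotone neural network instead.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy conditional CDF of the next inter-event gap: exponential with a
# hypothetical history-dependent rate (not the CuFun model itself).
def F(tau, rate):                      # CDF of the gap
    return 1.0 - np.exp(-rate * tau)

def F_inv(u, rate):                    # inverse CDF, used for sampling
    return -np.log1p(-u) / rate

def log_f(tau, rate):                  # log-density log dF/dtau
    return np.log(rate) - rate * tau

# Inverse-transform sampling of a short event sequence, accumulating
# the log-likelihood of the sampled gaps as we go.
times, t, log_lik = [], 0.0, 0.0
for k in range(5):
    rate = 1.0 + 0.1 * k               # toy dependence on the history
    gap = F_inv(rng.uniform(), rate)
    log_lik += log_f(gap, rate)
    t += gap
    times.append(t)
print(np.round(times, 3), f"log-likelihood {log_lik:.2f}")
```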
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information (including any of its content) and is not responsible for any consequences of its use.