A Deep Autoregressive Model for Dynamic Combinatorial Complexes
- URL: http://arxiv.org/abs/2503.01999v1
- Date: Mon, 03 Mar 2025 19:15:40 GMT
- Title: A Deep Autoregressive Model for Dynamic Combinatorial Complexes
- Authors: Ata Tuna
- Abstract summary: We introduce DAMCC (Deep Autoregressive Model for Dynamic Combinatorial Complexes), the first deep learning model designed to generate dynamic combinatorial complexes (CCs). Unlike traditional graph-based models, CCs capture higher-order interactions, making them ideal for representing social networks, biological systems, and evolving infrastructures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DAMCC (Deep Autoregressive Model for Dynamic Combinatorial Complexes), the first deep learning model designed to generate dynamic combinatorial complexes (CCs). Unlike traditional graph-based models, CCs capture higher-order interactions, making them ideal for representing social networks, biological systems, and evolving infrastructures. While existing models primarily focus on static graphs, DAMCC addresses the challenge of modeling temporal dynamics and higher-order structures in dynamic networks. DAMCC employs an autoregressive framework to predict the evolution of CCs over time. Through comprehensive experiments on real-world and synthetic datasets, we demonstrate its ability to capture both temporal and higher-order dependencies. As the first model of its kind, DAMCC lays the foundation for future advancements in dynamic combinatorial complex modeling, with opportunities for improved scalability and efficiency on larger networks.
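To make the abstract's two key notions concrete, the sketch below shows (a) a combinatorial complex represented as a set of ranked cells, so higher-order interactions are explicit objects rather than pairwise edges, and (b) an autoregressive rollout in which each new complex is predicted from the history generated so far. This is a minimal illustrative sketch, not the paper's implementation; the `CombinatorialComplex` class and the `model.predict_next` call are hypothetical names introduced here.

```python
# Minimal sketch (illustrative only, not the DAMCC implementation):
# a combinatorial complex as ranked cells, plus an autoregressive rollout.
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List


@dataclass
class CombinatorialComplex:
    """Rank-0 cells are nodes, rank-1 cells are edges, and higher-rank cells
    encode group (higher-order) interactions, e.g. a whole community."""
    nodes: FrozenSet[int]
    cells: Dict[FrozenSet[int], int] = field(default_factory=dict)  # cell -> rank

    def add_cell(self, members: FrozenSet[int], rank: int) -> None:
        assert members <= self.nodes and rank >= 0
        self.cells[members] = rank


def autoregressive_rollout(model, initial_cc: CombinatorialComplex,
                           num_steps: int) -> List[CombinatorialComplex]:
    """Generate a dynamic CC sequence: each step conditions on the history so far.
    `model.predict_next` is a hypothetical interface standing in for the learned
    next-complex distribution."""
    history = [initial_cc]
    for _ in range(num_steps):
        history.append(model.predict_next(history))
    return history


# Example: a 4-node complex with one edge and one rank-2 "group" cell.
cc0 = CombinatorialComplex(nodes=frozenset({0, 1, 2, 3}))
cc0.add_cell(frozenset({0, 1}), rank=1)
cc0.add_cell(frozenset({0, 1, 2}), rank=2)
```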
Related papers
- Diffusion Dynamics Models with Generative State Estimation for Cloth Manipulation [39.72581795761555]
We propose a diffusion-based generative approach for both perception and dynamics modeling.
We reconstruct the full cloth state from sparse RGB-D observations, conditioned on a canonical cloth mesh, and use the estimated state for dynamics modeling.
Our framework successfully executes cloth folding on a real robotic system.
arXiv Detail & Related papers (2025-03-15T05:34:26Z) - Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships. Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z) - Pre-Trained Video Generative Models as World Simulators [59.546627730477454]
We propose Dynamic World Simulation (DWS) to transform pre-trained video generative models into controllable world simulators. To achieve precise alignment between conditioned actions and generated visual changes, we introduce a lightweight, universal action-conditioned module. Experiments demonstrate that DWS can be versatilely applied to both diffusion and autoregressive transformer models.
arXiv Detail & Related papers (2025-02-10T14:49:09Z) - Multi-Head Self-Attending Neural Tucker Factorization [5.734615417239977]
We introduce a neural network-based tensor factorization approach tailored for learning representations of high-dimensional and incomplete (HDI) tensors. The proposed MSNTucF model demonstrates superior performance compared to state-of-the-art benchmark models in estimating missing observations.
arXiv Detail & Related papers (2025-01-16T13:04:15Z) - Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models [9.318262213262866]
We introduce a novel framework for learning semi-structured dynamics models for contact-rich systems.
We make accurate long-horizon predictions with substantially less data than prior methods.
We validate our approach on a real-world Unitree Go1 quadruped robot.
arXiv Detail & Related papers (2024-10-11T18:11:21Z) - DynInt: Dynamic Interaction Modeling for Large-scale Click-Through Rate Prediction [0.0]
Learning feature interactions is the key to success for the large-scale CTR prediction in Ads ranking and recommender systems.
Deep neural network-based models are widely adopted for modeling such problems.
We propose a new model: DynInt, which learns higher-order interactions to be dynamic and data-dependent.
arXiv Detail & Related papers (2023-01-03T13:01:30Z) - DAMNETS: A Deep Autoregressive Model for Generating Markovian Network Time Series [6.834250594353335]
Generative models for network time series (also known as dynamic graphs) have tremendous potential in fields such as epidemiology, biology and economics.
Here we introduce DAMNETS, a scalable deep generative model for network time series.
arXiv Detail & Related papers (2022-03-28T18:14:04Z) - Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
arXiv Detail & Related papers (2022-03-15T09:38:15Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z) - S2RMs: Spatially Structured Recurrent Modules [105.0377129434636]
We take a step towards dynamic models that are capable of simultaneously exploiting both modular and spatiotemporal structures.
We find our models to be robust to the number of available views and better capable of generalization to novel tasks without additional training.
arXiv Detail & Related papers (2020-07-13T17:44:30Z) - Automated and Formal Synthesis of Neural Barrier Certificates for Dynamical Models [70.70479436076238]
We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BC).
The approach is underpinned by an inductive framework, which manipulates a candidate BC structured as a neural network, and a sound verifier, which either certifies the candidate's validity or generates counter-examples.
The outcomes show that we can synthesise sound BCs up to two orders of magnitude faster, with in particular a stark speedup on the verification engine.
arXiv Detail & Related papers (2020-07-07T07:39:42Z) - Relational State-Space Model for Stochastic Multi-Object Systems [24.234120525358456]
This paper introduces the relational state-space model (R-SSM), a sequential hierarchical latent variable model.
R-SSM makes use of graph neural networks (GNNs) to simulate the joint state transitions of multiple correlated objects.
The utility of R-SSM is empirically evaluated on synthetic and real time-series datasets.
arXiv Detail & Related papers (2020-01-13T03:45:21Z)
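The R-SSM entry directly above mentions using graph neural networks to simulate the joint state transitions of multiple correlated objects. The snippet below is a generic, illustrative single message-passing transition step of that flavour; the weight matrices, mean aggregation, and tanh update are assumptions made for illustration, not the architecture of that paper.

```python
# Illustrative sketch (assumed names, not the R-SSM architecture): one
# GNN-style transition step that updates the latent states of several
# correlated objects using messages aggregated over an interaction graph.
import numpy as np


def gnn_state_transition(states, adjacency, w_self, w_msg):
    """states: (n, d) latent states; adjacency: (n, n) 0/1 interaction graph."""
    messages = adjacency @ states                                # sum neighbour states
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1.0)
    messages = messages / degree                                 # mean aggregation
    return np.tanh(states @ w_self + messages @ w_msg)          # joint update


# Example: 4 objects on a ring, 8-dimensional states, rolled out for 3 steps.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 8))
w_self, w_msg = 0.1 * rng.normal(size=(8, 8)), 0.1 * rng.normal(size=(8, 8))
for _ in range(3):
    x = gnn_state_transition(x, A, w_self, w_msg)
```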