DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion
- URL: http://arxiv.org/abs/2301.09474v4
- Date: Sun, 28 May 2023 09:19:49 GMT
- Title: DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion
- Authors: Qitian Wu, Chenxiao Yang, Wentao Zhao, Yixuan He, David Wipf, Junchi Yan
- Abstract summary: We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
- Score: 66.21290235237808
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world data generation often involves complex inter-dependencies among
instances, violating the IID-data hypothesis of standard learning paradigms and
posing a challenge for uncovering the geometric structures needed to learn the
desired instance representations. To this end, we introduce an energy-constrained
diffusion model which encodes a batch of instances from a dataset into
evolutionary states that progressively incorporate other instances' information
through their interactions. The diffusion process is constrained by descent criteria
w.r.t. a principled energy function that characterizes the global consistency
of instance representations over latent structures. We provide rigorous theory
that implies closed-form optimal estimates for the pairwise diffusion strength
among arbitrary instance pairs, which gives rise to a new class of neural
encoders, dubbed DIFFormer (diffusion-based Transformers), with two
instantiations: a simple version with linear complexity for prohibitively large
numbers of instances, and an advanced version for learning complex structures.
Experiments highlight the wide applicability of our model as a general-purpose
encoder backbone with superior performance in various tasks, such as node
classification on large graphs, semi-supervised image/text classification, and
spatial-temporal dynamics prediction.
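
To make the simple instantiation more concrete, below is a minimal sketch (not the authors' reference code) of a linear-complexity all-pair diffusion update in PyTorch: pairwise weights are kept positive, and the normalized aggregation is rearranged so the N x N attention matrix is never materialized. The projection layers, the step size tau, and the weight form 1 + q_i . k_j are illustrative assumptions; the exact parameterization in DIFFormer may differ.

```python
# Hedged sketch of a linear-complexity all-pair diffusion layer (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleDiffusionLayer(nn.Module):
    """One propagation step: each instance state becomes a convex combination of its
    previous state and an all-pair aggregation, computed in O(N d^2) rather than O(N^2 d)."""

    def __init__(self, dim: int, tau: float = 0.5):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)  # query projection (assumed)
        self.k = nn.Linear(dim, dim, bias=False)  # key projection (assumed)
        self.tau = tau                            # diffusion step size (assumed)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (N, d) batch of instance states
        q = F.normalize(self.q(z), dim=-1)        # (N, d), unit-norm rows
        k = F.normalize(self.k(z), dim=-1)        # (N, d), unit-norm rows
        n = z.size(0)

        # Unnormalized pairwise weight w_ij = 1 + q_i . k_j >= 0.
        #   numerator_i   = sum_j w_ij z_j = sum_j z_j + q_i (K^T Z)
        #   denominator_i = sum_j w_ij     = N + q_i (sum_j k_j)
        # Both terms avoid forming the N x N weight matrix explicitly.
        kz = k.t() @ z                                    # (d, d)
        num = z.sum(dim=0, keepdim=True) + q @ kz         # (N, d)
        den = n + q @ k.sum(dim=0)                        # (N,)
        agg = num / den.unsqueeze(-1)                     # (N, d) weighted mean over all instances

        # Diffusion step: retain part of the previous state, move toward the aggregation.
        return (1.0 - self.tau) * z + self.tau * agg


if __name__ == "__main__":
    layer = SimpleDiffusionLayer(dim=64)
    states = torch.randn(1000, 64)
    print(layer(states).shape)  # torch.Size([1000, 64])
```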
Related papers
- Sampling Foundational Transformer: A Theoretical Perspective [12.7600763629179]
We propose the Sampling Foundational Transformer (SFT), which can work on multiple data modalities.
SFT achieves competitive results on many benchmarks while being faster at inference than more specialized models.
arXiv Detail & Related papers (2024-08-11T16:53:09Z)
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem of learning with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive to domain shifts.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image Compression [63.56922682378755]
We focus on extending the spatial aggregation capability and propose a dynamic kernel-based transform coding scheme.
The proposed adaptive aggregation generates kernel offsets to capture valid information within a content-conditioned range and thereby aid the transform (a generic sketch of this idea appears after this list).
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z)
- PAC-Chernoff Bounds: Understanding Generalization in the Interpolation Regime [6.645111950779666]
This paper introduces a distribution-dependent PAC-Chernoff bound that exhibits perfect tightness for interpolators.
We present a unified theoretical framework revealing why certain interpolators generalize exceptionally well while others falter.
arXiv Detail & Related papers (2023-06-19T14:07:10Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretability, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
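
As referenced in the Dynamic Kernel-Based Adaptive Spatial Aggregation entry above, here is a hedged sketch of content-conditioned kernel offsets for spatial aggregation. It is a generic deformable-convolution style illustration of the idea, not that paper's actual transform-coding module; the module name, the offset predictor, and the 3x3 kernel size are assumptions.

```python
# Hedged sketch: content-conditioned offsets feeding a deformable aggregation step.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class AdaptiveAggregation(nn.Module):
    """Predicts per-position sampling offsets from the feature content, then aggregates
    features at those offset locations with a learned 3x3 kernel."""

    def __init__(self, channels: int):
        super().__init__()
        # 2 offsets (dy, dx) for each of the 9 kernel taps -> 18 offset channels.
        self.offset_pred = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(x)  # (B, 18, H, W), conditioned on the content
        return deform_conv2d(x, offsets, self.weight, padding=1)


if __name__ == "__main__":
    feat = torch.randn(2, 32, 16, 16)
    print(AdaptiveAggregation(32)(feat).shape)  # torch.Size([2, 32, 16, 16])
```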