Towards Better Laplacian Representation in Reinforcement Learning with
Generalized Graph Drawing
- URL: http://arxiv.org/abs/2107.05545v1
- Date: Mon, 12 Jul 2021 16:14:02 GMT
- Title: Towards Better Laplacian Representation in Reinforcement Learning with
Generalized Graph Drawing
- Authors: Kaixin Wang, Kuangqi Zhou, Qixin Zhang, Jie Shao, Bryan Hooi, Jiashi
Feng
- Abstract summary: The Laplacian representation provides a succinct and informative representation of states.
Recent works propose to minimize a spectral graph drawing objective, which however has infinitely many global minimizers other than the eigenvectors.
We show that our learned Laplacian representations lead to more exploratory options and better reward shaping.
- Score: 88.22538267731733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Laplacian representation has recently gained increasing attention in
reinforcement learning, as it provides a succinct and informative representation
for states, by taking the eigenvectors of the Laplacian matrix of the
state-transition graph as state embeddings. Such representation captures the
geometry of the underlying state space and is beneficial to RL tasks such as
option discovery and reward shaping. To approximate the Laplacian
representation in large (or even continuous) state spaces, recent works propose
to minimize a spectral graph drawing objective, which however has infinitely
many global minimizers other than the eigenvectors. As a result, their learned
Laplacian representation may differ from the ground truth. To solve this
problem, we reformulate the graph drawing objective into a generalized form and
derive a new learning objective, which is proved to have eigenvectors as its
unique global minimizer. It enables learning high-quality Laplacian
representations that faithfully approximate the ground truth. We validate this
via comprehensive experiments on a set of gridworld and continuous control
environments. Moreover, we show that our learned Laplacian representations lead
to more exploratory options and better reward shaping.
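As a concrete illustration of the abstract's central claim, the sketch below builds the Laplacian of a small gridworld's state-transition graph, takes its smallest eigenvectors as state embeddings, and checks numerically that the plain graph-drawing objective is invariant under rotations of the eigenvectors (hence has infinitely many minimizers), while a generalized objective with strictly decreasing weights is not. The 3x3 grid, the embedding dimension, and the specific weights d, d-1, ..., 1 are assumptions chosen for illustration, not taken from the paper's experiments.

```python
import numpy as np

# Tiny 3x3 gridworld: states are cells, edges connect cells reachable in one
# move (assumption: deterministic 4-neighbour moves, undirected transitions).
n = 3
num_states = n * n
A = np.zeros((num_states, num_states))
for r in range(n):
    for c in range(n):
        s = r * n + c
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < n and cc < n:
                t = rr * n + cc
                A[s, t] = A[t, s] = 1.0

# Graph Laplacian L = D - A; the Laplacian representation takes the d
# eigenvectors with the smallest eigenvalues as a d-dimensional embedding.
L = np.diag(A.sum(axis=1)) - A
d = 3
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
U = eigvecs[:, :d]                     # row s = embedding of state s

# The plain graph-drawing objective tr(U^T L U) is unchanged by any rotation
# of U, so the eigenvectors are not its unique minimizer.
theta = 0.7
R = np.eye(d)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
U_rot = U @ R
plain = lambda V: np.trace(V.T @ L @ V)
assert np.isclose(plain(U), plain(U_rot))

# A generalized objective with strictly decreasing weights breaks this
# rotational symmetry: rotating away from the eigenvectors increases it.
w = np.arange(d, 0, -1.0)              # weights d, d-1, ..., 1 (illustrative)
weighted = lambda V: sum(w[i] * V[:, i] @ L @ V[:, i] for i in range(d))
assert weighted(U_rot) > weighted(U)
```

The assertions pass because the trace is invariant under orthogonal transformations, whereas the decreasing weights make mixing a low-eigenvalue direction into a high-weight slot strictly more expensive.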
Related papers
- Disentangled Hyperbolic Representation Learning for Heterogeneous Graphs [29.065531121422204]
We propose $\text{Dis-H}^2\text{GCN}$, a Disentangled Hyperbolic Heterogeneous Graph Convolutional Network.
We evaluate our proposed $\text{Dis-H}^2\text{GCN}$ on five real-world heterogeneous graph datasets.
arXiv Detail & Related papers (2024-06-14T18:50:47Z)
- Graph Generation via Spectral Diffusion [51.60814773299899]
We present GRASP, a novel graph generative model based on 1) the spectral decomposition of the graph Laplacian matrix and 2) a diffusion process.
Specifically, we propose to use a denoising model to sample eigenvectors and eigenvalues from which we can reconstruct the graph Laplacian and adjacency matrix.
Our permutation invariant model can also handle node features by concatenating them to the eigenvectors of each node.
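The reconstruction step this summary mentions can be sketched minimally. Since the abstract does not detail the denoising model, a stand-in spectrum (the exact eigendecomposition of a known graph) is used here purely as an assumption to show the Laplacian-to-adjacency recovery:

```python
import numpy as np

# Stand-in "sampled" spectrum: eigendecompose a known 3-node star graph,
# in place of the denoising model that GRASP's abstract does not specify.
A_true = np.array([[0., 1., 1.],
                   [1., 0., 0.],
                   [1., 0., 0.]])
L_true = np.diag(A_true.sum(axis=1)) - A_true
eigvals, eigvecs = np.linalg.eigh(L_true)

# Reconstruct: L = U diag(lambda) U^T, then A = diag(L) - L.
L_rec = eigvecs @ np.diag(eigvals) @ eigvecs.T
A_rec = np.diag(np.diag(L_rec)) - L_rec
assert np.allclose(A_rec, A_true)
```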
arXiv Detail & Related papers (2024-02-29T09:26:46Z) - Proper Laplacian Representation Learning [15.508199129490068]
We introduce a theoretically sound objective and corresponding optimization algorithm for approximating the Laplacian representation.
We show that those results translate empirically into robust learning across multiple environments.
arXiv Detail & Related papers (2023-10-16T21:14:50Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - Geometric Graph Representation Learning via Maximizing Rate Reduction [73.6044873825311]
Learning node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
We propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner.
G2R maps nodes in distinct groups into different subspaces, while each subspace is compact and different subspaces are dispersed.
arXiv Detail & Related papers (2022-02-13T07:46:24Z) - A Self-supervised Mixed-curvature Graph Neural Network [76.3790248465522]
We present a novel Self-supervised Mixed-curvature Graph Neural Network (SelfMGNN).
We show that SelfMGNN captures the complicated graph structures in reality and outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-10T08:56:55Z) - Directed Graph Embeddings in Pseudo-Riemannian Manifolds [0.0]
We show that general directed graphs can be effectively represented by an embedding model that combines three components.
We demonstrate the representational capabilities of this method by applying it to the task of link prediction.
arXiv Detail & Related papers (2021-06-16T10:31:37Z) - Graph Convolution with Low-rank Learnable Local Filters [32.00396411583352]
This paper introduces a new type of graph convolution with learnable low-rank local filters.
It is provably more expressive than previous spectral graph convolution methods.
Its representation power on input graph data is theoretically proved, making use of the graph filter locality and the local graph regularization.
arXiv Detail & Related papers (2020-08-04T20:34:59Z) - Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution to the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.