Towards Better Laplacian Representation in Reinforcement Learning with
Generalized Graph Drawing
- URL: http://arxiv.org/abs/2107.05545v1
- Date: Mon, 12 Jul 2021 16:14:02 GMT
- Title: Towards Better Laplacian Representation in Reinforcement Learning with
Generalized Graph Drawing
- Authors: Kaixin Wang, Kuangqi Zhou, Qixin Zhang, Jie Shao, Bryan Hooi, Jiashi
Feng
- Abstract summary: The Laplacian representation provides succinct and informative representation for states.
Recent works propose to minimize a spectral graph drawing objective, which however has infinitely many global minimizers other than the eigenvectors.
We show that our learned Laplacian representations lead to more exploratory options and better reward shaping.
- Score: 88.22538267731733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Laplacian representation has recently gained increasing attention
in reinforcement learning, as it provides a succinct and informative
representation of states by taking the eigenvectors of the Laplacian matrix of the
state-transition graph as state embeddings. Such representation captures the
geometry of the underlying state space and is beneficial to RL tasks such as
option discovery and reward shaping. To approximate the Laplacian
representation in large (or even continuous) state spaces, recent works propose
to minimize a spectral graph drawing objective, which however has infinitely
many global minimizers other than the eigenvectors. As a result, their learned
Laplacian representation may differ from the ground truth. To solve this
problem, we reformulate the graph drawing objective into a generalized form and
derive a new learning objective, which is proved to have eigenvectors as its
unique global minimizer. It enables learning high-quality Laplacian
representations that faithfully approximate the ground truth. We validate this
via comprehensive experiments on a set of gridworld and continuous control
environments. Moreover, we show that our learned Laplacian representations lead
to more exploratory options and better reward shaping.
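To make the core idea concrete, here is a minimal sketch (not the authors' code; the environment and embedding dimension are illustrative assumptions) of the exact Laplacian representation for a tiny gridworld: build the state-transition graph, form its graph Laplacian, and take the eigenvectors with the smallest eigenvalues as state embeddings. The learning objectives discussed in the paper approximate these eigenvectors in large or continuous state spaces, where explicit eigendecomposition is infeasible.

```python
import numpy as np

# Illustrative 1-D gridworld: 5 states on a line, actions move left/right,
# so the state-transition graph is an undirected chain 0-1-2-3-4.
n_states = 5
A = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    A[s, s + 1] = A[s + 1, s] = 1.0  # adjacency of the transition graph

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # (unnormalized) graph Laplacian

# eigh returns eigenvalues in ascending order; the eigenvectors with the
# smallest eigenvalues capture the coarse geometry of the state space.
eigvals, eigvecs = np.linalg.eigh(L)

d = 3                 # embedding dimension (an assumption for illustration)
phi = eigvecs[:, :d]  # row s is the d-dimensional embedding of state s

print("smallest eigenvalues:", np.round(eigvals[:d], 4))
print("state embeddings:\n", np.round(phi, 3))
```

For a connected graph the smallest eigenvalue is 0 with a constant eigenvector, and the second eigenvector (the Fiedler vector) orders states along the chain, which is what makes these embeddings useful for option discovery and distance-based reward shaping.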
Related papers
- Towards Stable, Globally Expressive Graph Representations with Laplacian Eigenvectors [29.055130767451036]
We propose a novel method exploiting Laplacian eigenvectors to generate stable and globally expressive graph representations.
Our method deals with numerically close eigenvalues in a smooth fashion, ensuring its better robustness against perturbations.
arXiv Detail & Related papers (2024-10-13T06:02:25Z)
- Generalization from Starvation: Hints of Universality in LLM Knowledge Graph Learning [8.025148264640967]
We investigate how neural networks represent knowledge during graph learning.
We find hints of universality, where equivalent representations are learned across a range of model sizes.
We show that these attractor representations optimize generalization to unseen examples.
arXiv Detail & Related papers (2024-10-10T16:23:42Z)
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- Graph Generation via Spectral Diffusion [51.60814773299899]
We present GRASP, a novel graph generative model based on 1) the spectral decomposition of the graph Laplacian matrix and 2) a diffusion process.
Specifically, we propose to use a denoising model to sample eigenvectors and eigenvalues from which we can reconstruct the graph Laplacian and adjacency matrix.
Our permutation invariant model can also handle node features by concatenating them to the eigenvectors of each node.
arXiv Detail & Related papers (2024-02-29T09:26:46Z)
- Proper Laplacian Representation Learning [15.508199129490068]
We introduce a theoretically sound objective and corresponding optimization algorithm for approximating the Laplacian representation.
We show that those results translate empirically into robust learning across multiple environments.
arXiv Detail & Related papers (2023-10-16T21:14:50Z)
- Geometric Graph Representation Learning via Maximizing Rate Reduction [73.6044873825311]
Learning node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
We propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner.
G2R maps nodes in distinct groups into different subspaces, while each subspace is compact and different subspaces are dispersed.
arXiv Detail & Related papers (2022-02-13T07:46:24Z)
- A Self-supervised Mixed-curvature Graph Neural Network [76.3790248465522]
We present a novel Self-supervised Mixed-curvature Graph Neural Network (SelfMGNN).
We show that SelfMGNN captures the complicated graph structures in reality and outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-10T08:56:55Z)
- Directed Graph Embeddings in Pseudo-Riemannian Manifolds [0.0]
We show that general directed graphs can be effectively represented by an embedding model that combines three components.
We demonstrate the representational capabilities of this method by applying it to the task of link prediction.
arXiv Detail & Related papers (2021-06-16T10:31:37Z)
- Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution into the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.