Graph Transformer GANs for Graph-Constrained House Generation
- URL: http://arxiv.org/abs/2303.08225v1
- Date: Tue, 14 Mar 2023 20:35:45 GMT
- Title: Graph Transformer GANs for Graph-Constrained House Generation
- Authors: Hao Tang, Zhenyu Zhang, Humphrey Shi, Bo Li, Ling Shao, Nicu Sebe,
Radu Timofte, Luc Van Gool
- Abstract summary: We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
GTGAN learns these relations in an end-to-end fashion for the challenging graph-constrained house generation task.
- Score: 223.739067413952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel graph Transformer generative adversarial network (GTGAN)
to learn effective graph node relations in an end-to-end fashion for the
challenging graph-constrained house generation task. The proposed
graph-Transformer-based generator includes a novel graph Transformer encoder
that combines graph convolutions and self-attentions in a Transformer to model
both local and global interactions across connected and non-connected graph
nodes. Specifically, the proposed connected node attention (CNA) and
non-connected node attention (NNA) aim to capture the global relations across
connected nodes and non-connected nodes in the input graph, respectively. The
proposed graph modeling block (GMB) aims to exploit local vertex interactions
based on a house layout topology. Moreover, we propose a new node
classification-based discriminator to preserve the high-level semantic and
discriminative node features for different house components. Finally, we
propose a novel graph-based cycle-consistency loss that aims at maintaining the
relative spatial relationships between ground truth and predicted graphs.
Experiments on two challenging graph-constrained house generation tasks (i.e.,
house layout and roof generation) with two public datasets demonstrate the
effectiveness of GTGAN in terms of objective quantitative scores and subjective
visual realism. New state-of-the-art results are established by large margins
on both tasks.
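To make the encoder design concrete, below is a minimal, illustrative PyTorch sketch (not the authors' released implementation): it approximates the connected node attention (CNA) and non-connected node attention (NNA) with adjacency-masked multi-head self-attention, stands in for the graph modeling block (GMB) with a basic degree-normalized graph convolution, and adds a simple pairwise-offset loss in the spirit of the graph-based cycle-consistency objective. All class names, shapes, and hyperparameters here are assumptions for illustration.

```python
# Hypothetical sketch (not the authors' code): adjacency-masked attention over
# connected vs. non-connected node pairs, plus a pairwise-offset consistency loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedNodeAttention(nn.Module):
    """Self-attention restricted by a boolean node-pair mask (e.g., adjacency)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, pair_mask: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) node features; pair_mask: (B, N, N), True where attention is allowed.
        # nn.MultiheadAttention interprets True in attn_mask as "do NOT attend", so invert.
        attn_mask = ~pair_mask
        attn_mask = attn_mask.repeat_interleave(self.attn.num_heads, dim=0)  # (B*heads, N, N)
        out, _ = self.attn(x, x, x, attn_mask=attn_mask)
        return out


class GraphTransformerEncoderBlock(nn.Module):
    """One encoder block: attention over connected node pairs (CNA-like) and
    non-connected node pairs (NNA-like) for global relations, plus a basic
    degree-normalized graph convolution standing in for the local GMB step."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.cna = MaskedNodeAttention(dim, num_heads)
        self.nna = MaskedNodeAttention(dim, num_heads)
        self.local = nn.Linear(dim, dim)  # weights of a simple graph convolution
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim); adj: (B, N, N) binary adjacency of the input layout graph.
        connected = adj.bool()
        non_connected = ~connected
        # Keep the diagonal attendable so no node's attention row is fully masked.
        eye = torch.eye(adj.size(1), dtype=torch.bool, device=adj.device).unsqueeze(0)
        h = x + self.cna(x, connected | eye) + self.nna(x, non_connected | eye)
        # Local message passing over the self-looped, row-normalized adjacency.
        a_hat = adj + eye.float()
        a_hat = a_hat / a_hat.sum(dim=-1, keepdim=True)
        h = h + F.relu(self.local(a_hat @ h))
        return self.norm(h)


def graph_cycle_consistency_loss(pred_pos: torch.Tensor, gt_pos: torch.Tensor) -> torch.Tensor:
    """Penalize differences in relative spatial relationships (pairwise offsets)
    between predicted and ground-truth node positions of shape (B, N, 2)."""
    pred_rel = pred_pos.unsqueeze(2) - pred_pos.unsqueeze(1)  # (B, N, N, 2)
    gt_rel = gt_pos.unsqueeze(2) - gt_pos.unsqueeze(1)
    return F.l1_loss(pred_rel, gt_rel)


if __name__ == "__main__":
    B, N, D = 2, 8, 64
    x = torch.randn(B, N, D)
    adj = (torch.rand(B, N, N) > 0.7).float()
    adj = ((adj + adj.transpose(1, 2)) > 0).float()  # symmetrize
    block = GraphTransformerEncoderBlock(D)
    print(block(x, adj).shape)  # torch.Size([2, 8, 64])
```

The design choice mirrored here is that connected and non-connected node pairs receive separate attention paths, so global relations can be modeled without discarding the layout topology encoded in the input graph.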
Related papers
- Improving Graph Neural Networks by Learning Continuous Edge Directions [0.0]
Graph Neural Networks (GNNs) traditionally employ a message-passing mechanism that resembles diffusion over undirected graphs.
Our key insight is to assign fuzzy edge directions to the edges of a graph so that features can preferentially flow in one direction between nodes.
We propose a general framework, called Continuous Edge Direction (CoED) GNN, for learning on graphs with fuzzy edges.
arXiv Detail & Related papers (2024-10-18T01:34:35Z) - Graph Transformer GANs with Graph Masked Modeling for Architectural
Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z) - SignGT: Signed Attention-based Graph Transformer for Graph
Representation Learning [15.248591535696146]
We propose a Signed Attention-based Graph Transformer (SignGT) to adaptively capture various frequency information from the graphs.
Specifically, SignGT develops a new signed self-attention mechanism (SignSA) that produces signed attention values according to the semantic relevance of node pairs.
arXiv Detail & Related papers (2023-10-17T06:42:11Z) - Self-supervised Consensus Representation Learning for Attributed Graph [15.729417511103602]
We introduce a self-supervised learning mechanism for graph representation learning.
We propose a novel Self-supervised Consensus Representation Learning (SCRL) framework.
Our proposed SCRL method treats the graph from two perspectives: a topology graph and a feature graph.
arXiv Detail & Related papers (2021-08-10T07:53:09Z) - Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised
Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which show that our model can effectively improve performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from an implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Spectral Embedding of Graph Networks [76.27138343125985]
We introduce an unsupervised graph embedding that trades off local node similarity and connectivity, and global structure.
The embedding is based on a generalized graph Laplacian, whose eigenvectors compactly capture both network structure and neighborhood proximity in a single representation.
arXiv Detail & Related papers (2020-09-30T04:59:10Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.