TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network
- URL: http://arxiv.org/abs/2007.02278v1
- Date: Sun, 5 Jul 2020 10:06:06 GMT
- Title: TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network
- Authors: Hao Xu and Ka Hei Hui and Chi-Wing Fu and Hao Zhang
- Abstract summary: We introduce the first neural optimization framework to solve a classical instance of the tiling problem.
Namely, we seek a non-periodic tiling of an arbitrary 2D shape using one or more types of tiles.
We build a graph convolutional neural network, coined TilinGNN, to progressively propagate and aggregate features over graph edges.
- Score: 53.64132784328536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the first neural optimization framework to solve a classical
instance of the tiling problem. Namely, we seek a non-periodic tiling of an
arbitrary 2D shape using one or more types of tiles: the tiles maximally fill
the shape's interior without overlaps or holes. To start, we reformulate tiling
as a graph problem by modeling candidate tile locations in the target shape as
graph nodes and connectivity between tile locations as edges. Further, we build
a graph convolutional neural network, coined TilinGNN, to progressively
propagate and aggregate features over graph edges and predict tile placements.
TilinGNN is trained by maximizing the tiling coverage on target shapes, while
avoiding overlaps and holes between the tiles. Importantly, our network is
self-supervised, as we articulate these criteria as loss terms defined on the
network outputs, without the need of ground-truth tiling solutions. After
training, the runtime of TilinGNN is roughly linear to the number of candidate
tile locations, significantly outperforming traditional combinatorial search.
We conducted various experiments on a variety of shapes to showcase the speed
and versatility of TilinGNN. We also present comparisons to alternative methods
and manual solutions, robustness analysis, and ablation studies to demonstrate
the quality of our approach.
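To make the self-supervised objective concrete, here is a minimal sketch of how the coverage and overlap terms could be written over the network's per-candidate outputs. It is an illustration under assumed inputs, not the authors' code: `tiling_losses`, `tile_area`, and `overlap_pairs` are hypothetical names, and the paper's hole-avoidance term is omitted.

```python
import torch

def tiling_losses(p, tile_area, shape_area, overlap_pairs):
    """Sketch of self-supervised tiling losses over per-candidate outputs.

    p             -- (N,) predicted placement probability per candidate tile
    tile_area     -- (N,) area of each candidate tile
    shape_area    -- scalar area of the target shape
    overlap_pairs -- (M, 2) index pairs of mutually overlapping candidates
    """
    # Coverage: push the expected covered area toward the full shape area.
    coverage_loss = 1.0 - (p * tile_area).sum() / shape_area

    # Overlap: penalize selecting two overlapping candidates together.
    pi = p[overlap_pairs[:, 0]]
    pj = p[overlap_pairs[:, 1]]
    overlap_loss = (pi * pj).mean()

    return coverage_loss, overlap_loss

# Toy usage with random stand-ins for TilinGNN's node outputs.
p = torch.sigmoid(torch.randn(100))
cov, ovl = tiling_losses(p, torch.ones(100), 100.0, torch.randint(0, 100, (50, 2)))
loss = cov + ovl
```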
Related papers
- Deep Loss Convexification for Learning Iterative Models [11.36644967267829]
Iterative methods such as iterative closest point (ICP) for point cloud registration often suffer from poor local optima.
We propose learning to form a convex landscape around each ground truth.
arXiv Detail & Related papers (2024-11-16T01:13:04Z)
- A GREAT Architecture for Edge-Based Graph Problems Like TSP [8.922883855120416]
Graph neural networks (GNNs) are not well suited to operating on dense graphs, such as those arising in routing problems.
We propose a novel edge-based neural model, the Graph Edge Attention Network (GREAT).
We show GREAT can produce sparse graphs while keeping most of the optimal edges.
arXiv Detail & Related papers (2024-08-29T17:07:43Z)
- CoRe-GD: A Hierarchical Framework for Scalable Graph Visualization with GNNs [20.706469085872516]
We introduce a scalable Graph Neural Network (GNN) based Graph Drawing framework with sub-quadratic runtime that can learn to optimize stress.
Inspired by classical stress optimization techniques and force-directed layout algorithms, we create a coarsening hierarchy for the input graph.
To enhance information propagation within the network, we propose a novel positional rewiring technique.
arXiv Detail & Related papers (2024-02-09T10:50:45Z)
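For reference, the stress that the CoRe-GD entry above learns to optimize is the classical graph-drawing objective, the sum over node pairs of w_ij * (||x_i - x_j|| - d_ij)^2. A small self-contained sketch with illustrative names:

```python
import torch

def stress(pos, pairs, d):
    """Graph-drawing stress: sum over pairs of w_ij * (||x_i - x_j|| - d_ij)^2."""
    diff = pos[pairs[:, 0]] - pos[pairs[:, 1]]  # (M, 2) coordinate differences
    dist = diff.norm(dim=-1)                    # realized pairwise distances
    w = d.pow(-2.0)                             # common weighting: w_ij = 1 / d_ij^2
    return (w * (dist - d).pow(2)).sum()
```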
- Learning Self-Prior for Mesh Inpainting Using Self-Supervised Graph Convolutional Networks [4.424836140281846]
We present a self-prior-based mesh inpainting framework that requires only an incomplete mesh as input.
Our method maintains the polygonal mesh format throughout the inpainting process.
We demonstrate that our method outperforms traditional dataset-independent approaches.
arXiv Detail & Related papers (2023-05-01T02:51:38Z)
- Bring Your Own View: Graph Neural Networks for Link Prediction with Personalized Subgraph Selection [57.34881616131377]
We introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges.
PS2 is instantiated as a bi-level optimization problem that can be solved efficiently in a differentiable manner.
We suggest a new angle on GNNLP training: first identify the optimal subgraphs for edges, and then train the inference model on the sampled subgraphs.
arXiv Detail & Related papers (2022-12-23T17:30:19Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
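The entry above describes dropping edges adaptively at training time. The paper's precise dropping criterion is not reproduced here, but a hedged sketch of the general pattern, with an illustrative degree-based keep probability, looks like this:

```python
import torch

def drop_edges(edge_index, keep_prob):
    """Stochastically drop edges for one training step; keep_prob is per-edge."""
    mask = torch.rand(edge_index.size(1)) < keep_prob
    return edge_index[:, mask]

# Illustrative topology score: favour keeping edges between low-degree nodes.
edge_index = torch.randint(0, 10, (2, 40))  # toy COO edge list
deg = torch.bincount(edge_index.flatten(), minlength=10).float()
score = 1.0 / (deg[edge_index[0]] + deg[edge_index[1]])
keep_prob = 0.5 + 0.5 * score / score.max()
train_edges = drop_edges(edge_index, keep_prob)
```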
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem, thereby strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
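The TWP module above preserves weights that matter for previously learned tasks. Assuming per-parameter importance scores are already available (their topology-aware computation is the paper's contribution and is not reproduced here), the preservation penalty itself is a familiar EWC-style quadratic term:

```python
import torch

def weight_preserving_penalty(model, old_params, importance, lam=1.0):
    """EWC-style quadratic penalty; importance scores assumed precomputed."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        # Pull each parameter toward its value after the previous task,
        # weighted by how important it was for that task.
        penalty = penalty + (importance[name] * (p - old_params[name]).pow(2)).sum()
    return lam * penalty
```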
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh, our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
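As a toy illustration of the coarse-to-fine idea with weights shared across local patches, the sketch below inserts edge midpoints and displaces them with one MLP applied identically everywhere; the features and architecture are placeholders, not the paper's.

```python
import torch
import torch.nn as nn

class EdgeMidpointNet(nn.Module):
    """Displace edge midpoints with an MLP shared across all local patches."""

    def __init__(self, feat_dim=6, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, verts, edges):
        # verts: (V, 3) vertex positions; edges: (E, 2) vertex index pairs.
        v0, v1 = verts[edges[:, 0]], verts[edges[:, 1]]
        midpoints = 0.5 * (v0 + v1)
        feats = torch.cat([v0, v1], dim=-1)  # placeholder local features
        # Shared weights mean no dependence on a specific mesh, genus, or category.
        return midpoints + self.mlp(feats)
```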