GFPack++: Improving 2D Irregular Packing by Learning Gradient Field with Attention
- URL: http://arxiv.org/abs/2406.07579v1
- Date: Sun, 9 Jun 2024 06:44:08 GMT
- Title: GFPack++: Improving 2D Irregular Packing by Learning Gradient Field with Attention
- Authors: Tianyang Xue, Lin Lu, Yang Liu, Mingdong Wu, Hao Dong, Yanbin Zhang, Renmin Han, Baoquan Chen
- Abstract summary: 2D irregular packing is a classic optimization problem with various applications, such as material utilization and texture atlas generation.
Conventional numerical methods suffer from slow convergence and high computational cost.
Existing learning-based methods, such as the score-based diffusion model, also have limitations, such as no rotation support, frequent collisions, poor adaptability to arbitrary boundaries, and slow inference.
We propose GFPack++, an attention-based gradient field learning approach that addresses this challenge.
- Score: 29.836816853278886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 2D irregular packing is a classic combinatorial optimization problem with various applications, such as material utilization and texture atlas generation. This NP-hard problem requires efficient algorithms to optimize space utilization. Conventional numerical methods suffer from slow convergence and high computational cost. Existing learning-based methods, such as the score-based diffusion model, also have limitations, such as no rotation support, frequent collisions, poor adaptability to arbitrary boundaries, and slow inference. The difficulty of learning from teacher packings lies in capturing the complex geometric relationships among packing examples, which include the spatial (position, orientation) relationships of objects, their geometric features, and container boundary conditions. Representing these relationships in latent space is challenging. We propose GFPack++, an attention-based gradient field learning approach that addresses this challenge. It consists of two pivotal strategies: \emph{attention-based geometry encoding} for effective feature encoding and \emph{attention-based relation encoding} for learning complex relationships. We investigate the utilization distribution between the teacher and inference data and design a weighting function to prioritize tighter teacher data during training, enhancing learning effectiveness. Our diffusion model supports continuous rotation and outperforms existing methods on various datasets. We achieve higher space utilization than several widely used baselines, inference an order of magnitude faster than the previous diffusion-based method, and promising generalization to arbitrary boundaries. We plan to release our source code and datasets to support further research in this direction.
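The abstract mentions a weighting function that prioritizes tighter teacher data during training but does not specify its form. A minimal sketch, assuming a softmax over teacher utilization with a hypothetical temperature `beta` (both the function shape and the parameter name are illustrative assumptions, not the paper's actual design):

```python
import math

def teacher_weights(utilizations, beta=5.0):
    """Hypothetical weighting: give tighter (higher-utilization) teacher
    packings more influence during training via a softmax over utilization.
    `beta` controls how strongly tight packings are prioritized."""
    scaled = [math.exp(beta * u) for u in utilizations]
    total = sum(scaled)
    return [s / total for s in scaled]

# Tighter teacher packings receive larger training weights.
weights = teacher_weights([0.70, 0.85, 0.92])
```

Any monotone increasing reweighting would serve the same stated goal; the softmax form simply keeps the weights normalized.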
Related papers
- Geometrically Inspired Kernel Machines for Collaborative Learning Beyond Gradient Descent [36.59087823764832]
This paper develops a novel mathematical framework for collaborative learning by means of geometrically inspired kernel machines.
For classification problems, this approach allows us to learn bounded geometric structures around given data points.
arXiv Detail & Related papers (2024-07-05T08:20:27Z)
- Learning Gradient Fields for Scalable and Generalizable Irregular Packing [28.814796920026172]
The packing problem, also known as cutting or nesting, has diverse applications in logistics, manufacturing, layout design, and atlas generation.
Recent advances in machine learning, particularly reinforcement learning, have shown promise in addressing the packing problem.
In this work, we delve deeper into a novel machine learning-based approach that formulates the packing problem as conditional generative modeling.
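Formulating packing as conditional generative modeling typically means moving object poses along a learned gradient field. A minimal Langevin-style sketch of one sampling step, where `grad_fn` is a stand-in for the trained network (the update rule and parameter names here are illustrative assumptions):

```python
import random

def gradient_step(poses, grad_fn, step=0.1, noise=0.01):
    """One Langevin-style update of object poses (x, y, theta) along a
    learned gradient field. `grad_fn` maps the full list of poses to one
    gradient per object; here it can be any callable."""
    grads = grad_fn(poses)
    return [
        tuple(p + step * g + noise * random.gauss(0.0, 1.0)
              for p, g in zip(pose, grad))
        for pose, grad in zip(poses, grads)
    ]

# Toy gradient field that pulls every object toward the origin.
to_origin = lambda poses: [tuple(-p for p in pose) for pose in poses]
poses = gradient_step([(1.0, 1.0, 0.5), (-2.0, 0.5, -0.2)], to_origin, noise=0.0)
```

In a real sampler this step would be iterated with a decaying step size, with the learned field encoding collision avoidance and boundary constraints.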
arXiv Detail & Related papers (2023-10-18T15:52:55Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
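The summary does not detail that regularization process; a standard trick in the same spirit, shown here only as an illustration, injects noise whenever the gradient nearly vanishes so that descent can escape bad stationary points:

```python
import random

def perturbed_gd(grad_f, x0, lr=0.05, eps=1e-3, noise=0.1, steps=500):
    """Illustrative perturbed gradient descent (not the paper's exact
    scheme): when the gradient nearly vanishes, add Gaussian noise so the
    iterate does not stall at a bad stationary point."""
    x = x0
    for _ in range(steps):
        g = grad_f(x)
        if abs(g) < eps:  # near a flat or stationary region
            g += random.gauss(0.0, noise)
        x -= lr * g
    return x

# f(x) = x**4 - 2*x**2 has a bad stationary point at x = 0 and minima at +/-1.
random.seed(0)
x_star = perturbed_gd(lambda x: 4 * x**3 - 4 * x, x0=0.0)
```

Plain gradient descent started at `x0 = 0.0` would never move here, since the gradient is exactly zero there; the noise injection is what breaks the symmetry.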
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Learning based 2D Irregular Shape Packing [29.044043493942013]
2D irregular shape packing is a necessary step to arrange UV patches of a 3D model within a texture atlas for memory-efficient appearance rendering in computer graphics.
We introduce a learning-assisted 2D irregular shape packing method that achieves a high packing quality with minimal requirements from the input.
In order to efficiently deal with large problem instances with hundreds of patches, we train deep neural policies to predict nearly rectangular patch subsets.
arXiv Detail & Related papers (2023-09-19T05:21:52Z)
- Learning representations that are closed-form Monge mapping optimal with application to domain adaptation [24.258758784011572]
Optimal transport (OT) is a powerful tool used to compare and align probability measures following the least effort principle.
Despite its widespread use in machine learning (ML), the OT problem still bears a heavy computational burden.
We propose to tackle these challenges using representation learning.
arXiv Detail & Related papers (2023-05-12T14:14:39Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- A Domain-Oblivious Approach for Learning Concise Representations of Filtered Topological Spaces [7.717214217542406]
We propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams.
This framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process.
Our proposed method is directly applicable to various datasets without the need of retraining the model.
arXiv Detail & Related papers (2021-05-25T20:44:28Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- Spatial-spectral Hyperspectral Image Classification via Multiple Random Anchor Graphs Ensemble Learning [88.60285937702304]
This paper proposes a novel spatial-spectral HSI classification method via multiple random anchor graphs ensemble learning (RAGE).
Firstly, the local binary pattern is adopted to extract the more descriptive features on each selected band, which preserves local structures and subtle changes of a region.
Secondly, the adaptive neighbors assignment is introduced in the construction of anchor graph, to reduce the computational complexity.
arXiv Detail & Related papers (2021-03-25T09:31:41Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
- Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning [73.82875010696849]
Machine learning algorithms are deployed at the network edge for training artificial intelligence (AI) models.
This paper focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation.
arXiv Detail & Related papers (2020-03-10T05:52:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.