Learning Deformable Body Interactions With Adaptive Spatial Tokenization
- URL: http://arxiv.org/abs/2507.13707v1
- Date: Fri, 18 Jul 2025 07:27:35 GMT
- Title: Learning Deformable Body Interactions With Adaptive Spatial Tokenization
- Authors: Hao Wang, Yu Liu, Daniel Biggs, Haoru Wang, Jiandong Yu, Ping Huang,
- Abstract summary: Simulating interactions between deformable bodies is vital in fields like material science, mechanical design, and robotics. To model interactions between objects, pairwise global edges have to be created dynamically. We propose an Adaptive Spatial Tokenization (AST) method for efficient representation of physical states.
- Score: 7.74274240680049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulating interactions between deformable bodies is vital in fields like material science, mechanical design, and robotics. While learning-based methods with Graph Neural Networks (GNNs) are effective at solving complex physical systems, they encounter scalability issues when modeling deformable body interactions. To model interactions between objects, pairwise global edges have to be created dynamically, which is computationally intensive and impractical for large-scale meshes. To overcome these challenges, drawing on insights from geometric representations, we propose an Adaptive Spatial Tokenization (AST) method for efficient representation of physical states. By dividing the simulation space into a grid of cells and mapping unstructured meshes onto this structured grid, our approach naturally groups adjacent mesh nodes. We then apply a cross-attention module to map the sparse cells into a compact, fixed-length embedding, serving as tokens for the entire physical state. Self-attention modules are employed to predict the next state over these tokens in latent space. This framework leverages the efficiency of tokenization and the expressive power of attention mechanisms to achieve accurate and scalable simulation results. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches in modeling deformable body interactions. Notably, it remains effective on large-scale simulations with meshes exceeding 100,000 nodes, where existing methods are hindered by computational limitations. Additionally, we contribute a novel large-scale dataset encompassing a wide range of deformable body interactions to support future research in this area.
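To make the pipeline in the abstract concrete, the sketch below groups mesh nodes into occupied grid cells, compresses the sparse cells into a fixed-length token sequence with cross-attention, and evolves those tokens with self-attention. This is a minimal sketch under simplifying assumptions (a uniform cell size, mean-pooled cell features, standard PyTorch attention modules); all names, dimensions, and hyperparameters are illustrative placeholders, not the paper's actual architecture, and decoding tokens back to mesh nodes is omitted.

```python
# Minimal Adaptive Spatial Tokenization (AST) sketch: grid-cell grouping,
# cross-attention tokenization, self-attention latent dynamics.
import torch
import torch.nn as nn


def mesh_to_cells(node_pos, node_feat, cell_size=0.1):
    """Group mesh nodes into occupied grid cells and mean-pool their features."""
    cell_idx = torch.floor(node_pos / cell_size).long()           # (N, 3) integer cell coordinates
    uniq, inverse = torch.unique(cell_idx, dim=0, return_inverse=True)
    pooled = torch.zeros(uniq.size(0), node_feat.size(1))
    pooled.index_add_(0, inverse, node_feat)                       # sum node features per cell
    counts = torch.bincount(inverse, minlength=uniq.size(0)).clamp(min=1)
    return pooled / counts.unsqueeze(1), uniq                      # (C, F) cell features, cell coords


class ASTSimulator(nn.Module):
    """Cross-attention tokenizer plus self-attention latent dynamics (sketch only)."""

    def __init__(self, feat_dim, d_model=128, n_tokens=64, n_heads=4, n_layers=4):
        super().__init__()
        self.cell_proj = nn.Linear(feat_dim, d_model)
        self.queries = nn.Parameter(torch.randn(n_tokens, d_model))   # learned token queries
        self.tokenize = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.dynamics = nn.TransformerEncoder(layer, n_layers)

    def forward(self, cell_feat):
        cells = self.cell_proj(cell_feat).unsqueeze(0)                # (1, C, d_model)
        q = self.queries.unsqueeze(0)                                 # (1, T, d_model)
        tokens, _ = self.tokenize(q, cells, cells)                    # cross-attention: cells -> fixed tokens
        return self.dynamics(tokens)                                  # predicted next latent state (1, T, d_model)


# Toy usage: 100k random nodes collapse to a few thousand occupied cells,
# then to a fixed-length token sequence regardless of mesh size.
pos = torch.rand(100_000, 3)
feat = torch.randn(100_000, 16)
cell_feat, _ = mesh_to_cells(pos, feat, cell_size=0.05)
next_latent = ASTSimulator(feat_dim=16)(cell_feat)                    # shape (1, 64, 128)
```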
Related papers
- Latent Representation Learning of Multi-scale Thermophysics: Application to Dynamics in Shocked Porous Energetic Material [0.05057680722486273]
We propose an alternative meta-learning approach motivated by the idea of tokenization in natural language processing. We show that one can learn a reduced representation of the micro-scale physics to accelerate the meso-scale learning process. The proposed approach accelerates the development of closure models by leveraging inexpensive micro-scale simulations and fast training over a small meso-scale dataset.
arXiv Detail & Related papers (2025-06-15T23:28:33Z)
- Neural Network Reprogrammability: A Unified Theme on Model Reprogramming, Prompt Tuning, and Prompt Instruction [55.914891182214475]
We introduce neural network reprogrammability as a unifying framework for model adaptation. We present a taxonomy that categorizes such information manipulation approaches across four key dimensions. We also analyze remaining technical challenges and ethical considerations.
arXiv Detail & Related papers (2025-06-05T05:42:27Z)
- MIXPINN: Mixed-Material Simulations by Physics-Informed Neural Network [1.275845610262865]
Traditional Finite Element Method (FEM)-based simulations are computationally expensive and impractical for real-time scenarios. We introduce MIXPINN, a physics-informed Graph Neural Network (GNN) framework for mixed-material simulations. By leveraging a graph-based representation of biomechanical structures, MIXPINN learns high-fidelity deformations from FEM-generated data and achieves real-time inference with sub-millimeter accuracy.
arXiv Detail & Related papers (2025-03-17T12:48:29Z)
- GausSim: Foreseeing Reality by Gaussian Simulator for Elastic Objects [55.02281855589641]
GausSim is a novel neural network-based simulator designed to capture the dynamic behaviors of real-world elastic objects represented through Gaussian kernels. We leverage continuum mechanics and treat each kernel as a Center of Mass System (CMS) that represents a continuous piece of matter. In addition, GausSim incorporates explicit physics constraints, such as mass and momentum conservation, ensuring interpretable results and robust, physically plausible simulations.
arXiv Detail & Related papers (2024-12-23T18:58:17Z)
- Integrating Physics and Topology in Neural Networks for Learning Rigid Body Dynamics [6.675805308519987]
We introduce a novel framework for modeling rigid body dynamics and learning collision interactions. We propose a physics-informed message-passing neural architecture, embedding physical laws directly in the model. This work addresses the challenge of multi-entity dynamic interactions, with applications spanning diverse scientific and engineering domains.
arXiv Detail & Related papers (2024-11-18T11:03:15Z)
- Physics-Encoded Graph Neural Networks for Deformation Prediction under Contact [87.69278096528156]
In robotics, it's crucial to understand object deformation during tactile interactions.
We introduce a method using Physics-Encoded Graph Neural Networks (GNNs) for such predictions.
We've made our code and dataset public to advance research in robotic simulation and grasping.
arXiv Detail & Related papers (2024-02-05T19:21:52Z)
- Learning rigid dynamics with face interaction graph networks [11.029321427540829]
We introduce the Face Interaction Graph Network (FIGNet) which computes interactions between mesh faces, rather than nodes.
FIGNet is around 4x more accurate in simulating complex shape interactions, while also being 8x more computationally efficient on sparse, rigid meshes.
It can learn frictional dynamics directly from real-world data, and can be more accurate than analytical solvers given modest amounts of training data.
arXiv Detail & Related papers (2022-12-07T11:22:42Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Convolutions for Spatial Interaction Modeling [9.408751013132624]
We consider the problem of spatial interaction modeling in the context of predicting the motion of actors around autonomous vehicles.
We revisit convolutions and show that they can demonstrate comparable performance to graph networks in modeling spatial interactions with lower latency.
arXiv Detail & Related papers (2021-04-15T00:41:30Z)
- S2RMs: Spatially Structured Recurrent Modules [105.0377129434636]
We take a step towards exploiting dynamic structure, with models capable of simultaneously exploiting both modular and temporal structures.
We find our models to be robust to the number of available views and better capable of generalization to novel tasks without additional training.
arXiv Detail & Related papers (2020-07-13T17:44:30Z)
- Learning Physical Constraints with Neural Projections [16.09436906471513]
We propose a new family of neural networks to predict the behaviors of physical systems by learning their underpinning constraints.
A neural projection operator lies at the heart of our approach, composed of a lightweight network with an embedded recursion architecture.
We demonstrate the efficacy of our approach by learning a set of challenging physical systems, all in a unified and simple fashion.
arXiv Detail & Related papers (2020-06-23T04:19:04Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
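For the optimal-transport embedding in the last entry above, a minimal sketch is shown below: each element of an input set is softly matched to a small trainable reference via an entropically regularized transport plan (a few Sinkhorn iterations) and then pooled per reference slot, yielding a fixed-size representation for a variable-size set. The class name, cost normalization, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative optimal-transport pooling against a trainable reference set,
# solved with a few Sinkhorn iterations (hypothetical names and settings).
import torch
import torch.nn as nn


class OTPooling(nn.Module):
    def __init__(self, feat_dim, n_refs=16, eps=0.1, n_iters=20):
        super().__init__()
        self.reference = nn.Parameter(torch.randn(n_refs, feat_dim))   # trainable reference set
        self.eps, self.n_iters = eps, n_iters

    def forward(self, x):                                   # x: (n, feat_dim) input set
        cost = torch.cdist(x, self.reference) ** 2          # (n, n_refs) squared distances
        cost = cost / cost.max()                            # scale costs to [0, 1] for stability
        K = torch.exp(-cost / self.eps)                     # Gibbs kernel
        a = torch.full((x.size(0),), 1.0 / x.size(0))       # uniform mass on inputs
        b = torch.full((self.reference.size(0),), 1.0 / self.reference.size(0))
        u, v = torch.ones_like(a), torch.ones_like(b)
        for _ in range(self.n_iters):                       # Sinkhorn fixed-point updates
            u = a / (K @ v)
            v = b / (K.t() @ u)
        plan = u.unsqueeze(1) * K * v.unsqueeze(0)          # (n, n_refs) transport plan
        return (plan.t() @ x) * self.reference.size(0)      # barycentric aggregation per reference slot


# A variable-size set of 500 features is embedded into a fixed 16 x 32 output.
pooled = OTPooling(feat_dim=32)(torch.randn(500, 32))
print(pooled.shape)                                         # torch.Size([16, 32])
```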