DeepFracture: A Generative Approach for Predicting Brittle Fractures
- URL: http://arxiv.org/abs/2310.13344v1
- Date: Fri, 20 Oct 2023 08:15:13 GMT
- Title: DeepFracture: A Generative Approach for Predicting Brittle Fractures
- Authors: Yuhang Huang, Takashi Kanai
- Abstract summary: This paper introduces a novel learning-based approach for seamlessly merging realistic brittle fracture animations with rigid-body simulations.
Our method utilizes boundary element method (BEM) brittle fracture simulations to create fractured patterns and collision conditions for a given shape.
Our experimental results demonstrate that our approach can generate significantly more detailed brittle fractures compared to existing techniques.
- Score: 2.7669937245634757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of brittle fracture animation, generating realistic destruction
animations with physics simulation techniques can be computationally expensive.
Although methods using Voronoi diagrams or pre-fractured patterns work for
real-time applications, they often lack realism in portraying brittle
fractures. This paper introduces a novel learning-based approach for seamlessly
merging realistic brittle fracture animations with rigid-body simulations. Our
method utilizes BEM brittle fracture simulations to create fractured patterns
and collision conditions for a given shape, which serve as training data for
the learning process. To effectively integrate collision conditions and
fractured shapes into a deep learning framework, we introduce the concept of
latent impulse representation and geometrically-segmented signed distance
function (GS-SDF). The latent impulse representation serves as input, capturing
information about impact forces on the shape's surface. Simultaneously, a
GS-SDF is used as the output representation of the fractured shape. To address
the challenge of optimizing multiple fractured pattern targets with a single
latent code, we propose an eight-dimensional latent space based on a normal
distribution code within our latent impulse representation design. This
adaptation effectively transforms our neural network into a generative one. Our
experimental results demonstrate that our approach can generate significantly
more detailed brittle fractures compared to existing techniques, all while
maintaining commendable computational efficiency during run-time.
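To make the described pipeline concrete, the sketch below outlines how such a conditional generative network could be wired up. It is a minimal illustration only: the layer sizes, the impulse-encoding dimension, and the per-segment output head are assumptions made for exposition, not the authors' published architecture; only the eight-dimensional normally-distributed latent code and the signed-distance-plus-segmentation output follow the abstract.

```python
import torch
import torch.nn as nn

class FractureGenerator(nn.Module):
    """Maps an encoded impact impulse plus an 8-D normal latent code to
    GS-SDF values (signed distance + fracture-segment logits) at query points."""

    def __init__(self, impulse_dim=64, latent_dim=8, num_segments=16):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(impulse_dim + latent_dim + 3, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            # per query point: one signed distance + per-segment logits
            nn.Linear(256, 1 + num_segments),
        )

    def forward(self, impulse_code, z, query_points):
        # impulse_code: (B, impulse_dim), encoding of the collision impulse
        # z:            (B, 8), drawn from a standard normal distribution;
        #               resampling z yields different plausible fracture patterns
        # query_points: (B, N, 3), 3D points at which the GS-SDF is evaluated
        B, N, _ = query_points.shape
        cond = torch.cat([impulse_code, z], dim=-1)      # (B, impulse_dim + 8)
        cond = cond.unsqueeze(1).expand(B, N, -1)        # broadcast to each point
        out = self.decoder(torch.cat([cond, query_points], dim=-1))
        return out[..., :1], out[..., 1:]                # sdf, segment logits

model = FractureGenerator()
impulse = torch.randn(1, 64)               # placeholder impulse encoding
z = torch.randn(1, 8)                      # new z -> new fracture pattern
points = torch.rand(1, 1024, 3) * 2 - 1    # query grid in [-1, 1]^3
sdf, segments = model(impulse, z, points)
```

Sampling z from a standard normal at inference time is what makes the network generative: one impact condition can map to several distinct, plausible fracture patterns.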
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been believed to be a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- HOSSnet: an Efficient Physics-Guided Neural Network for Simulating Crack Propagation [4.594946929826274]
We propose a new data-driven methodology to accurately reconstruct crack fractures in both the spatial and temporal fields.
We leverage physical constraints to regularize the fracture propagation in the long-term reconstruction.
Our proposed method reconstructs high-fidelity fracture data over space and time, as measured by pixel-wise reconstruction error and structural similarity.
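Physics guidance of this kind typically enters training as an extra penalty term in the loss. The following is a minimal sketch of that general pattern, not HOSSnet's published formulation: the monotonic-growth residual (a crack field should never shrink over time) and the weighting are illustrative assumptions.

```python
import torch

def physics_guided_loss(pred, target, physics_residual, lam=0.1):
    # Data-fidelity term (pixel-wise error) plus a weighted physics penalty.
    data_term = torch.mean((pred - target) ** 2)
    physics_term = torch.mean(physics_residual(pred) ** 2)
    return data_term + lam * physics_term

def monotonic_growth_residual(frames):
    # frames: (T, H, W) crack fields over time; a crack should never shrink,
    # so any decrease between consecutive frames is a constraint violation.
    return torch.clamp(frames[:-1] - frames[1:], min=0.0)

frames = torch.rand(8, 64, 64, requires_grad=True)   # toy predicted sequence
loss = physics_guided_loss(frames, frames.detach(), monotonic_growth_residual)
loss.backward()
```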
arXiv Detail & Related papers (2023-06-14T23:39:37Z)
- Near-realtime Facial Animation by Deep 3D Simulation Super-Resolution [7.14576106770047]
We present a neural network-based simulation framework that can efficiently and realistically enhance a facial performance produced by a low-cost, realtime physics-based simulation.
We use face animation as an exemplar of such a simulation domain, where semantic congruence between the two simulations is achieved by simply dialing in the same muscle actuation controls and skeletal pose in the two simulators.
Our proposed neural network super-resolution framework generalizes from this training set to unseen expressions, compensates for modeling discrepancies between the two simulations due to limited resolution or cost-cutting approximations in the real-time variant, and does not require any additional semantic descriptors or parameters.
arXiv Detail & Related papers (2023-05-05T00:09:24Z)
- Generating artificial digital image correlation data using physics-guided adversarial networks [2.07180164747172]
Digital image correlation (DIC) has become a valuable tool for monitoring and evaluating mechanical experiments on cracked specimens.
We present a method to directly generate large amounts of artificial displacement data for cracked specimens that resembles real interpolated DIC displacements.
arXiv Detail & Related papers (2023-03-28T12:52:40Z)
- Real-time simulation of viscoelastic tissue behavior with physics-guided deep learning [0.8250374560598492]
We propose a deep learning method for predicting displacement fields of soft tissues with viscoelastic properties.
The proposed method achieves better accuracy than conventional CNN models.
It is hoped that the present investigation will help fill the gap in applying deep learning to virtual reality.
arXiv Detail & Related papers (2023-01-11T18:17:10Z)
- DeepMend: Learning Occupancy Functions to Represent Shape for Repair [0.6087960723103347]
DeepMend is a novel approach to reconstruct restorations to fractured shapes using learned occupancy functions.
We represent the occupancy of a fractured shape as the conjunction of the occupancy of an underlying complete shape and the fracture surface.
We show results with simulated fractures on synthetic and real-world scanned objects, and with scanned real fractured mugs.
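That conjunction of occupancies can be written down directly. Below is a minimal sketch with soft occupancies in [0, 1]; the analytic ball and half-space functions are toy stand-ins for DeepMend's learned occupancy networks, and recovering the restoration as the complement of the break field is an assumption drawn from the summary above, not the paper's exact formulation.

```python
import torch

def soft_and(a, b):      # soft conjunction of occupancy fields in [0, 1]
    return a * b

def soft_not(a):         # soft negation
    return 1.0 - a

def occ_complete(x):     # toy "complete shape": the unit ball
    return (x.norm(dim=-1) < 1.0).float()

def occ_break(x):        # toy "break field": the half-space z < 0
    return (x[..., 2] < 0.0).float()

x = torch.rand(4096, 3) * 2 - 1                                      # sample points
occ_fractured = soft_and(occ_complete(x), occ_break(x))              # observed piece
occ_restoration = soft_and(occ_complete(x), soft_not(occ_break(x)))  # the repair
```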
arXiv Detail & Related papers (2022-10-11T18:42:20Z)
- RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation [110.4255414234771]
Existing solutions require massive training data or lack generalizability to unknown rendering configurations.
We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem.
Our approach achieves significantly lower reconstruction errors and generalizes better to unknown rendering configurations.
arXiv Detail & Related papers (2022-05-11T17:59:51Z)
- Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks [63.596602299263935]
We present a learning algorithm that uses bone-driven motion networks to predict the deformation of loose-fitting garment meshes at interactive rates.
We show that our method outperforms state-of-the-art methods in mesh-deformation prediction accuracy by about 20% in RMSE and 10% in Hausdorff distance and STED.
arXiv Detail & Related papers (2022-05-03T07:54:39Z)
- ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation [135.10594078615952]
We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects.
The accompanying benchmark contains over 17,000 action trajectories with six types of plush toys and 78 variants.
Our model achieves the best performance in geometry, correspondence, and dynamics predictions.
arXiv Detail & Related papers (2022-03-14T04:56:55Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve attackers' ability to induce pixel misclassifications.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.