Virtual Elastic Objects
- URL: http://arxiv.org/abs/2201.04623v1
- Date: Wed, 12 Jan 2022 18:59:03 GMT
- Title: Virtual Elastic Objects
- Authors: Hsiao-yu Chen and Edgar Tretschk and Tuur Stuyck and Petr Kadlecek and
Ladislav Kavan and Etienne Vouga and Christoph Lassner
- Abstract summary: We build virtual objects that behave like their real-world counterparts, even when subject to novel interactions.
We use a differentiable, particle-based simulator, driven by the reconstructed deformation fields, to find representative material parameters.
We present our results using a dataset of 12 objects under a variety of force fields, which will be shared with the community.
- Score: 18.228492027143307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Virtual Elastic Objects (VEOs): virtual objects that not only look
like their real-world counterparts but also behave like them, even when subject
to novel interactions. Achieving this presents multiple challenges: not only do
objects have to be captured, including the physical forces acting on them, and then
faithfully reconstructed and rendered, but plausible material parameters must also
be found and simulated. To create VEOs, we built a multi-view capture system that
captures objects under the influence of a compressed air stream. Building on
recent advances in model-free, dynamic Neural Radiance Fields, we reconstruct
the objects and corresponding deformation fields. We propose to use a
differentiable, particle-based simulator, driven by these deformation fields, to
find representative material parameters, which enable us to run new
simulations. To render simulated objects, we devise a method for integrating
the simulation results with Neural Radiance Fields. The resulting method is
applicable to a wide range of scenarios: it can handle objects composed of
inhomogeneous material, with very different shapes, and it can simulate
interactions with other virtual objects. We present our results using a newly
collected dataset of 12 objects under a variety of force fields, which will be
shared with the community.
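To make the fitting step concrete, the sketch below shows the general idea of estimating material parameters by gradient descent through a differentiable particle simulator: roll out a simulation under a known external force, compare against observed deformations, and backpropagate into the material parameters. This is a minimal illustration in JAX, not the authors' simulator or code: the chain-of-springs topology, the `spring_forces`/`simulate` helpers, the wind vector, the step size and iteration count are all hypothetical stand-ins, and the "observed" trajectory here is synthesized rather than reconstructed from captures.

```python
# Minimal sketch (assumptions noted above): fit spatially varying stiffness so a
# differentiable particle rollout matches observed per-frame deformations.
import jax
import jax.numpy as jnp

N_STEPS, DT = 40, 1e-2          # hypothetical integration settings
N_PARTICLES = 16

# Rest configuration: a simple chain of particles along the x-axis.
rest_pos = jnp.stack([jnp.linspace(0.0, 1.0, N_PARTICLES),
                      jnp.zeros(N_PARTICLES), jnp.zeros(N_PARTICLES)], axis=-1)
rest_len = jnp.linalg.norm(rest_pos[1:] - rest_pos[:-1], axis=-1)

def spring_forces(pos, stiffness):
    """Per-particle forces from springs between neighbouring particles."""
    d = pos[1:] - pos[:-1]
    length = jnp.linalg.norm(d, axis=-1, keepdims=True)
    f = stiffness[:, None] * (length - rest_len[:, None]) * d / (length + 1e-8)
    forces = jnp.zeros_like(pos)
    forces = forces.at[:-1].add(f)   # pull the left endpoint toward the spring
    forces = forces.at[1:].add(-f)   # and push the right endpoint back
    return forces

def simulate(stiffness, external_force):
    """Explicit-Euler rollout; returns particle positions for every step."""
    def step(state, _):
        pos, vel = state
        acc = spring_forces(pos, stiffness) + external_force  # unit masses
        vel = 0.98 * (vel + DT * acc)                          # crude damping
        pos = pos + DT * vel
        return (pos, vel), pos
    (_, _), trajectory = jax.lax.scan(
        step, (rest_pos, jnp.zeros_like(rest_pos)), None, length=N_STEPS)
    return trajectory

def loss(log_stiffness, external_force, target_trajectory):
    """MSE between simulated and observed (reconstructed) deformations."""
    traj = simulate(jnp.exp(log_stiffness), external_force)
    return jnp.mean((traj - target_trajectory) ** 2)

# Synthetic stand-in for the deformation field of one capture under a "wind".
wind = jnp.array([0.0, 0.3, 0.0])
target = simulate(jnp.full(N_PARTICLES - 1, 50.0), wind)

# Plain gradient descent on per-spring (spatially varying) log-stiffness.
params = jnp.zeros(N_PARTICLES - 1)          # init: stiffness = 1.0 everywhere
grad_fn = jax.jit(jax.grad(loss))
for it in range(300):
    params = params - 0.2 * grad_fn(params, wind, target)
print("recovered stiffness:", jnp.exp(params)[:4])
```

In the paper's setting, the target trajectory would come from the NeRF-derived deformation fields and the simulator would use the object's actual particle discretization and constitutive model; the sketch is only meant to convey the optimization structure of a differentiable rollout with gradient descent on material parameters.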
Related papers
- PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation [62.53760963292465]
PhysDreamer is a physics-based approach that endows static 3D objects with interactive dynamics.
We present our approach on diverse examples of elastic objects and evaluate the realism of the synthesized interactions through a user study.
arXiv Detail & Related papers (2024-04-19T17:41:05Z)
- Reconstructing Objects in-the-wild for Realistic Sensor Simulation [41.55571880832957]
We present NeuSim, a novel approach that estimates accurate geometry and realistic appearance from sparse in-the-wild data.
We model the object appearance with a robust physics-inspired reflectance representation effective for in-the-wild data.
Our experiments show that NeuSim has strong view synthesis performance on challenging scenarios with sparse training views.
arXiv Detail & Related papers (2023-11-09T18:58:22Z)
- Benchmarking the Sim-to-Real Gap in Cloth Manipulation [10.530012817995656]
We present a benchmark dataset to evaluate the sim-to-real gap in cloth manipulation.
We use the dataset to evaluate the reality gap, computational time, and stability of four popular deformable object simulators.
arXiv Detail & Related papers (2023-10-14T09:36:01Z)
- Physics-Based Rigid Body Object Tracking and Friction Filtering From RGB-D Videos [8.012771454339353]
We propose a novel real-to-sim approach that tracks rigid objects in 3D from RGB-D images and infers their physical properties.
We demonstrate and evaluate our approach on a real-world dataset.
arXiv Detail & Related papers (2023-09-27T14:46:01Z)
- Differentiable Physics Simulation of Dynamics-Augmented Neural Objects [40.587385809005355]
We present a differentiable pipeline for simulating the motion of objects that represent their geometry as a continuous density field parameterized as a deep network.
We estimate the dynamical properties of the object, including its mass, center of mass, and inertia matrix.
This allows a robot to autonomously build object models that are visually and dynamically accurate from still images and videos of objects in motion.
arXiv Detail & Related papers (2022-10-17T20:37:46Z)
- Finding Fallen Objects Via Asynchronous Audio-Visual Integration [89.75296559813437]
This paper introduces a setting in which to study multi-modal object localization in 3D virtual environments.
An embodied robot agent, equipped with a camera and microphone, must determine what object has been dropped -- and where -- by combining audio and visual signals with knowledge of the underlying physics.
The dataset uses the ThreeDWorld platform which can simulate physics-based impact sounds and complex physical interactions between objects in a photorealistic setting.
arXiv Detail & Related papers (2022-07-07T17:59:59Z)
- DiffCloud: Real-to-Sim from Point Clouds with Differentiable Simulation and Rendering of Deformable Objects [18.266002992029716]
Research in manipulation of deformable objects is typically conducted on a limited range of scenarios.
Realistic simulators with support for various types of deformations and interactions have the potential to speed up experimentation.
For highly deformable objects it is challenging to align the output of a simulator with the behavior of real objects.
arXiv Detail & Related papers (2022-04-07T00:45:26Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- GeoSim: Photorealistic Image Simulation with Geometry-Aware Composition [81.24107630746508]
We present GeoSim, a geometry-aware image composition process that synthesizes novel urban driving scenes.
We first build a diverse bank of 3D objects with both realistic geometry and appearance from sensor data.
The resulting synthetic images are photorealistic, traffic-aware, and geometrically consistent, allowing image simulation to scale to complex use cases.
arXiv Detail & Related papers (2021-01-16T23:00:33Z)
- RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces [77.07767833443256]
We present RELATE, a model that learns to generate physically plausible scenes and videos of multiple interacting objects.
In contrast to state-of-the-art methods in object-centric generative modeling, RELATE also extends naturally to dynamic scenes and generates videos of high visual fidelity.
arXiv Detail & Related papers (2020-07-02T17:27:27Z)
- Occlusion resistant learning of intuitive physics from videos [52.25308231683798]
A key ability for artificial systems is to understand physical interactions between objects and to predict the future outcome of a situation.
This ability, often referred to as intuitive physics, has recently received attention, and several methods have been proposed to learn these physical rules from video sequences.
arXiv Detail & Related papers (2020-04-30T19:35:54Z)