Visual Vibration Tomography: Estimating Interior Material Properties
from Monocular Video
- URL: http://arxiv.org/abs/2104.02735v4
- Date: Sun, 23 Apr 2023 21:20:04 GMT
- Title: Visual Vibration Tomography: Estimating Interior Material Properties
from Monocular Video
- Authors: Berthy T. Feng, Alexander C. Ogren, Chiara Daraio, Katherine L. Bouman
- Abstract summary: An object's interior material properties, while invisible to the human eye, determine motion observed on its surface.
We propose an approach that estimates heterogeneous material properties of an object from a monocular video of its surface vibrations.
- Score: 66.94502090429806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An object's interior material properties, while invisible to the human eye,
determine motion observed on its surface. We propose an approach that estimates
heterogeneous material properties of an object from a monocular video of its
surface vibrations. Specifically, we show how to estimate Young's modulus and
density throughout a 3D object with known geometry. Knowledge of how these
values change across the object is useful for simulating its motion and
characterizing any defects. Traditional non-destructive testing approaches,
which often require expensive instruments, generally estimate only homogenized
material properties or simply identify the presence of defects. In contrast,
our approach leverages monocular video to (1) identify image-space modes from
an object's sub-pixel motion, and (2) directly infer spatially-varying Young's
modulus and density values from the observed modes. We demonstrate our approach
on both simulated and real videos.
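To make the two-step pipeline concrete, below is a minimal, hypothetical sketch (not the authors' implementation): modal frequencies are picked from the temporal spectrum of per-pixel displacements, and spatially varying stiffness and density are then fit through a generalized eigenvalue forward model. The 1D mass-spring chain, the synthetic displacement signal, and every name and parameter value in the sketch are illustrative assumptions standing in for the paper's 3D finite-element model and image-space mode shapes.

    # Hypothetical sketch: recover modal frequencies from per-pixel motion,
    # then fit spatially varying stiffness/density to those frequencies.
    import numpy as np
    from scipy.linalg import eigh
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # ---- Step 1: image-space modes from sub-pixel motion (assumed given) ----
    # u[t, p] is the sub-pixel displacement of pixel p at frame t; extracting it
    # from video (e.g., with phase-based motion processing) is not shown here.
    fps, T, P = 240, 1024, 64
    t = np.arange(T) / fps
    true_freqs = [11.0, 29.0]   # hypothetical modal frequencies in Hz
    u = sum(np.outer(np.sin(2 * np.pi * f * t), rng.normal(size=P)) for f in true_freqs)
    u = u + 0.05 * rng.normal(size=(T, P))       # sensor noise

    U = np.fft.rfft(u, axis=0)                   # temporal FFT at every pixel
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    power = (np.abs(U) ** 2).mean(axis=1)        # power spectrum averaged over pixels
    peak_bins = power.argsort()[-2:]             # two strongest spectral peaks
    observed_freqs = np.sort(freqs[peak_bins])   # candidate modal frequencies
    image_modes = U[peak_bins]                   # complex image-space mode shapes
                                                 # (used by the full method; only
                                                 #  frequencies are fit below)

    # ---- Step 2: fit spatially varying properties via a modal forward model ----
    # Forward model: K(E) phi = omega^2 M(rho) phi. A fixed-free chain of n
    # springs (stiffness ~ Young's modulus) and lumped masses (~ density)
    # stands in for the paper's 3D finite-element discretization.
    n = 8

    def modal_frequencies(E, rho, n_modes=2):
        K = np.zeros((n, n))
        for i, k in enumerate(E):                # assemble stiffness matrix
            K[i, i] += k
            if i + 1 < n:                        # spring between nodes i and i+1
                K[i, i + 1] -= k
                K[i + 1, i] -= k
                K[i + 1, i + 1] += k
        M = np.diag(rho)                         # lumped (diagonal) mass matrix
        w2 = eigh(K, M, eigvals_only=True)[:n_modes]
        return np.sqrt(np.maximum(w2, 0.0)) / (2 * np.pi)

    def loss(theta):
        E, rho = np.exp(theta[:n]), np.exp(theta[n:])   # log-params keep values positive
        return np.sum((modal_frequencies(E, rho) - observed_freqs) ** 2)

    theta0 = np.concatenate([np.full(n, np.log(1e5)), np.zeros(n)])  # homogeneous guess
    result = minimize(loss, theta0, method="L-BFGS-B")
    E_hat, rho_hat = np.exp(result.x[:n]), np.exp(result.x[n:])
    print("observed frequencies (Hz):", observed_freqs)
    print("fitted model frequencies :", modal_frequencies(E_hat, rho_hat))

In this sketch the fit is driven by frequencies alone; the point of the paper's formulation is that full image-space mode shapes, not just frequencies, constrain where stiffness and density vary inside the object.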
Related papers
- Physical Property Understanding from Language-Embedded Feature Fields [27.151380830258603]
We present a novel approach for dense prediction of the physical properties of objects using a collection of images.
Inspired by how humans reason about physics through vision, we leverage large language models to propose candidate materials for each object.
Our method is accurate, annotation-free, and applicable to any object in the open world.
arXiv Detail & Related papers (2024-04-05T17:45:07Z)
- Visual Looming from Motion Field and Surface Normals [0.0]
Looming, traditionally defined as the relative expansion of objects on the observer's retina, is a fundamental visual cue for the perception of threat and can be used to accomplish collision-free navigation.
We derive novel solutions for obtaining visual looming quantitatively from the 2D motion field resulting from a six-degree-of-freedom motion of an observer relative to a local surface in 3D.
We present novel methods to estimate visual looming from spatial derivatives of optical flow without the need for knowing range.
arXiv Detail & Related papers (2022-10-08T21:36:49Z)
- Spatio-Temporal Relation Learning for Video Anomaly Detection [35.59510027883497]
Anomaly identification is highly dependent on the relationship between the object and the scene.
In this paper, we propose a Spatial-Temporal Relation Learning framework to tackle the video anomaly detection task.
Experiments are conducted on three public datasets, and the superior performance over the state-of-the-art methods demonstrates the effectiveness of our method.
arXiv Detail & Related papers (2022-09-27T02:19:31Z)
- ComPhy: Compositional Physical Reasoning of Objects and Events from Videos [113.2646904729092]
The compositionality between the visible and hidden properties poses unique challenges for AI models to reason about the physical world.
Existing studies on video reasoning mainly focus on visually observable elements such as object appearance, movement, and contact interaction.
We propose an oracle neural-symbolic framework named Compositional Physics Learner (CPL), combining visual perception, physical property learning, dynamic prediction, and symbolic execution.
arXiv Detail & Related papers (2022-05-02T17:59:13Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Identifying Mechanical Models through Differentiable Simulations [16.86640234046472]
This paper proposes a new method for manipulating unknown objects through a sequence of non-prehensile actions.
The proposed method leverages recent progress in differentiable physics models to identify unknown mechanical properties of manipulated objects.
arXiv Detail & Related papers (2020-05-11T20:19:20Z)
- Occlusion resistant learning of intuitive physics from videos [52.25308231683798]
A key ability for artificial systems is to understand physical interactions between objects and to predict future outcomes of a situation.
This ability, often referred to as intuitive physics, has recently received attention, and several methods have been proposed to learn these physical rules from video sequences.
arXiv Detail & Related papers (2020-04-30T19:35:54Z)
- Visual Grounding of Learned Physical Models [66.04898704928517]
Humans intuitively recognize objects' physical properties and predict their motion, even when the objects are engaged in complicated interactions.
We present a neural model that simultaneously reasons about physics and makes future predictions based on visual and dynamics priors.
Experiments show that our model can infer the physical properties within a few observations, which allows the model to quickly adapt to unseen scenarios and make accurate predictions into the future.
arXiv Detail & Related papers (2020-04-28T17:06:38Z)
- Cloth in the Wind: A Case Study of Physical Measurement through Simulation [50.31424339972478]
We propose to measure latent physical properties for cloth in the wind without ever having seen a real example before.
Our solution is an iterative refinement procedure with simulation at its core.
The correspondence is measured using an embedding function that maps physically similar examples to nearby points.
arXiv Detail & Related papers (2020-03-09T21:32:23Z)