Foldover Features for Dynamic Object Behavior Description in Microscopic Videos
- URL: http://arxiv.org/abs/2003.08628v2
- Date: Sat, 21 Mar 2020 02:49:32 GMT
- Title: Foldover Features for Dynamic Object Behavior Description in Microscopic Videos
- Authors: Xialin Li, Chen Li and Wenwei Zhao
- Abstract summary: We propose foldover features to describe the behavior of dynamic objects in microscopic videos.
In the experiment, we use a sperm microscopic video dataset comprising 1,374 sperms of three types to evaluate the proposed foldover features.
- Score: 4.194890536348037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Behavior description is conducive to the analysis of tiny objects, objects with weak visual information, and visually similar objects, and it plays a fundamental role in the identification and classification of dynamic objects in microscopic videos. To this end, we propose foldover features to describe the behavior of dynamic objects. First, we generate a foldover for each object in a microscopic video in the X, Y, and Z directions, respectively. Then, we extract foldover features from the X, Y, and Z directions with statistical methods. Finally, we use four different classifiers to test the effectiveness of the proposed foldover features. In the experiment, we evaluate the proposed foldover features on a sperm microscopic video dataset comprising 1,374 sperms of three types, and obtain a best classification accuracy of 96.5%.
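The abstract does not spell out how a foldover is constructed, so the sketch below is one plausible reading: stack an object's per-frame binary masks into a spatio-temporal volume, project it along each axis, and summarize each projection with simple statistics before classification. All names and parameters are illustrative, and scipy/scikit-learn stand in for whichever statistical methods and classifiers the paper actually uses.

```python
# Hypothetical reconstruction of foldover-style features; the paper's exact
# construction is not given in the abstract, so this is an interpretation.
import numpy as np
from scipy import stats
from sklearn.svm import SVC

def foldover_volume(masks):
    """Stack per-frame binary object masks (T, H, W) into a 3D volume."""
    return np.stack(masks, axis=0).astype(np.float32)

def directional_features(volume):
    """Project the volume along each axis and summarize each projection
    with simple statistics, as the abstract suggests."""
    feats = []
    for axis in range(3):  # projections along the time (Z), Y, and X axes
        proj = volume.sum(axis=axis).ravel()
        feats += [proj.mean(), proj.std(),
                  stats.skew(proj), stats.kurtosis(proj)]
    return np.array(feats)

# Usage with synthetic tracks: 40 frames of a 32x32 mask per object.
rng = np.random.default_rng(0)
X = np.stack([
    directional_features(foldover_volume(rng.random((40, 32, 32)) > 0.7))
    for _ in range(12)
])
y = np.arange(12) % 3               # three object classes, as in the sperm dataset
clf = SVC(kernel="rbf").fit(X, y)   # one of several possible classifiers
```

The paper compares four classifiers; the single SVM above is only a stand-in for that comparison.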
Related papers
- 1st Place Solution for MOSE Track in CVPR 2024 PVUW Workshop: Complex Video Object Segmentation [72.54357831350762]
We propose a semantic embedding video object segmentation model and use the salient features of objects as query representations.
We trained our model on a large-scale video object segmentation dataset.
Our model achieves first place (84.45%) on the test set of the Complex Video Object Challenge.
arXiv Detail & Related papers (2024-06-07T03:13:46Z)
- Interactive Learning of Physical Object Properties Through Robot Manipulation and Database of Object Measurements [20.301193437161867]
The framework involves exploratory action selection to maximize learning about objects on a table.
A robot pipeline integrates with a logging module and an online database of objects, containing over 24,000 measurements of 63 objects with different grippers.
arXiv Detail & Related papers (2024-04-10T20:59:59Z)
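As a loose illustration of exploratory action selection, the sketch below greedily probes the object whose property estimate is most uncertain; the paper's actual selection criterion, properties, and interfaces are not given in this summary, so everything here is an assumption.

```python
# Illustrative uncertainty-driven exploration; criterion and names assumed.
import math

class PropertyBelief:
    """Running Gaussian estimate of one physical property (e.g., mass)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, measurement):
        # Welford's online update of mean and variance.
        self.n += 1
        delta = measurement - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (measurement - self.mean)

    @property
    def variance(self):
        return math.inf if self.n < 2 else self.m2 / (self.n - 1)

def select_object(beliefs):
    """Greedy exploration: probe the object we are most uncertain about."""
    return max(beliefs, key=lambda name: beliefs[name].variance)

beliefs = {"mug": PropertyBelief(), "box": PropertyBelief()}
beliefs["mug"].update(0.31)
beliefs["mug"].update(0.29)
print(select_object(beliefs))  # "box": never measured, so highest uncertainty
```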
- A General Protocol to Probe Large Vision Models for 3D Physical Understanding [84.54972153436466]
We introduce a general protocol to evaluate whether features of an off-the-shelf large vision model encode a number of physical 'properties' of the 3D scene.
We apply this protocol to properties covering scene geometry, scene material, support relations, lighting, and view-dependent measures.
We find that features from Stable Diffusion and DINOv2 are good for discriminative learning of a number of properties.
arXiv Detail & Related papers (2023-10-10T17:59:28Z)
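A common form of such a protocol is a linear probe on frozen features: train a simple classifier to predict a property from the features and read its held-out accuracy as evidence that the property is encoded. The sketch below uses synthetic features in place of real Stable Diffusion or DINOv2 activations.

```python
# Synthetic stand-in for a linear-probe evaluation; real runs would replace
# `feats` with frozen Stable Diffusion or DINOv2 features per image.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 768))    # one frozen feature vector per image
labels = rng.integers(0, 2, size=200)  # binary property, e.g. "is in shadow"

X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))  # ~0.5 on random features
```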
- Learning Dynamic Attribute-factored World Models for Efficient Multi-object Reinforcement Learning [6.447052211404121]
In many reinforcement learning tasks, the agent has to learn to interact with many objects of different types and generalize to unseen combinations and numbers of objects.
Recent works have shown the benefits of object-factored representations and hierarchical abstractions for improving sample efficiency.
We introduce the Dynamic Attribute FacTored RL (DAFT-RL) framework to exploit the benefits of factorization in terms of object attributes.
arXiv Detail & Related papers (2023-07-18T12:41:28Z)
- Adaptive Multi-source Predictor for Zero-shot Video Object Segmentation [68.56443382421878]
We propose a novel adaptive multi-source predictor for zero-shot video object segmentation (ZVOS).
In the static object predictor, the RGB source is simultaneously converted into depth and static saliency sources.
Experiments show that the proposed model outperforms the state-of-the-art methods on three challenging ZVOS benchmarks.
arXiv Detail & Related papers (2023-03-18T10:19:29Z)
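The summary does not describe the predictor's architecture, so the sketch below only illustrates the general idea of adaptively fusing predictions from several sources (RGB, depth, static saliency) with learned per-source weights; module names and shapes are assumptions.

```python
# Generic adaptive fusion of source-specific predictions; not the paper's
# architecture, just the fusion idea with assumed shapes.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Predict per-source weights and blend source-specific saliency maps."""
    def __init__(self, n_sources=3):
        super().__init__()
        self.gate = nn.Conv2d(n_sources, n_sources, kernel_size=1)

    def forward(self, maps):                      # maps: (B, n_sources, H, W)
        weights = torch.softmax(self.gate(maps), dim=1)
        return (weights * maps).sum(dim=1, keepdim=True)

fusion = AdaptiveFusion()
rgb, depth, saliency = (torch.rand(2, 1, 64, 64) for _ in range(3))
fused = fusion(torch.cat([rgb, depth, saliency], dim=1))  # (2, 1, 64, 64)
```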
- CRIPP-VQA: Counterfactual Reasoning about Implicit Physical Properties via Video Question Answering [50.61988087577871]
We introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene.
CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning.
Our experiments reveal a surprising and significant performance gap in terms of answering questions about implicit properties.
arXiv Detail & Related papers (2022-11-07T18:55:26Z)
- Is an Object-Centric Video Representation Beneficial for Transfer? [86.40870804449737]
We introduce a new object-centric video recognition model based on a transformer architecture.
We show that the object-centric model outperforms prior video representations.
arXiv Detail & Related papers (2022-07-20T17:59:44Z)
- Generalization and Robustness Implications in Object-Centric Learning [23.021791024676986]
In this paper, we train state-of-the-art unsupervised models on five common multi-object datasets.
From our experimental study, we find object-centric representations to be generally useful for downstream tasks.
arXiv Detail & Related papers (2021-07-01T17:51:11Z)
- Object Priors for Classifying and Localizing Unseen Actions [45.91275361696107]
We propose three spatial object priors, which encode local person and object detectors along with their spatial relations.
On top of these, we introduce three semantic object priors, which extend semantic matching through word embeddings.
A video embedding combines the spatial and semantic object priors.
arXiv Detail & Related papers (2021-04-10T08:56:58Z)
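A toy version of combining the two kinds of priors might score an action class by blending detector-based spatial evidence with word-embedding similarity, as sketched below; the vectors, scores, and blending rule are placeholders, not the paper's formulation.

```python
# Toy blend of a spatial prior with a semantic (word-embedding) prior.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_action(action_vec, object_vecs, spatial_scores, alpha=0.5):
    """Blend detector-based spatial evidence with word-embedding similarity."""
    semantic = max(cosine(action_vec, v) for v in object_vecs)
    spatial = max(spatial_scores)
    return alpha * spatial + (1 - alpha) * semantic

rng = np.random.default_rng(3)
action = rng.normal(size=300)                       # word vector for an action name
objects = [rng.normal(size=300) for _ in range(4)]  # detected-object word vectors
print(score_action(action, objects, spatial_scores=[0.2, 0.7, 0.4]))
```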
- Visual Vibration Tomography: Estimating Interior Material Properties from Monocular Video [66.94502090429806]
An object's interior material properties, while invisible to the human eye, determine motion observed on its surface.
We propose an approach that estimates heterogeneous material properties of an object from a monocular video of its surface vibrations.
arXiv Detail & Related papers (2021-04-06T18:05:27Z)
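The signal-processing step underlying video vibrometry, recovering dominant vibration frequencies from a per-frame intensity trace, can be sketched with a plain FFT, as below; the paper's actual inference from spectra to heterogeneous material properties is not reproduced here.

```python
# Recover the dominant vibration frequency of a tracked surface point from a
# per-frame intensity trace; mapping spectra to material properties is omitted.
import numpy as np

fps = 240.0                                   # assumed high-speed camera rate
t = np.arange(0, 2, 1 / fps)                  # 2 seconds of video
rng = np.random.default_rng(2)
trace = np.sin(2 * np.pi * 37.0 * t) + 0.1 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
print("dominant frequency (Hz):", freqs[spectrum.argmax()])  # ~37 Hz
```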
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.