Object condensation: one-stage grid-free multi-object reconstruction in physics detectors, graph and image data
- URL: http://arxiv.org/abs/2002.03605v3
- Date: Sun, 27 Sep 2020 07:48:16 GMT
- Title: Object condensation: one-stage grid-free multi-object reconstruction in physics detectors, graph and image data
- Authors: Jan Kieseler
- Abstract summary: A new object condensation method is proposed for detector signals.
The method generalises to non-image-like data structures, such as graphs and point clouds.
It is applied to a simple object classification problem in images and used to reconstruct multiple particles from detector signals.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-energy physics detectors, images, and point clouds share many
similarities in terms of object detection. However, while detecting an unknown
number of objects in an image is well established in computer vision, even
machine learning assisted object reconstruction algorithms in particle physics
almost exclusively predict properties on an object-by-object basis. Traditional
computer-vision approaches either impose implicit constraints on object size or
density, making them poorly suited for sparse detector data, or rely on objects
being dense and solid. The object condensation method proposed here
is independent of assumptions on object size, sorting or object density, and
further generalises to non-image-like data structures, such as graphs and point
clouds, which are more suitable to represent detector signals. The pixels or
vertices themselves serve as representations of the entire object, and a
combination of learnable local clustering in a latent space and confidence
assignment allows one to collect condensates of the predicted object properties
with a simple algorithm. As proof of concept, the object condensation method is
applied to a simple object classification problem in images and used to
reconstruct multiple particles from detector signals. The latter results are
also compared to a classic particle flow approach.
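As a reading aid, the "simple algorithm" mentioned above for collecting condensates can be pictured as a greedy pass over per-vertex network outputs. The sketch below is a minimal illustration under that assumption: it takes a per-pixel or per-vertex confidence score and latent clustering coordinates and groups vertices around high-confidence condensation points. The function name, array layout, and thresholds are illustrative choices, not the paper's implementation.

```python
import numpy as np

def collect_condensates(beta, coords, t_beta=0.1, t_dist=0.5):
    """Greedy condensate collection, sketched from the abstract's description.

    beta   : (N,) numpy array, per-vertex confidence of representing an object
    coords : (N, D) numpy array, learned latent clustering coordinates
    Returns a list of (condensation_point_index, assigned_vertex_indices).
    The thresholds t_beta / t_dist are illustrative assumptions.
    """
    unassigned = np.ones(len(beta), dtype=bool)
    objects = []
    # Visit vertices in order of decreasing confidence.
    for i in np.argsort(-beta):
        if not unassigned[i] or beta[i] < t_beta:
            continue
        # Collect all still-unassigned vertices that lie close to this
        # condensation point in the latent clustering space.
        dist = np.linalg.norm(coords - coords[i], axis=1)
        members = np.where(unassigned & (dist < t_dist))[0]
        unassigned[members] = False
        objects.append((i, members))
    return objects
```

In this picture, the learnable part is producing the confidence and latent coordinates per vertex; the collection step itself stays deterministic and cheap.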
Related papers
- SINGAPO: Single Image Controlled Generation of Articulated Parts in Objects [20.978091381109294]
We propose a method to generate articulated objects from a single image.
Our method generates an articulated object that is visually consistent with the input image.
Our experiments show that our method outperforms the state-of-the-art in articulated object creation.
arXiv Detail & Related papers (2024-10-21T20:41:32Z) - Category-level Shape Estimation for Densely Cluttered Objects [94.64287790278887]
We propose a category-level shape estimation method for densely cluttered objects.
Our framework partitions each object in the clutter via the multi-view visual information fusion.
Experiments in the simulated environment and real world show that our method achieves high shape estimation accuracy.
arXiv Detail & Related papers (2023-02-23T13:00:17Z) - Object Detection in Aerial Images with Uncertainty-Aware Graph Network [61.02591506040606]
We propose a novel uncertainty-aware object detection framework with a structured graph, whose nodes and edges are defined by objects.
We refer to our model as the Uncertainty-Aware Graph network for object DETection (UAGDet).
arXiv Detail & Related papers (2022-08-23T07:29:03Z) - Automatic dataset generation for specific object detection [6.346581421948067]
We present a method to synthesize object-in-scene images, which can preserve the objects' detailed features without bringing irrelevant information.
Our result shows that in the synthesized image, the boundaries of objects blend very well with the background.
arXiv Detail & Related papers (2022-07-16T07:44:33Z) - Contrastive Object Detection Using Knowledge Graph Embeddings [72.17159795485915]
We compare the error statistics of the class embeddings learned from a one-hot approach with semantically structured embeddings from natural language processing or knowledge graphs.
We propose a knowledge-embedded design for keypoint-based and transformer-based object detection architectures.
arXiv Detail & Related papers (2021-12-21T17:10:21Z) - Unbiased IoU for Spherical Image Object Detection [45.17996641893818]
We first identify that spherical rectangles are unbiased bounding boxes for objects in spherical images, and then propose an analytical method for IoU calculation without any approximations.
Based on the unbiased representation and calculation, we also present an anchor free object detection algorithm for spherical images.
arXiv Detail & Related papers (2021-08-18T08:18:37Z) - Continuous Surface Embeddings [76.86259029442624]
We focus on the task of learning and representing dense correspondences in deformable object categories.
We propose a new, learnable image-based representation of dense correspondences.
We demonstrate that the proposed approach performs on par or better than the state-of-the-art methods for dense pose estimation for humans.
arXiv Detail & Related papers (2020-11-24T22:52:15Z) - Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed if solely evaluated on slender objects.
arXiv Detail & Related papers (2020-11-17T09:39:42Z) - Localizing Grouped Instances for Efficient Detection in Low-Resource Scenarios [27.920304852537534]
We propose a novel flexible detection scheme that efficiently adapts to variable object sizes and densities.
We rely on a sequence of detection stages, each of which has the ability to predict groups of objects as well as individuals.
We report experimental results on two aerial image datasets and show that the proposed method is as accurate as, yet computationally more efficient than, standard single-shot detectors.
arXiv Detail & Related papers (2020-04-27T07:56:53Z) - Object-Centric Image Generation from Layouts [93.10217725729468]
We develop a layout-to-image-generation method to generate complex scenes with multiple objects.
Our method learns representations of the spatial relationships between objects in the scene, which lead to our model's improved layout-fidelity.
We introduce SceneFID, an object-centric adaptation of the popular Fréchet Inception Distance metric, that is better suited for multi-object images; see the sketch below.
arXiv Detail & Related papers (2020-03-16T21:40:09Z)
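For reference, the Fréchet Inception Distance mentioned in the last entry compares two sets of features through the Fréchet distance between Gaussians fitted to them. The sketch below computes only that underlying distance; how SceneFID adapts it to object-centric crops is not reproduced here, and the function name is an assumption.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_a, feats_b : (N, D) arrays of image (e.g. Inception) features.
    SceneFID, per the entry above, applies such a measure to object-level
    rather than whole-image features; that adaptation is not shown here.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; small imaginary
    # components can appear numerically and are discarded.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```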
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.