Accurate Object Association and Pose Updating for Semantic SLAM
- URL: http://arxiv.org/abs/2012.11368v1
- Date: Mon, 21 Dec 2020 14:21:09 GMT
- Title: Accurate Object Association and Pose Updating for Semantic SLAM
- Authors: Kaiqi Chen, Jialing Liu, Jianhua Zhang, Zhenhua Wang
- Abstract summary: The proposed method is evaluated on a simulated sequence and several sequences from the KITTI dataset.
Experimental results show a significant improvement over traditional SLAM and a state-of-the-art semantic SLAM method.
- Score: 2.9602796547156323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In semantic SLAM, how to correctly use semantic information for data
association remains an open problem. The key to solving it is to correctly
associate multiple object measurements of one object landmark and to refine the
pose of that landmark. However, different objects located close together are
prone to being associated with a single object landmark, and it is difficult to
pick the best pose from the multiple object measurements associated with one
landmark. To tackle these problems, we propose a hierarchical object
association strategy based on multiple object tracking, through which nearby
objects are correctly associated with different object landmarks, together with
an approach that refines the pose of an object landmark from its multiple
object measurements. The proposed method is evaluated on a simulated sequence
and several sequences from the KITTI dataset. Experimental results show a
significant improvement over traditional SLAM and a state-of-the-art semantic
SLAM method.
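The abstract gives only a high-level description of the two ideas, so the sketch below illustrates roughly how a hierarchical, tracking-aware association step and a simple pose-refinement step could look. It is a minimal illustration under stated assumptions, not the authors' algorithm: all class names, the two-stage structure, the distance threshold, and the centroid-based "refinement" are hypothetical stand-ins.

```python
# Illustrative sketch only: two-stage association (track ID first, then
# semantic class + distance) of per-frame object measurements to map
# landmarks, with a crude centroid mean standing in for pose refinement.
import numpy as np

class Measurement:
    """One object detection in the current frame (names are hypothetical)."""
    def __init__(self, cls, track_id, position):
        self.cls = cls                      # semantic class label
        self.track_id = track_id            # ID from a multi-object tracker
        self.position = np.asarray(position, dtype=float)  # 3D centroid

class Landmark:
    """An object landmark in the map, refined from all its measurements."""
    def __init__(self, m):
        self.cls = m.cls
        self.track_ids = set()
        self.measurements = []
        self.position = m.position.copy()
        self.add(m)

    def add(self, m):
        self.measurements.append(m)
        self.track_ids.add(m.track_id)
        # "Refinement" here is just the centroid of all associated
        # measurements; the paper refines a full object pose, for which a
        # mean position is only a crude illustrative stand-in.
        self.position = np.mean([x.position for x in self.measurements], axis=0)

def associate_frame(frame_measurements, landmarks, max_dist=2.0):
    """Stage 1 trusts tracker identity; stage 2 falls back to semantic class
    plus proximity, claiming each landmark at most once per frame so that
    two nearby objects detected together are never merged."""
    claimed, unmatched = set(), []
    for m in frame_measurements:                  # stage 1: track continuity
        hit = next((lm for lm in landmarks if m.track_id in lm.track_ids), None)
        if hit is not None:
            hit.add(m)
            claimed.add(id(hit))
        else:
            unmatched.append(m)
    for m in unmatched:                           # stage 2: class + distance
        cands = [lm for lm in landmarks
                 if id(lm) not in claimed and lm.cls == m.cls
                 and np.linalg.norm(lm.position - m.position) < max_dist]
        if cands:
            best = min(cands,
                       key=lambda lm: np.linalg.norm(lm.position - m.position))
            best.add(m)
            claimed.add(id(best))
        else:                                     # start a new landmark
            lm = Landmark(m)
            landmarks.append(lm)
            claimed.add(id(lm))

# Two cars one metre apart stay separate landmarks, because they carry
# distinct track IDs and each landmark is claimed at most once per frame.
landmarks = []
associate_frame([Measurement("car", 1, [0, 0, 10]),
                 Measurement("car", 2, [1, 0, 10])], landmarks)
associate_frame([Measurement("car", 1, [0.1, 0, 10.2])], landmarks)
print(len(landmarks))  # -> 2
```

The once-per-frame claim in stage 2 is what keeps close-by objects apart: even when class and distance both match, a landmark already explained by another detection in the same frame cannot absorb a second one.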
Related papers
- 1st Place Solution for MOSE Track in CVPR 2024 PVUW Workshop: Complex Video Object Segmentation [72.54357831350762]
We propose a semantic embedding video object segmentation model and use the salient features of objects as query representations.
We trained our model on a large-scale video object segmentation dataset.
Our model achieves first place (84.45%) on the test set of the Complex Video Object Challenge.
arXiv Detail & Related papers (2024-06-07T03:13:46Z) - Retrieval Robust to Object Motion Blur [54.34823913494456]
We propose a method for object retrieval in images that are affected by motion blur.
We present the first large-scale datasets for blurred object retrieval.
Our method outperforms state-of-the-art retrieval methods on the new blur-retrieval datasets.
arXiv Detail & Related papers (2024-04-27T23:22:39Z) - Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding [77.26626173589746]
We present the Multi-view Approach to Grounding in Context (MAGiC).
It selects an object referent based on language that distinguishes between two similar objects.
It improves over the state-of-the-art model on the SNARE object reference task with a relative error reduction of 12.9%.
arXiv Detail & Related papers (2023-11-12T00:21:58Z) - An Object SLAM Framework for Association, Mapping, and High-Level Tasks [12.62957558651032]
We present a comprehensive object SLAM framework that focuses on object-based perception and object-oriented robot tasks.
The proposed object SLAM framework is evaluated on a range of public datasets and real-world experiments, demonstrating its efficient performance.
arXiv Detail & Related papers (2023-05-12T08:10:14Z) - Loop Closure Detection Based on Object-level Spatial Layout and Semantic
Consistency [14.694754836704819]
We present an object-based loop closure detection method based on the spatial layout and semanic consistency of the 3D scene graph.
Experimental results demonstrate that our proposed data association approach can construct more accurate 3D semantic maps.
arXiv Detail & Related papers (2023-04-11T11:20:51Z) - Image Segmentation-based Unsupervised Multiple Objects Discovery [1.7674345486888503]
Unsupervised object discovery aims to localize objects in images.
We propose a fully unsupervised, bottom-up approach for multiple object discovery.
We provide state-of-the-art results for both unsupervised class-agnostic object detection and unsupervised image segmentation.
arXiv Detail & Related papers (2022-12-20T09:48:24Z) - FewSOL: A Dataset for Few-Shot Object Learning in Robotic Environments [21.393674766169543]
We introduce the Few-Shot Object Learning dataset for object recognition with a few images per object.
We captured 336 real-world objects with 9 RGB-D images per object from different views.
The evaluation results show that there is still substantial room for improvement in few-shot object classification in robotic environments.
arXiv Detail & Related papers (2022-07-06T05:57:24Z) - Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z) - Contrastive Object Detection Using Knowledge Graph Embeddings [72.17159795485915]
We compare the error statistics of the class embeddings learned from a one-hot approach with semantically structured embeddings from natural language processing or knowledge graphs.
We propose a knowledge-embedded design for keypoint-based and transformer-based object detection architectures.
arXiv Detail & Related papers (2021-12-21T17:10:21Z) - Objects are Different: Flexible Monocular 3D Object Detection [87.82253067302561]
We propose a flexible framework for monocular 3D object detection that explicitly decouples truncated objects and adaptively combines multiple approaches to object depth estimation.
Experiments demonstrate that our method outperforms the state-of-the-art method by a relative 27% on the moderate level and 30% on the hard level of the KITTI benchmark test set.
arXiv Detail & Related papers (2021-04-06T07:01:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.