Multi-view Point Cloud Registration based on Evolutionary Multitasking
with Bi-Channel Knowledge Sharing Mechanism
- URL: http://arxiv.org/abs/2205.02996v1
- Date: Fri, 6 May 2022 03:26:16 GMT
- Title: Multi-view Point Cloud Registration based on Evolutionary Multitasking
with Bi-Channel Knowledge Sharing Mechanism
- Authors: Yue Wu, Yibo Liu, Maoguo Gong, Hao Li, Zedong Tang, Qiguang Miao,
Wenping Ma
- Abstract summary: This paper models the registration problem as multi-task optimization.
It proposes a novel bi-channel knowledge sharing mechanism for effective and efficient problem solving.
- Score: 29.232021965321408
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Registration of multi-view point clouds is fundamental in 3D reconstruction.
Since there are close connections between point clouds captured from different
viewpoints, registration performance can be enhanced if these connections are
harnessed properly. Therefore, this paper models the registration problem as
multi-task optimization and proposes a novel bi-channel knowledge sharing
mechanism for effective and efficient problem solving. The modeling of
multi-view point cloud registration as multi-task optimization is twofold. By
simultaneously considering the local accuracy of two point clouds as well as
the global consistency posed by all the point clouds involved, a fitness
function with an adaptive threshold is derived. In addition, a framework for the
co-evolutionary search process is defined for the concurrent optimization of
multiple fitness functions belonging to related tasks. The proposed bi-channel
knowledge sharing mechanism enhances solution quality and convergence speed.
Intra-task knowledge sharing introduces aiding tasks that are much simpler to
solve and shares useful information within each task, accelerating the search
process. Inter-task knowledge sharing explores commonalities among tasks to
prevent them from getting stuck in local optima. Comprehensive experiments
conducted on model-object as well as scene point clouds demonstrate the
efficacy of the proposed method.
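To make the structure of the approach concrete, the sketch below is a minimal, self-contained Python toy, not the paper's actual algorithm or code. It evolves one population per pairwise registration task, scores individuals with a trimmed nearest-neighbour error under a fixed inlier threshold, evaluates each individual on a downsampled "aiding" copy of its pair as a stand-in for the intra-task channel, and migrates each task's elite into another task's population as a stand-in for the inter-task channel. The adaptive threshold and the global-consistency term of the paper's fitness function are not reproduced here; every function name and parameter is an assumption made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def se3(params):
    """Map a 6-vector (three Euler angles, three translations) to a 4x4 rigid transform."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
         @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
         @ np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, (tx, ty, tz)
    return T

def trimmed_error(src, dst, T, threshold):
    """Mean nearest-neighbour distance over inliers; the fixed threshold gates outliers.
    (The paper uses an adaptive threshold and adds a global-consistency term.)"""
    moved = src @ T[:3, :3].T + T[:3, 3]
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=-1).min(axis=1)
    inliers = d < threshold
    return d[inliers].mean() if inliers.any() else threshold

def evolve(tasks, threshold, pop_size=20, gens=50, sigma=0.05):
    """Toy co-evolutionary loop over pairwise registration tasks.
    Intra-task stand-in: each individual is also scored on a downsampled copy
    of its point-cloud pair. Inter-task stand-in: each task's elite individual
    is occasionally migrated into another task's population."""
    pops = [rng.normal(0.0, 0.1, size=(pop_size, 6)) for _ in tasks]
    for _ in range(gens):
        elites = []
        for t, (src, dst) in enumerate(tasks):
            aid_src, aid_dst = src[::4], dst[::4]            # cheaper aiding task
            scores = np.array([
                0.5 * trimmed_error(src, dst, se3(p), threshold)
                + 0.5 * trimmed_error(aid_src, aid_dst, se3(p), threshold)
                for p in pops[t]])
            pops[t] = pops[t][np.argsort(scores)]            # best individuals first
            elites.append(pops[t][0].copy())
            half = pop_size // 2
            n_new = pop_size - half
            # regenerate the worse half by perturbing the better half
            pops[t][half:] = pops[t][:n_new] + rng.normal(0.0, sigma, (n_new, 6))
        for t in range(len(tasks)):                          # inter-task migration
            other = rng.integers(len(tasks))
            if other != t:
                pops[t][-1] = elites[other]
    return [se3(pops[t][0]) for t in range(len(tasks))]
```

As a hypothetical usage, calling evolve([(cloud_a, cloud_b), (cloud_b, cloud_c)], threshold=0.05) on two overlapping view pairs would return one estimated 4x4 rigid transform per pair; in the paper, the tasks are additionally coupled through the global-consistency term of the fitness function rather than only through elite migration.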
Related papers
- RepVF: A Unified Vector Fields Representation for Multi-task 3D Perception [64.80760846124858]
This paper proposes a novel unified representation, RepVF, which harmonizes the representation of various perception tasks.
RepVF characterizes the structure of different targets in the scene through a vector field, enabling a single-head, multi-task learning model.
Building upon RepVF, we introduce RFTR, a network designed to exploit the inherent connections between different tasks.
arXiv Detail & Related papers (2024-07-15T16:25:07Z) - A Point-Based Approach to Efficient LiDAR Multi-Task Perception [49.91741677556553]
PAttFormer is an efficient multi-task architecture for joint semantic segmentation and object detection in point clouds.
Unlike other LiDAR-based multi-task architectures, our proposed PAttFormer does not require separate feature encoders for task-specific point cloud representations.
Our evaluations show substantial gains from multi-task learning, improving LiDAR semantic segmentation by +1.7% in mIoU and 3D object detection by +1.7% in mAP.
arXiv Detail & Related papers (2024-04-19T11:24:34Z) - Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection [7.864892339833315]
We propose a novel task-driven top-down framework for joint moment retrieval and highlight detection.
The framework introduces a task-decoupled unit to capture task-specific and common representations.
Comprehensive experiments and in-depth ablation studies on QVHighlights, TVSum, and Charades-STA datasets corroborate the effectiveness and flexibility of the proposed framework.
arXiv Detail & Related papers (2024-04-14T14:06:42Z) - A Dynamic Feature Interaction Framework for Multi-task Visual Perception [100.98434079696268]
We devise an efficient unified framework to solve multiple common perception tasks.
These tasks include instance segmentation, semantic segmentation, monocular 3D detection, and depth estimation.
Our proposed framework, termed D2BNet, demonstrates a unique approach to parameter-efficient predictions for multi-task perception.
arXiv Detail & Related papers (2023-06-08T09:24:46Z) - Visual Exemplar Driven Task-Prompting for Unified Perception in
Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z) - Evolutionary Multitasking with Solution Space Cutting for Point Cloud
Registration [20.247335152837437]
This study proposes a novel evolving registration algorithm via EMTO, where the multi-task configuration is based on the idea of solution space cutting.
Compared with 8 evolutionary approaches, 4 traditional approaches, and 3 deep learning approaches on object-scale and scene-scale registration datasets, experimental results demonstrate that the proposed method achieves superior performance in terms of precision and in escaping local optima.
arXiv Detail & Related papers (2022-12-12T03:32:05Z) - DenseMTL: Cross-task Attention Mechanism for Dense Multi-task Learning [18.745373058797714]
We propose a novel multi-task learning architecture that leverages pairwise cross-task exchange through correlation-guided attention and self-attention.
We conduct extensive experiments across three multi-task setups, showing the advantages of our approach compared to competitive baselines in both synthetic and real-world benchmarks.
arXiv Detail & Related papers (2022-06-17T17:59:45Z) - Sign-regularized Multi-task Learning [13.685061061742523]
Multi-task learning is a framework that encourages different learning tasks to share knowledge to improve their performance.
It strives to handle several core issues, in particular which tasks are correlated and similar, and how to share knowledge among correlated tasks.
arXiv Detail & Related papers (2021-02-22T17:11:15Z) - Decoupled and Memory-Reinforced Networks: Towards Effective Feature
Learning for One-Step Person Search [65.51181219410763]
One-step methods have been developed to handle pedestrian detection and identification sub-tasks using a single network.
There are two major challenges in the current one-step approaches.
We propose a decoupled and memory-reinforced network (DMRNet) to overcome these problems.
arXiv Detail & Related papers (2021-02-22T06:19:45Z) - A Co-Interactive Transformer for Joint Slot Filling and Intent Detection [61.109486326954205]
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system.
Previous studies either model the two tasks separately or only consider the single information flow from intent to slot.
We propose a Co-Interactive Transformer to consider the cross-impact between the two tasks simultaneously.
arXiv Detail & Related papers (2020-10-08T10:16:52Z) - Distributed Primal-Dual Optimization for Online Multi-Task Learning [22.45069527817333]
We propose an adaptive primal-dual algorithm, which captures task-specific noise in adversarial learning and carries out a projection-free update with runtime efficiency.
Our model is well-suited to decentralized, periodically connected tasks, as it allows energy-starved or bandwidth-constrained tasks to postpone their updates.
Empirical results confirm that the proposed model is highly effective on various real-world datasets.
arXiv Detail & Related papers (2020-04-02T23:36:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.