Cross-View Multi-Modal Segmentation @ Ego-Exo4D Challenges 2025
- URL: http://arxiv.org/abs/2506.05856v1
- Date: Fri, 06 Jun 2025 08:23:39 GMT
- Title: Cross-View Multi-Modal Segmentation @ Ego-Exo4D Challenges 2025
- Authors: Yuqian Fu, Runze Wang, Yanwei Fu, Danda Pani Paudel, Luc Van Gool
- Abstract summary: Given object queries from one perspective, the goal is to predict the corresponding object masks in another perspective. To tackle this task, we propose a multimodal condition fusion module that enhances object localization. Our proposed method ranked second on the leaderboard of the large-scale Ego-Exo4D object correspondence benchmark.
- Score: 93.36604217487526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this report, we present a cross-view multi-modal object segmentation approach for the object correspondence task in the Ego-Exo4D Correspondence Challenges 2025. Given object queries from one perspective (e.g., ego view), the goal is to predict the corresponding object masks in another perspective (e.g., exo view). To tackle this task, we propose a multimodal condition fusion module that enhances object localization by leveraging both visual masks and textual descriptions as segmentation conditions. Furthermore, to address the visual domain gap between ego and exo views, we introduce a cross-view object alignment module that enforces object-level consistency across perspectives, thereby improving the model's robustness to viewpoint changes. Our proposed method ranked second on the leaderboard of the large-scale Ego-Exo4D object correspondence benchmark. Code will be made available at https://github.com/lovelyqian/ObjectRelator.
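The abstract's multimodal condition fusion idea, combining a visual-mask embedding and a textual-description embedding into a single segmentation condition, can be sketched roughly as follows. This is a minimal, hypothetical illustration: the dimensions, the additive fusion rule, and all names are made up, since the paper's module is not specified at this level of detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_conditions(mask_emb, text_emb, w_mask, w_text):
    """Fuse a visual-mask embedding and a text embedding into one
    segmentation condition vector.

    Hypothetical sketch: project each modality into a shared space and
    combine them additively, one simple instance of condition fusion.
    """
    m = mask_emb @ w_mask          # project mask features -> shared dim
    t = text_emb @ w_text          # project text features -> shared dim
    return np.tanh(m + t)          # joint condition vector

# Illustrative dimensions (not from the paper).
d_mask, d_text, d_shared = 64, 32, 16
w_mask = rng.standard_normal((d_mask, d_shared)) * 0.1
w_text = rng.standard_normal((d_text, d_shared)) * 0.1

mask_emb = rng.standard_normal(d_mask)
text_emb = rng.standard_normal(d_text)
cond = fuse_conditions(mask_emb, text_emb, w_mask, w_text)
print(cond.shape)  # one fused condition vector of size d_shared
```

In a real model the two projections would be learned jointly with the segmentation head, and the fused vector would condition mask prediction in the target view.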
Related papers
- DOMR: Establishing Cross-View Segmentation via Dense Object Matching [18.965608205456338]
Cross-view object correspondence involves matching objects between egocentric (first-person) and exocentric (third-person) views. We propose the Dense Object Matching and Refinement (DOMR) framework to establish dense object correspondences across views.
arXiv Detail & Related papers (2025-08-06T03:23:37Z)
- RefCut: Interactive Segmentation with Reference Guidance [44.872055134890864]
RefCut is a reference-based interactive segmentation framework that addresses part ambiguity and object ambiguity. Our code will be publicly available, and the demo video is at https://www.lin-zheng.com/refcut.
arXiv Detail & Related papers (2025-03-22T17:14:20Z)
- ObjectRelator: Enabling Cross-View Object Relation Understanding in Ego-Centric and Exo-Centric Videos [105.40690994956667]
The Ego-Exo Object Correspondence task aims to map objects across ego-centric and exo-centric views. We introduce ObjectRelator, a novel method designed to tackle this task.
arXiv Detail & Related papers (2024-11-28T12:01:03Z)
- Rethinking Amodal Video Segmentation from Learning Supervised Signals with Object-centric Representation [47.39455910191075]
Video amodal segmentation is a challenging task in computer vision.
Recent studies have achieved promising performance by using motion flow to integrate information across frames under a self-supervised setting.
This paper presents a rethinking of previous works. In particular, we leverage supervised signals with an object-centric representation.
arXiv Detail & Related papers (2023-09-23T04:12:02Z)
- Segmenting Moving Objects via an Object-Centric Layered Representation [100.26138772664811]
We introduce an object-centric segmentation model with a depth-ordered layer representation.
We introduce a scalable pipeline for generating synthetic training data with multiple objects.
We evaluate the model on standard video segmentation benchmarks.
arXiv Detail & Related papers (2022-07-05T17:59:43Z)
- Multi-modal Transformers Excel at Class-agnostic Object Detection [105.10403103027306]
We argue that existing methods lack a top-down supervision signal governed by human-understandable semantics.
We develop an efficient and flexible MViT architecture using multi-scale feature processing and deformable self-attention.
We show the significance of MViT proposals in a diverse range of applications.
arXiv Detail & Related papers (2021-11-22T18:59:29Z)
- Improving Semantic Segmentation via Decoupled Body and Edge Supervision [89.57847958016981]
Existing semantic segmentation approaches either aim to improve the object's inner consistency by modeling the global context, or refine object details along their boundaries by multi-scale feature fusion.
In this paper, a new paradigm for semantic segmentation is proposed.
Our insight is that appealing performance of semantic segmentation requires explicitly modeling the object body and edge, which correspond to the high and low frequency of the image.
We show that the proposed framework with various baselines or backbone networks leads to better object inner consistency and object boundaries.
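The body/edge decoupling described above can be illustrated with a fixed frequency split. This is only a rough sketch: the paper learns the decoupling, whereas here a simple box blur stands in for the low-frequency body component, and all names are made up.

```python
import numpy as np

def decouple_body_edge(img, k=5):
    """Split an image into a smooth 'body' part and a residual 'edge' part.

    Hypothetical sketch: a k x k box blur approximates the low-frequency
    body; the residual img - body carries the high-frequency edges.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    body = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            body[i, j] = padded[i:i + k, j:j + k].mean()
    edge = img - body  # high-frequency residual
    return body, edge

rng = np.random.default_rng(0)
img = rng.random((16, 16))
body, edge = decouple_body_edge(img)
print(np.allclose(body + edge, img))  # the two parts sum back to the image
```

A supervised model would instead predict the two components with separate heads and supervise each against its own target.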
arXiv Detail & Related papers (2020-07-20T12:11:22Z) - Object-Centric Image Generation from Layouts [93.10217725729468]
We develop a layout-to-image-generation method to generate complex scenes with multiple objects.
Our method learns representations of the spatial relationships between objects in the scene, which lead to improved layout fidelity in our model.
We introduce SceneFID, an object-centric adaptation of the popular Fréchet Inception Distance metric, which is better suited for multi-object images.
arXiv Detail & Related papers (2020-03-16T21:40:09Z)
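The Fréchet-distance computation underlying an FID-style metric such as SceneFID can be sketched as follows. This uses the standard FID formula on generic feature vectors; the object-centric cropping that distinguishes SceneFID is omitted, and all names here are illustrative.

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between two Gaussian fits of feature sets
    (rows = samples). FID applies this to Inception features; SceneFID,
    per the abstract, applies it to object-centric crops instead of
    whole images (that cropping step is not shown here).
    """
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr(sqrtm(Ca @ Cb)) equals the sum of square roots of the
    # eigenvalues of Ca @ Cb, which are real and non-negative when
    # both covariances are positive semi-definite.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b)
                 - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 8))
same = frechet_distance(x, x)           # near zero for identical sets
shifted = frechet_distance(x, x + 3.0)  # grows with distribution shift
print(same < 1e-6, shifted > same)
```

Identical feature sets give a distance near zero, while a mean shift of 3 in every dimension contributes roughly 3² per dimension to the distance.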
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.