Multimodal Coreference Resolution for Chinese Social Media Dialogues: Dataset and Benchmark Approach
- URL: http://arxiv.org/abs/2504.14321v1
- Date: Sat, 19 Apr 2025 15:15:59 GMT
- Title: Multimodal Coreference Resolution for Chinese Social Media Dialogues: Dataset and Benchmark Approach
- Authors: Xingyu Li, Chen Gong, Guohong Fu
- Abstract summary: Multimodal coreference resolution (MCR) aims to identify mentions referring to the same entity across different modalities. We introduce TikTalkCoref, the first Chinese multimodal coreference dataset for social media in real-world scenarios.
- Score: 21.475881921929236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal coreference resolution (MCR) aims to identify mentions referring to the same entity across different modalities, such as text and visuals, and is essential for understanding multimodal content. In the era of rapidly growing multimodal content and social media, MCR is particularly crucial for interpreting user interactions and bridging text-visual references to improve communication and personalization. However, MCR research for real-world dialogues remains unexplored due to the lack of sufficient data resources. To address this gap, we introduce TikTalkCoref, the first Chinese multimodal coreference dataset for social media in real-world scenarios, derived from the popular Douyin short-video platform. This dataset pairs short videos with corresponding textual dialogues from user comments and includes manually annotated coreference clusters for both person mentions in the text and the coreferential person head regions in the corresponding video frames. We also present an effective benchmark approach for MCR, focusing on the celebrity domain, and conduct extensive experiments on our dataset, providing reliable benchmark results for this newly constructed dataset. We will release the TikTalkCoref dataset to facilitate future research on MCR for real-world social media dialogues.
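To make the annotation scheme described in the abstract concrete, the sketch below shows one plausible way to represent a TikTalkCoref-style example in code: dialogue turns with person-mention spans, annotated head regions in video frames, and coreference clusters linking the two modalities. All class and field names are illustrative assumptions, not the released schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# NOTE: all names below are hypothetical; the released TikTalkCoref schema
# may differ. This only illustrates the structure the abstract describes:
# text person mentions, head regions in video frames, and coreference
# clusters that span both modalities.

@dataclass
class TextMention:
    turn_id: int   # index of the comment / dialogue turn
    start: int     # character offset where the person mention begins
    end: int       # character offset where it ends (exclusive)
    surface: str   # mention string, e.g. a celebrity name

@dataclass
class HeadRegion:
    frame_id: int                      # index of the sampled video frame
    bbox: Tuple[int, int, int, int]    # (x, y, w, h) of the annotated head region

@dataclass
class CorefCluster:
    entity_id: str                                      # cluster / entity identifier
    text_mentions: List[TextMention] = field(default_factory=list)
    head_regions: List[HeadRegion] = field(default_factory=list)

@dataclass
class TikTalkCorefExample:
    video_id: str
    dialogue: List[str]                                 # user comments forming the dialogue
    clusters: List[CorefCluster] = field(default_factory=list)

# A toy example pairing one dialogue turn with one annotated frame.
example = TikTalkCorefExample(
    video_id="douyin_000001",
    dialogue=["这个演员是谁？", "是他早期的采访片段。"],
    clusters=[
        CorefCluster(
            entity_id="person_01",
            text_mentions=[TextMention(turn_id=0, start=2, end=4, surface="演员")],
            head_regions=[HeadRegion(frame_id=12, bbox=(320, 80, 96, 96))],
        )
    ],
)
print(len(example.clusters[0].text_mentions), "text mention(s) linked to",
      len(example.clusters[0].head_regions), "head region(s)")
```

Representing each cluster as explicit links between text mentions and head regions mirrors the task definition: an MCR system must recover these cross-modal links rather than text-only coreference chains.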
Related papers
- MSCRS: Multi-modal Semantic Graph Prompt Learning Framework for Conversational Recommender Systems [15.792566559456422]
Conversational Recommender Systems (CRS) aim to provide personalized recommendations by interacting with users through conversations. We propose a multi-modal semantic graph prompt learning framework for CRS, named MSCRS. We show that our proposed method significantly improves accuracy in item recommendation and generates more natural and contextually relevant content in response generation.
arXiv Detail & Related papers (2025-04-15T07:05:22Z) - Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding [44.870165050047355]
Multi-modal multi-party conversation (MMC) is a less studied yet important topic of research. MMC requires stronger character-centered understanding abilities as there are many interlocutors appearing in both the visual and textual context. We present Friends-MMC, an MMC dataset that contains 24,000+ unique utterances paired with video context.
arXiv Detail & Related papers (2024-12-23T05:32:48Z) - Multimodal LLM Enhanced Cross-lingual Cross-modal Retrieval [40.83470534691711]
Cross-lingual cross-modal retrieval (CCR) aims to retrieve visually relevant content based on non-English queries.
One popular approach involves utilizing machine translation (MT) to create pseudo-parallel data pairs.
We propose LECCR, a novel solution that incorporates a multi-modal large language model (MLLM) to improve the alignment between visual and non-English representations.
arXiv Detail & Related papers (2024-09-30T05:25:51Z) - Text-Video Retrieval with Global-Local Semantic Consistent Learning [122.15339128463715]
We propose a simple yet effective method, Global-Local Semantic Consistent Learning (GLSCL).
GLSCL capitalizes on latent shared semantics across modalities for text-video retrieval.
Our method achieves performance comparable to the SOTA while being nearly 220 times faster in terms of computational cost.
arXiv Detail & Related papers (2024-05-21T11:59:36Z) - J-CRe3: A Japanese Conversation Dataset for Real-world Reference Resolution [22.911318874589448]
In real-world reference resolution, a system must ground the verbal information that appears in user interactions to the visual information observed in egocentric views.
We propose a multimodal reference resolution task and construct a Japanese Conversation dataset for Real-world Reference Resolution (J-CRe3).
Our dataset contains egocentric video and dialogue audio of real-world conversations between two people acting as a master and an assistant robot at home.
arXiv Detail & Related papers (2024-03-28T09:32:43Z) - Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling [96.75821232222201]
Existing research on multimodal relation extraction (MRE) faces two co-existing challenges, internal-information over-utilization and external-information under-exploitation.
We propose a novel framework that simultaneously implements the idea of internal-information screening and external-information exploiting.
arXiv Detail & Related papers (2023-05-19T14:56:57Z) - OCRBench: On the Hidden Mystery of OCR in Large Multimodal Models [122.27878464009181]
We conducted a comprehensive evaluation of Large Multimodal Models, such as GPT4V and Gemini, in various text-related visual tasks.
OCRBench contains 29 datasets, making it the most comprehensive OCR evaluation benchmark available.
arXiv Detail & Related papers (2023-05-13T11:28:37Z) - RoME: Role-aware Mixture-of-Expert Transformer for Text-to-Video Retrieval [66.2075707179047]
We propose a novel mixture-of-expert transformer RoME that disentangles the text and the video into three levels.
We utilize a transformer-based attention mechanism to fully exploit visual and text embeddings at both global and local levels.
Our method outperforms the state-of-the-art methods on the YouCook2 and MSR-VTT datasets.
arXiv Detail & Related papers (2022-06-26T11:12:49Z) - Referring Image Segmentation via Cross-Modal Progressive Comprehension [94.70482302324704]
Referring image segmentation aims to segment the foreground masks of the entities that best match the description given in the natural language expression.
Previous approaches tackle this problem using implicit feature interaction and fusion between visual and linguistic modalities.
We propose a Cross-Modal Progressive Comprehension (CMPC) module and a Text-Guided Feature Exchange (TGFE) module to effectively address the challenging task.
arXiv Detail & Related papers (2020-10-01T16:02:30Z) - Modeling Topical Relevance for Multi-Turn Dialogue Generation [61.87165077442267]
We propose a new model, named STAR-BTM, to tackle the problem of topic drift in multi-turn dialogue.
The Biterm Topic Model is pre-trained on the whole training dataset. Then, the topic-level attention weights are computed based on the topic representation of each context.
Experimental results on both Chinese customer services data and English Ubuntu dialogue data show that STAR-BTM significantly outperforms several state-of-the-art methods.
arXiv Detail & Related papers (2020-09-27T03:33:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.