Can Large Language Models Grasp Event Signals? Exploring Pure Zero-Shot Event-based Recognition
- URL: http://arxiv.org/abs/2409.09628v1
- Date: Sun, 15 Sep 2024 06:43:03 GMT
- Title: Can Large Language Models Grasp Event Signals? Exploring Pure Zero-Shot Event-based Recognition
- Authors: Zongyou Yu, Qiang Qu, Xiaoming Chen, Chen Wang
- Abstract summary: This research is the first study to explore the understanding capabilities of large language models for event-based visual content.
We demonstrate that LLMs can achieve event-based object recognition without additional training or fine-tuning in conjunction with CLIP.
Specifically, we evaluate the ability of GPT-4o / 4turbo and two other open-source LLMs to directly recognize event-based visual content.
- Score: 11.581367800115606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in event-based zero-shot object recognition have demonstrated promising results. However, these methods heavily depend on extensive training and are inherently constrained by the characteristics of CLIP. To the best of our knowledge, this research is the first study to explore the understanding capabilities of large language models (LLMs) for event-based visual content. We demonstrate that LLMs can achieve event-based object recognition without additional training or fine-tuning in conjunction with CLIP, effectively enabling pure zero-shot event-based recognition. Particularly, we evaluate the ability of GPT-4o / 4turbo and two other open-source LLMs to directly recognize event-based visual content. Extensive experiments are conducted across three benchmark datasets, systematically assessing the recognition accuracy of these models. The results show that LLMs, especially when enhanced with well-designed prompts, significantly improve event-based zero-shot recognition performance. Notably, GPT-4o outperforms the compared models and exceeds the recognition accuracy of state-of-the-art event-based zero-shot methods on N-ImageNet by five orders of magnitude. The implementation of this paper is available at https://github.com/ChrisYu-Zz/Pure-event-based-recognition-based-LLM.
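The pipeline the abstract describes, feeding event-camera output directly to a multimodal LLM, presupposes converting the raw event stream into something a vision-language model can ingest. A minimal sketch of that first step (an assumed, generic event-frame encoding, not necessarily the exact representation used in the paper) accumulates events into a per-polarity count image:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate raw events (x, y, t, polarity) into a 2-channel count image.

    `events` is assumed to be an (N, 4) array with polarity in {0, 1}.
    This is a common event-frame representation, used here for illustration.
    """
    frame = np.zeros((2, height, width), dtype=np.int32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    ps = events[:, 3].astype(int)
    np.add.at(frame, (ps, ys, xs), 1)  # count events per pixel and polarity
    return frame

# Tiny synthetic stream: two ON events at (x=1, y=2), one OFF event at (0, 0).
events = np.array([
    [1, 2, 0.0, 1],
    [1, 2, 0.1, 1],
    [0, 0, 0.2, 0],
])
frame = events_to_frame(events, height=4, width=4)
print(frame[1, 2, 1])  # → 2
```

The resulting frame can then be rendered as an image and passed to a multimodal LLM such as GPT-4o together with a recognition prompt.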
Related papers
- Label-Guided In-Context Learning for Named Entity Recognition [14.63059248497416]
In-context learning (ICL) enables large language models to perform new tasks using only a few demonstrations. We introduce DEER, a new method that leverages training labels through token-level statistics to improve ICL performance.
arXiv Detail & Related papers (2025-05-29T17:54:32Z)
- LLM-EvRep: Learning an LLM-Compatible Event Representation Using a Self-Supervised Framework [11.30784253260618]
Large language models (LLMs) have exhibited remarkable zero-shot capabilities across diverse domains.
We propose LLM-EvGen, an event representation generator that produces the event representation LLM-EvRep.
Comprehensive experiments were conducted on three datasets: N-ImageNet, N-Caltech101, and N-MNIST.
arXiv Detail & Related papers (2025-02-20T05:18:36Z)
- CLLMFS: A Contrastive Learning enhanced Large Language Model Framework for Few-Shot Named Entity Recognition [3.695767900907561]
CLLMFS is a Contrastive Learning enhanced Large Language Model framework for Few-Shot Named Entity Recognition.
It integrates Low-Rank Adaptation (LoRA) and contrastive learning mechanisms specifically tailored for few-shot NER.
Our method has achieved state-of-the-art performance improvements on F1-score ranging from 2.58% to 97.74% over existing best-performing methods.
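LoRA, as integrated in CLLMFS, injects a trainable low-rank update alongside a frozen pretrained weight matrix. A generic numpy sketch of the mechanism (illustrative only, not the CLLMFS implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2            # hidden size and LoRA rank
alpha = 4.0            # LoRA scaling factor

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    """Frozen path plus low-rank update: W x + (alpha / r) * B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
# With B initialized to zero, the LoRA branch contributes nothing,
# so the adapted layer starts out identical to the frozen one.
assert np.allclose(lora_forward(x), W @ x)
```

During fine-tuning only A and B receive gradients, which is what keeps the approach cheap in few-shot settings.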
arXiv Detail & Related papers (2024-08-23T04:44:05Z)
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- RAR: Retrieving And Ranking Augmented MLLMs for Visual Recognition [78.97487780589574]
Multimodal Large Language Models (MLLMs) excel at classifying fine-grained categories.
This paper introduces a Retrieving And Ranking augmented method for MLLMs.
Our proposed approach not only addresses the inherent limitations in fine-grained recognition but also preserves the model's comprehensive knowledge base.
arXiv Detail & Related papers (2024-03-20T17:59:55Z)
- Improving Input-label Mapping with Demonstration Replay for In-context Learning [67.57288926736923]
In-context learning (ICL) is an emerging capability of large autoregressive language models.
We propose a novel ICL method called Sliding Causal Attention (RdSca).
We show that our method significantly improves the input-label mapping in ICL demonstrations.
arXiv Detail & Related papers (2023-10-30T14:29:41Z)
- Retrieval-Enhanced Contrastive Vision-Text Models [61.783728119255365]
We propose to equip vision-text models with the ability to refine their embedding with cross-modal retrieved information from a memory at inference time.
Remarkably, we show that this can be done with a light-weight, single-layer, fusion transformer on top of a frozen CLIP.
Our experiments validate that our retrieval-enhanced contrastive (RECO) training improves CLIP performance substantially on several challenging fine-grained tasks.
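The retrieval-then-fusion idea behind RECO can be sketched generically: look up a query embedding's nearest neighbors in a memory bank and blend them in. The snippet below uses a simple weighted mean as a stand-in for the paper's learned single-layer fusion transformer (names and weights are illustrative assumptions):

```python
import numpy as np

def normalize(v):
    """L2-normalize along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def retrieve_and_fuse(query, memory, k=2, weight=0.5):
    """Refine a query embedding with its top-k neighbors from a memory bank.

    Mean fusion is a toy stand-in for RECO's learned fusion module.
    """
    q = normalize(query)
    m = normalize(memory)
    sims = m @ q                  # cosine similarity to each memory entry
    topk = np.argsort(sims)[-k:]  # indices of the k most similar entries
    retrieved = m[topk].mean(axis=0)
    return normalize((1 - weight) * q + weight * retrieved)

# Toy memory bank of three 2-D embeddings; the first two resemble the query.
memory = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
refined = retrieve_and_fuse(np.array([1.0, 0.05]), memory)
```

The key design point the paper makes is that the base encoders stay frozen; only the fusion step is trained.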
arXiv Detail & Related papers (2023-06-12T15:52:02Z)
- EventCLIP: Adapting CLIP for Event-based Object Recognition [26.35633454924899]
EventCLIP is a novel approach that utilizes CLIP for zero-shot and few-shot event-based object recognition.
We first generalize CLIP's image encoder to event data by converting raw events to 2D grid-based representations.
We evaluate EventCLIP on N-Caltech, N-Cars, and N-ImageNet datasets, achieving state-of-the-art few-shot performance.
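Once events are rendered into a 2D grid, EventCLIP scores them the way CLIP scores any image: softmax over cosine similarities between the image embedding and per-class text embeddings. A generic sketch with toy vectors in place of real encoder outputs (the class names and embeddings below are illustrative assumptions):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: softmax over cosine similarities
    between one image embedding and a row-matrix of class text embeddings."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum()

classes = ["car", "bird", "chair"]
text_embs = np.eye(3)                  # stand-in for encoded class prompts
image_emb = np.array([0.9, 0.1, 0.0])  # stand-in for the event-frame embedding
probs = zero_shot_classify(image_emb, text_embs)
print(classes[int(np.argmax(probs))])  # → car
```

In the real model, `text_embs` would come from CLIP's text encoder applied to prompts like "a photo of a {class}", and `image_emb` from the (adapted) image encoder applied to the event frame.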
arXiv Detail & Related papers (2023-06-10T06:05:35Z)
- Face Recognition in the age of CLIP & Billion image datasets [0.0]
We evaluate the performance of various CLIP models as zero-shot face recognizers.
We also investigate the robustness of CLIP models against data poisoning attacks.
arXiv Detail & Related papers (2023-01-18T05:34:57Z)
- Learning Customized Visual Models with Retrieval-Augmented Knowledge [104.05456849611895]
We propose REACT, a framework to acquire the relevant web knowledge to build customized visual models for target domains.
We retrieve the most relevant image-text pairs from the web-scale database as external knowledge, and propose to customize the model by training only new modularized blocks while freezing all the original weights.
The effectiveness of REACT is demonstrated via extensive experiments on classification, retrieval, detection and segmentation tasks, including zero, few, and full-shot settings.
arXiv Detail & Related papers (2023-01-17T18:59:06Z)
- Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z)