Implicit Search Intent Recognition using EEG and Eye Tracking: Novel Dataset and Cross-User Prediction
- URL: http://arxiv.org/abs/2508.01860v1
- Date: Sun, 03 Aug 2025 17:27:32 GMT
- Title: Implicit Search Intent Recognition using EEG and Eye Tracking: Novel Dataset and Cross-User Prediction
- Authors: Mansi Sharma, Shuang Chen, Philipp Müller, Maurice Rekrut, Antonio Krüger
- Abstract summary: We present the first method for cross-user prediction of search intents from EEG and eye-tracking recordings. We reach 84.5% accuracy in leave-one-user-out evaluations.
- Score: 21.59167760456658
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For machines to effectively assist humans in challenging visual search tasks, they must differentiate whether a human is simply glancing into a scene (navigational intent) or searching for a target object (informational intent). Previous research proposed combining electroencephalography (EEG) and eye-tracking measurements to recognize such search intents implicitly, i.e., without explicit user input. However, the applicability of these approaches to real-world scenarios suffers from two key limitations. First, previous work used fixed search times in the informational intent condition -- a stark contrast to visual search, which naturally terminates when the target is found. Second, methods incorporating EEG measurements addressed prediction scenarios that require ground truth training data from the target user, which is impractical in many use cases. We address these limitations by releasing the first publicly available EEG and eye-tracking dataset for navigational vs. informational intent recognition, where the user determines search times. We present the first method for cross-user prediction of search intents from EEG and eye-tracking recordings and reach 84.5% accuracy in leave-one-user-out evaluations -- comparable to within-user prediction accuracy (85.5%) but offering much greater flexibility.
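The leave-one-user-out protocol named in the abstract is worth making concrete: each fold trains on all users except one and tests on the held-out user, so the classifier never sees ground-truth labels from the target user. Below is a minimal sketch of that evaluation scheme. The feature layout, classifier choice, and all variable names are illustrative assumptions, not the authors' actual pipeline; only the evaluation scheme itself comes from the abstract.

```python
# Minimal sketch of leave-one-user-out evaluation for intent recognition.
# Assumes per-trial EEG and eye-tracking features have already been extracted;
# feature extraction, classifier, and names are placeholders, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_eeg_feats, n_gaze_feats = 600, 64, 12

# Placeholder data: one row per search trial.
X_eeg = rng.normal(size=(n_trials, n_eeg_feats))    # e.g. EEG band-power features
X_gaze = rng.normal(size=(n_trials, n_gaze_feats))  # e.g. fixation/saccade statistics
X = np.hstack([X_eeg, X_gaze])                      # simple feature-level fusion
y = rng.integers(0, 2, size=n_trials)               # 0 = navigational, 1 = informational
users = rng.integers(0, 15, size=n_trials)          # user ID per trial

# Leave-one-user-out: every fold trains on all users but one and tests on the
# held-out user, so no training data from the target user is required.
clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=users):
    clf.fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-one-user-out accuracy: {np.mean(accs):.3f}")
```

scikit-learn's `LeaveOneGroupOut` with user IDs as groups is a natural fit here, since it guarantees that no trials from the held-out user leak into training.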
Related papers
- Distinguishing Target and Non-Target Fixations with EEG and Eye Tracking in Realistic Visual Scenes [20.53761110476627]
We investigate the classification of target vs. non-target fixations during free visual search in realistic scenes.
Our approach based on gaze and EEG features outperforms the previous state-of-the-art approach.
arXiv Detail & Related papers (2025-08-03T17:10:52Z)
- Human Scanpath Prediction in Target-Present Visual Search with Semantic-Foveal Bayesian Attention [49.99728312519117]
SemBA-FAST is a top-down framework designed for predicting human visual attention in target-present visual search.
We evaluate SemBA-FAST on the COCO-Search18 benchmark dataset, comparing its performance against other scanpath prediction models.
These findings provide valuable insights into the capabilities of semantic-foveal probabilistic frameworks for human-like attention modelling.
arXiv Detail & Related papers (2025-07-24T15:19:23Z)
- Active Visual Search in the Wild [12.354788629408933]
We propose a system where a user can enter target commands using free-form language.
We call this system Active Visual Search in the Wild (AVSW).
AVSW detects the target object specified by the user and plans its search using a semantic grid map built from static landmarks.
arXiv Detail & Related papers (2022-09-19T07:18:46Z)
- Target-absent Human Attention [44.10971508325032]
We propose the first data-driven computational model that addresses the search-termination problem.
We represent the internal knowledge that the viewer acquires through fixations using a novel state representation.
We improve the state of the art in predicting human target-absent search behavior on the COCO-Search18 dataset.
arXiv Detail & Related papers (2022-07-04T02:32:04Z)
- E^2TAD: An Energy-Efficient Tracking-based Action Detector [78.90585878925545]
This paper presents a tracking-based solution to accurately and efficiently localize predefined key actions.
It won first place in the UAV-Video Track of the 2021 Low-Power Computer Vision Challenge (LPCVC).
arXiv Detail & Related papers (2022-04-09T07:52:11Z)
- Towards Optimal Correlational Object Search [25.355936023640506]
The Correlational Object Search POMDP can be solved to produce search strategies that use correlational information.
We conduct experiments using AI2-THOR, a realistic simulator of household environments, as well as YOLOv5, a widely-used object detector.
arXiv Detail & Related papers (2021-10-19T14:03:43Z)
- One-Shot Object Affordance Detection in the Wild [76.46484684007706]
Affordance detection refers to identifying the potential actions that objects in an image afford.
We devise a One-Shot Affordance Detection Network (OSAD-Net) that estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images.
With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods.
arXiv Detail & Related papers (2021-08-08T14:53:10Z)
- Diverse Knowledge Distillation for End-to-End Person Search [81.4926655119318]
Person search aims to localize and identify a specific person from a gallery of images.
Recent methods can be categorized into two groups, i.e., two-step and end-to-end approaches.
We propose a simple yet strong end-to-end network with diverse knowledge distillation to break the bottleneck.
arXiv Detail & Related papers (2020-12-21T09:04:27Z)
- DRG: Dual Relation Graph for Human-Object Interaction Detection [65.50707710054141]
We tackle the challenging problem of human-object interaction (HOI) detection.
Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features.
In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph.
arXiv Detail & Related papers (2020-08-26T17:59:40Z)
- Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye-gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that fusing information from visual stimuli and eye images can achieve performance similar to literature-reported figures.
arXiv Detail & Related papers (2020-07-26T12:39:15Z)