A System for Accurate Tracking and Video Recordings of Rodent Eye Movements using Convolutional Neural Networks for Biomedical Image Segmentation
- URL: http://arxiv.org/abs/2506.08183v1
- Date: Mon, 09 Jun 2025 19:48:32 GMT
- Title: A System for Accurate Tracking and Video Recordings of Rodent Eye Movements using Convolutional Neural Networks for Biomedical Image Segmentation
- Authors: Isha Puri, David Cox
- Abstract summary: We present a flexible, robust, and highly accurate model for pupil and corneal reflection identification in rodent gaze determination. This is the first paper that demonstrates a highly accurate and practical biomedical image segmentation-based convolutional neural network architecture.
- Score: 1.961086321336988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research in neuroscience and vision science relies heavily on careful measurements of animal subjects' gaze direction. Rodents are the most widely studied animal subjects for such research because of their economic advantage and hardiness. Recently, video-based eye trackers that use image processing techniques have become a popular option for gaze tracking because they are easy to use and completely noninvasive. Although significant progress has been made in improving the accuracy and robustness of eye tracking algorithms, almost all of these techniques have focused on human eyes and do not account for the unique characteristics of rodent eye images, e.g., variability in eye parameters, abundance of surrounding hair, and their small size. To overcome these challenges, this work presents a flexible, robust, and highly accurate model for pupil and corneal reflection identification in rodent gaze determination that can be incrementally trained to account for the variability in eye parameters encountered in the field. To the best of our knowledge, this is the first paper that demonstrates a highly accurate and practical biomedical image segmentation-based convolutional neural network architecture for pupil and corneal reflection identification in eye images. This new method, in conjunction with our automated infrared video-based eye recording system, offers state-of-the-art eye tracking technology for neuroscience and vision science research on rodents.
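The abstract does not spell out the segmentation architecture. As an illustration only, the sketch below shows a minimal U-Net-style encoder-decoder in PyTorch that outputs per-pixel logits for background, pupil, and corneal reflection classes; all layer sizes, the three-class label set, and the model name are assumptions rather than the authors' implementation. Incremental training as described in the abstract could then amount to fine-tuning such a network on newly labeled frames.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class PupilCRSegNet(nn.Module):
    """Minimal encoder-decoder predicting 3 classes per pixel:
    background, pupil, and corneal reflection (hypothetical label set)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = conv_block(1, 32)          # grayscale IR input
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # logits: (N, 3, H, W)

# Example: one 240x320 IR eye frame -> per-pixel class logits.
logits = PupilCRSegNet()(torch.randn(1, 1, 240, 320))
```

From such masks, pupil center and corneal reflection positions could be recovered by ellipse or centroid fitting before the gaze computation, which is a common post-processing step rather than something stated in the abstract.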
Related papers
- The Eye as a Window to Systemic Health: A Survey of Retinal Imaging from Classical Techniques to Oculomics [14.998873360919879]
The retinal structure assists in early detection, monitoring of disease progression, and intervention for both ocular and non-ocular diseases. Advances in imaging technology leveraging artificial intelligence have seized this opportunity to bridge the gap between the eye and human health. The new frontiers of oculomics in ophthalmology cover both ocular and systemic diseases and are attracting growing attention.
arXiv Detail & Related papers (2025-05-06T22:35:54Z) - A Framework for Pupil Tracking with Event Cameras [1.708806485130162]
Saccades are extremely rapid movements of both eyes that occur simultaneously.
The peak angular speed of the eye during a saccade can reach as high as 700°/s in humans.
We represent events as frames that can be readily utilized by standard deep learning algorithms.
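The summary describes rendering event-camera output as frames so that standard deep networks can consume it. The snippet below is a hedged illustration of one common accumulation scheme (fixed time windows with separate ON/OFF polarity channels); the window length, channel layout, and function name are assumptions, not the representation used in that paper.

```python
import numpy as np

def events_to_frames(events, height, width, window_us=10_000):
    """Accumulate an event stream into 2-channel frames.

    events: array of (t_us, x, y, polarity) rows; polarity is 0 or 1.
    Returns an array of shape (n_frames, 2, height, width) where each
    frame counts ON and OFF events inside a fixed time window.
    (Illustrative scheme only; window length and channels are assumptions.)
    """
    t = events[:, 0]
    n_frames = int((t.max() - t.min()) // window_us) + 1
    frames = np.zeros((n_frames, 2, height, width), dtype=np.float32)
    frame_idx = ((t - t.min()) // window_us).astype(int)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3].astype(int)
    np.add.at(frames, (frame_idx, p, y, x), 1.0)
    return frames

# Example: five synthetic events on a 64x64 sensor.
ev = np.array([[0, 10, 12, 1], [3000, 11, 12, 0],
               [12000, 30, 40, 1], [15000, 30, 41, 1],
               [21000, 5, 5, 0]], dtype=np.int64)
print(events_to_frames(ev, 64, 64).shape)  # (3, 2, 64, 64)
```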
arXiv Detail & Related papers (2024-07-23T17:32:02Z) - Computer Vision for Primate Behavior Analysis in the Wild [61.08941894580172]
Video-based behavioral monitoring has great potential for transforming how we study animal cognition and behavior.
There is still a fairly large gap between the exciting prospects and what can actually be achieved in practice today.
arXiv Detail & Related papers (2024-01-29T18:59:56Z) - Periocular biometrics: databases, algorithms and directions [69.35569554213679]
Periocular biometrics has been established as an independent modality due to concerns on the performance of iris or face systems in uncontrolled conditions.
This paper presents a review of the state of the art in periocular biometric research.
arXiv Detail & Related papers (2023-07-26T11:14:36Z) - GazeGNN: A Gaze-Guided Graph Neural Network for Chest X-ray Classification [9.266556662553345]
We propose a novel gaze-guided graph neural network (GNN), GazeGNN, to leverage raw eye-gaze data without converting it into visual attention maps (VAMs).
We develop a real-time, real-world, end-to-end disease classification algorithm for the first time in the literature.
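GazeGNN's exact graph construction is not given here. Purely to illustrate the general idea of feeding raw gaze data into a graph rather than first rasterizing it into attention maps, the sketch below attaches per-patch gaze counts to image-patch node features and applies one hand-rolled graph-convolution step over a 4-connected patch grid; the grid size, edge scheme, and all names are hypothetical.

```python
import torch

def gaze_patch_graph(image_feats, gaze_xy, grid=8):
    """image_feats: (grid*grid, d) patch embeddings from any backbone.
    gaze_xy: (n_points, 2) raw gaze coordinates in [0, 1).
    Returns node features with a gaze-count channel appended and a
    row-normalized adjacency over a 4-connected patch grid (illustrative)."""
    n = grid * grid
    counts = torch.zeros(n)
    idx = (gaze_xy[:, 1] * grid).long() * grid + (gaze_xy[:, 0] * grid).long()
    counts.scatter_add_(0, idx, torch.ones(len(idx)))
    x = torch.cat([image_feats, counts.unsqueeze(1)], dim=1)

    adj = torch.eye(n)
    for r in range(grid):
        for c in range(grid):
            i = r * grid + c
            if c + 1 < grid:
                adj[i, i + 1] = adj[i + 1, i] = 1
            if r + 1 < grid:
                adj[i, i + grid] = adj[i + grid, i] = 1
    deg = adj.sum(1, keepdim=True)
    return x, adj / deg

# One graph-convolution step: aggregate neighbours, then a linear map.
feats = torch.randn(64, 16)          # stand-in patch embeddings
gaze = torch.rand(200, 2)            # stand-in raw gaze samples
x, a_hat = gaze_patch_graph(feats, gaze)
w = torch.nn.Linear(17, 32)
h = torch.relu(w(a_hat @ x))         # (64, 32) node embeddings
```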
arXiv Detail & Related papers (2023-05-29T17:01:54Z) - BI-AVAN: Brain-Inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate the visual objects in a movie frame the human brain focuses on in an unsupervised manner.
arXiv Detail & Related papers (2022-10-27T22:20:36Z) - Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z) - A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which influences several cognitive functions.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
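The entry describes an architecture that maps an image to a scanpath, i.e., an ordered sequence of fixation points. The snippet below only illustrates that output format by greedily sampling fixations from a saliency map with a simple inhibition-of-return rule; it is a generic stand-in, not the paper's domain-adaptive model, and the radius and fixation count are arbitrary.

```python
import numpy as np

def saliency_to_scanpath(saliency, n_fixations=8, inhibit_radius=20):
    """Turn a 2-D saliency map into an ordered list of (x, y) fixations
    by repeatedly taking the argmax and suppressing its neighbourhood
    (winner-take-all with inhibition of return; illustrative only)."""
    sal = saliency.copy().astype(float)
    h, w = sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    scanpath = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        scanpath.append((int(x), int(y)))
        # Suppress a disc around the chosen fixation so it is not re-picked.
        sal[(ys - y) ** 2 + (xs - x) ** 2 <= inhibit_radius ** 2] = -np.inf
    return scanpath

# Example with a random "saliency map" standing in for a model's output.
print(saliency_to_scanpath(np.random.rand(120, 160), n_fixations=5))
```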
arXiv Detail & Related papers (2022-09-22T22:27:08Z) - Artifact-Tolerant Clustering-Guided Contrastive Embedding Learning for Ophthalmic Images [18.186766129476077]
We propose an artifact-tolerant unsupervised learning framework termed EyeLearn for learning representations of ophthalmic images.
EyeLearn has an artifact correction module to learn representations that can best predict artifact-free ophthalmic images.
To evaluate EyeLearn, we use the learned representations for visual field prediction and glaucoma detection using a real-world ophthalmic image dataset of glaucoma patients.
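The summary mentions an artifact-correction module trained to predict artifact-free images. One plausible way to phrase such an objective is an encoder-decoder trained to reconstruct a clean image from a corrupted input, sketched below; this is a generic formulation, not the EyeLearn architecture, and the square-mask corruption is only a stand-in for real imaging artifacts.

```python
import torch
import torch.nn as nn

class ArtifactCorrectionAE(nn.Module):
    """Toy encoder-decoder: the bottleneck doubles as the learned
    representation, trained to reconstruct an artifact-free target
    from an artifact-corrupted input (generic formulation, not EyeLearn)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)          # representation used downstream
        return self.decoder(z), z

# Simulated training step: mask a patch to mimic an imaging artifact.
clean = torch.rand(4, 1, 64, 64)
corrupted = clean.clone()
corrupted[:, :, 16:32, 16:32] = 0.0            # crude artifact stand-in
model = ArtifactCorrectionAE()
recon, z = model(corrupted)
loss = nn.functional.mse_loss(recon, clean)    # predict the artifact-free image
loss.backward()
```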
arXiv Detail & Related papers (2022-09-02T01:25:45Z) - MTCD: Cataract Detection via Near Infrared Eye Images [69.62768493464053]
Cataract is a common eye disease and one of the leading causes of blindness and vision impairment.
We present a novel algorithm for cataract detection using near-infrared eye images.
Deep learning-based eye segmentation and multitask classification networks are presented.
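The entry pairs eye segmentation with multitask classification. As a generic illustration of that shared-backbone pattern (not the MTCD networks themselves), the sketch below attaches a per-pixel segmentation head and an image-level classification head to one small encoder; layer choices and class counts are assumptions.

```python
import torch
import torch.nn as nn

class SharedBackboneMultitask(nn.Module):
    """One encoder feeding two heads: a per-pixel eye-region mask and an
    image-level classification (e.g., cataract present / absent)."""
    def __init__(self, n_seg_classes=2, n_cls_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_seg_classes, 1)
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_cls_classes)
        )

    def forward(self, x):
        f = self.backbone(x)
        return self.seg_head(f), self.cls_head(f)

nir = torch.randn(2, 1, 128, 128)                # near-infrared eye images
seg_logits, cls_logits = SharedBackboneMultitask()(nir)
print(seg_logits.shape, cls_logits.shape)        # (2, 2, 128, 128) (2, 2)
```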
arXiv Detail & Related papers (2021-10-06T08:10:28Z) - Data augmentation and image understanding [2.123756175601459]
This dissertation explores some advantageous synergies between machine learning, cognitive science, and neuroscience.
It focuses on learning representations that are more aligned with visual perception and biological vision.
arXiv Detail & Related papers (2020-12-28T11:00:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.