1000 Pupil Segmentations in a Second using Haar Like Features and
Statistical Learning
- URL: http://arxiv.org/abs/2102.01921v1
- Date: Wed, 3 Feb 2021 07:45:04 GMT
- Title: 1000 Pupil Segmentations in a Second using Haar Like Features and
Statistical Learning
- Authors: Wolfgang Fuhl
- Abstract summary: We present a new approach for pupil segmentation. It can be computed and trained very efficiently.
The approach is inspired by the BORE and CBF algorithms and generalizes their binary pixel comparisons to Haar-like features.
- Score: 3.962145079528281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a new approach for pupil segmentation. It can be
computed and trained very efficiently, making it ideal for online use in
high-speed eye trackers as well as for energy-saving pupil detection in mobile eye
tracking. The approach is inspired by the BORE and CBF algorithms and
generalizes their binary pixel comparisons to Haar-like features. Since these features are
intrinsically very susceptible to noise and fluctuating light conditions, we
combine them with conditional pupil shape probabilities. In addition, we also
rank each feature according to its importance in determining the pupil shape.
Another advantage of our method is the use of statistical learning, which is
very efficient and can even be used online.
https://atreus.informatik.uni-tuebingen.de/seafile/d/8e2ab8c3fdd444e1a135/?p=%2FStatsPupil&mode=list
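As a minimal illustration of the core idea (a sketch, not the released StatsPupil code): a Haar-like feature generalizes a binary pixel comparison by comparing the mean intensities of two rectangles, each computed in constant time from an integral image. The rectangle coordinates and helper names below are illustrative assumptions.

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Cumulative-sum table; any box sum then costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii: np.ndarray, top: int, left: int, bottom: int, right: int) -> float:
    """Sum of img[top:bottom, left:right] from the integral image ii."""
    total = ii[bottom - 1, right - 1]
    if top > 0:
        total -= ii[top - 1, right - 1]
    if left > 0:
        total -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return float(total)

def haar_response(ii: np.ndarray, rect_a, rect_b) -> float:
    """Difference of mean intensities of two boxes. Its sign reproduces a
    BORE/CBF-style binary comparison; averaging over whole rectangles is
    what adds robustness to single-pixel noise."""
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return box_sum(ii, *rect_a) / area(rect_a) - box_sum(ii, *rect_b) / area(rect_b)

# Toy example: a dark synthetic "pupil" against a brighter background.
img = np.full((64, 64), 200.0)
img[24:40, 24:40] = 30.0
ii = integral_image(img)
print(haar_response(ii, (24, 24, 40, 40), (8, 8, 24, 24)))  # strongly negative
```

In the paper, many such features are additionally ranked by importance and combined with conditional pupil shape probabilities; only the raw feature response is sketched here.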
Related papers
- EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing [2.9795443606634917]
EyeTrAES is a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement.
We show that EyeTrAES boosts pupil tracking fidelity by over 6%, achieving an IoU of 92%, while incurring at least 3x lower latency than competing pure event-based eye tracking alternatives.
For robust user authentication, we train a lightweight per-user Random Forest classifier using a novel feature vector of short-term pupillary kinematics.
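For reference, the IoU figure quoted above measures the overlap between predicted and ground-truth binary pupil masks; a minimal sketch (array names and shapes are assumed):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two same-shaped boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)
```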
arXiv Detail & Related papers (2024-09-27T15:06:05Z)
- When Does Visual Prompting Outperform Linear Probing for Vision-Language Models? A Likelihood Perspective [57.05315507519704]
We propose a log-likelihood ratio (LLR) approach to analyze the comparative benefits of visual prompting and linear probing.
Our measure attains up to a 100-fold reduction in run time compared to full training, while achieving prediction accuracies up to 91%.
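The LLR itself is a standard quantity; the generic sketch below sums per-sample log-likelihood differences under two candidate models (the paper's exact formulation for comparing visual prompting with linear probing is not reproduced here):

```python
import numpy as np

def log_likelihood_ratio(logp_a: np.ndarray, logp_b: np.ndarray) -> float:
    """Per-sample log-likelihoods under models A and B; > 0 favours A."""
    return float(np.sum(logp_a - logp_b))
```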
arXiv Detail & Related papers (2024-09-03T12:03:45Z)
- BEBLID: Boosted efficient binary local image descriptor [2.8538628855541397]
We introduce BEBLID, an efficient learned binary image descriptor.
It improves our previous real-valued descriptor, BELID, making it both more efficient for matching and more accurate.
In experiments BEBLID achieves an accuracy close to SIFT and better computational efficiency than ORB, the fastest algorithm in the literature.
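Binary descriptors such as BEBLID owe much of their matching speed to the Hamming distance, which reduces to an XOR and a popcount; a minimal sketch assuming 256-bit descriptors packed into uint8 arrays:

```python
import numpy as np

def hamming_distance(d1: np.ndarray, d2: np.ndarray) -> int:
    """Number of differing bits between two packed binary descriptors."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

a = np.random.randint(0, 256, size=32, dtype=np.uint8)  # 256-bit descriptor
b = a.copy()
b[0] ^= 0b1010                                          # flip two bits
print(hamming_distance(a, b))                           # -> 2
```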
arXiv Detail & Related papers (2024-02-07T00:14:32Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with attributes that are unrelated to the downstream task.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- HappyFeat -- An interactive and efficient BCI framework for clinical applications [1.0695468735073714]
We present HappyFeat, a software making Motor Imagery (MI) based BCI experiments easier.
The resulting workflow allows for effortlessly selecting the best features, helping to achieve good BCI performance.
HappyFeat is available as an open-source project which can be freely downloaded on GitHub.
arXiv Detail & Related papers (2023-10-04T16:36:32Z)
- Improving Knowledge Distillation via Regularizing Feature Norm and Direction [16.98806338782858]
Knowledge distillation (KD) exploits a large well-trained model (i.e., teacher) to train a small student model on the same dataset for the same task.
Treating teacher features as knowledge, prevailing methods of knowledge distillation train the student by aligning its features with the teacher's, e.g., by minimizing the KL divergence between their logits or the L2 distance between their intermediate features.
While it is natural to believe that better alignment of student features to the teacher better distills teacher knowledge, simply forcing this alignment does not directly contribute to the student's performance.
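A hedged sketch of the two alignment losses mentioned above, KL divergence between temperature-softened logits and L2 distance between intermediate features (variable names and the temperature are illustrative):

```python
import torch
import torch.nn.functional as F

def kd_losses(student_logits, teacher_logits, student_feat, teacher_feat, T=4.0):
    """Standard logit-KL and feature-L2 distillation terms."""
    # Soften both distributions with temperature T; scale by T^2 to keep
    # gradient magnitudes comparable across temperatures.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    # Assumes student/teacher features have the same shape; otherwise a
    # projection layer is typically inserted first.
    l2 = F.mse_loss(student_feat, teacher_feat)
    return kl, l2
```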
arXiv Detail & Related papers (2023-05-26T15:05:19Z)
- A temporally quantized distribution of pupil diameters as a new feature for cognitive load classification [1.4469849628263638]
We present a new feature that can be used to classify cognitive load based on pupil information.
Determining cognitive load from pupil data has numerous applications and could lead to early-warning systems for burnout.
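One plausible reading of a "temporally quantized distribution" is a per-window histogram of pupil diameters used as the feature vector; the bin edges and window length below are assumptions, not the paper's values:

```python
import numpy as np

def window_histograms(diameters: np.ndarray, window: int = 120,
                      bins: np.ndarray = np.linspace(2.0, 8.0, 17)) -> np.ndarray:
    """One normalized 16-bin histogram (diameters in mm) per window."""
    n_windows = len(diameters) // window
    feats = []
    for i in range(n_windows):
        counts, _ = np.histogram(diameters[i * window:(i + 1) * window], bins=bins)
        feats.append(counts / max(counts.sum(), 1))  # guard against empty windows
    return np.asarray(feats)
```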
arXiv Detail & Related papers (2023-03-03T07:52:16Z)
- Kinship Verification Based on Cross-Generation Feature Interaction Learning [53.62256887837659]
Kinship verification from facial images has been recognized as an emerging yet challenging technique in computer vision applications.
We propose a novel cross-generation feature interaction learning (CFIL) framework for robust kinship verification.
arXiv Detail & Related papers (2021-09-07T01:50:50Z)
- Natural Statistics of Network Activations and Implications for Knowledge Distillation [95.15239893744791]
We study the natural statistics of the deep neural network activations at various layers.
We show, both analytically and empirically, that with depth the exponent of this power law increases at a linear rate.
We present a method for performing Knowledge Distillation (KD).
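Empirically, the power-law exponent mentioned above can be estimated with a least-squares line fit in log-log space; a generic sketch (the paper's exact estimator and the statistic it fits are not reproduced):

```python
import numpy as np

def power_law_exponent(values: np.ndarray) -> float:
    """Fit values[k] ~ C * (k+1)**(-alpha) for positive, sorted values."""
    ranks = np.arange(1, len(values) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(values), 1)
    return -slope
```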
arXiv Detail & Related papers (2021-06-01T10:18:30Z)
- The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z)
- Learning Invariant Representations for Reinforcement Learning without Reconstruction [98.33235415273562]
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.
Bisimulation metrics quantify behavioral similarity between states in continuous MDPs.
We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks.
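For intuition, the bisimulation recursion can be iterated to its fixed point on a toy finite MDP with deterministic transitions, where the Wasserstein term collapses to the distance between successor states; the paper's learned embedding is not reproduced here:

```python
import numpy as np

def bisimulation_metric(rewards, next_state, gamma=0.9, iters=100):
    """Iterate d(i, j) = |r_i - r_j| + gamma * d(next_i, next_j)."""
    n = len(rewards)
    d = np.zeros((n, n))
    r_diff = np.abs(rewards[:, None] - rewards[None, :])
    for _ in range(iters):
        d = r_diff + gamma * d[np.ix_(next_state, next_state)]
    return d

# States 0 and 1 share rewards and successors, so they are behaviourally
# identical (d = 0) even if their observations look different.
d = bisimulation_metric(np.array([0.0, 0.0, 1.0]), np.array([1, 1, 2]))
print(np.round(d, 2))
```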
arXiv Detail & Related papers (2020-06-18T17:59:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.