FlashGuard: Novel Method in Evaluating Differential Characteristics of Visual Stimuli for Deterring Seizure Triggers in Photosensitive Epilepsy
- URL: http://arxiv.org/abs/2507.19692v1
- Date: Fri, 25 Jul 2025 22:18:25 GMT
- Authors: Ishan Pendyala
- Abstract summary: Individuals with photosensitive epilepsy (PSE) encounter challenges when using devices. The current norm for preventing epileptic flashes in media is to detect asynchronously when a flash will occur in a video and then notify the user. FlashGuard, a novel approach, was devised to assess the rate of change of colors in frames across the user's screen and appropriately mitigate stimuli.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the virtual realm, individuals with photosensitive epilepsy (PSE) encounter challenges when using devices, resulting in exposure to unpredictable seizure-causing visual stimuli. The current norm for preventing epileptic flashes in media is to detect asynchronously when a flash will occur in a video and then notify the user. However, no real-time, computationally efficient solution exists for this problem. To address it and enhance accessibility for photosensitive viewers, FlashGuard, a novel approach, was devised to assess the rate of change of colors in frames across the user's screen and mitigate stimuli appropriately, based on perceptually aligned analysis in the CIELAB color space. The detection system is built on analyzing differences in color, and the mitigation system works by reducing luminance and smoothing color transitions. This study provides novel insight into how intrinsic color properties contribute to perceptual differences in flashing for PSE individuals, calling for the adoption of broadened WCAG guidelines to better account for risk. These insights and implementations pave the way for stronger protections for individuals with PSE from dangerous triggers in digital media, both in policy and in software.
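The abstract's pipeline (perceptual frame differencing in CIELAB, then luminance reduction when the change is too fast) might be sketched as below. This is a minimal numpy illustration, not the paper's implementation: the CIE76 ΔE metric, the threshold value, the dimming factor, and all function names are assumptions.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1]) to CIELAB (D65 white point)."""
    # sRGB -> linear RGB (inverse gamma)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (standard sRGB/D65 matrix)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    # normalize by the D65 reference white
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # XYZ -> Lab nonlinearity
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_e(frame_a, frame_b):
    """Mean per-pixel CIE76 color difference between two frames."""
    diff = srgb_to_lab(frame_a) - srgb_to_lab(frame_b)
    return np.linalg.norm(diff, axis=-1).mean()

DELTA_E_THRESHOLD = 20.0  # hypothetical flash threshold, not from the paper

def process_frame(prev_frame, frame, scale=0.5):
    """If the frame-to-frame color change is too large, dim the frame
    (a crude stand-in for luminance reduction / transition smoothing)."""
    if mean_delta_e(prev_frame, frame) > DELTA_E_THRESHOLD:
        return frame * scale
    return frame
```

A full-screen black-to-white transition yields a mean ΔE of about 100 (the full lightness range), well above the illustrative threshold, so the frame would be dimmed; a static scene yields ΔE near zero and passes through untouched.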
Related papers
- Context-aware TFL: A Universal Context-aware Contrastive Learning Framework for Temporal Forgery Localization [60.73623588349311]
We propose a universal context-aware contrastive learning framework (UniCaCLF) for temporal forgery localization. Our approach leverages supervised contrastive learning to discover and identify forged instants by means of anomaly detection. An efficient context-aware contrastive coding is introduced to further push the limit of instant feature distinguishability between genuine and forged instants.
arXiv Detail & Related papers (2025-06-10T06:40:43Z) - Beyond Domain Randomization: Event-Inspired Perception for Visually Robust Adversarial Imitation from Videos [4.338232204525725]
Imitation from videos often fails when expert demonstrations and learner environments exhibit domain shifts. We propose a different approach: instead of randomizing appearances, we eliminate their influence entirely by rethinking the sensory representation itself. Our method converts standard RGB videos into a sparse, event-based representation that encodes temporal intensity gradients.
arXiv Detail & Related papers (2025-05-24T23:12:23Z) - WSCIF: A Weakly-Supervised Color Intelligence Framework for Tactical Anomaly Detection in Surveillance Keyframes [3.5516803380598074]
We propose a lightweight anomaly detection framework based on color features for surveillance video clips in a high-sensitivity tactical mission. The method fuses unsupervised KMeans clustering with RGB channel histogram modeling to achieve composite detection of structural anomalies and color mutation signals in key frames. The results show that this method can be effectively used for tactical assassination warning, suspicious object screening and environmental drastic change monitoring with strong deployability and tactical interpretation value.
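The KMeans-plus-RGB-histogram detection that this summary describes could be sketched as follows. The bin count, cluster count, and scoring rule (distance to the dominant cluster's centroid) are illustrative assumptions, not the paper's method; the tiny KMeans is hand-rolled to keep the sketch numpy-only.

```python
import numpy as np

def rgb_histogram(frame, bins=8):
    """Concatenated per-channel RGB histograms, normalized to sum to 1."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0.0, 1.0))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal KMeans: returns centroids and per-point cluster labels."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

def anomaly_scores(frames, k=2):
    """Score each keyframe by its histogram's distance to the centroid of
    the largest (dominant) cluster: large distance = color-mutation candidate."""
    feats = np.stack([rgb_histogram(f) for f in frames])
    centroids, labels = kmeans(feats, k=k)
    dominant = np.bincount(labels, minlength=k).argmax()
    return np.linalg.norm(feats - centroids[dominant], axis=1)
```

On a clip of near-identical dark keyframes with one bright outlier, the outlier's histogram sits far from the dominant cluster's centroid and receives the highest score.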
arXiv Detail & Related papers (2025-05-14T04:24:37Z) - Learning Physics-Informed Color-Aware Transforms for Low-Light Image Enhancement [5.8550460201927725]
We introduce a novel approach to low-light image enhancement based on decomposed physics-informed priors. Existing methods that directly map low-light to normal-light images in the sRGB color space suffer from inconsistent color predictions. Our proposed PiCat framework demonstrates superior performance compared to state-of-the-art methods across five benchmark datasets.
arXiv Detail & Related papers (2025-04-16T09:23:38Z) - elaTCSF: A Temporal Contrast Sensitivity Function for Flicker Detection and Modeling Variable Refresh Rate Flicker [0.6990493129893112]
Traditional approaches often rely on Critical Flicker Frequency (CFF), primarily suited for high-contrast (full-on, full-off) flicker. We introduce a new spatial probability summation model to incorporate the effects of luminance, eccentricity, and area. We demonstrate how elaTCSF can be used to predict flicker due to low-persistence in VR headsets, identify flicker-free VRR operational ranges, and determine flicker sensitivity in lighting design.
arXiv Detail & Related papers (2025-03-21T00:23:10Z) - You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
Low-Light Image Enhancement (LLIE) task tends to restore the details and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z) - ColorVideoVDP: A visual difference predictor for image, video and display distortions [51.29162719944865]
The metric is built on novel psychophysical models of chromatic contrast sensitivity and cross-channel contrast masking.
It accounts for the viewing conditions, geometric, and photometric characteristics of the display.
It was trained to predict common video streaming distortions and 8 new distortion types related to AR/VR displays.
arXiv Detail & Related papers (2024-01-21T13:16:33Z) - Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
Cross-modality person re-identification (ReID) systems are based on RGB images. The main challenge in cross-modality ReID lies in effectively dealing with visual differences between different modalities. Existing attack methods have primarily focused on the characteristics of the visible image modality. This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z) - Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z) - ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks. We validate the use of our datasets by demonstrating that the level of color discrimination has a dominating effect on the performance of machine perception models. Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
arXiv Detail & Related papers (2022-12-16T18:51:41Z) - DFCANet: Dense Feature Calibration-Attention Guided Network for Cross Domain Iris Presentation Attack Detection [2.95102708174421]
Iris presentation attack detection (IPAD) is essential for securing personal identity.
Existing IPAD algorithms do not generalize well to unseen and cross-domain scenarios.
This paper proposes DFCANet: Dense Feature and Attention Guided Network.
arXiv Detail & Related papers (2021-11-01T13:04:23Z) - Face Anti-Spoofing by Learning Polarization Cues in a Real-World Scenario [50.36920272392624]
Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method in a real-world scenario by automatically learning the physical characteristics in polarization images of a real face.
arXiv Detail & Related papers (2020-03-18T03:04:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.