CAHICHA: Computer Automated Hardware Interaction test to tell Computer and Humans Apart
- URL: http://arxiv.org/abs/2511.07841v1
- Date: Wed, 12 Nov 2025 01:23:42 GMT
- Title: CAHICHA: Computer Automated Hardware Interaction test to tell Computer and Humans Apart
- Authors: Aditya Mitra, Sibi Chakkaravarthy Sethuraman, Devi Priya V S,
- Abstract summary: Bots and scrapers with Artificial Intelligence (AI) capabilities can now detect and solve visual challenges, emulate human-like typing patterns, and avoid most security tests. This leaves a vital gap in identifying real human users versus advanced bots. We present a novel technique for distinguishing real human users based on hardware interaction signals.
- Score: 0.16385815610837165
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As automation bot technology and Artificial Intelligence (AI) evolve rapidly, conventional human verification techniques such as voice CAPTCHAs and knowledge-based authentication are becoming less effective. Bots and scrapers with AI capabilities can now detect and solve visual challenges, emulate human-like typing patterns, and evade most security tests, enabling high-volume threats like credential stuffing, account abuse, ad fraud, and automated scalping. This leaves a vital gap in distinguishing real human users from advanced bots. We present a novel technique for distinguishing real human users based on hardware interaction signals to address this issue. In contrast to conventional approaches, our method leverages human interactions and a cryptographically attested User Presence (UP) flag from trusted hardware to verify genuine physical user engagement, providing a secure and reliable way to distinguish authentic users from automated bots or scripted routines. The proposed approach was thoroughly assessed in terms of performance, usability, and security. The system demonstrated consistent throughput and zero request failures under prolonged concurrent user load, indicating good operational reliability, efficient load handling, and a robust underlying architecture. These analyses support the conclusion that the proposed system provides a safer, more effective, and easier-to-use substitute for current human verification methods.
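The abstract describes verifying a cryptographically attested User Presence (UP) flag from trusted hardware. The paper's exact protocol is not given here, but a minimal sketch of the idea, assuming the standard WebAuthn/FIDO2 authenticator-data layout (in which the UP flag is bit 0 of the flags byte), might look like:

```python
import struct

# Assumed layout: standard WebAuthn authenticator data (the paper's exact
# wire format is not specified in this abstract):
#   bytes  0-31 : SHA-256 hash of the relying party ID
#   byte   32   : flags (bit 0 = User Presence, bit 2 = User Verification)
#   bytes 33-36 : signature counter (big-endian uint32)

UP_FLAG = 0x01  # bit 0: authenticator attests a physical user gesture
UV_FLAG = 0x04  # bit 2: user verified (e.g. PIN or biometric)

def check_user_presence(auth_data: bytes) -> dict:
    """Parse authenticator data and report the presence/verification flags."""
    if len(auth_data) < 37:
        raise ValueError("authenticator data too short")
    rp_id_hash = auth_data[:32]
    flags = auth_data[32]
    (sign_count,) = struct.unpack(">I", auth_data[33:37])
    return {
        "user_present": bool(flags & UP_FLAG),
        "user_verified": bool(flags & UV_FLAG),
        "sign_count": sign_count,
    }
```

In a real deployment the flag check is only meaningful after the accompanying attestation signature has been verified, so that a bot cannot simply forge the byte; this sketch covers the parsing step alone.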
Related papers
- Spatial CAPTCHA: Generatively Benchmarking Spatial Reasoning for Human-Machine Differentiation [15.668734718800065]
We present a novel human-verification framework that leverages fundamental differences in spatial reasoning between humans and MLLMs. Unlike existing CAPTCHAs, which rely on low-level perception tasks that are vulnerable to modern AI, Spatial CAPTCHA generates dynamic questions requiring geometric reasoning, perspective-taking, and mental rotation. Evaluation on a corresponding benchmark, Spatial-CAPTCHA-Bench, demonstrates that humans vastly outperform 10 state-of-the-art MLLMs, with the best model achieving only 31.0% Pass@1 accuracy.
arXiv Detail & Related papers (2025-10-04T16:19:21Z) - A Hybrid CAPTCHA Combining Generative AI with Keystroke Dynamics for Enhanced Bot Detection [0.0]
This paper introduces a novel hybrid CAPTCHA system that synergizes the cognitive challenges posed by Large Language Models (LLMs) with the behavioral biometric analysis of keystroke dynamics. Our approach generates dynamic, unpredictable questions that are trivial for humans but non-trivial for automated agents, while simultaneously analyzing the user's typing rhythm to distinguish human patterns from robotic input.
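The keystroke-dynamics idea above can be illustrated with a minimal sketch; the paper's actual features are not listed in this summary, so the event format and the two rhythm statistics below are illustrative assumptions:

```python
from statistics import mean, stdev

def flight_times(key_events):
    """Gaps between consecutive key-down events, in milliseconds.

    key_events: list of (key, timestamp_ms) tuples -- a hypothetical
    format standing in for whatever the browser or driver reports.
    """
    times = [t for _, t in key_events]
    return [b - a for a, b in zip(times, times[1:])]

def rhythm_features(key_events):
    """Two simple rhythm statistics over the inter-key gaps."""
    gaps = flight_times(key_events)
    return {"mean_gap_ms": mean(gaps), "stdev_gap_ms": stdev(gaps)}
```

The intuition is that human typing shows naturally variable inter-key gaps, while naive scripted input tends to be near-uniform (very low standard deviation), which a downstream classifier can exploit.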
arXiv Detail & Related papers (2025-09-29T17:56:13Z) - Deep Learning Models for Robust Facial Liveness Detection [56.08694048252482]
This study introduces a robust solution through novel deep learning models addressing the deficiencies in contemporary anti-spoofing techniques. By innovatively integrating texture analysis and reflective properties associated with genuine human traits, our models distinguish authentic presence from replicas with remarkable precision.
arXiv Detail & Related papers (2025-08-12T17:19:20Z) - Automatically Detecting Online Deceptive Patterns [24.018376492278033]
Deceptive patterns in digital interfaces manipulate users into making unintended decisions, exploiting cognitive biases and psychological vulnerabilities. We introduce our AutoBot framework to address this gap and help web stakeholders navigate and mitigate online deceptive patterns. AutoBot accurately identifies and localizes deceptive patterns from a screenshot of a website without relying on the underlying HTML code.
arXiv Detail & Related papers (2024-11-11T23:49:02Z) - Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
As with face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z) - Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z) - Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z) - Malware Detection and Prevention using Artificial Intelligence Techniques [7.583480439784955]
Security has become a major issue due to the increase in malware activity.
In this study, we emphasize Artificial Intelligence (AI) based techniques for detecting and preventing malware activity.
arXiv Detail & Related papers (2022-06-26T02:41:46Z) - Certifiable Artificial Intelligence Through Data Fusion [7.103626867766158]
This paper reviews and proposes concerns in adopting, fielding, and maintaining artificial intelligence (AI) systems.
A notional use case is presented with image data fusion to support AI object recognition certifiability considering precision versus distance.
arXiv Detail & Related papers (2021-11-03T03:34:19Z) - Robust Text CAPTCHAs Using Adversarial Examples [129.29523847765952]
We propose a user-friendly text-based CAPTCHA generation method named Robust Text CAPTCHA (RTC).
At the first stage, the foregrounds and backgrounds are constructed with randomly sampled font and background images.
At the second stage, we apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.
arXiv Detail & Related papers (2021-01-07T11:03:07Z) - Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.