Evaluation of Real-World Risk-Based Authentication at Online Services Revisited: Complexity Wins
- URL: http://arxiv.org/abs/2308.15156v1
- Date: Tue, 29 Aug 2023 09:37:14 GMT
- Authors: Jan-Phillip Makowski, Daniela Pöhn
- Abstract summary: Risk-based authentication (RBA) aims to protect end-users against attacks involving stolen or otherwise guessed passwords.
RBA monitors different features, such as geolocation and device during login.
Only a few online services publish information about how their systems work.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Risk-based authentication (RBA) aims to protect end-users against attacks involving stolen or otherwise guessed passwords without requiring a second authentication method all the time. Online services typically set limits on what is still seen as normal and what is not, as well as the actions taken afterward. Consequently, RBA monitors different features, such as geolocation and device during login. If the features' values differ from the expected values, then a second authentication method might be requested. However, only a few online services publish information about how their systems work. This hinders not only RBA research but also its development and adoption in organizations. In order to understand how the RBA systems of online services operate, black box testing is applied. To verify the results, we re-evaluate the three large providers: Google, Amazon, and Facebook. Based on our test setup and the test cases, we notice differences in RBA based on account creation at Google. Additionally, several test cases rarely trigger the RBA system. Our results provide new insights into RBA systems and raise several questions for future work.
Related papers
- A Multi-Agent Approach for REST API Testing with Semantic Graphs and LLM-Driven Inputs [46.65963514391019]
We present AutoRestTest, the first black-box framework to adopt a dependency-embedded multi-agent approach for REST API testing.
We integrate Multi-Agent Reinforcement Learning (MARL) with a Semantic Property Dependency Graph (SPDG) and Large Language Models (LLMs).
Our approach treats REST API testing as a separable problem, where four agents -- API, dependency, parameter, and value -- collaborate to optimize API exploration.
arXiv Detail & Related papers (2024-11-11T16:20:27Z)
- Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning [90.23629291067763]
A promising approach for improving reasoning in large language models is to use process reward models (PRMs).
PRMs provide feedback at each step of a multi-step reasoning trace, potentially improving credit assignment over outcome reward models (ORMs).
To improve a base policy by running search against a PRM or using it as dense rewards for reinforcement learning (RL), we ask: "How should we design process rewards?"
We theoretically characterize the set of good provers and our results show that optimizing process rewards from such provers improves exploration during test-time search and online RL.
arXiv Detail & Related papers (2024-10-10T17:31:23Z)
- OpenFactCheck: A Unified Framework for Factuality Evaluation of LLMs [64.25176233153657]
OpenFactCheck is an open-sourced fact-checking framework for large language models.
It allows users to easily customize an automatic fact-checking system.
It also assesses the factuality of all claims in an input document using that system.
arXiv Detail & Related papers (2024-08-06T15:49:58Z)
- Ragnarök: A Reusable RAG Framework and Baselines for TREC 2024 Retrieval-Augmented Generation Track [51.25144287084172]
It is crucial to have an arena to build, test, visualize, and systematically evaluate RAG-based search systems.
We propose the TREC 2024 RAG Track to foster innovation in evaluating RAG systems.
arXiv Detail & Related papers (2024-06-24T17:37:52Z)
- Evaluating the Influence of Multi-Factor Authentication and Recovery Settings on the Security and Accessibility of User Accounts [0.0]
This paper presents a study on the account settings of Google and Apple users.
Considering the multi-factor authentication configuration and recovery options, we analyzed the account security and lock-out risks.
Our results provide insights into the usage of multi-factor authentication in practice, show significant security differences between Google and Apple accounts, and reveal that many users would miss access to their accounts when losing a single authentication device.
arXiv Detail & Related papers (2024-03-22T10:05:37Z)
- Is It Really You Who Forgot the Password? When Account Recovery Meets Risk-Based Authentication [1.776750337181166]
Risk-based authentication (RBA) is used to protect user accounts from unauthorized takeover.
Recent attacks have revealed vulnerabilities in other parts of the authentication process, specifically in the account recovery function.
This paper presents the first study to investigate risk-based account recovery (RBAR) in the wild.
arXiv Detail & Related papers (2024-03-18T13:55:24Z)
- Retrieval Augmented Generation Systems: Automatic Dataset Creation, Evaluation and Boolean Agent Setup [5.464952345664292]
Retrieval Augmented Generation (RAG) systems have seen huge popularity in augmenting Large-Language Model (LLM) outputs with domain specific and time sensitive data.
In this paper we present a rigorous dataset creation and evaluation workflow to quantitatively compare different RAG strategies.
arXiv Detail & Related papers (2024-02-26T12:56:17Z)
- Exploring Behaviours of RESTful APIs in an Industrial Setting [0.43012765978447565]
We propose a set of behavioural properties, common to REST APIs, which are used to generate examples of behaviours that these APIs exhibit.
These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases.
Our approach can generate examples deemed relevant for understanding the system and for a source of test generation by practitioners.
arXiv Detail & Related papers (2023-10-26T11:33:11Z)
- Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
Current tools struggle when response schemas are absent in the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration.
arXiv Detail & Related papers (2023-09-08T20:27:05Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.