AI, insurance, discrimination and unfair differentiation. An overview and research agenda
- URL: http://arxiv.org/abs/2401.11892v3
- Date: Wed, 30 Oct 2024 14:57:36 GMT
- Title: AI, insurance, discrimination and unfair differentiation. An overview and research agenda
- Authors: Marvin van Bekkum, Frederik Zuiderveen Borgesius, Tom Heskes
- Abstract summary: Insurers seem captivated by two trends enabled by Artificial Intelligence (AI).
First, insurers could use AI for analysing more and new types of data to assess risks more precisely: data-intensive underwriting.
Second, insurers could use AI to monitor the behaviour of individual consumers in real-time: behaviour-based insurance.
While the two trends bring many advantages, they may also have discriminatory effects on society.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Insurers underwrite risks: they calculate risks and decide on the insurance premium. Insurers seem captivated by two trends enabled by Artificial Intelligence (AI). (i) First, insurers could use AI for analysing more and new types of data to assess risks more precisely: data-intensive underwriting. (ii) Second, insurers could use AI to monitor the behaviour of individual consumers in real-time: behaviour-based insurance. For example, some car insurers offer a discount if the consumer agrees to being tracked by the insurer and drives safely. While the two trends bring many advantages, they may also have discriminatory effects on society. This paper focuses on the following question. Which effects related to discrimination and unfair differentiation may occur if insurers follow data-intensive underwriting and behaviour-based insurance? Researchers and policymakers working in other sectors may also find the paper useful, as the insurance sector has decades of experience with statistics and forms of AI. Moreover, some questions that arise in the insurance sector are important in other sectors too.
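To make the behaviour-based insurance example concrete, here is a minimal sketch of how a telematics discount could be computed. The function name, the [0, 1] safety score, and the 30% discount cap are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch (not from the paper): a toy behaviour-based premium,
# where a tracked driver's safety score discounts a base rate.

def behaviour_based_premium(base_premium: float,
                            safety_score: float,
                            max_discount: float = 0.3) -> float:
    """Scale the premium by a discount proportional to the safety score.

    safety_score is assumed to lie in [0, 1] (1 = consistently safe driving);
    max_discount caps the reduction, e.g. 0.3 for "up to 30% off".
    """
    if not 0.0 <= safety_score <= 1.0:
        raise ValueError("safety_score must be in [0, 1]")
    return base_premium * (1.0 - max_discount * safety_score)

print(behaviour_based_premium(1000.0, 0.8))  # 760.0
```

A discriminatory-effects question follows directly from this sketch: any variable correlated with the safety score (e.g. night-shift work) silently shifts the premium.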
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Discrimination and AI in insurance: what do people find fair? Results from a survey [0.0]
Two modern trends in insurance are data-intensive underwriting and behavior-based insurance.
Survey respondents find almost all modern insurance practices that we described unfair.
We reflect on the policy implications of the findings.
arXiv Detail & Related papers (2025-01-22T14:18:47Z)
- Quantifying detection rates for dangerous capabilities: a theoretical model of dangerous capability evaluations [47.698233647783965]
We present a quantitative model for tracking dangerous AI capabilities over time.
Our goal is to help the policy and research community visualise how dangerous capability testing can give us an early warning about approaching AI risks.
arXiv Detail & Related papers (2024-12-19T22:31:34Z)
- Classification problem in liability insurance using machine learning models: a comparative study [0.0]
We apply several machine learning models, such as nearest neighbour and logistic regression, to the Actuarial Challenge dataset used by Qazvini, classifying liability insurance policies into two groups: (1) policies with claims and (2) policies without claims.
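As an illustration of the classification task described above, here is a tiny nearest-neighbour classifier. The features and data points are invented for the example and are not from the Qazvini dataset.

```python
# Hypothetical sketch of a nearest-neighbour baseline on made-up,
# pre-scaled policy features (label 1 = policy with claims).
import math

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(p, x), y) for p, y in zip(train, labels))
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy features: (driver_age_scaled, vehicle_value_scaled).
train = [(0.2, 0.9), (0.3, 0.8), (0.8, 0.1), (0.9, 0.2)]
labels = [1, 1, 0, 0]
print(knn_predict(train, labels, (0.25, 0.85)))  # 1
```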
arXiv Detail & Related papers (2024-11-01T04:35:39Z)
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Harnessing GPT-4V(ision) for Insurance: A Preliminary Exploration [51.36387171207314]
Insurance involves a wide variety of data forms in its operational processes, including text, images, and videos.
GPT-4V exhibits remarkable abilities in insurance-related tasks, demonstrating a robust understanding of multimodal content.
However, GPT-4V struggles with detailed risk rating and loss assessment, suffers from hallucination in image understanding, and shows variable support for different languages.
arXiv Detail & Related papers (2024-04-15T11:45:30Z)
- Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI [0.6138671548064355]
This research explores whether trust and privacy concerns are barriers to the adoption of AI in health insurance.
Findings show that trust is significantly lower in the second scenario, in which the use of AI is visible.
Privacy concerns are higher with AI, but the difference is not statistically significant within the model.
arXiv Detail & Related papers (2024-01-20T15:02:56Z)
- Can Telematics Improve Driving Style? The Use of Behavioural Data in Motor Insurance [0.1398098625978622]
This paper explores the opportunities and challenges of using telematics data in third-party liability motor insurance.
Behavioural data are used not only to refine the risk profile of policyholders, but also to implement innovative coaching strategies.
Our research explores the effectiveness of coaching on the basis of an empirical investigation of the dataset of a company selling telematics motor insurance policies.
arXiv Detail & Related papers (2023-09-06T08:00:51Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
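A toy one-dimensional reading of the counterfactual safety margin can be sketched as follows. The braking scenario, the constant-deceleration kinematics, and all parameter values are assumptions for illustration, not the paper's data-driven framework.

```python
# Toy sketch: the counterfactual safety margin as the smallest reduction
# in nominal braking under which the vehicle no longer stops within the gap.

def stops_in_time(speed: float, decel: float, gap: float) -> bool:
    """Constant-deceleration stopping distance v^2 / (2a) versus the gap (SI units)."""
    return speed ** 2 / (2.0 * decel) <= gap

def counterfactual_safety_margin(speed, nominal_decel, gap, step=0.01):
    """Scan deviations from nominal braking until a collision is predicted."""
    deviation = 0.0
    while deviation < nominal_decel:
        if not stops_in_time(speed, nominal_decel - deviation, gap):
            return deviation
        deviation += step
    return nominal_decel  # collision only if braking vanishes entirely

# At 20 m/s with 8 m/s^2 nominal braking and a 30 m gap, the margin is ~1.34 m/s^2.
m = counterfactual_safety_margin(speed=20.0, nominal_decel=8.0, gap=30.0)
```

A small margin flags behaviour that is nominally safe but close to a counterfactual collision, which is the risk signal the paper's framework scores.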
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- AI and ethics in insurance: a new solution to mitigate proxy discrimination in risk modeling [0.0]
Driven by the growing attention of regulators on the ethical use of data in insurance, the actuarial community must rethink pricing and risk selection practices.
Equity is a philosophical concept with many different definitions across jurisdictions; these definitions influence each other but have not yet reached consensus.
We propose an innovative method, not yet found in the literature, to reduce the risk of indirect discrimination using concepts from linear algebra.
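One standard linear-algebra device for mitigating proxy discrimination is to project each rating factor onto the orthogonal complement of the sensitive attribute, removing their linear correlation. The sketch below illustrates that idea on toy data; it is not necessarily the specific method the paper proposes.

```python
# Sketch: residualise a rating factor against a sensitive attribute,
# i.e. subtract its orthogonal projection onto that attribute.

def residualise(feature, sensitive):
    """Return `feature` minus its projection onto `sensitive` (both centred)."""
    def centre(v):
        m = sum(v) / len(v)
        return [x - m for x in v]
    f, s = centre(feature), centre(sensitive)
    coef = sum(fi * si for fi, si in zip(f, s)) / sum(si * si for si in s)
    return [fi - coef * si for fi, si in zip(f, s)]

# Toy data: a proxy feature perfectly correlated with the sensitive attribute.
sensitive = [0.0, 0.0, 1.0, 1.0]
proxy = [1.0, 1.0, 3.0, 3.0]
cleaned = residualise(proxy, sensitive)
# `cleaned` has zero covariance with `sensitive`, so it can no longer
# act as a linear proxy for it in a pricing model.
```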
arXiv Detail & Related papers (2023-07-25T16:20:56Z)
- AI Liability Insurance With an Example in AI-Powered E-diagnosis System [22.102728605081534]
We use an AI-powered E-diagnosis system as an example to study AI liability insurance.
We show that AI liability insurance can act as a regulatory mechanism to incentivize compliant behaviors and serve as a certificate of high-quality AI systems.
arXiv Detail & Related papers (2023-06-01T21:03:47Z)
- Adversarial AI in Insurance: Pervasiveness and Resilience [0.0]
We study Adversarial Attacks, which consist of the creation of modified input data to deceive an AI system and produce false outputs.
We discuss defence methods and precautionary systems, noting that they can involve few-shot and zero-shot multilabelling.
A related topic, with growing interest, is the validation and verification of systems incorporating AI and ML components.
arXiv Detail & Related papers (2023-01-17T08:49:54Z)
- A Data Science Approach to Risk Assessment for Automobile Insurance Policies [1.0660480034605242]
We focus on risk assessment using a Data Science approach.
We predict the total claims that will be made by a new customer using historical data of current and past policies.
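A minimal sketch of such a prediction (an assumed setup, not the paper's pipeline) is a least-squares regression of total claims on a single policy feature from historical data, then scoring a new customer:

```python
# Sketch: ordinary least squares on one invented feature (annual mileage)
# to predict a new customer's total claims from historical policies.

def fit_line(xs, ys):
    """Fit y = a + b * x by ordinary least squares; return (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Toy history: annual mileage (thousands of km) vs. total claims (EUR).
mileage = [5.0, 10.0, 15.0, 20.0]
claims = [100.0, 220.0, 280.0, 400.0]
a, b = fit_line(mileage, claims)
predicted = a + b * 12.0  # expected claims for a new 12k-mileage customer
```

A production model would use many rating factors and a claims-count or severity distribution, but the fit-then-score structure is the same.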
arXiv Detail & Related papers (2022-09-06T18:32:27Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk Object Identification via Causal Inference [19.71459945458985]
We propose a driver-centric definition of risk, i.e., objects influencing drivers' behavior are risky.
We present a novel two-stage risk object identification framework based on causal inference with the proposed object-level manipulable driving model.
Our framework achieves a substantial average performance boost of 7.5% over a strong baseline.
arXiv Detail & Related papers (2020-03-05T04:14:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.