AI, insurance, discrimination and unfair differentiation. An overview and research agenda
- URL: http://arxiv.org/abs/2401.11892v4
- Date: Fri, 31 Jan 2025 15:52:00 GMT
- Title: AI, insurance, discrimination and unfair differentiation. An overview and research agenda
- Authors: Marvin van Bekkum, Frederik Zuiderveen Borgesius, Tom Heskes,
- Abstract summary: Insurers seem captivated by two trends enabled by Artificial Intelligence (AI):
Insurers could use AI for analysing more and new types of data to assess risks more precisely: data-intensive underwriting.
Insurers could also use AI to monitor the behaviour of individual consumers in real-time: behaviour-based insurance.
While the two trends bring many advantages, they may also have discriminatory effects on society.
- Score: 0.6144680854063939
- Abstract: Insurers underwrite risks: they calculate risks and decide on the insurance price. Insurers seem captivated by two trends enabled by Artificial Intelligence (AI). (i) First, insurers could use AI for analysing more and new types of data to assess risks more precisely: data-intensive underwriting. (ii) Second, insurers could use AI to monitor the behaviour of individual consumers in real-time: behaviour-based insurance. For example, some car insurers offer a discount if the consumer agrees to being tracked by the insurer and drives safely. While the two trends bring many advantages, they may also have discriminatory effects on society. This paper focuses on the following question. Which effects related to discrimination and unfair differentiation may occur if insurers use data-intensive underwriting and behaviour-based insurance?
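To make the behaviour-based trend concrete, the sketch below shows, purely as an illustration, how a telematics-based discount might be computed from a handful of driving indicators. The indicator names, weights, and discount cap are hypothetical assumptions for this example, not the paper's (or any insurer's) actual pricing formula.
```python
# Hypothetical sketch of a behaviour-based (telematics) premium adjustment.
# All indicator names, weights, and caps are illustrative assumptions,
# not an actual insurer's pricing rule.

def driving_score(harsh_brakes_per_100km: float,
                  pct_km_speeding: float,
                  pct_km_at_night: float) -> float:
    """Map raw telematics indicators to a 0-100 'safe driving' score."""
    penalty = (4.0 * harsh_brakes_per_100km   # assumed weight per harsh brake
               + 0.5 * pct_km_speeding        # assumed weight per % of km speeding
               + 0.2 * pct_km_at_night)       # assumed weight per % of km at night
    return max(0.0, 100.0 - penalty)

def behaviour_based_premium(base_premium: float, score: float,
                            max_discount: float = 0.30) -> float:
    """Scale a discount linearly with the driving score, capped at max_discount."""
    discount = max_discount * (score / 100.0)
    return round(base_premium * (1.0 - discount), 2)

if __name__ == "__main__":
    score = driving_score(harsh_brakes_per_100km=2.0,
                          pct_km_speeding=5.0,
                          pct_km_at_night=10.0)
    print(score, behaviour_based_premium(base_premium=600.0, score=score))
```
Even this toy formula hints at where unfair differentiation can creep in: an indicator such as night-time driving, for example, may correlate with shift work and thereby act as a proxy for characteristics the insurer never set out to price.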
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed.
In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels.
Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Discrimination and AI in insurance: what do people find fair? Results from a survey [0.0]
Two modern trends in insurance are data-intensive underwriting and behavior-based insurance.
Survey respondents find almost all modern insurance practices that we described unfair.
We reflect on the policy implications of the findings.
arXiv Detail & Related papers (2025-01-22T14:18:47Z)
- Quantifying detection rates for dangerous capabilities: a theoretical model of dangerous capability evaluations [47.698233647783965]
We present a quantitative model for tracking dangerous AI capabilities over time.
Our goal is to help the policy and research community visualise how dangerous capability testing can give us an early warning about approaching AI risks.
arXiv Detail & Related papers (2024-12-19T22:31:34Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Harnessing GPT-4V(ision) for Insurance: A Preliminary Exploration [51.36387171207314]
Insurance involves a wide variety of data forms in its operational processes, including text, images, and videos.
GPT-4V exhibits remarkable abilities in insurance-related tasks, demonstrating a robust understanding of multimodal content.
However, GPT-4V struggles with detailed risk rating and loss assessment, suffers from hallucination in image understanding, and shows variable support for different languages.
arXiv Detail & Related papers (2024-04-15T11:45:30Z)
- Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI [0.6138671548064355]
This research explores whether trust and privacy concern are barriers to the adoption of AI in health insurance.
Findings show that trust is significantly lower in the second scenario, in which the use of AI is explicitly visible.
Privacy concerns are higher with AI but the difference is not statistically significant within the model.
arXiv Detail & Related papers (2024-01-20T15:02:56Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision (see the sketch after this list).
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- AI and ethics in insurance: a new solution to mitigate proxy discrimination in risk modeling [0.0]
Driven by regulators' growing attention to the ethical use of data in insurance, the actuarial community must rethink its pricing and risk-selection practices.
Equity is a philosophical concept with many different definitions across jurisdictions; these definitions influence one another but have not yet reached consensus.
We propose a novel method, not previously described in the literature, that uses concepts from linear algebra to reduce the risk of indirect discrimination (see the projection sketch after this list).
arXiv Detail & Related papers (2023-07-25T16:20:56Z)
- A Data Science Approach to Risk Assessment for Automobile Insurance Policies [1.0660480034605242]
We focus on risk assessment using a Data Science approach.
We predict the total claims that a new customer will make, using historical data from current and past policies (a toy regression sketch follows this list).
arXiv Detail & Related papers (2022-09-06T18:32:27Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk Object Identification via Causal Inference [19.71459945458985]
We propose a driver-centric definition of risk, i.e., objects influencing drivers' behavior are risky.
We present a novel two-stage risk object identification framework based on causal inference with the proposed object-level manipulable driving model (an intervention-style sketch follows this list).
Our framework achieves a substantial average performance boost of 7.5% over a strong baseline.
arXiv Detail & Related papers (2020-03-05T04:14:35Z)
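The counterfactual safety margin entry above defines the margin as the minimum deviation from nominal behaviour that could cause a collision. The sketch below is a minimal, assumed one-dimensional car-following toy (constant-speed lead vehicle, nominal braking plan) that scans over reductions in braking effort to find that minimum deviation; it illustrates the idea only, not the authors' framework.
```python
# Toy illustration of a "counterfactual safety margin": the smallest deviation
# from a nominal braking plan that leads to a collision in a simple 1-D scenario.
# The scenario parameters and the notion of "deviation" are assumptions made
# purely for illustration.

def collides(brake_decel: float, gap0: float = 30.0, ego_v0: float = 20.0,
             lead_v: float = 10.0, dt: float = 0.05, horizon: float = 10.0) -> bool:
    """Simulate ego braking behind a constant-speed lead vehicle; True if gap <= 0."""
    gap, ego_v = gap0, ego_v0
    for _ in range(int(horizon / dt)):
        ego_v = max(0.0, ego_v - brake_decel * dt)
        gap += (lead_v - ego_v) * dt
        if gap <= 0.0:
            return True
    return False

def counterfactual_safety_margin(nominal_decel: float = 4.0,
                                 step: float = 0.01) -> float:
    """Smallest reduction in braking effort (m/s^2) that causes a collision."""
    deviation = 0.0
    while deviation <= nominal_decel:
        if collides(nominal_decel - deviation):
            return deviation
        deviation += step
    return float("inf")  # no deviation within range causes a collision

print(counterfactual_safety_margin())
```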
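The "AI and ethics in insurance" entry proposes reducing indirect (proxy) discrimination with linear-algebra tools. The abstract does not spell out the construction, so the sketch below only illustrates one standard device in that spirit: orthogonally projecting the rating factors away from the space spanned by a protected attribute, so the transformed factors are uncorrelated with it. The data and variable names are made up.
```python
# Illustrative sketch (not necessarily the paper's exact method): remove the
# linear component of rating factors explainable by a protected attribute via
# orthogonal projection, so the residual features are uncorrelated with it.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Simulated protected attribute (binary group indicator) and rating factors,
# one of which partly proxies the protected attribute.
s = rng.integers(0, 2, size=n).astype(float)
X = np.column_stack([
    0.8 * s + rng.normal(size=n),   # factor correlated with s (a proxy)
    rng.normal(size=n),             # factor unrelated to s
])

# Design matrix for the protected attribute plus an intercept.
S = np.column_stack([np.ones(n), s])

# Projection onto the column space of S, then keep the orthogonal part of X:
# X_perp = (I - S (S^T S)^{-1} S^T) X
P = S @ np.linalg.solve(S.T @ S, S.T)
X_perp = X - P @ X

# The residualised factors have (near) zero correlation with s.
print(np.corrcoef(X[:, 0], s)[0, 1], np.corrcoef(X_perp[:, 0], s)[0, 1])
```
Whether such a transformation actually prevents unfair differentiation is precisely the kind of question the overview paper raises: removing linear correlation with one attribute does not rule out non-linear proxies or other protected characteristics.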
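The automobile-insurance risk assessment entry predicts a new customer's total claims from historical policy data. As a minimal stand-in for that idea, the sketch below fits an ordinary least-squares model on simulated policy features; the features, data, and model choice are assumptions for illustration, not the authors' pipeline.
```python
# Minimal illustration of predicting total claims from historical policy data.
# Simulated features and a plain least-squares fit; the actual paper's features
# and model will differ.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical policy features: driver age, vehicle age, annual mileage (1000 km).
age = rng.uniform(18, 80, n)
vehicle_age = rng.uniform(0, 20, n)
mileage = rng.uniform(2, 40, n)

# Simulated "ground truth" total claims per policy (illustration only).
claims = (200 - 1.5 * age + 10 * vehicle_age + 8 * mileage
          + rng.normal(scale=50, size=n)).clip(min=0)

# Fit a linear model: total_claims ~ intercept + age + vehicle_age + mileage.
X = np.column_stack([np.ones(n), age, vehicle_age, mileage])
coef, *_ = np.linalg.lstsq(X, claims, rcond=None)

# Predict expected total claims for a new customer.
new_customer = np.array([1.0, 30.0, 5.0, 15.0])
print(float(new_customer @ coef))
```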
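The risk-object identification entry ranks scene objects by how much they causally influence the driver's behaviour, using object-level interventions (removing an object and re-running a driving model). The sketch below mimics that causal-intervention loop with a deliberately trivial, hand-written "driving model"; the model and the scene are assumptions, not the paper's two-stage framework.
```python
# Toy illustration of intervention-based risk object identification:
# remove each object from the scene, re-run a (here: trivial) driving model,
# and rank objects by how much their removal changes the predicted speed.
# The model and scene below are illustrative assumptions only.

CRUISE_SPEED = 50.0  # km/h, assumed free-flow speed

def predicted_speed(objects: list[dict]) -> float:
    """Trivial stand-in for a driving model: nearby in-lane objects slow the ego car."""
    speed = CRUISE_SPEED
    for obj in objects:
        if obj["in_lane"] and obj["distance_m"] < 30.0:
            # Closer objects force a stronger slowdown (purely illustrative rule).
            speed = min(speed, obj["distance_m"] * 1.5)
    return speed

def rank_risk_objects(objects: list[dict]) -> list[tuple[str, float]]:
    """Causal-intervention ranking: effect of removing each object on predicted speed."""
    baseline = predicted_speed(objects)
    effects = []
    for i, obj in enumerate(objects):
        counterfactual = objects[:i] + objects[i + 1:]   # intervention: delete object i
        effects.append((obj["id"], predicted_speed(counterfactual) - baseline))
    return sorted(effects, key=lambda e: e[1], reverse=True)

scene = [
    {"id": "pedestrian", "in_lane": True, "distance_m": 8.0},
    {"id": "parked_car", "in_lane": False, "distance_m": 5.0},
    {"id": "cyclist", "in_lane": True, "distance_m": 25.0},
]
print(rank_risk_objects(scene))
```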
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.