Adversarial AI in Insurance: Pervasiveness and Resilience
- URL: http://arxiv.org/abs/2301.07520v1
- Date: Tue, 17 Jan 2023 08:49:54 GMT
- Title: Adversarial AI in Insurance: Pervasiveness and Resilience
- Authors: Elisa Luciano and Matteo Cattaneo and Ron Kenett
- Abstract summary: We study Adversarial Attacks, which consist of the creation of modified input data to deceive an AI system and produce false outputs.
We discuss defence methods and precautionary systems, considering that these can involve few-shot and zero-shot multilabelling.
A related topic, with growing interest, is the validation and verification of systems incorporating AI and ML components.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid and dynamic pace of Artificial Intelligence (AI) and Machine
Learning (ML) is revolutionizing the insurance sector. AI offers significant
and welcome advantages to insurance companies and is fundamental to their
customer-centricity strategies. It also poses challenges in the design and
implementation phases. Among those, we study Adversarial Attacks, which
consist of crafting modified input data to deceive an AI system into
producing false outputs. We provide examples of attacks on insurance AI
applications, categorize them, and discuss defence methods and precautionary
systems, considering that these can involve few-shot and zero-shot
multilabelling. A related topic of growing interest is the validation and
verification of systems incorporating AI and ML components. These topics are
discussed in various sections of this paper.
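As a concrete illustration of the kind of attack the abstract describes, the minimal sketch below perturbs input features to flip the output of a scoring model. The toy "fraud-scoring" logistic regression, its weights, and the perturbation budget `epsilon` are all hypothetical assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Hypothetical learned weights of a toy claims-fraud logistic-regression scorer.
w = np.array([2.0, -1.0, 0.5])
b = -0.25

def predict_proba(x):
    """Probability that a claim is flagged as fraudulent."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_attack(x, epsilon):
    """Fast-gradient-sign-style perturbation: nudge each feature by
    epsilon in the direction that lowers the fraud score.
    For logistic regression, d(score)/dx = p * (1 - p) * w."""
    p = predict_proba(x)
    grad = p * (1.0 - p) * w
    return x - epsilon * np.sign(grad)

x = np.array([1.2, 0.3, 0.8])        # original, fraudulent-looking claim features
x_adv = fgsm_attack(x, epsilon=0.6)  # adversarially modified claim

print(predict_proba(x))      # high fraud score on the clean input
print(predict_proba(x_adv))  # noticeably lower score on the perturbed input
```

The defence side discussed in the paper would aim to detect or resist exactly this kind of small, targeted modification, e.g. by bounding feature changes or retraining on perturbed examples.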
Related papers
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - The AI Security Pyramid of Pain [0.18820558426635298]
We introduce the AI Security Pyramid of Pain, a framework that adapts the cybersecurity Pyramid of Pain to categorize and prioritize AI-specific threats.
This framework provides a structured approach to understanding and addressing various levels of AI threats.
arXiv Detail & Related papers (2024-02-16T21:14:11Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - AI Liability Insurance With an Example in AI-Powered E-diagnosis System [22.102728605081534]
We use an AI-powered E-diagnosis system as an example to study AI liability insurance.
We show that AI liability insurance can act as a regulatory mechanism to incentivize compliant behaviors and serve as a certificate of high-quality AI systems.
arXiv Detail & Related papers (2023-06-01T21:03:47Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z) - Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML Systems [2.5137859989323528]
Software systems are increasingly relying on Artificial Intelligence (AI) and Machine Learning (ML) components.
This paper presents a framework to characterize attacks and weaknesses associated with AI-enabled systems.
arXiv Detail & Related papers (2022-02-18T22:54:04Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models have been shown to be vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z) - Vulnerabilities of Connectionist AI Applications: Evaluation and Defence [0.0]
This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity.
A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature.
The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains.
arXiv Detail & Related papers (2020-03-18T12:33:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.