Artificial intelligence and cybersecurity in banking sector: opportunities and risks
- URL: http://arxiv.org/abs/2412.04495v1
- Date: Thu, 28 Nov 2024 22:09:55 GMT
- Title: Artificial intelligence and cybersecurity in banking sector: opportunities and risks
- Authors: Ana Kovacevic, Sonja D. Radenkovic, Dragana Nikolic
- Abstract summary: Machine learning (ML) enables systems to adapt and learn from vast datasets.
This study highlights the dual-use nature of AI tools, which can be exploited by malicious actors as readily as by defenders.
The paper emphasizes the importance of developing machine learning models with key characteristics such as security, trust, resilience and robustness.
- Abstract: The rapid advancements in artificial intelligence (AI) have presented new opportunities for enhancing efficiency and economic competitiveness across various industries, especially in banking. Machine learning (ML), as a subset of artificial intelligence, enables systems to adapt and learn from vast datasets, revolutionizing decision-making processes, fraud detection, and customer service automation. However, these innovations also introduce new challenges, particularly in the realm of cybersecurity. Adversarial attacks, such as data poisoning and evasion attacks, represent critical threats to machine learning models, exploiting vulnerabilities to manipulate outcomes or compromise sensitive information. Furthermore, this study highlights the dual-use nature of AI tools, which can be exploited by malicious actors as readily as by defenders. To address these challenges, the paper emphasizes the importance of developing machine learning models with key characteristics such as security, trust, resilience, and robustness. These features are essential to mitigating risks and ensuring the secure deployment of AI technologies in the banking sector, where the protection of financial data is paramount. The findings underscore the urgent need for enhanced cybersecurity frameworks and continuous improvements in defensive mechanisms. By exploring both opportunities and risks, this paper aims to guide the responsible integration of AI in the banking sector, paving the way for innovation while safeguarding against emerging threats.
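To make these attack surfaces concrete, the sketch below pairs the two adversarial threats the abstract names, data poisoning and evasion, against a toy fraud-detection classifier. The feature set, model choice, and perturbation budgets are illustrative assumptions rather than details from the paper.

```python
# Illustrative sketch only: a toy fraud-detection classifier attacked by
# (1) label-flip data poisoning and (2) a one-step gradient-sign evasion.
# Features, model, and budgets are assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic transactions: columns = [amount z-score, tx/hour, geo distance]
X_benign = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(500, 3))
X_fraud = rng.normal(loc=[2.0, 2.0, 2.0], scale=1.0, size=(500, 3))
X = np.vstack([X_benign, X_fraud])
y = np.concatenate([np.zeros(500), np.ones(500)])

clean = LogisticRegression().fit(X, y)

# --- Data poisoning: an attacker flips the labels of 10% of training rows,
# which can degrade the decision boundary before the model is deployed.
y_poisoned = y.copy()
flip = rng.choice(len(y), size=len(y) // 10, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression().fit(X, y_poisoned)
print("clean model accuracy on true labels:   ", clean.score(X, y))
print("poisoned model accuracy on true labels:", poisoned.score(X, y))

# --- Evasion: at inference time, nudge a fraudulent transaction against
# the model's input gradient. For logistic regression that gradient is
# just the weight vector, so this is a one-step FGSM-style perturbation.
x = X_fraud[0]
eps = 1.5  # attacker's per-feature perturbation budget (assumed)
x_evaded = x - eps * np.sign(clean.coef_[0])
print("fraud probability before evasion:", clean.predict_proba([x])[0, 1])
print("fraud probability after evasion: ", clean.predict_proba([x_evaded])[0, 1])
```

Adversarial training, input validation, and sanitization of training data are the standard countermeasures to these two entry points, in line with the security and robustness characteristics the abstract emphasizes.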
Related papers
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks (a minimal retrain-from-scratch sketch appears after this list).
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Considerations Influencing Offense-Defense Dynamics From Artificial Intelligence [0.0]
AI can enhance defensive capabilities but also presents avenues for malicious exploitation and large-scale societal harm.
This paper proposes a taxonomy to map and examine the key factors that influence whether AI systems predominantly pose threats or offer protective benefits to society.
arXiv Detail & Related papers (2024-12-05T10:05:53Z)
- Inherently Interpretable and Uncertainty-Aware Models for Online Learning in Cyber-Security Problems [0.22499166814992438]
We propose a novel pipeline for online supervised learning problems in cyber-security.
Our approach aims to balance predictive performance with transparency.
This work contributes to the growing field of interpretable AI.
arXiv Detail & Related papers (2024-11-14T12:11:08Z)
- Security of and by Generative AI platforms [0.0]
This whitepaper highlights the dual importance of securing generative AI (genAI) platforms and leveraging genAI for cybersecurity.
As genAI technologies proliferate, their misuse poses significant risks, including data breaches, model tampering, and malicious content generation.
The whitepaper explores strategies for robust security frameworks around genAI systems, while also showcasing how genAI can empower organizations to anticipate, detect, and mitigate sophisticated cyber threats.
arXiv Detail & Related papers (2024-10-15T15:27:05Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Generative AI in Cybersecurity [0.0]
Generative Artificial Intelligence (GAI) has been pivotal in reshaping the field of data analysis, pattern recognition, and decision-making processes.
As GAI rapidly progresses, it is outpacing current cybersecurity protocols and regulatory frameworks.
The study highlights the critical need for organizations to proactively identify and develop more complex defensive strategies to counter the sophisticated employment of GAI in malware creation.
arXiv Detail & Related papers (2024-05-02T19:03:11Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Liability regimes in the age of AI: a use-case driven analysis of the burden of proof [1.7510020208193926]
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better.
But there are growing concerns that certain intrinsic characteristics of these methodologies carry potential risks to both safety and fundamental rights.
This paper presents three case studies, as well as the methodology used to develop them, that illustrate these difficulties.
arXiv Detail & Related papers (2022-11-03T13:55:36Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
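As noted in the machine-unlearning entry above, the following is a minimal sketch of the exact-unlearning baseline: drop the records to be forgotten and retrain from scratch. The model class and synthetic data are illustrative assumptions; the open problems that paper identifies largely concern approximate methods that try to avoid this full retrain.

```python
# Illustrative sketch: the exact-unlearning baseline (retrain without the
# forget-set). Model class and data are assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A deletion request arrives for these training records.
forget_idx = np.arange(50)
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

# Exact unlearning: retrain from scratch on the retained data only. This
# provably removes the forgotten records' influence but is expensive;
# approximate methods trade that guarantee for speed, which is one source
# of the open problems discussed above.
unlearned = LogisticRegression().fit(X[keep], y[keep])
print("retained-data accuracy:", unlearned.score(X[keep], y[keep]))
```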
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.