Security and Safety Aspects of AI in Industry Applications
- URL: http://arxiv.org/abs/2207.10809v1
- Date: Sat, 16 Jul 2022 16:41:00 GMT
- Title: Security and Safety Aspects of AI in Industry Applications
- Authors: Hans Dermot Doran
- Abstract summary: We summarise issues in the domains of safety and security in machine learning that will affect industry sectors in the next five to ten years.
Reports of underlying problems in both safety- and security-related domains, for instance adversarial attacks, have unsettled early adopters.
The problem for real-world applicability lies in being able to assess the risk of applying these technologies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this relatively informal discussion-paper we summarise issues in the domains of safety and security in machine learning that will affect industry sectors in the next five to ten years. Various products using neural network classification, most often in vision-related applications but also in predictive maintenance, have been researched and applied in real-world settings in recent years. Nevertheless, reports of underlying problems in both safety- and security-related domains, for instance adversarial attacks, have unsettled early adopters and threaten to hinder wider-scale adoption of this technology. The problem for real-world applicability lies in being able to assess the risk of applying these technologies. In this discussion-paper we describe the process of arriving at a machine-learnt neural network classifier, pointing out safety and security vulnerabilities in that workflow and citing relevant research where appropriate.
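The adversarial attacks the abstract flags can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known attack, applied to a toy logistic classifier. The weights, input, and epsilon below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy classifier: p(y=1|x) = sigmoid(w.x + b).
# The weights are made up for illustration.
w = np.array([2.0, -1.0])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

def fgsm_perturb(x, grad_loss_x, epsilon):
    """FGSM: step epsilon in the sign of the loss gradient w.r.t. the
    input, which pushes the classifier's loss upward."""
    return x + epsilon * np.sign(grad_loss_x)

x = np.array([1.0, 0.5])   # clean input, true label y = 1
p_clean = predict(x)       # confident positive prediction (~0.83)

# Gradient of the negative log-likelihood -log p w.r.t. x for y = 1:
# dL/dx = -(1 - p) * w
grad = -(1.0 - p_clean) * w

x_adv = fgsm_perturb(x, grad, epsilon=0.5)
p_adv = predict(x_adv)     # confidence drops under a small perturbation
```

Even this two-parameter model loses confidence under a small, directed perturbation; the same mechanism, at scale, is what makes risk assessment of deployed neural network classifiers difficult.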
Related papers
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; using it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Grand Challenges for Embedded Security Research in a Connected World [6.1916614285252]
The Computing Community Consortium (CCC) held a one-day visioning workshop to explore these issues.
The report synthesizes the results of that workshop and develops a list of strategic goals for research and education over the next 5-10 years.
arXiv Detail & Related papers (2020-05-13T21:01:57Z)
- A Review of Computer Vision Methods in Network Security [11.380790116533912]
Network security has become more important than ever.
Traditional machine learning methods have been frequently used in the context of network security.
Recent years have witnessed phenomenal growth in computer vision, driven mainly by advances in convolutional neural networks.
arXiv Detail & Related papers (2020-05-07T08:29:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.