Explainable Artificial Intelligence for Smart City Application: A Secure
and Trusted Platform
- URL: http://arxiv.org/abs/2111.00601v1
- Date: Sun, 31 Oct 2021 21:47:12 GMT
- Title: Explainable Artificial Intelligence for Smart City Application: A Secure
and Trusted Platform
- Authors: M. Humayn Kabir, Khondokar Fida Hasan, Mohammad Kamrul Hasan, Keyvan
Ansari
- Abstract summary: It is acknowledged that the loss of control over interpretability of decision-making becomes a critical issue for many data-driven automated applications.
This chapter conducts a comprehensive study of machine learning applications in cybersecurity to indicate the need for explainability to address this question.
Considering the new technological paradigm, Explainable Artificial Intelligence (XAI), this chapter discusses the transition from black-box to white-box.
- Score: 6.1938383008964495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) is one of the disruptive technologies that is
shaping the future. It has growing applications for data-driven decisions in
major smart city solutions, including transportation, education, healthcare,
public governance, and power systems. At the same time, it is gaining
popularity in protecting critical cyber infrastructure from cyber threats,
attacks, damage, or unauthorized access. However, a significant issue with
these traditional AI technologies (e.g., deep learning) is that their rapid
growth in complexity and sophistication has turned them into uninterpretable
black boxes. In many cases it is very challenging to understand the decisions
and biases behind a system's unexpected or seemingly unpredictable outputs,
and therefore to control and trust them. It is widely acknowledged that this
loss of interpretability in decision-making is a critical issue for many
data-driven automated applications. But how does it affect a system's security
and trustworthiness? To address this question, this chapter conducts a
comprehensive study of machine learning applications in cybersecurity,
showing the need for explainability. In doing so, the chapter first discusses the
black-box problems of AI technologies for Cybersecurity applications in smart
city-based solutions. Later, considering the new technological paradigm,
Explainable Artificial Intelligence (XAI), this chapter discusses the
transition from black-box to white-box. It also discusses the requirements of
this transition concerning the interpretability, transparency,
understandability, and explainability of AI-based technologies as applied to
different autonomous systems in smart cities. Finally, it presents some
commercial XAI platforms that offer explainability over traditional AI
technologies, before outlining future challenges and opportunities.
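The black-box-to-white-box transition described above is commonly operationalized with post-hoc, model-agnostic explanation techniques that probe an opaque model from the outside. As a minimal sketch (not taken from the chapter; the scoring function and feature names are hypothetical), the following perturbation-based attribution estimates how much each input feature of an opaque intrusion-detection scorer contributes to a decision:

```python
# Minimal model-agnostic explanation sketch: occlusion (perturbation)
# attribution over an opaque scoring function. The "model" below is a
# hypothetical stand-in for any black-box intrusion-detection classifier.

def black_box_score(x):
    """Hypothetical opaque anomaly scorer: higher means more suspicious."""
    # The explainer never inspects these internals; it only calls the function.
    return (0.6 * x["failed_logins"]
            + 0.3 * x["bytes_out"]
            + 0.1 * x["port_entropy"]) ** 2

def occlusion_attribution(score_fn, instance, baseline):
    """For each feature, measure how much the score drops when that feature
    is replaced by its baseline ("normal traffic") value."""
    full = score_fn(instance)
    attributions = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] = baseline[name]
        attributions[name] = full - score_fn(perturbed)
    return attributions

# A suspicious flow and a "normal traffic" baseline (hypothetical values).
instance = {"failed_logins": 9.0, "bytes_out": 2.0, "port_entropy": 1.0}
baseline = {"failed_logins": 0.0, "bytes_out": 0.0, "port_entropy": 0.0}

attr = occlusion_attribution(black_box_score, instance, baseline)
top_feature = max(attr, key=attr.get)  # the feature driving the alert
```

Production XAI platforms use more principled variants of the same idea (e.g., Shapley-value or surrogate-model explanations), but the interface is identical: the explainer treats the model purely as a callable and attributes its output to input features.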
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- A Survey on Explainable Artificial Intelligence for Cybersecurity [14.648580959079787]
Explainable Artificial Intelligence (XAI) aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions.
In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats.
arXiv Detail & Related papers (2023-03-07T22:54:18Z)
- Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review [0.799536002595393]
XAI aims to make the operation of AI algorithms more interpretable for its users and developers.
This work seeks to investigate the current research scenario on XAI applied to cybersecurity.
arXiv Detail & Related papers (2023-02-27T17:47:56Z)
- It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems [0.0]
We are witnessing the emergence of an AI economy and society where AI technologies are increasingly impacting health care, business, transportation and many aspects of everyday life.
However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency resulting in reduction in trust and challenges in their adoption.
These recent shortcomings and concerns have been documented in the scientific as well as the general press: accidents with self-driving cars; biases in healthcare, hiring, and face-recognition systems for people of color; seemingly correct medical decisions later found to have been made for the wrong reasons; and so on.
arXiv Detail & Related papers (2022-12-16T23:33:10Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges [5.728709119947406]
Key challenges include security, robustness, interpretability, and ethical challenges to a successful deployment of AI or ML in human-centric applications.
Globally there are calls for technology to be made more humane and human-compatible.
arXiv Detail & Related papers (2020-12-14T18:54:05Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
- Explainable Goal-Driven Agents and Robots -- A Comprehensive Review [13.94373363822037]
The paper reviews approaches on explainable goal-driven intelligent agents and robots.
It focuses on techniques for explaining and communicating agents' perceptual functions and cognitive reasoning.
It suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.
arXiv Detail & Related papers (2020-04-21T01:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.