Personalized Model-Based Design of Human Centric AI enabled CPS for Long term usage
- URL: http://arxiv.org/abs/2601.04545v1
- Date: Thu, 08 Jan 2026 03:17:59 GMT
- Title: Personalized Model-Based Design of Human Centric AI enabled CPS for Long term usage
- Authors: Bernard Ngabonziza, Ayan Banerjee, Sandeep K. S. Gupta
- Abstract summary: Human-centric critical systems increasingly involve artificial intelligence to enable knowledge extraction from sensor-collected data. Examples include medical monitoring and control systems, gesture-based human-computer interaction systems, and autonomous cars. Long-term operation of such AI-enabled human-centric applications can expose them to corner cases for which their operation may be uncertain.
- Score: 0.9914910610631541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-centric critical systems increasingly involve artificial intelligence to enable knowledge extraction from sensor-collected data. Examples include medical monitoring and control systems, gesture-based human-computer interaction systems, and autonomous cars. Such systems are intended to operate long term, potentially for a lifetime, in many scenarios such as closed-loop blood glucose control for Type 1 diabetics, self-driving cars, and monitoring systems for stroke diagnosis and rehabilitation. Long-term operation of such AI-enabled human-centric applications can expose them to corner cases for which their operation may be uncertain. This can be due to many reasons, such as inherent flaws in the design, limited resources for testing, inherent computational limitations of the testing methodology, or unknown use cases resulting from human interaction with the system. Such untested corner cases, or cases for which the system performance is uncertain, can lead to violations of the safety, sustainability, and security requirements of the system. In this paper, we analyze the existing techniques for safety, sustainability, and security analysis of an AI-enabled human-centric control system and discuss their limitations for testing the system for long-term use in practice. We then propose personalized model-based solutions for potentially eliminating such limitations.
Related papers
- Detection of Deployment Operational Deviations for Safety and Security of AI-Enabled Human-Centric Cyber Physical Systems [0.9914910610631541]
Human-centric cyber-physical systems have increasingly involved artificial intelligence to enable knowledge extraction from sensor-collected data. Examples include medical monitoring and control systems, as well as autonomous cars. This paper will discuss operational deviations that can lead these systems to operate in unknown conditions.
arXiv Detail & Related papers (2026-01-08T05:23:58Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Brain-inspired Computational Intelligence via Predictive Coding [73.42407863671565]
Predictive coding (PC) has shown promising properties that make it potentially valuable for the machine learning community. PC-like algorithms are starting to be present in multiple sub-fields of machine learning and AI at large.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Human Uncertainty in Concept-Based AI Systems [37.82747673914624]
We study human uncertainty in the context of concept-based AI systems.
We show that training with uncertain concept labels may help mitigate weaknesses in concept-based systems.
arXiv Detail & Related papers (2023-03-22T19:17:57Z) - Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updates of models.
arXiv Detail & Related papers (2023-03-02T17:27:45Z) - Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z) - Robustness testing of AI systems: A case study for traffic sign recognition [13.395753930904108]
This paper presents how the robustness of AI systems can be practically examined and which methods and metrics can be used to do so.
The robustness testing methodology is described and analysed for the example use case of traffic sign recognition in autonomous driving.
arXiv Detail & Related papers (2021-08-13T10:29:09Z) - Towards self-organized control: Using neural cellular automata to robustly control a cart-pole agent [62.997667081978825]
We use neural cellular automata to control a cart-pole agent.
We trained the model using deep-Q learning, where the states of the output cells were used as the Q-value estimates to be optimized.
arXiv Detail & Related papers (2021-06-29T10:49:42Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Monitoring and Diagnosability of Perception Systems [21.25149064251918]
We propose a mathematical model for runtime monitoring and fault detection and identification in perception systems.
We demonstrate our monitoring system, dubbed PerSyS, in realistic simulations using the LGSVL self-driving simulator and the Apollo Auto autonomy software stack.
arXiv Detail & Related papers (2020-11-11T23:03:14Z) - Monitoring and Diagnosability of Perception Systems [21.25149064251918]
Perception is a critical component of high-integrity applications of robotics and autonomous systems, such as self-driving cars.
Despite the paramount importance of perception systems, there is no formal approach for system-level monitoring.
We propose a mathematical model for runtime monitoring and fault detection of perception systems.
arXiv Detail & Related papers (2020-05-24T18:09:46Z) - Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.