Adopting the Actor Model for Antifragile Serverless Architectures
- URL: http://arxiv.org/abs/2306.14738v1
- Date: Mon, 26 Jun 2023 14:49:10 GMT
- Title: Adopting the Actor Model for Antifragile Serverless Architectures
- Authors: Marcel Mraz, Hind Bangui, Bruno Rossi, Barbora Buhnova
- Abstract summary: Antifragility is a concept focusing on letting software systems learn and improve over time based on sustained adverse events such as failures.
We propose a new idea for supporting the adoption of supervision strategies in serverless systems to improve the antifragility properties of such systems.
- Score: 2.602613712854636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Antifragility is a novel concept focusing on letting software systems learn
and improve over time based on sustained adverse events such as failures. The
actor model has been proposed to deal with concurrent computation and has
recently been adopted in several serverless platforms. In this paper, we
propose a new idea for supporting the adoption of supervision strategies in
serverless systems to improve the antifragility properties of such systems. We
define a predictive strategy based on the concept of stressors (e.g., injecting
failures), in which actors or a hierarchy of actors can be impacted and
analyzed for systems' improvement. The proposed solution can improve the
system's resiliency in exchange for higher complexity but goes in the direction
of building antifragile systems.
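
The supervision-with-stressors idea from the abstract can be sketched minimally in Python. This is a hypothetical illustration, not the paper's implementation: the names `Actor`, `Supervisor`, and `inject_stressor` are invented here, and the "restart" is reduced to a counter. The point is that a supervisor both recovers failing actors and records the injected failures for later analysis, which is the feedback loop antifragility requires.

```python
import random

class Actor:
    """A worker actor that may fail while processing a message."""
    def __init__(self, name, failure_rate=0.0):
        self.name = name
        self.failure_rate = failure_rate  # can be raised by a stressor
        self.restarts = 0

class Supervisor:
    """One-for-one supervision: restart only the failing child, log the event."""
    def __init__(self, children):
        self.children = {c.name: c for c in children}
        self.failure_log = []  # analyzed offline to improve the strategy

    def dispatch(self, name, message):
        child = self.children[name]
        if random.random() < child.failure_rate:
            # Failure observed: log it and "restart" the child.
            self.failure_log.append((name, message))
            child.restarts += 1
            return None
        return f"{name} handled {message!r}"

    def inject_stressor(self, name, extra_failure_rate):
        """Predictively stress one actor to probe the hierarchy's weak points."""
        self.children[name].failure_rate += extra_failure_rate

random.seed(0)
sup = Supervisor([Actor("worker-1"), Actor("worker-2")])
sup.inject_stressor("worker-1", 1.0)   # force worker-1 to always fail
sup.dispatch("worker-1", "job-a")
sup.dispatch("worker-2", "job-b")
print(len(sup.failure_log), sup.children["worker-1"].restarts)  # → 1 1
```

In a real actor runtime the restart would re-spawn the actor (possibly escalating up the hierarchy), and the failure log would feed the predictive strategy that decides where to inject the next stressor.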
Related papers
- Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems. An adversary who intercepts the intermediate features transmitted between them can still pose a serious threat. We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z)
- Hierarchical Adversarially-Resilient Multi-Agent Reinforcement Learning for Cyber-Physical Systems Security [0.0]
This paper introduces a novel Hierarchical Adversarially-Resilient Multi-Agent Reinforcement Learning framework. The framework incorporates an adversarial training loop designed to simulate and anticipate evolving cyber threats.
arXiv Detail & Related papers (2025-06-12T01:38:25Z)
- ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding [71.654781631463]
ReAgent-V is a novel agentic video understanding framework. It integrates efficient frame selection with real-time reward generation during inference. Extensive experiments on 12 datasets demonstrate significant gains in generalization and reasoning.
arXiv Detail & Related papers (2025-06-02T04:23:21Z)
- Byzantine-Resilient Over-the-Air Federated Learning under Zero-Trust Architecture [68.83934802584899]
We propose a novel Byzantine-robust FL paradigm for over-the-air transmissions, referred to as federated learning with secure adaptive clustering (FedSAC).
FedSAC aims to protect a portion of the devices from attacks through zero trust architecture (ZTA) based Byzantine identification and adaptive device clustering.
Numerical results substantiate the superiority of the proposed FedSAC over existing methods in terms of both test accuracy and convergence rate.
arXiv Detail & Related papers (2025-03-24T01:56:30Z)
- Optimal Security Response to Network Intrusions in IT Systems [0.0]
This thesis tackles these challenges by developing a practical methodology for optimal security response in IT infrastructures.
First, it includes an emulation system that replicates key components of the target infrastructure.
Second, it includes a simulation system where game-theoretic response strategies are optimized through an approximation model.
arXiv Detail & Related papers (2025-02-04T18:10:10Z)
- Joint Optimization of Prompt Security and System Performance in Edge-Cloud LLM Systems [15.058369477125893]
Large language models (LLMs) have significantly facilitated human life, and prompt engineering has improved the efficiency of these models.
Recent years have witnessed a rise in prompt engineering-empowered attacks, leading to issues such as privacy leaks, increased latency, and system resource wastage.
We jointly consider prompt security, service latency, and system resource optimization in Edge-Cloud LLM (EC-LLM) systems under various prompt attacks.
arXiv Detail & Related papers (2025-01-30T14:33:49Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this challenge by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z)
- Preparing for Black Swans: The Antifragility Imperative for Machine Learning [3.8452493072019496]
Operating safely and reliably despite continual distribution shifts is vital for high-stakes machine learning applications.
This paper builds upon the transformative concept of "antifragility" introduced by Taleb (2014) as a constructive design paradigm.
arXiv Detail & Related papers (2024-05-18T21:32:29Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state-of-the-art in resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Collision Avoidance Verification of Multiagent Systems with Learned Policies [9.550601011551024]
This paper presents a backward reachability-based approach for verifying the collision avoidance properties of Multi-Agent Neural Feedback Loops (MA-NFLs).
We account for many uncertainties, making it well aligned with real-world scenarios.
We demonstrate the proposed algorithm can verify collision-free properties of a MA-NFL with agents trained to imitate a collision avoidance algorithm.
arXiv Detail & Related papers (2024-03-05T20:36:26Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study unexpected crashes in multi-agent systems.
arXiv Detail & Related papers (2022-03-16T08:22:45Z)
- An Intrusion Response System utilizing Deep Q-Networks and System Partitions [0.415623340386296]
We introduce and develop an IRS software prototype, named irs-partition.
It exploits transfer learning to follow the evolution of non-stationary systems.
arXiv Detail & Related papers (2022-02-16T16:38:20Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- System Component-Level Self-Adaptations for Security via Bayesian Games [0.676855875213031]
Security attacks present unique challenges to self-adaptive system design.
We propose a new self-adaptive framework that incorporates Bayesian games and models the defender (i.e., the system) at the granularity of components in the system architecture.
arXiv Detail & Related papers (2021-03-12T16:20:59Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approximates the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.