TEFL: Turbo Explainable Federated Learning for 6G Trustworthy Zero-Touch
Network Slicing
- URL: http://arxiv.org/abs/2210.10147v2
- Date: Tue, 25 Jul 2023 12:57:15 GMT
- Title: TEFL: Turbo Explainable Federated Learning for 6G Trustworthy Zero-Touch
Network Slicing
- Authors: Swastika Roy, Hatim Chergui, and Christos Verikoukis
- Abstract summary: Sixth-generation (6G) networks are expected to support a massive number of coexisting and heterogeneous slices.
Explainable AI (XAI) tools can establish the trustworthiness of AI black-boxes in real deployments.
Inspired by the turbo principle, this paper presents a novel iterative explainable federated learning (FL) approach.
- Score: 0.4588028371034407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sixth-generation (6G) networks are expected to intelligently support a massive
number of coexisting and heterogeneous slices associated with various vertical
use cases. Such a context urges the adoption of artificial intelligence
(AI)-driven zero-touch management and orchestration (MANO) of the end-to-end
(E2E) slices under stringent service level agreements (SLAs). Specifically, the
trustworthiness of the AI black-boxes in real deployments can be established by
explainable AI (XAI) tools, which build transparency between the interacting actors
in the slicing ecosystem, such as tenants, infrastructure providers and
operators. Inspired by the turbo principle, this paper presents a novel
iterative explainable federated learning (FL) approach where a constrained
resource allocation model and an \emph{explainer} exchange -- in a closed loop
(CL) fashion -- soft attributions of the features as well as inference
predictions to achieve a transparent and SLA-aware zero-touch service
management (ZSM) of 6G network slices in a RAN-Edge setup under non-independent
identically distributed (non-IID) datasets. In particular, we quantitatively
validate the faithfulness of the explanations via the so-called
attribution-based \emph{confidence metric} that is included as a constraint in
the run-time FL optimization task. In this respect, Integrated-Gradient (IG) as
well as Input $\times$ Gradient and SHAP are used to generate the attributions
for the turbo explainable FL (TEFL), whereby simulation results under
different methods confirm its superiority over an unconstrained
Integrated-Gradient \emph{post-hoc} FL baseline.
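To make the closed loop concrete, here is a minimal sketch, not the authors' implementation: a toy slice-level allocator is trained while an Input $\times$ Gradient explainer feeds back attributions, and an attribution-based confidence score enters the loss as a soft constraint. The network sizes, the top-2 confidence definition, the penalty weight `lam`, and the bound `target_conf` are all illustrative assumptions.

```python
# Hedged sketch of one TEFL-style closed loop (assumed shapes and metric form).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(32, 8)   # synthetic per-slice features (stand-in data)
y = torch.rand(32, 1)    # synthetic resource-allocation targets

def input_x_gradient(model, x):
    """Explainer side: Input x Gradient attributions, kept differentiable
    so the confidence term below can steer training."""
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
    return x * grad

def confidence(attr):
    """Illustrative attribution-based confidence (assumed form): share of
    attribution mass carried by the top-2 features, averaged over the batch."""
    mass = attr.abs()
    top2 = mass.topk(2, dim=1).values.sum(dim=1)
    return (top2 / (mass.sum(dim=1) + 1e-9)).mean()

lam, target_conf = 1.0, 0.6   # assumed penalty weight and confidence bound
for rnd in range(5):          # turbo-style closed-loop iterations
    attr = input_x_gradient(model, x)   # explainer -> learner feedback
    conf = confidence(attr)
    # Soft constraint: penalize rounds whose explanations fall below the bound.
    loss = nn.functional.mse_loss(model(x), y) + lam * torch.relu(target_conf - conf)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"round {rnd}: loss={loss.item():.4f}, confidence={conf.item():.3f}")
```

Keeping the attributions differentiable (via `create_graph=True`) is what lets the confidence constraint steer the run-time FL optimization rather than merely audit it after the fact.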
Related papers
- Federated Contrastive Learning for Personalized Semantic Communication [55.46383524190467]
We design a federated contrastive learning framework aimed at supporting personalized semantic communication.
FedCL enables collaborative training of local semantic encoders across multiple clients and a global semantic decoder owned by the base station.
To tackle the semantic imbalance issue arising from heterogeneous datasets across distributed clients, we employ contrastive learning to train a semantic centroid generator.
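A hedged sketch of how a centroid-based contrastive objective of this kind could look (assumed form and shapes; `centroids` stands in for the output of the semantic centroid generator):

```python
# Toy centroid-based contrastive loss (illustrative assumption, not FedCL's code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
feats = torch.randn(8, 16)           # client-side semantic features (toy)
labels = torch.randint(0, 3, (8,))   # class of each sample
centroids = torch.randn(3, 16)       # stand-in for the centroid generator output

# Pull each feature toward its class centroid, push from the others
# (temperature 0.1 is an assumed value).
logits = F.normalize(feats, dim=1) @ F.normalize(centroids, dim=1).T / 0.1
loss = F.cross_entropy(logits, labels)
```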
arXiv Detail & Related papers (2024-06-13T14:45:35Z)
- Non-Federated Multi-Task Split Learning for Heterogeneous Sources [17.47679789733922]
We introduce multi-task split learning (MTSL), a new architecture and methodology for efficient multi-task learning over heterogeneous data sources.
We show through theoretical analysis that MTSL can achieve fast convergence by tuning the learning rates of the server and clients.
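As a rough illustration of the split setup with separately tuned learning rates (all sizes and rates here are assumptions, not the paper's values):

```python
# Minimal split-learning sketch: client holds the lower layers, the server the
# upper ones, and each side uses its own tuned learning rate.
import torch
import torch.nn as nn

client = nn.Linear(10, 6)   # client-side (lower) layers
server = nn.Linear(6, 1)    # server-side (upper) layers
opt_c = torch.optim.SGD(client.parameters(), lr=0.05)   # client rate (assumed)
opt_s = torch.optim.SGD(server.parameters(), lr=0.5)    # server rate (assumed)

x, y = torch.randn(16, 10), torch.randn(16, 1)
for _ in range(100):
    h = client(x)                  # client forward pass up to the cut layer
    out = server(h)                # server completes the forward pass
    loss = nn.functional.mse_loss(out, y)
    opt_c.zero_grad()
    opt_s.zero_grad()
    loss.backward()                # gradients flow back through the cut layer
    opt_c.step()
    opt_s.step()
```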
arXiv Detail & Related papers (2024-05-31T19:27:03Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
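A toy rendering of the layer-wise idea (assumed mechanics): because backpropagation produces gradients from the output layer backward, a straggler that stops early still contributes its deepest layers, and the server averages each layer over whichever clients reached it.

```python
# Straggler-aware layer-wise aggregation sketch (illustrative, not SALF's code).
import numpy as np

n_layers, n_clients = 4, 5
rng = np.random.default_rng(0)
# Per-client gradients for each layer (toy scalars standing in for tensors).
grads = rng.normal(size=(n_clients, n_layers))
# depth_reached[c] = layers finished by client c, counted from the output layer.
depth_reached = np.array([4, 4, 2, 1, 3])

global_update = np.zeros(n_layers)
for l in range(n_layers):          # l = 0 is the output layer
    contributors = [c for c in range(n_clients) if depth_reached[c] > l]
    global_update[l] = grads[contributors, l].mean()
print(global_update)
```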
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- FedLPS: Heterogeneous Federated Learning for Multiple Tasks with Local Parameter Sharing [14.938531944702193]
We propose Federated Learning with Local Heterogeneous Sharing (FedLPS).
FedLPS uses transfer learning to facilitate the deployment of multiple tasks on a single device by dividing the local model into a shareable encoder and task-specific encoders.
FedLPS significantly outperforms the state-of-the-art (SOTA) FL frameworks by up to 4.88% and reduces the computational resource consumption by 21.3%.
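A minimal sketch of the split described above, with the task-specific parts modeled as output heads (an assumption; shapes are illustrative):

```python
# One shareable encoder serving several task-specific parts on a single device;
# only the encoder would be shared and aggregated across devices.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())   # shareable across tasks
heads = {"task_a": nn.Linear(16, 4), "task_b": nn.Linear(16, 2)}  # task-specific

def predict(task, x):
    # Shared features from the one encoder, per-task output layer on top.
    return heads[task](encoder(x))

out = predict("task_a", torch.randn(5, 32))   # -> shape (5, 4)
```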
arXiv Detail & Related papers (2024-02-13T16:30:30Z)
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
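A small sketch of combining shared and group-specific prompts (assumed shapes; not the paper's implementation):

```python
# Shared prompts learned by all clients plus one prompt set per client group.
import torch

d_model, n_shared, n_group = 64, 4, 2
shared = torch.nn.Parameter(torch.randn(n_shared, d_model))
group_prompts = {g: torch.nn.Parameter(torch.randn(n_group, d_model))
                 for g in range(3)}

def prompt_tokens(group):
    # Tokens prepended to a ViT input sequence for a client in this group.
    return torch.cat([shared, group_prompts[group]], dim=0)

print(prompt_tokens(1).shape)   # torch.Size([6, 64])
```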
arXiv Detail & Related papers (2023-10-27T17:22:09Z)
- Towards Bridging the FL Performance-Explainability Trade-Off: A Trustworthy 6G RAN Slicing Use-Case [0.5156484100374059]
6G network slicing requires highly performing AI models for efficient resource allocation and explainable decision-making.
This paper presents a novel explanation-guided in-hoc federated learning (FL) approach.
We quantitatively validate the faithfulness of the explanations via the so-called attribution-based confidence metric.
arXiv Detail & Related papers (2023-07-24T15:51:06Z)
- Explanation-Guided Fair Federated Learning for Transparent 6G RAN Slicing [0.5156484100374059]
We design an explanation-guided federated learning (EGFL) scheme to ensure trustworthy predictions.
Specifically, we predict per-slice RAN dropped traffic probability to exemplify the proposed concept.
It also improves the recall score by more than $25\%$ relative to unconstrained EGFL.
arXiv Detail & Related papers (2023-07-18T15:50:47Z)
- Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew prevents current federated learning (FL) frameworks from maintaining consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
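A toy two-branch layout in the spirit of the summary (assumed architecture; in a full system only the invariant branch would be aggregated by the server):

```python
# Two complementary branches: invariant (aggregated) and specific (kept local).
import torch
import torch.nn as nn

class DisentangledClient(nn.Module):
    def __init__(self):
        super().__init__()
        self.invariant = nn.Linear(20, 8)   # cross-client invariant attributes
        self.specific = nn.Linear(20, 8)    # domain-specific attributes
        self.head = nn.Linear(16, 3)

    def forward(self, x):
        return self.head(torch.cat([self.invariant(x), self.specific(x)], dim=1))

m = DisentangledClient()
print(m(torch.randn(4, 20)).shape)   # torch.Size([4, 3])
```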
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
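The radio-level SC/ST machinery is beyond a short sketch, but the nested-width structure that makes two configurations superposable can be shown (assumed mechanics): the 0.5x model simply reuses the first half of the full layer's units.

```python
# Slimmable-width sketch: the 0.5x configuration is a prefix of the full layer.
import torch
import torch.nn as nn

full = nn.Linear(16, 8)   # full-width layer; 0.5x reuses its first half

def forward_at_width(x, width):
    k = int(full.out_features * width)   # number of active output units
    return nn.functional.linear(x, full.weight[:k], full.bias[:k])

x = torch.randn(4, 16)
y_half = forward_at_width(x, 0.5)   # 0.5x configuration -> shape (4, 4)
y_full = forward_at_width(x, 1.0)   # 1.0x configuration -> shape (4, 8)
```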
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Applying FL to SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.