Edge Continual Learning for Dynamic Digital Twins over Wireless Networks
- URL: http://arxiv.org/abs/2204.04795v1
- Date: Sun, 10 Apr 2022 23:25:37 GMT
- Title: Edge Continual Learning for Dynamic Digital Twins over Wireless Networks
- Authors: Omar Hashash, Christina Chaccour, Walid Saad
- Abstract summary: Digital twins (DTs) constitute a critical link between the real world and the metaverse.
In this paper, a novel edge continual learning framework is proposed to accurately model the evolving affinity between a physical twin and its corresponding cyber twin.
The proposed framework achieves a simultaneously accurate and synchronous CT model that is robust to catastrophic forgetting.
- Score: 68.65520952712914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital twins (DTs) constitute a critical link between the real world and the
metaverse. To guarantee a robust connection between these two worlds, DTs
should maintain accurate representations of the physical applications, while
preserving synchronization between real and digital entities. In this paper, a
novel edge continual learning framework is proposed to accurately model the
evolving affinity between a physical twin (PT) and its corresponding cyber twin
(CT) while maintaining their utmost synchronization. In particular, a CT is
simulated as a deep neural network (DNN) at the wireless network edge to model
an autonomous vehicle traversing an episodically dynamic environment. As the
vehicular PT updates its driving policy in each episode, the CT is required to
concurrently adapt its DNN model to the PT, which gives rise to a
de-synchronization gap. Considering the history-aware nature of DTs, the model
update process is posed as a dual objective optimization problem whose goal is to
jointly minimize the loss function over all encountered episodes and the
corresponding de-synchronization time. As the de-synchronization time continues
to increase over sequential episodes, an elastic weight consolidation (EWC)
technique that regularizes the DT history is proposed to limit
de-synchronization time. Furthermore, to address the plasticity-stability
tradeoff accompanying the progressive growth of the EWC regularization terms, a
modified EWC method that considers fair execution between the historical
episodes of the DTs is adopted. Ultimately, the proposed framework achieves a
simultaneously accurate and synchronous CT model that is robust to catastrophic
forgetting. Simulation results show that the proposed solution can achieve an
accuracy of 90% while guaranteeing a minimal de-synchronization time.
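As a rough illustration of the regularization idea named in the abstract, the sketch below shows how an elastic weight consolidation (EWC) style penalty can be added to the per-episode training loss of a cyber-twin DNN so that adapting to a new PT policy does not erase earlier episodes. This is a minimal, generic EWC example under stated assumptions, not the paper's modified fairness-aware variant; the names (CTModel, fisher_diagonal, ewc_penalty, lambda_ewc) and the toy data are hypothetical placeholders.

```python
# Minimal EWC-style sketch (assumed formulation, not the authors' exact method).
import torch
import torch.nn as nn

class CTModel(nn.Module):
    """Toy cyber-twin DNN standing in for the edge model of the PT's driving policy."""
    def __init__(self, in_dim=8, out_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))

    def forward(self, x):
        return self.net(x)

def fisher_diagonal(model, loss_fn, data, targets):
    """Crude single-batch approximation of the diagonal Fisher information at episode end."""
    model.zero_grad()
    loss_fn(model(data), targets).backward()
    return {n: p.grad.detach().clone() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, history):
    """Quadratic penalty anchoring parameters to each stored historical episode."""
    penalty = torch.zeros(())
    for fisher, anchor in history:  # one (F_k, theta*_k) pair per past episode
        for n, p in model.named_parameters():
            penalty = penalty + (fisher[n] * (p - anchor[n]) ** 2).sum()
    return penalty

# Toy training loop over sequential episodes (each episode = one PT policy update).
model, loss_fn = CTModel(), nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
lambda_ewc, history = 10.0, []  # history holds (Fisher, anchor) per consolidated episode

for episode in range(3):
    x, y = torch.randn(64, 8), torch.randn(64, 2)  # placeholder episode data
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(x), y) + 0.5 * lambda_ewc * ewc_penalty(model, history)
        loss.backward()
        opt.step()
    # Consolidate: store Fisher diagonal and parameter anchor for this episode.
    history.append((fisher_diagonal(model, loss_fn, x, y),
                    {n: p.detach().clone() for n, p in model.named_parameters()}))
```

In the paper's setting, the regularization would also be coupled to the de-synchronization objective and weighted fairly across historical episodes; here lambda_ewc is a single fixed constant purely for simplicity.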
Related papers
- Automatically Learning Hybrid Digital Twins of Dynamical Systems [56.69628749813084]
Digital Twins (DTs) simulate the states and temporal dynamics of real-world systems.
DTs often struggle to generalize to unseen conditions in data-scarce settings.
In this paper, we propose an evolutionary algorithm ($\textbf{HDTwinGen}$) to autonomously propose, evaluate, and optimize HDTwins.
arXiv Detail & Related papers (2024-10-31T07:28:22Z) - Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
arXiv Detail & Related papers (2024-10-28T15:54:50Z) - Oscillatory State-Space Models [61.923849241099184]
We propose Linear Oscillatory State-Space models (LinOSS) for efficiently learning on long sequences.
A stable discretization, integrated over time using fast associative parallel scans, yields the proposed state-space model.
We show that LinOSS is universal, i.e., it can approximate any continuous and causal operator mapping between time-varying functions.
arXiv Detail & Related papers (2024-10-04T22:00:13Z) - A Differential Smoothness-based Compact-Dynamic Graph Convolutional Network for Spatiotemporal Signal Recovery [9.369246678101048]
This paper proposes a Compact-Dynamic Graph Convolutional Network (CDCN) for spatiotemporal signal recovery.
Experiments on real-world datasets show that CDCN significantly outperforms the state-of-the-art models for spatiotemporal signal recovery.
arXiv Detail & Related papers (2024-08-06T06:42:53Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Robust Fully-Asynchronous Methods for Distributed Training over General Architecture [11.480605289411807]
Perfect synchronization in distributed machine learning problems is inefficient and even impossible due to the existence of latency, packet losses and stragglers.
We propose the Robust Fully-Asynchronous Gradient Tracking method (R-FAST), where each device performs local computation and communication at its own pace without any form of synchronization.
arXiv Detail & Related papers (2023-07-21T14:36:40Z) - Pose Uncertainty Aware Movement Synchrony Estimation via
Spatial-Temporal Graph Transformer [7.053333608725945]
Movement synchrony reflects the coordination of body movements between interacting dyads.
This paper proposes a skeleton-based graph transformer for movement synchrony estimation.
Our method achieved an overall accuracy of 88.98% and surpassed its counterparts by a wide margin.
arXiv Detail & Related papers (2022-08-01T22:35:32Z) - Automated Dilated Spatio-Temporal Synchronous Graph Modeling for Traffic
Prediction [1.6449390849183363]
We propose an automated dilated spatio-temporal synchronous graph network named Auto-DSTS for traffic prediction.
Specifically, we propose an automated dilated spatio-temporal synchronous graph (Auto-DSTS) module to capture the short-term and long-term spatio-temporal correlations.
Our model can achieve about 10% improvements compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-07-22T00:50:39Z) - Continuous-Time Sequential Recommendation with Temporal Graph
Collaborative Transformer [69.0621959845251]
We propose a new framework Temporal Graph Sequential Recommender (TGSRec) upon our defined continuous-time bi-partite graph.
The TCT layer can simultaneously capture collaborative signals from both users and items, as well as consider temporal dynamics inside sequential patterns.
Empirical results on five datasets show that TGSRec significantly outperforms other baselines.
arXiv Detail & Related papers (2021-08-14T22:50:53Z) - DS-Sync: Addressing Network Bottlenecks with Divide-and-Shuffle
Synchronization for Distributed DNN Training [15.246142393381488]
We present a novel divide-and-shuffle synchronization (DS-Sync) to realize communication efficiency without sacrificing convergence accuracy for distributed DNN training.
We show that DS-Sync can achieve up to 94% improvements on the end-to-end training time compared with existing solutions while maintaining the same accuracy.
arXiv Detail & Related papers (2020-07-07T09:29:01Z)