Transfer Learning as an Essential Tool for Digital Twins in Renewable Energy Systems
- URL: http://arxiv.org/abs/2203.05026v1
- Date: Wed, 9 Mar 2022 19:59:56 GMT
- Title: Transfer Learning as an Essential Tool for Digital Twins in Renewable Energy Systems
- Authors: Chandana Priya Nivarthi
- Abstract summary: Digital twins and other intelligent systems need TL to reuse previously gained knowledge and solve new tasks more self-reliantly.
This article identifies the critical challenges in power forecasting and anomaly detection in the context of renewable energy systems.
A potential TL framework to meet these challenges is proposed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer learning (TL), the next frontier in machine learning (ML), has gained much popularity in recent years due to several persistent challenges in ML: the need for vast amounts of training data, expensive and time-consuming labelling of data samples, and long model training times. TL addresses these problems by transferring knowledge from previously solved tasks to new tasks. Digital twins and other intelligent systems need TL to reuse previously gained knowledge, to solve new tasks more self-reliantly, and to incrementally grow their knowledge base. This article therefore identifies the critical challenges in power forecasting and anomaly detection in the context of renewable energy systems and proposes a potential TL framework to meet them. It also proposes a feature embedding approach to handle missing sensor data. The proposed TL methods help make a system more autonomous in the context of organic computing.
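The paper itself contains no code, but a minimal sketch can illustrate the kind of feature-embedding approach described above: missing sensor channels are replaced by a learned per-sensor embedding, and a backbone pretrained on a data-rich park is reused for a new park with only a small head fine-tuned. All names, shapes, and the architecture below are illustrative assumptions, not the author's implementation.

```python
import torch
import torch.nn as nn

class MaskedSensorEmbedding(nn.Module):
    """Embed sensor readings; substitute a learned vector for missing sensors.

    Hypothetical sketch of a feature-embedding scheme for missing sensor
    data; the paper's actual architecture may differ.
    """

    def __init__(self, n_sensors: int, dim: int):
        super().__init__()
        self.value_proj = nn.Linear(1, dim)                  # scalar reading -> dim
        # One learned "missing" embedding per sensor channel
        self.missing = nn.Parameter(0.02 * torch.randn(n_sensors, dim))

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x:    (batch, n_sensors) raw readings, arbitrary values where missing
        # mask: (batch, n_sensors) 1.0 where the sensor reported, 0.0 where not
        emb = self.value_proj(x.unsqueeze(-1))               # (batch, n_sensors, dim)
        m = mask.unsqueeze(-1)                               # (batch, n_sensors, 1)
        emb = m * emb + (1.0 - m) * self.missing             # swap in learned embeddings
        return emb.mean(dim=1)                               # pool to (batch, dim)

# Transfer learning: freeze the transferred backbone, fine-tune a new head.
n_sensors, dim = 12, 32
backbone = MaskedSensorEmbedding(n_sensors, dim)
# backbone.load_state_dict(torch.load("source_park.pt"))    # hypothetical checkpoint
for p in backbone.parameters():
    p.requires_grad = False                                  # keep transferred knowledge
head = nn.Linear(dim, 1)                                     # new task: power forecast
optim = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(8, n_sensors)
mask = (torch.rand(8, n_sensors) > 0.2).float()              # ~20% of sensors missing
pred = head(backbone(x, mask))
```

The mask makes missingness explicit to the model instead of silently imputing zeros, which is what would let the same backbone transfer across parks with different sensor availability.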
Related papers
- Towards Sample-Efficiency and Generalization of Transfer and Inverse Reinforcement Learning: A Comprehensive Literature Review [50.67937325077047]
This paper provides a comprehensive review of how sample efficiency and generalization of RL algorithms are realized through transfer and inverse reinforcement learning (T-IRL).
The findings indicate that most recent works address these challenges through human-in-the-loop and sim-to-real strategies.
Under the IRL structure, training schemes that require few experience transitions, and the extension of such frameworks to multi-agent and multi-intention problems, have been recent research priorities.
arXiv Detail & Related papers (2024-11-15T15:18:57Z)
- Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning [79.46570165281084]
We propose a Multi-Stage Knowledge Integration network (MulKI) to emulate the human learning process in distillation methods.
MulKI achieves this through four stages, including Eliciting Ideas, Adding New Ideas, Distinguishing Ideas, and Making Connections.
Our method demonstrates significant improvements in maintaining zero-shot capabilities while supporting continual learning across diverse downstream tasks.
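MulKI's four stages are not spelled out in this summary; for orientation, the temperature-softened knowledge-distillation loss that such distillation-based continual learning methods build on looks as follows (a generic Hinton-style sketch, not MulKI's specific method):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    # Standard distillation: KL divergence between temperature-softened
    # teacher and student distributions. Generic building block only;
    # MulKI's four stages add structure on top of terms like this.
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```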
arXiv Detail & Related papers (2024-11-11T07:36:19Z)
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [79.28821338925947]
Domain-Class Incremental Learning is a realistic but challenging continual learning scenario.
To handle these diverse tasks, pre-trained Vision-Language Models (VLMs) are introduced for their strong generalizability.
This incurs a new problem: the knowledge encoded in the pre-trained VLMs may be disturbed when adapting to new tasks, compromising their inherent zero-shot ability.
Existing methods tackle it by tuning VLMs with knowledge distillation on extra datasets, which demands heavy overhead.
We propose the Distribution-aware Interference-free Knowledge Integration (DIKI) framework, which retains the pre-trained knowledge of VLMs.
arXiv Detail & Related papers (2024-07-07T12:19:37Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
However, LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Function-space Parameterization of Neural Networks for Sequential Learning [22.095632118886225]
Sequential learning paradigms pose challenges for gradient-based deep learning due to difficulties incorporating new data and retaining prior knowledge.
We introduce a technique that converts neural networks from weight space to function space, through a dual parameterization.
Our experiments demonstrate that we can retain knowledge in continual learning and incorporate new data efficiently.
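The dual parameterization itself is specific to the paper, but the underlying idea of regularizing in function space rather than weight space can be sketched simply: remember a few past inputs together with the outputs the model used to produce on them, and penalize drift of those function values (an illustrative approximation, not the paper's exact construction):

```python
import torch
import torch.nn as nn

def function_space_penalty(model: nn.Module,
                           memory_x: torch.Tensor,
                           memory_f: torch.Tensor) -> torch.Tensor:
    # Penalize drift of the network's *function values* on remembered inputs,
    # instead of penalizing weight changes directly. memory_f holds the
    # (detached) outputs the model produced on memory_x before training on
    # the new task.
    return ((model(memory_x) - memory_f) ** 2).mean()

# Usage sketch when fitting a new task:
# loss = task_loss(model(x_new), y_new) \
#        + lam * function_space_penalty(model, memory_x, memory_f)
```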
arXiv Detail & Related papers (2024-03-16T14:00:04Z)
- Deep Transfer Learning for Automatic Speech Recognition: Towards Better Generalization [3.6393183544320236]
Speech recognition based on deep learning (DL) has become an important but demanding task.
It requires large-scale training datasets and high computational and storage resources.
Deep transfer learning (DTL) has been introduced to overcome these issues.
arXiv Detail & Related papers (2023-04-27T21:08:05Z)
- Transfer Learning for Future Wireless Networks: A Comprehensive Survey [49.746711269488515]
This article aims to provide a comprehensive survey on applications of Transfer Learning in wireless networks.
We first provide an overview of TL including formal definitions, classification, and various types of TL techniques.
We then discuss diverse TL approaches proposed to address emerging issues in wireless networks.
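For reference, the formal definition such surveys typically start from (following Pan and Yang's classic formulation) is:

```latex
% A domain pairs a feature space with a marginal distribution; a task pairs
% a label space with a predictive function (Pan & Yang, 2010):
\mathcal{D} = \{\mathcal{X},\, P(X)\}, \qquad
\mathcal{T} = \{\mathcal{Y},\, f(\cdot)\}
% Given a source pair (\mathcal{D}_S, \mathcal{T}_S) and a target pair
% (\mathcal{D}_T, \mathcal{T}_T) with \mathcal{D}_S \neq \mathcal{D}_T or
% \mathcal{T}_S \neq \mathcal{T}_T, transfer learning aims to improve the
% target predictive function f_T(\cdot) using knowledge from
% (\mathcal{D}_S, \mathcal{T}_S).
```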
arXiv Detail & Related papers (2021-02-15T14:19:55Z)
- Smart Grid: A Survey of Architectural Elements, Machine Learning and Deep Learning Applications and Future Directions [0.0]
Big data analytics, machine learning (ML), and deep learning (DL) play a key role in analysing the massive amounts of data smart grids generate and in extracting valuable insights from them.
This paper surveys smart grid architectural elements and ML- and DL-based applications and approaches in the smart grid context.
arXiv Detail & Related papers (2020-10-16T01:40:24Z)
- Federated Edge Learning: Design Issues and Challenges [1.916348196696894]
Federated Learning (FL) is a distributed machine learning technique, where each device contributes to the learning model by independently computing the gradient based on its local training data.
However, implementing FL at the network edge is challenging due to system and data heterogeneity and resource constraints.
This article proposes a general framework for data-aware scheduling as a guideline for future research directions.
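The article's data-aware scheduler is not reproduced here, but the FL loop it builds on can be sketched with the standard federated-averaging recipe, where devices compute local updates on private data and a server averages them weighted by dataset size (a minimal NumPy illustration; all names are ours):

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One gradient step of linear least squares on a device's private data.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w_global, devices):
    # Each device refines the global model locally; the server averages the
    # returned models weighted by local dataset size (FedAvg).
    updates, sizes = [], []
    for X, y in devices:
        updates.append(local_step(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

# Usage: three devices with private data, ten communication rounds.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
w = np.zeros(2)
for _ in range(10):
    w = fedavg_round(w, devices)
```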
arXiv Detail & Related papers (2020-08-31T19:56:36Z)
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
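For context, the shared linear-representation model studied in this line of work takes the following standard form, where a feature matrix B is common to all tasks and each task contributes only a low-dimensional weight vector (notation is ours, following the usual meta-learning setup):

```latex
% Shared representation B \in \mathbb{R}^{d \times r} with r \ll d; each
% task t has its own weights \alpha_t \in \mathbb{R}^{r}:
y_{t,i} = \langle x_{t,i},\, B\,\alpha_t \rangle + \varepsilon_{t,i},
\qquad \varepsilon_{t,i} \sim \mathcal{N}(0, \sigma^2).
% Meta-learning estimates B from the source tasks; transfer to a new task
% then only requires estimating the r-dimensional \alpha_{\mathrm{new}}.
```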
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.