Representation Learning for Appliance Recognition: A Comparison to
Classical Machine Learning
- URL: http://arxiv.org/abs/2209.03759v1
- Date: Fri, 26 Aug 2022 15:09:20 GMT
- Authors: Matthias Kahl and Daniel Jorde and Hans-Arno Jacobsen
- Abstract summary: Non-intrusive load monitoring aims at energy consumption and appliance state information retrieval from aggregated consumption measurements.
We show how the NILM processing-chain can be improved, reduced in complexity and alternatively designed with recent deep learning algorithms.
We evaluate all approaches on two large-scale energy consumption datasets with more than 50,000 events of 44 appliances.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Non-intrusive load monitoring (NILM) aims at energy consumption and appliance
state information retrieval from aggregated consumption measurements, with the
help of signal processing and machine learning algorithms. Representation
learning with deep neural networks is successfully applied to several related
disciplines. The main advantage of representation learning lies in replacing an
expert-driven, hand-crafted feature extraction with hierarchical learning from
many representations in raw data format. In this paper, we show how the NILM
processing-chain can be improved, reduced in complexity and alternatively
designed with recent deep learning algorithms. On the basis of an event-based
appliance recognition approach, we evaluate seven different classification
models: a classical machine learning approach that is based on a hand-crafted
feature extraction, three different deep neural network architectures for
automated feature extraction on raw waveform data, as well as three baseline
approaches for raw data processing. We evaluate all approaches on two
large-scale energy consumption datasets with more than 50,000 events of 44
appliances. We show that with the use of deep learning, we are able to reach
and surpass the performance of the state-of-the-art classical machine learning
approach for appliance recognition, with F-scores of 0.75 and 0.86 compared to
0.69 and 0.87 for the classical approach.
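The hand-crafted feature baseline contrasted with representation learning above can be illustrated with a minimal sketch. The specific features below (RMS, peak, crest factor, form factor) are common choices in event-based appliance recognition, but they are illustrative assumptions, not the exact feature set used in the paper:

```python
import math

def handcrafted_features(waveform):
    """Compute a few classical features from one raw current-waveform
    event window, the kind of expert-driven extraction that
    representation learning on raw data aims to replace."""
    n = len(waveform)
    rms = math.sqrt(sum(x * x for x in waveform) / n)
    peak = max(abs(x) for x in waveform)
    mean_abs = sum(abs(x) for x in waveform) / n
    return {
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms if rms > 0 else 0.0,
        "form_factor": rms / mean_abs if mean_abs > 0 else 0.0,
    }

# Example: one full cycle of a unit-amplitude sinusoidal current
# (100 samples), standing in for a captured switching event.
event = [math.sin(2 * math.pi * k / 100) for k in range(100)]
feats = handcrafted_features(event)
```

A downstream classifier (e.g. a random forest or SVM) would consume such feature vectors, whereas the deep architectures evaluated in the paper learn their representations directly from the raw waveform samples.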
Related papers
- Understanding learning from EEG data: Combining machine learning and
feature engineering based on hidden Markov models and mixed models [0.0]
Frontal theta oscillations are thought to play an important role in spatial navigation and memory.
EEG datasets are very complex, making changes in the neural signal related to behaviour difficult to interpret.
This paper proposes using hidden Markov and linear mixed effects models to extract features from EEG data.
arXiv Detail & Related papers (2023-11-14T12:24:12Z)
- Reinforcement Learning Based Multi-modal Feature Fusion Network for
Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and Cifar10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z)
- Deep Feature Learning for Wireless Spectrum Data [0.5809784853115825]
We propose an approach to learning feature representations for wireless transmission clustering in a completely unsupervised manner.
We show that the automatic representation learning is able to extract fine-grained clusters containing the shapes of the wireless transmission bursts.
arXiv Detail & Related papers (2023-08-07T12:27:19Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Learning Deep Representation with Energy-Based Self-Expressiveness for
Subspace Clustering [24.311754971064303]
We propose a new deep subspace clustering framework, motivated by the energy-based models.
Considering the powerful representation ability of the recently popular self-supervised learning, we attempt to leverage self-supervised representation learning to learn the dictionary.
arXiv Detail & Related papers (2021-10-28T11:51:08Z)
- Improved Speech Emotion Recognition using Transfer Learning and
Spectrogram Augmentation [56.264157127549446]
Speech emotion recognition (SER) is a challenging task that plays a crucial role in natural human-computer interaction.
One of the main challenges in SER is data scarcity.
We propose a transfer learning strategy combined with spectrogram augmentation.
arXiv Detail & Related papers (2021-08-05T10:39:39Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Unsupervised Multi-Modal Representation Learning for Affective Computing
with Multi-Corpus Wearable Data [16.457778420360537]
We propose an unsupervised framework to reduce the reliance on human supervision.
The proposed framework utilizes two stacked convolutional autoencoders to learn latent representations from wearable electrocardiogram (ECG) and electrodermal activity (EDA) signals.
Our method outperforms current state-of-the-art results that have performed arousal detection on the same datasets.
arXiv Detail & Related papers (2020-08-24T22:01:55Z)
- Extracting dispersion curves from ambient noise correlations using deep
learning [1.0237120900821557]
We present a machine-learning approach to classifying the phases of surface wave dispersion curves.
Standard FTAN analysis of surface waves observed on an array of receivers is converted to an image.
We use a convolutional neural network (U-net) architecture with a supervised learning objective and incorporate transfer learning.
arXiv Detail & Related papers (2020-02-05T23:41:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.